
Software Quality Assurance (SQA)

Chapter 8, Pressman
Software Quality Assurance (SQA)
The goal of Software Engineering is to produce “high–quality”
software.
SQA is an “Umbrella Activity” that is applied throughout the
entire software process to achieve “high–quality” software.
SQA reduces the amount of rework, resulting in lower costs and, more importantly, shorter time–to–market.
A typical Software Process

Analysis → Design → Coding → Testing

Software Quality Assurance (SQA) activities are applied throughout the entire software process (an umbrella activity).
2
Software Quality Assurance (SQA)
SQA encompasses:
1. A quality management focus.
2. Effective use of software engineering methods and tools.
3. Formal technical review (FTR) throughout the entire
software process.
4. A multi–tiered testing strategy (unit testing, integration
testing, validation testing and system testing).
5. Change control and management.
6. Compliance with software development standards.
7. Measurement and reporting mechanism.
The work product:
A Software Quality Assurance Plan to define the software
team’s SQA strategy.
3
Quality Concept
All engineered and manufactured products exhibit “variation
between samples”.
Variation control, i.e., minimizing the variation between
samples, is the key to high–quality products.
For example, a manufacturing process is characterized by its mean and deviation; fast–food chains such as McDonald’s minimize variation between their products.

4
Quality Control
Quality control equals variation control.
Quality control is achieved through a series of inspections,
reviews and tests applied throughout the development cycle,
to ensure that the products meet their requirements with
minimal variation.
In software development, we look for:
¡ variation (of the implementation) from the requirements.
¡ variation in the software process (the goal is to have a
repeatable process).

5
Quality Assurance
Quality assurance is a part of management.
Quality assurance consists of the analysis, auditing and
reporting functions of management.
The goal is to provide management with the data necessary to stay informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals.
If the data indicates problems, it is the management’s
responsibility to address the problems and apply the
necessary resources to resolve the quality problems.

6
Cost of Quality
Cost of quality includes all costs incurred in the pursuit of
quality. It can be divided into costs associated with
prevention, appraisal, and failure.
¡ Prevention cost:
– Quality planning
– Formal technical review
– Test equipment and training.
¡ Appraisal cost:
– Cost of inspection and testing.
– Equipment calibration and maintenance
¡ Failure cost
– Internal Failure cost: before the product is shipped,
includes rework, repair, failure mode analysis.
– External Failure cost: after the product is shipped,
includes complaint resolution, product return and
replacement, help–line support, warranty work.
7
Cost of Quality – Example
Case study by IBM’s Rochester development facility

With Inspection/Review:
Total lines of code: 200,000
Potential defects prevented: 3,112
Time used for inspection: 7,053 hours
Programmer cost: US$40/hour
Total prevention cost: US$282,120
Prevention cost per error: US$91

Without Inspection/Review:
Defect rate: 1 defect per KLOC (1,000 lines)
Defects shipped: 200
Estimated cost by IBM per “field fix”: US$25,000
Total cost to remove defects: US$5 million

“Prevention is better than cure.”


8
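To make the arithmetic explicit, the case-study figures can be recomputed with a short sketch (all numbers are taken from the slide above):

```python
# Recompute the IBM Rochester cost-of-quality figures.

def prevention_cost(hours, rate_per_hour):
    """Total cost of the inspection/review effort."""
    return hours * rate_per_hour

# With inspection/review
defects_prevented = 3112
total_prevention = prevention_cost(7053, 40)   # 7053 hours at US$40/hour
cost_per_error = total_prevention / defects_prevented

# Without inspection/review: 200 KLOC at 1 defect/KLOC, US$25,000 per field fix
defects_shipped = 200_000 // 1000
total_field_fix = defects_shipped * 25_000

print(total_prevention)        # 282120
print(round(cost_per_error))   # 91
print(total_field_fix)         # 5000000
```

The roughly US$91 per prevented error versus US$25,000 per field fix is the quantitative form of "prevention is better than cure."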
Why SQA Activities Pay Off?
The relative cost to find and fix a defect rises steeply through the life cycle (relative cost units):
Requirements: 0.75
Design: 1.0
Code: 1.5
Test: 3
System test: 10
Field use: 60–100
Benefits: early detection of errors means less rework, lower production cost, and shorter time–to–market.
9
Software Quality Definition
Software quality is defined as the conformance to:
1. Explicitly stated functional and performance requirements.
2. Explicitly documented development standards.
3. Implicit characteristics that are expected of all professionally developed software.
The definition emphasizes 3 important points:
1. Requirements are the foundation from which software
quality is measured.
2. Specified standards define a set of development criteria that
guide the manner in which software is engineered.
3. There is a set of implicit requirements that often goes
unmentioned, e.g. the desire for good maintainability.
“Every program does something right; it just may not be the thing that we want it to do.”
10
SQA Players
SQA activities are carried out by two groups of people:
The software engineer group: who apply solid technical
methods and measures to address quality, conduct formal
technical reviews and perform well–planned software testing.
The SQA group: who are responsible for quality assurance planning, oversight, record–keeping, analysis and reporting.

11
The role of the SQA Group
The job is to assist the software team in achieving a high–quality end product.
The roles of the independent SQA group are:
¡ Prepares an SQA plan for the product.
¡ Participates in the development of the product’s software process description.
¡ Reviews software engineering activities to verify compliance with the defined software process.
¡ Audits software work products to verify compliance with those defined as part of the software process.
¡ Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
¡ Records any noncompliance and reports to senior management.

12
SQA Activities
SQA encompasses the following activities:
1. A quality management approach.
2. Effective software engineering methods and tools.
3. A multi–tiered testing strategy (unit testing, integration
testing, validation testing and system testing).
4. Formal technical review (FTR) throughout the entire
software process.
5. Change control and management.
6. Compliance with software development standards.
7. Measurement and reporting mechanism.
Activity 1 is a management commitment. Activities 2 and 3 have been covered. This portion covers Activity 4 – Formal Technical Review. Activity 5 – change control and management – will be covered next.
13
Software Reviews
Software review is one of the most important SQA activities.
It acts as a filter for the software engineering process.
Reviews are applied at various points during the software engineering process to uncover and then remove errors.
They purify software work products of errors introduced during analysis, design, coding and testing.

“To err is human.”

“Many types of errors escape the originator more easily than they escape anyone else.”

14
Reviews are Umbrella Activities

15
Defect Amplification Model
A Defect Amplification Model can be used to illustrate the generation and detection of errors during the various stages of the software cycle, and the effectiveness of software reviews.
For each process step (e.g., design), the model tracks:
¡ errors passed in from the previous step – some pass through unchanged, others are amplified (1:x) by work in this step,
¡ errors newly generated in this step,
¡ the percent efficiency of error detection in this step,
¡ the errors passed on to the next step.
16
Errors vs. Defects
Error refers to a quality problem that is discovered before the
software is released.
Defect (or fault) refers to a quality problem that is discovered
after the software is shipped.
Errors, if not discovered during the software development
process, will become defects after the product is shipped.

17
Defect Amplification – Without Reviews
Analysis: 0 in; 10 new; 0% detection → 10 passed on
Design: 10 in (6 pass through, 4 amplified ×1.5 = 6); 25 new; 0% detection → 37 passed on
Code/Unit Test: 37 in (10 pass through, 27 amplified ×3 = 81); 25 new; 20% detection → 93 passed on
Integration Test: 93 in; 50% detection → 47 passed on
Validation Test: 47 in; 50% detection → 24 passed on
System Test: 24 in; 50% detection → 12 released

No reviews during analysis and design, so zero error detection there.
10, 25, and 25 errors are generated during analysis, design, and coding.
Integration, validation, and system tests each uncover and correct 50% of incoming errors and generate no new errors.
Without reviews, 12 errors (or defects) are released.
18
Defect Amplification – With Reviews
Analysis: 0 in; 10 new; 70% detection → 3 passed on
Design: 3 in (2 pass through, 1 amplified ×1.5 ≈ 2); 25 new; 50% detection → 15 passed on
Code/Unit Test: 15 in (5 pass through, 10 amplified ×3 = 30); 25 new; 60% detection → 24 passed on
Integration Test: 24 in; 50% detection → 12 passed on
Validation Test: 12 in; 50% detection → 6 passed on
System Test: 6 in; 50% detection → 3 released

Reviews are conducted during analysis and design; effective review can uncover up to 75% of errors.
With reviews, only 3 errors (or defects) are released.
19
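The two scenarios can be replayed with a small sketch of the amplification model. The per-step pass-through/amplification splits and detection rates are taken directly from the slides rather than derived, and rounding is half-up, matching the worked numbers:

```python
# Defect amplification model: each step receives errors from the previous
# step (some pass through, some are amplified 1:x), adds newly generated
# errors, then removes a fraction via its detection efficiency.

def half_up(x):
    return int(x + 0.5)  # round half up (values here are non-negative)

def released(steps):
    """steps: list of (passed, amplified, factor, new_errors, detection)."""
    out = 0
    for passed, amplified, factor, new, detect in steps:
        total = half_up(passed + amplified * factor + new)
        out = half_up(total * (1 - detect))
    return out  # errors leaving the final step

# Without reviews: 0% detection in analysis/design, 20% in code/unit test,
# then 50% in each of integration, validation and system test.
without = [(0, 0, 0, 10, 0.0),     # analysis
           (6, 4, 1.5, 25, 0.0),   # design
           (10, 27, 3, 25, 0.2),   # code/unit test
           (93, 0, 0, 0, 0.5),     # integration test
           (47, 0, 0, 0, 0.5),     # validation test
           (24, 0, 0, 0, 0.5)]     # system test

# With reviews: 70% detection in analysis, 50% in design, 60% in code/unit test.
with_reviews = [(0, 0, 0, 10, 0.7),
                (2, 1, 1.5, 25, 0.5),
                (5, 10, 3, 25, 0.6),
                (24, 0, 0, 0, 0.5),
                (12, 0, 0, 0, 0.5),
                (6, 0, 0, 0, 0.5)]

print(released(without))       # 12 latent defects released
print(released(with_reviews))  # 3 latent defects released
```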
Example – Cost Impact
The costs of finding and fixing errors with and without reviews (unit cost per error × number of errors found at each stage):

Stage (unit cost)       Without Review            With Review
Analysis/Design (1.5)   –                         (7+14) × 1.5 = 31.5
Code/Unit test (6)      23 × 6 = 138              36 × 6 = 216
Testing (15)            (46+23+12) × 15 = 1215    (12+6+3) × 15 = 315
After release (67)      12 × 67 = 804             3 × 67 = 201
Total                   2157                      764

The ratio is 2157:764 – about 3 to 1.
Conclusion:
¡ Early detection of errors (via software review) reduces software cost.
¡ Testing is necessary, but it is also a very expensive way to find errors. Spend time finding errors early in the process and you may be able to significantly reduce testing and debugging costs.

20
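The totals can be verified directly (unit costs and error counts from the example above; the "with review" total is 763.5 before rounding):

```python
# Recompute the total relative cost with and without reviews.
# Unit cost to find and fix an error at each stage:
unit = {"analysis_design": 1.5, "code_unit_test": 6,
        "testing": 15, "after_release": 67}

# Errors found at each stage (from the defect-amplification example):
found_without = {"analysis_design": 0, "code_unit_test": 23,
                 "testing": 46 + 23 + 12, "after_release": 12}
found_with = {"analysis_design": 7 + 14, "code_unit_test": 36,
              "testing": 12 + 6 + 3, "after_release": 3}

def total_cost(found):
    return sum(unit[stage] * n for stage, n in found.items())

print(total_cost(found_without))      # 2157.0
print(round(total_cost(found_with)))  # 764
```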
What are “Software Reviews”?
What Reviews Are:
A meeting conducted by technical people for technical people.
A technical assessment of a work product created during the software engineering process.
A software quality assurance mechanism.
A training ground.

What Reviews Are Not:
A project budget summary.
A scheduling assessment.
An overall progress report.
A mechanism for reprisal or political intrigue!
“There is no particular reason why your friend and colleague
cannot also be your sternest critic.”
21
Types of Reviews and their Effectiveness

Formal Technical Review (FTR) is the most effective among all software review methods.
22
Formal Technical Review (FTR)
The main purpose of Formal Technical Review (FTR) is to find errors during the early stages of the software development process. This cuts down on errors and, eventually, on the defects of the product, as illustrated in the earlier example using the defect amplification model.
Design activities typically introduce between 50% and 60% of all errors.
Formal Technical Review has been shown to be up to 75% effective in uncovering design flaws.

23
Formal Technical Review (FTR)
The main objectives of FTR are:
¡ to uncover errors in function, logic, implementation.
¡ to validate that the software meets its specifications.
¡ to ensure that the software has been represented according to
the predefined standards.
¡ to achieve software that is developed in a uniform manner.
¡ to make projects more manageable.
The other objectives of FTR include:
¡ to serve as a training ground for junior staff.
¡ to promote backup and continuity, because more people (in the review team) become familiar with the software.
FTR is effective only if it is properly planned, controlled, and attended.

24
FTR Meeting
FTR meetings should abide by the following rules:
¡ Typically between 3 and 7 people attend.
¡ Advance preparation by the participants is needed, but should take no more than 2 hours each.
¡ The duration of the meeting should be less than 2 hours.
Given these constraints, each meeting should focus on a specific (and small) part of the software.
The focus of FTR is on work products (e.g., a use case, a part of the requirement specification, the source code of a component).

25
FTR Procedures
The procedure for conducting FTR is as follows:
1. The producer of the work product informs the project leader
that the work product is completed and ready for formal
review.
2. The project leader contacts a review leader, who evaluates the product for its readiness, generates review materials, and distributes them to 2 or 3 reviewers for advance preparation.
3. Each reviewer is expected to spend between 1 and 2 hours reviewing the product and making review notes.
4. The review leader establishes an agenda and schedules the review (typically for the next day).

26
FTR Procedures (cont)
5. The review meeting is attended by the review leader, the reviewers, the producer, the project leader, and possibly a user representative. One of the reviewers serves as the recorder.

Typical participants:
Review Leader
Reviewer / SQA Standards Bearer
Reviewer / Recorder
Reviewer
Project Leader
Producer
User Representative

27
FTR Procedures (cont)
6. The producer walks through the work product and explains the materials, while reviewers raise issues based on their advance preparation.
7. At the end of the review, all reviewers must decide whether to (a) accept the product without further modification, (b) reject the product due to severe errors (once corrected, another review will be performed), or (c) accept the product provisionally (minor errors were encountered and must be corrected, but no additional review is needed).
8. All participants sign off, indicating their attendance and concurrence with the review team’s findings.

28
FTR Report – A Sample
Formal Technical Review (FTR) Report
1. Work Product under Review
1.1 Name & Description
1.2 Producer
1.3 Reviewers
1.4 Date, Time, & Location
2. Review Summary
2.1 Overall Comments
2.2 Appraisal: accepted; accepted with minor modification; not accepted, rework required.
2.3 Reviewer Signatures
3. Issue List
(List of issues raised during the review, organized into errors. Follow–up actions properly indicated, if any.)

29
Review Guidelines
Set up agenda and stick to it.
Limit the number of participants.
Record and report the FTR findings.
Conduct proper training for all reviewers.
¡ Review the product, not the producer.
¡ Keep your tone mild; ask questions instead of making accusations.
¡ Avoid discussion of style – stick to technical correctness.
¡ Be prepared – insist on advance preparation.
¡ Limit debate and rebuttal (record them for offline discussion).
¡ Raise issues, but don’t attempt to solve all the problems (problems can often be resolved by the producer alone, or with the help of another individual).
Schedule reviews as project tasks, and allocate resources for them.

30
Metrics Derived from Reviews
The following measures can be derived from FTR for later
statistical analysis:
¡ inspection time per page of documentation.
¡ inspection time per KLOC or FP.
¡ inspection effort per KLOC or FP.
¡ errors uncovered per reviewer hour.
¡ errors uncovered per preparation hour.
¡ errors uncovered per SE task (e.g., design).
¡ number of minor errors (e.g., typos).
¡ number of moderate errors (e.g., inefficient code).
¡ number of major errors (e.g., non–conformance to
requirements).
¡ number of errors found during preparation.

31
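As a sketch of how such measures might be computed from raw review records (the record fields below are illustrative assumptions, not a standard format):

```python
# Compute a few review metrics from one hypothetical FTR record.
record = {
    "pages_reviewed": 12,
    "kloc_reviewed": 1.5,
    "prep_hours": 4.0,       # total advance-preparation hours, all reviewers
    "meeting_hours": 6.0,    # total reviewer-hours spent in the meeting
    "errors_found": 18,
    "errors_in_prep": 7,     # errors already found during preparation
}

inspection_time_per_page = record["meeting_hours"] / record["pages_reviewed"]
inspection_time_per_kloc = record["meeting_hours"] / record["kloc_reviewed"]
errors_per_reviewer_hour = record["errors_found"] / record["meeting_hours"]
errors_per_prep_hour = record["errors_in_prep"] / record["prep_hours"]

print(inspection_time_per_page)  # 0.5 hours per page
print(errors_per_reviewer_hour)  # 3.0 errors per reviewer hour
```

Collected over many reviews, these numbers feed the statistical analysis described in the next slides.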
Statistical SQA
Statistical SQA is quantitative.
The procedure for Statistical SQA is:
¡ Information about software defects is collected over a period of time.
¡ The causes of errors are traced and categorized, e.g., non–conformance to specification, design error, violation of standards, poor communication with the customer.
¡ Using the 80–20 principle, identify the “vital few causes”.
¡ Improve and correct the process to address the “vital few causes”.
By collecting and analyzing statistical data on errors, Statistical SQA gives developers an understanding of how to improve the software process and, consequently, the quality of the software.

32
Statistical SQA – Example
Total Serious Moderate Minor
Error No. % No. % No. % No. %
IES 205 22 34 27 68 18 103 24
MCC 156 17 12 9 68 18 76 17
IDS 48 5 1 1 24 6 23 5
VPS 25 3 0 0 15 4 10 2
EDR 130 14 26 20 68 18 36 8
ICI 58 6 9 7 18 5 31 7
EDL 45 5 14 11 12 3 19 4
IET 95 10 12 9 35 9 48 11
IID 36 4 2 2 20 5 14 3
PLT 60 6 15 12 19 5 26 6
HCI 28 3 3 2 17 4 8 2
MIS 56 6 0 0 15 4 41 9
Total 942 100 128 100 379 100 435 100

Data Collected for Statistical SQA (Pg. 211 of Pressman)


33
Class of Errors
IES: Incomplete or Erroneous Specification
MCC: Misinterpretation of Customer Communication
IDS: Intentional Deviation from Specifications
VPS: Violation of Programming Standards
EDR: Error in Data Representation
ICI: Inconsistent Component Interface
EDL: Error in Design Logic
IET: Incomplete or Erroneous Testing
IID: Inaccurate or Incomplete Documentation
PLT: Error in Programming Language Translation of design
HCI: Ambiguous or Inconsistent Human/Computer Interface
MIS: Miscellaneous

34
Statistical Analysis
The data indicates that IES, MCC, and EDR are the “vital few causes”.
Once the “vital few causes” are identified, corrective action begins. For example:
¡ To correct MCC (misinterpretation of customer communication), the developer might adopt facilitated application specification techniques to improve the quality of customer communication and specifications.
¡ To improve EDR (error in data representation), the developer might acquire a CASE tool for data modeling or conduct more stringent data design reviews.
Note that corrective action focuses primarily on the “vital few causes”.
Statistical SQA can achieve a 50% reduction in defects.

35
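The "vital few causes" selection can be sketched as a simple Pareto-style computation over the error totals from the table. The 50% threshold is chosen here so the example reproduces IES, MCC and EDR; the 80–20 split is a guideline, not a fixed cutoff:

```python
# Identify the "vital few" error causes: sort causes by error count and
# take the largest contributors until they cover the chosen share of errors.
totals = {"IES": 205, "MCC": 156, "IDS": 48, "VPS": 25, "EDR": 130,
          "ICI": 58, "EDL": 45, "IET": 95, "IID": 36, "PLT": 60,
          "HCI": 28, "MIS": 56}

def vital_few(counts, share=0.5):
    grand_total = sum(counts.values())
    chosen, covered = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= share * grand_total:
            break
        chosen.append(cause)
        covered += n
    return chosen

print(vital_few(totals))  # ['IES', 'MCC', 'EDR'] — about 52% of all 942 errors
```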
Statistical Analysis (cont)
Developers can compute an Error Index (EI) for each major step in the software process. Let:
Ei = total number of errors uncovered in step i
Si = total number of serious errors in step i
Mi = total number of moderate errors in step i
Ni = total number of minor errors in step i
Z = size of the product (in KLOC, FP, etc.)
Ws, Wm, Wn = weighting factors for serious, moderate, and minor errors; recommended values are Ws = 10, Wm = 3, Wn = 1
At each step (or phase) of the software process, a Phase Index (PI) is computed as follows:
PIi = Ws×(Si/Ei) + Wm×(Mi/Ei) + Wn×(Ni/Ei)

36
Statistical Analysis (cont)
The Error Index (EI) for the product can be computed from all the Phase Indices (PIs) as follows:
EI = Σ(i × PIi) / Z = (PI1 + 2×PI2 + 3×PI3 + ...) / Z
Note that errors encountered in later steps carry more weight.
The Error Index (EI) indicates the quality of the software.

37
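A minimal sketch of the Phase Index / Error Index computation. The weights Ws=10, Wm=3, Wn=1 are the recommended values from the slide; the phase error counts and product size are hypothetical:

```python
# Phase Index:  PI_i = Ws*(S_i/E_i) + Wm*(M_i/E_i) + Wn*(N_i/E_i)
# Error Index:  EI = sum(i * PI_i) / Z   (later phases weigh more)
WS, WM, WN = 10, 3, 1

def phase_index(serious, moderate, minor):
    total = serious + moderate + minor  # E_i: every error has a severity
    return (WS * serious + WM * moderate + WN * minor) / total

def error_index(phases, z):
    """phases: list of (serious, moderate, minor) per step, in process order."""
    return sum(i * phase_index(*p) for i, p in enumerate(phases, start=1)) / z

# Hypothetical error counts for two phases of a 5-KLOC product:
phases = [(2, 5, 13),   # phase 1: PI = 10*0.1 + 3*0.25 + 1*0.65 = 2.4
          (1, 3, 6)]    # phase 2: PI = 10*0.1 + 3*0.3  + 1*0.6  = 2.5

print(round(error_index(phases, z=5), 2))  # (1*2.4 + 2*2.5) / 5 = 1.48
```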
SQA Plan – A Template
1. Introduction
1.1 Scope and intent of SQA activities
1.2 SQA organizational role
2. SQA Tasks
2.1 Task Overview
(description, work product, documentation, responsibility
for each task)
2.2 Standards, Practices, and Conventions
2.3 SQA Resources
3. Reviews and Audits
3.1 Review Guidelines
3.2 Formal Technical Reviews
(Specification review, design review, code review etc)
3.3 SQA Audits
38
SQA Plan – A Template (cont.)
4. Problem Reporting and Corrective Action/Follow–up
4.1 Reporting mechanism
4.2 Responsibilities
4.3 Data Collection and evaluation
4.4 Statistical SQA
5. Software Process Improvement Activities
6. Software Configuration Management
7. SQA Tools, Techniques, Methods

39
Software Configuration Management (SCM)

Chapter 9, Pressman
The First Law of Software Engineering
“No matter where you are in the system life cycle, the
system will change, and the desire to change it will persist
throughout the life cycle.”
Bersoff, et al, 1980

When you build software, change happens and is inevitable.
Change increases the level of confusion among software engineers and leads to errors and poor quality.
Because change happens, you have to control it effectively.

“If you don’t control change, change will control you.”

“There is nothing permanent except change.”
41
Most Changes are Justified!
Change may occur at any time, for any reason.
Origin of change:
¡ New business or market conditions dictate changes in product
requirements or business rules.
¡ New customer needs demand modification of data, functionality
and services provided by the software system.
¡ Reorganization or business growth/downsizing causes changes in
project priority or software engineering team structure.
¡ Budgetary or scheduling constraints cause a redefinition of the
system or product.
Most changes are justified (and you have to handle them)!
E.g., Customers want to modify requirements. Developers
want to modify the technical approach. Managers want to
modify the project strategy. As time passes, we know more.
This additional knowledge can be a driving force for change.
42
Software Configuration Management
Software Configuration Management (SCM) is a set of activities designed to control change by:
1. Identifying every work product (software configuration item).
2. Managing different versions of the software.
3. Controlling changes.
4. Auditing.
5. Reporting.
SCM is part of SQA, to achieve high–quality software.
As part of SQA, SCM is an “umbrella activity” that is applied
throughout the entire software process (because change
occurs at every stage of the software process). It begins
when a software project begins and terminates only when
the software is taken out of operation.

43
Software Configuration Item (SCI)
The output of the software process can be divided into 3 categories:
¡ Computer programs
¡ Documentation
¡ Data
The outputs of a software process are collectively called a software configuration.
Each individual item is called a software configuration item (SCI).
44
Examples of SCIs and their interdependencies:
¡ Data Model
¡ Design Spec: data design, architectural design, module design, interface design
¡ Component N: interface description, algorithm description, PDL
¡ Test Spec: test plan, test procedure, test cases
¡ Source Code
45
Baselines
A baseline is “a specification or product that has been
formally reviewed and agreed upon, that thereafter serves as
the basis for further development, and that can be changed
only through formal change control procedures.”
That is, a SCI becomes a baseline after it has been reviewed
and approved.
Before a SCI becomes a baseline, change may be made
quickly and informally. However, once a baseline is
established, change can be made, but a specific, formal
procedure must be applied to evaluate and verify each
change.
Why baseline? We have to establish a point at which we “cut the cord”. That is, beyond this point, we will not change something without careful and formal evaluation.

46
Baselines (cont.)
In the context of software engineering, a baseline is a
milestone in the development of software that is marked by
the delivery of one or more software configuration items and
the approval of these SCIs that is obtained through a formal
technical review.
The common software baselines are:
¡ System Specification
¡ Software Requirement
¡ Design Specification
¡ Source Code
¡ Test Plans / Procedure / Data
¡ Operational System

47
Baselines (cont.)
Software engineering tasks produce one or more SCIs. The SCIs undergo formal technical review; approved SCIs become baselines and are placed into the project database (or software repository).
When a member of the software team wants to make a change to a baselined SCI, he checks the item out of the project database. The SCI can be modified only if SCM controls are followed, and the modified SCI passes through review and approval again before re-entering the database.
48
SCM Process
The SCM process comprises five tasks:
1. Identification: identifying each item in the software configuration.
2. Version Control: controlling the different versions of the software.
3. Change Control: controlling changes to an item.
4. Auditing: ensuring that changes are properly made.
5. Reporting.

49
SCM Task 1: Identification
To control and manage SCIs, each must be uniquely named.
The relationships between SCIs must also be properly identified, e.g., <part–of>, <interrelated>, <uses>, <describes>, etc.
SCIs evolve throughout the software process; an evolution graph can be used to describe the change history. For example, an object may evolve 1.0 → 1.1 → 1.2 → 1.3 → 1.4, with a maintenance branch 1.1.1 → 1.1.2 off version 1.1 and a new major line 2.0 → 2.1 branching off later.
50
SCM Task 2: Version Control
Version control aims to manage the different versions of the software.
Each version of the software is a collection of SCIs (source, documents, data), and each version may be composed of different variants. E.g., consider a software product with 5 entities: 1, 2, 3, 4 and 5, where entity 4 is used for color displays and entity 5 for monochrome displays. There are two variants: version A = entities {1, 2, 3, 4} and version B = entities {1, 2, 3, 5}.
51
SCM Task 3: Change Control Process – I
Process steps:
1. A need for change is recognized.
2. A change request is submitted by the user.
3. The developer evaluates the change request.
4. A change report is generated.
5. The Change Control Authority (CCA) decides: if the change request is accepted, continue to Process II; if rejected, the user is informed.

The change request is submitted and evaluated to assess the technical merit, potential side effects, overall impact on other SCIs and functions, and the projected cost. The evaluation result is documented in a change report. The Change Control Authority (CCA) makes the final decision on the status and priority of the change.
52
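The accept/reject flow above can be sketched as a tiny state machine. The state and action names are an illustrative reading of the flowchart, not a standard API:

```python
# Change request lifecycle from the Process-I flowchart:
# submitted -> evaluated -> reported -> (accepted | rejected).
TRANSITIONS = {
    "submitted": {"evaluate": "evaluated"},
    "evaluated": {"report": "reported"},    # change report generated
    "reported": {"accept": "accepted",      # CCA decision
                 "reject": "rejected"},
}

class ChangeRequest:
    def __init__(self, description):
        self.description = description
        self.state = "submitted"

    def advance(self, action):
        try:
            self.state = TRANSITIONS[self.state][action]
        except KeyError:
            raise ValueError(f"cannot '{action}' in state '{self.state}'")
        return self.state

cr = ChangeRequest("modify report layout")
cr.advance("evaluate")
cr.advance("report")
print(cr.advance("accept"))  # accepted -> proceeds to Process II (ECO issued)
```

Encoding the allowed transitions as data makes it impossible to, say, accept a change that was never evaluated, which is exactly the discipline the flowchart imposes.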
Change Control Process – II
Process steps:
1. The request is queued for action; an ECO (Engineering Change Order) is generated.
2. An individual is assigned to the configuration item.
3. The configuration item is checked out.
4. The change is made.
5. The change is reviewed and audited.
6. The changed configuration item is checked in.

The ECO describes the change to be made, the constraints that must be respected, and the criteria for review and audit. The item is checked out, changed, subjected to SQA, and checked in. Appropriate version control mechanisms are used to create the next version. Check–in/check–out requires access control and synchronization control.
53
Change Control Process – III
Process steps:
1. Establish a baseline for testing.
2. Perform SQA & testing.
3. “Promote” the changes for inclusion in the next release.
4. Rebuild the appropriate version.
5. Review and audit the change to all configuration items.
6. Include the change in the new version.
7. Distribute the new version.
54
Access and Synchronization Control
Based on an approved change and ECO, a software engineer checks out an item. Access control (using ownership information in the project database) ensures that he has the proper authority. Synchronization control locks the item in the project database so that no updates can be made to it until it is checked in. Other copies can be checked out for reading, but not for update. On check–in, the modified SCI is stored with its audit information and the item is unlocked.
55
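A minimal sketch of access and synchronization control. Names such as ProjectDatabase are illustrative; real SCM tools implement this with repositories, permissions, and locks:

```python
# Project database with per-SCI locking: one writer at a time,
# read-only copies always allowed.
class ProjectDatabase:
    def __init__(self, items):
        self.items = dict(items)   # SCI name -> baseline content
        self.locks = {}            # SCI name -> owning engineer

    def check_out(self, name, engineer, authorized):
        if not authorized:                 # access control
            raise PermissionError(f"{engineer} may not modify {name}")
        if name in self.locks:             # synchronization control
            raise RuntimeError(f"{name} is locked by {self.locks[name]}")
        self.locks[name] = engineer
        return self.items[name]

    def read_copy(self, name):
        return self.items[name]            # reading never takes the lock

    def check_in(self, name, engineer, new_content):
        if self.locks.get(name) != engineer:
            raise RuntimeError(f"{engineer} does not hold the lock on {name}")
        self.items[name] = new_content     # new baseline version
        del self.locks[name]               # unlock

db = ProjectDatabase({"design_spec": "v1"})
draft = db.check_out("design_spec", "alice", authorized=True)
print(db.read_copy("design_spec"))   # "v1" — others can still read
db.check_in("design_spec", "alice", "v2")
print(db.read_copy("design_spec"))   # "v2"
```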
Audit
To ensure that a change has been properly implemented, two mechanisms are used:
¡ Formal Technical Review (FTR)
¡ Software Configuration Audit
FTR focuses on technical correctness. The reviewers assess the SCI for consistency with other SCIs, omissions, or potential side effects.
A software configuration audit complements the FTR by assessing an SCI for characteristics not considered during the FTR:
¡ Has the change specified in the ECO been made? Have any additional modifications been incorporated?
¡ Has an FTR been conducted?
¡ Have SCM procedures for noting the change, recording it and reporting it been followed?
¡ Has the change been documented in the SCI?
¡ Have all related SCIs been properly updated?
56
Configuration Status Reporting
Configuration Status Reporting (CSR) answers: (a) What happened? (b) Who did it? (c) When did it happen? (d) What else will be affected?
Following the flowchart for Change Control Process III:
¡ Each time an SCI is assigned new or updated identification, a CSR entry is made.
¡ Each time a change is approved by the CCA (i.e., an ECO is issued), a CSR entry is made.
¡ Each time a configuration audit is conducted, a CSR entry is made.
CSR output is often placed in an on–line database accessible to all.
CSR plays a vital role, especially for large projects. When many people are involved, one might modify an obsolete item, or make a change with serious side effects, without being aware of it until months later.
57
Software Reliability

Chapter 18, Sommerville, 5th edition
Software Reliability
Informally, the reliability of software is a measure of how well users think it provides the services that they require.
More formally, reliability can be defined as the “probability of
failure–free operation” for a specific time in a specific
environment for a specific purpose.
Software reliability is a function of the number of failures
experienced by a particular user of that software.
Software failures occur when the software is executing and
does not deliver the service expected by the user.
Software failure is caused by a software defect (or fault). (A
software defect or fault is an uncovered error, such as
programming error or design error, after the software is
shipped.)

59
Failure Curves
(Figures: the hardware “bathtub” failure curve, and the idealized vs. actual failure curves for software.)
Software doesn’t wear out, but it does deteriorate: software undergoes changes, and each change may introduce new defects.
Software components that fail cannot simply be replaced with spares – they have to be redesigned.
60
Software Failures and Reliability
A software system can be viewed as a mapping between an input set and an output set.
A program has many possible inputs, and it responds to these inputs by producing an output or a set of outputs.
The program works properly for most of the inputs. However, a (small) subset of the inputs (Ie) causes system failures and generates erroneous outputs (Oe).
61
Software Reliability
Software reliability is related to the probability that, in a particular execution of the program, the input will not be a member of the set of inputs (Ie) that cause an erroneous output.
Not all software faults are equally likely to be triggered; usually, some members of Ie are more likely to be selected than others.
The reliability of the program therefore depends on the number of inputs causing erroneous outputs that arise during normal, rather than exceptional, use of the system.
Hence, reliability is related to the probability of an error occurring “in operational use”.

62
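The idea that reliability depends on which inputs occur in operational use can be sketched numerically. The program's failure set Ie and the two usage profiles below are invented for illustration:

```python
# Reliability as seen by a user = fraction of *that user's* inputs
# that fall outside the erroneous-input set Ie.
IE = {13, 14, 15}  # hypothetical inputs on which the program fails

def observed_reliability(usage_profile):
    """usage_profile: list of inputs a user actually selects."""
    failures = sum(1 for x in usage_profile if x in IE)
    return 1 - failures / len(usage_profile)

heavy_user = list(range(100))        # operational profile hits the faulty region
careful_user = list(range(50, 150))  # operational profile never touches Ie

print(round(observed_reliability(heavy_user), 2))  # 0.97
print(observed_reliability(careful_user))          # 1.0
```

The same faulty program appears perfectly reliable to the second user: reliability is a property of the program in its operational profile, not of the fault count alone.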
Software Reliability (cont.)
Removing software faults from parts of the system that are
rarely used makes little difference to the perceived reliability.
In fact, it has been found that in some software, removing
60% of product defects would only have led to a 3%
reliability improvement.
IBM has also noted that many defects in their products were
only likely to cause failures after hundreds or thousands of
months of product usage.

63
Software Reliability (cont)
Therefore, a program may contain known faults but may still be seen as reliable by its users.
Some users (user 1 and user 3 in the usage–pattern figure) never select an erroneous input, so program failures never arise for them. Others may even work around software faults that are known to cause failures.
For example, WinWord’s equation editor is very buggy. However, if you are not using equations in your document, the software is reasonably reliable.
64
Costs vs. Reliability
Cost rises exponentially as the reliability requirement increases, because in an ultra–reliable system:
¡ Redundant hardware may be required.
¡ Extra testing is needed.
¡ Efficiency will be affected because of additional software overhead.

65
Reliability Takes Precedence over Efficiency
Computers are now cheap and fast: there is little need to maximize equipment usage.
Unreliable software is liable to be discarded by users: a single unreliable product may tarnish the image of the company and affect future sales of all its products.
System failure costs may be enormous: especially for safety–critical systems such as a nuclear reactor.
Unreliable systems are difficult to improve: the causes of unreliability are spread across the entire system.
Inefficiency is predictable: users can adjust their work to a slow program, whereas a software failure surprises the user and its consequences may not be known immediately.
Unreliable systems may cause information loss.

66
Reliability metrics
Software reliability metrics have, by and large, evolved from
the hardware reliability metrics.
However,
¡ A hardware component failure tends to be permanent: the
component stops working. The system is not available
until it is repaired.
¡ A software component failure is transient: it only
manifests with some input. The system can often continue
in operation after a failure has occurred.
Hence, some hardware reliability metrics may not be really useful for software.
Four software reliability metrics are shown below. The choice of metric depends on the type of the system and the application.

67
Software Metrics

POFOD (Probability of failure on demand):
A measure of the likelihood that the system will fail when a service request is made. For example, a POFOD of 0.001 means that 1 out of 1000 service requests may result in failure.
Example systems: safety–critical and non–stop systems, such as hardware control systems.

ROCOF (Rate of failure occurrence):
A measure of the frequency with which unexpected behavior is likely to occur. For example, a ROCOF of 2/100 means that 2 failures are likely to occur in each 100 operational time units. This metric is sometimes called the failure intensity.
Example systems: operating systems, transaction processing systems.
68
Software Metrics (cont.)
Metric: MTTF (Mean time to failure)
Explanation: A measure of the time between observed system
failures. For example, an MTTF of 500 means that 1 failure
can be expected every 500 time units. If the system is not
being changed, it is the reciprocal of the ROCOF.
Example: Systems with long transactions such as CAD
systems. The MTTF must be greater than the transaction time.

Metric: AVAIL (Availability)
Explanation: A measure of how likely the system is to be
available for use. For example, an availability of 0.998 means
that in every 1000 time units, the system is likely to be
available for 998 of these.
Example: Continuously running systems such as a telephone
exchange or an ICU monitoring system.
69
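The four metrics above are all simple ratios. A minimal Python sketch (the function names are invented; the numbers are the examples quoted on these slides):

```python
# Hedged sketch: the four reliability metrics as ratios.
# Function names are illustrative, not from any standard library.

def pofod(failures, requests):
    """Probability of failure on demand: failures per service request."""
    return failures / requests

def rocof(failures, op_time_units):
    """Rate of failure occurrence (failure intensity)."""
    return failures / op_time_units

def mttf(op_time_units, failures):
    """Mean time to failure; reciprocal of ROCOF for an unchanged system."""
    return op_time_units / failures

def avail(uptime, total_time):
    """Availability: fraction of time the system is usable."""
    return uptime / total_time

print(pofod(1, 1000))    # 0.001 -> 1 failure per 1000 requests
print(rocof(2, 100))     # 0.02  -> 2 failures per 100 time units
print(mttf(500, 1))      # 500.0 -> one failure every 500 time units
print(avail(998, 1000))  # 0.998 -> available 998 of every 1000 units
```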
Reliability vs. Availability
There is a subtle difference between reliability and availability.
Some systems can tolerate relatively frequent failures (low
reliability) so long as they can recover quickly from these
failures (high availability).
For example, for a telephone exchange system, the users
expect high availability (i.e., when they pick up a phone,
there is a dial tone), but can tolerate low reliability (if a
system fault causes a connection to fail and it is quickly
recovered, e.g. through another route, the user may not even notice).

70
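This trade-off can be quantified with the standard steady-state relation availability = MTTF / (MTTF + MTTR), which takes recovery time into account. The relation is a textbook fact rather than something stated on the slide, and the numbers below are invented for illustration:

```python
# Standard relation (not stated on the slide): availability depends both on
# how often the system fails (MTTF) and how quickly it recovers (MTTR).

def availability(mttf, mttr):
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

# A system that fails often but recovers very fast can still be highly
# available, like the telephone exchange rerouting a failed connection:
print(availability(mttf=100, mttr=0.01))   # ~0.9999
# while a rarely-failing system with slow repair may be less available:
print(availability(mttf=10000, mttr=50))   # ~0.995
```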
Software Metrics
The choice of reliability metric depends on the type of
system and the application.
In some cases, system users are most concerned about “how
often” the system will fail, perhaps because there is a
significant cost in restarting the system. In those cases, a
metric based on the rate of failure occurrence (ROCOF) or the
mean time to failure (MTTF) should be used.
In other cases, it is essential that the system should always
meet a request for service because there is some cost in
failing to deliver the service. The number of failures over a
time period is less important. Probability of failure on
demand (POFOD) is more appropriate.
When users are most concerned about the availability of the
service, availability (AVAIL), which takes into account the
repair/restart time, is more appropriate.
71
Measurement for assessing reliability
Three kinds of measurement can be made to assess reliability:
1. The number of system failures given a number of service
requests. This is used to measure POFOD (probability of
failure on demand).
2. The time (or number of transactions) between system
failures. This is used to measure ROCOF (rate of occurrence
of failure) and MTTF (mean time to failure).
3. The repair/restart time when a system failure occurs, given
that the system must be continuously available. This is used
to measure AVAIL (availability).

Note: an appropriate time unit should be carefully chosen.
Examples are calendar time, CPU time, and transactions.

72
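Measurement kind 2, for instance, can be derived from an event log of failure times. A small sketch with invented timestamps:

```python
# Sketch: deriving MTTF from an event log of failure times.
# The timestamps are invented for illustration; units are CPU hours.
failure_times = [120.0, 340.0, 610.0, 980.0]

# Inter-failure times: the gaps between consecutive failures.
inter_failure = [b - a for a, b in zip(failure_times, failure_times[1:])]

mean_ttf = sum(inter_failure) / len(inter_failure)
print(mean_ttf)  # (980 - 120) / 3, about 286.67 CPU hours
```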
Software Reliability Specification
Reliability requirements should not be expressed in an
informal, qualitative and un–testable way (i.e., requirement
statements should not be subjective, irrelevant or un–
measurable).
The steps for establishing a reliability specification are:
1. For each identifiable sub–system, identify the different types of
system failures, and analyze the consequences of these
failures.
2. Categorize each failure into an appropriate class: e.g., transient,
permanent, recoverable, unrecoverable, non–corrupting,
corrupting, etc.
3. Define the reliability requirement using an appropriate metric.

73
Software Reliability Specification – Example
Example: An ATM banking system.

Failure Class: Permanent, Non–corrupting
Example: The system fails to operate with any ATM card.
The system must be restarted.
Reliability Metric: ROCOF, 1 occurrence in 1000 days.

Failure Class: Transient, Non–corrupting
Example: The magnetic stripe of an ATM card cannot be read.
Reliability Metric: POFOD, 1 in 1000 transactions.

Failure Class: Transient, Corrupting
Example: A transaction across the network causes database
corruption.
Reliability Metric: Should never happen in the lifetime of the
software.
74
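One way to make such a specification checkable is to encode the thresholds as data and compare measurements against them. A hypothetical Python sketch (the class names and measured values are invented; the limits come from the ATM table):

```python
# Hypothetical sketch: the ATM reliability specification as data.
# "Should never happen" is encoded as a limit of 0.0.
SPEC = {
    "permanent_non_corrupting": ("ROCOF", 1 / 1000),  # <= 1 per 1000 days
    "transient_non_corrupting": ("POFOD", 1 / 1000),  # <= 1 per 1000 transactions
    "transient_corrupting":     ("POFOD", 0.0),       # should never happen
}

def meets_spec(failure_class, measured):
    """Check a measured metric value against the specified limit."""
    metric, limit = SPEC[failure_class]
    return measured <= limit

# Invented measurements for illustration:
print(meets_spec("transient_non_corrupting", 0.0004))  # True
print(meets_spec("transient_corrupting", 0.00001))     # False
```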
Computer–Aided
Software Engineering (CASE)

Chapter 31, Pressman


What is CASE?
Computer–Aided Software Engineering (CASE) tools assist
software engineering managers and engineers in every
activity associated with the software process.
In other words, CASE tools “automate” the manual activities
in the software process.
CASE tools automate project management activities, manage
all work products produced throughout the process, and
assist engineers in their analysis, design, coding and test
work.

[Figure: the layers of software engineering – Tools, Methods,
Process, resting on A Quality Focus.]
76
Why CASE tools?
1. Software engineering is difficult. Tools that reduce the
amount of effort required to produce a work product or
accomplish some project milestone have substantial
benefits.
2. Tools can also provide new ways of looking at software
engineering information – ways that improve the insight of
the engineers doing the work. This leads to better decisions
and higher software quality.

77
IPSE / CASE
A good workshop for any craftsman (a mechanic, a carpenter, or
a software engineer) has three characteristics:
¡ A collection of useful tools.
¡ An organized layout that enables the tools to be found quickly
and used effectively.
¡ A skilled person who understands how to use the tools in an
effective manner.
The workshop for software engineering is called an Integrated
Project Support Environment (IPSE) and the tools that fill the
workshop are collectively called Computer–Aided Software
Engineering (CASE).
CASE tools help to achieve better quality software by:
¡ automating the manual activities in the software process.
¡ improving engineering insight.

78
Building Blocks for CASE
CASE tools can be as simple as a single tool (e.g., a risk
analysis tool) or as complex as an entire environment that
encompasses tools, a database, people, hardware, a network,
an operating system, standards and others.
The building blocks consist of:
¡ Individual CASE tools.
¡ An integration framework to integrate various CASE tools.
¡ Portability services for using or migrating the CASE tools
across different platforms.

[Figure: environment architecture layers – CASE Tools,
Integration Framework, Portability Services, Operating
System, Hardware Platform.]
79
CASE Tools Integration
At the low end of integration is the individual tool (point
solution), e.g., a Web development tool.
Integration improves slightly when individual tools facilitate
data exchange, through standard output formats or via a
bridge.
Single–source integration occurs when a single vendor
integrates a number of tools into a package. But the closed
architecture precludes the addition of tools from other
vendors.
The high end of integration is the Integrated Project Support
Environment (IPSE) standard, where CASE tool vendors build
tools that are compatible with each other.
80
A taxonomy of CASE tools
[Figure: tool types surrounding a central CASE database.]
Business process engineering tools
Process modeling & management tools
Project planning & management tools
Risk analysis tools
Requirement tracing tools
Metrics and measurement tools
Documentation tools
Quality assurance tools
SCM tools
Analysis and design tools
Prototyping tools
Programming tools
Web development tools
Integration and testing tools
Re–engineering tools
and many others
81
Functional Classification of CASE tools
Tool Type Examples
Management tools PERT tools, estimation tools.
Editing tools Text editors, diagram editors.
Configuration Version control system, change management
management tools system.
Prototyping tools Visual languages, GUI generators.
Method support tools Design editors, data dictionary, code generators.
Language tools Compilers, interpreters.
Program analysis tools Cross–reference generators, analyzers, profilers.
Testing tools Test data generators, testing frameworks.
Debugging tools Debuggers.
Documentation tools Page layout tools, document generators.
Re–engineering tools Program re–structuring tools
82
CASE Repository
The CASE repository (or CASE database) is the center for all
software engineering information.
It performs two roles:
¡ traditional database management: e.g., non–redundancy,
data integrity, high–level data access and query, transaction
control, security, etc.
¡ additional functions to support software engineering: e.g., storage
of sophisticated data structures, semantic–rich tool interfaces,
process/project management, versioning, dependency tracking
and change management, requirement tracing, configuration
management, audit trails.

[Figure: an objects database with integrated services.]
83
THE END
