
Chapter 6 Information and Software Quality Management

What is Software Quality Management?


Quality is a goal for an entire information system, not just the
software portion, and it must be managed accordingly.

Effective quality management assures that a process is established and followed to build quality into Information Systems. It is NOT the purpose of quality management to "inspect in" quality after a product is built.

By the end of this lesson you will be able to identify commonly-used definitions of software quality and list software quality “ilities” most applicable to different categories of software-intensive systems.

Specifically, you will learn to:

• Recognize common definitions of software quality

• Define three different perspectives on software quality

• Describe ways of determining software “size”

• Define Error Density and its role as a software quality factor

• Recognize typical software quality factors and “ilities”

• Identify what influences the choice of software quality factors

• Define Software Quality Assurance and outline its key processes

• Describe methods and techniques that can influence software quality

Software Quality Management Program

A quality management program includes the following activities:

• Monitor, measure, analyze, control and improve processes

• Reduce product variation

• Measure/Verify product conformity

• Establish mechanisms for feedback on product performance


• Implement an effective root cause analysis and corrective action system

The Quality Management Team


Many of the quality management techniques originated in software
development, and were found to be of value throughout the acquisition of an
information system.

As previously noted, it is not the purpose of quality management to "inspect in" quality; so, to enhance the success of obtaining high quality systems, quality management teams should be formed.

To be effective, a quality management team must have an independent reporting line to senior management.

The quality management team must be big enough to monitor the performance of all key planning, implementation, and verification activities. This generally requires a team of about 5% of the size of the development team.

Software Quality
Engineering quality into a product requires an effective defect prevention
program that engages accurate defect detection and analysis to determine how
and why defects are inserted. In other words, it is not enough just to find
defects. We need to fix the processes that generated these errors as well.

Software is a complex, changeable, conformable, and invisible product.

Because of this, trying to define software quality and measuring it accurately can be like nailing Jell-O® to a wall.

Nevertheless, the success of a software-intensive project is determined largely by the quality of its software.

Poor Software Quality


Poor-quality software in the DoD:

• Has caused cost overruns and late fielding of mission-critical systems

• Can impact national security and the safety of operational personnel

This illustrates why defining software quality and understanding the issues associated with its definition and measurement are crucial software acquisition management skills.
Perspectives on Software Quality part (1)
Perspectives on Software Quality
Software quality is a difficult thing to define and measure. What constitutes
"quality" depends largely on your perspective, i.e., your relationship to the
product in question. That perspective drives the particular quality attributes that
are important to a project stakeholder.

Learning Objectives

This topic provides an overview of software quality and includes:

• Common definitions of software quality

• Different perspectives or viewpoints on what constitutes software quality

Once you have completed this topic, you will be able to apply different criteria and
perspectives to define what constitutes quality in software.

Software Quality Challenges


Defining and measuring software quality can be difficult for Program Offices
because:

• Software is an invisible product—it is hard to measure what cannot be seen.

• Quality is often an unknown commodity until the testing phase, but if you
only measure quality at the end, it is often too costly to fix.

• There are a number of different ways that "software quality" can be defined.

Definitions of Software Quality


Some of the more common ways that "software quality" can be defined are shown below.

Five Definitions of Software Quality

1. User Need—Ability to satisfy given user needs.
2. Combination of Attributes—The degree to which the software possesses a desired combination of quality attributes.
3. Conformance—Conformance to requirements.
4. Measures—A set of measurable characteristics that satisfy the buyer, the users, and the maintainers.
5. No Failures—The ability of a delivered product to support the user's needs without failure, problems, and errors.
Software Quality Perspectives
Software quality can be viewed from many different, equally valid perspectives. Three are described below. Examine these perspectives to see how the definition of software quality is influenced.

1. Contractual Perspective—The Contractual Perspective on software quality is sometimes referred to as a "buyer-seller" perspective. In this perspective of quality,
if the developed software meets the "requirements" specified in your contract, it is
by definition "Quality Software." The Defense Acquisition Guidebook (DAG)
specifically states that "the supplier is responsible for the quality of its product. The
government should allow the developer to define and use their preferred quality
management system that meets required program support capabilities".
2. Attributes Perspective—The Attribute Perspective is a generic model of software
quality that is based on a hierarchical, goal-oriented quality framework. This
framework is used as a disciplined tool both to help to define the components of
software quality, as well as to enable its measurement.
3. User-Focused Perspective—This perspective states that those software products
that best satisfy user needs are quality products.

The Contractual Perspective

This perspective is embodied in many software development standards.

Such standards are used in a software acquisition environment. They help define
how the "buyer" and "seller" interface, interact, and manage the project.

For example, some standards define software quality as "the ability of software
to satisfy its specified requirements."

A key to success is to incorporate systems engineering/design quality into the product by defining the product or service quality requirements from the beginning and then providing the supplier with the maximum degree of flexibility to meet these requirements.

The Attributes Perspective Framework


This perspective allows software quality to be quantified and measured.

There are three levels in the framework.

At the top level are software quality factors. These are project-specific quality
characteristics that are important to an acquirer or other customer of the system. For
example, the guidance control software in the LRATS E-Sentinel missile system is both
safety and mission-critical. Therefore RELIABILITY would be a key top-level software quality
characteristic!

The middle level of the framework includes the specific software-oriented attributes that are technically accepted as best supporting the top-level quality requirement. For example, Coding Simplicity, Module Consistency, and Error Handling are criteria that, when properly applied to software design and coding, are known to improve overall reliability.
At the bottom are technical measures, some of which can be programming language-specific, that determine the degree to which the quality attributes are actually present. For example, Coding Simplicity depends on limited use of branching and looping commands, single entry and exit points in routines, etc. These can be measured in the software by automated tools, and an assessment of the degree of Coding Simplicity made.

The User-Focused Perspective


In the User-Focused Perspective, "the user" is not strictly limited to the end
operator of the delivered product. "Users" can be anyone with legitimate
interests in the delivered software. As such, users can include buyers, operators,
suppliers, maintainers, testers and other system stakeholders.

Quality Perspectives and Priorities


What project stakeholder(s) hold an attributes perspective on software quality?

• Logistician
• Systems Engineer
• End User

Each quality perspective embraces a different viewpoint of what is most important for
software quality. Each project stakeholder is likely to have different software quality
priorities based on his or her perspective.

• Logisticians, who are responsible for software life-cycle support, would likely have
an Attributes perspective. Quality attributes, such as maintainability and
supportability, would be among their top priorities.

• Systems Engineers have concerns about the ability of the software to work on a
wide variety of target hardware. From a systems perspective, they would be
concerned about all aspects of software quality but would consider portability and
transportability as particularly important.

• End Users, who use the system in a combat environment, operate from the User-
Focused perspective. For them, performance and reliability are key software quality
attributes.

• Software Suppliers work and are paid under the terms and conditions of a
contract. Quality from their perspective means meeting contractual requirements
Information and Software Quality Management Part (2)
Software Size
Some software quality measurements are evaluated relative to the "size" of the
software product. It is, therefore, important to understand the concept of
software size in order to make sense of those measurements.

Software size can be determined in several ways. The most commonly used
methods are:

• Source Lines of Code (SLOC) Counts

• Function Points (FP) Counts

Learning Objectives

This topic examines the SLOC and FP methods of determining software size and
discusses the issues associated with them.

Once you've completed this section, you will be able to describe these two ways of
measuring software size.

Source Lines of Code (SLOC)

A large body of Software Engineering quality data exists based on using SLOC, or
thousands of Source Lines of Code (KSLOC) to determine quality attributes, such as
Error Density. An example of Error Density values expressed in KSLOC (E/KSLOC) for
several types of generic systems is displayed in a separate chart.

The chart is adapted from: Donald J. Reifer, Reifer Consultants, Inc, Industry Software
Cost, Quality and Productivity Benchmarks "Error Rates Upon Delivery by Application
Domain."

Source Lines-of-Code (SLOC):

• Is a low-level, programming language-dependent measure of software size.

• May be automatically calculated using a variety of Computer-Aided Software Engineering (CASE) tools.

• Appears easy to calculate but in reality can be difficult to determine accurately and consistently.
Computer-Aided Software Engineering (CASE)

Computer-Aided Software Engineering (CASE) refers to the use of computer-based tools to do software engineering. It includes software tools used in any and all phases of developing an information system, including analysis, design, and programming.

Counting SLOC
Some important factors to consider when measuring software size by SLOC:

• SLOC counts are dependent on the specific programming language being used.

• Counting rules can vary by supplier, language used, and project.

• Precautions must be taken if direct comparisons of SLOC measures are to be made across projects. Because of differences in SLOC counting rules, such comparisons can be highly inaccurate.

In the English language, many rules exist for "proper usage." For example, the
first sentence in a paragraph is indented, the first word in a sentence is
capitalized, and a sentence ends with a punctuation mark.

Therefore, counting punctuation marks such as "." or "!" or "?" would give an
accurate count of the number of sentences in a segment of text.

SLOC counting uses a similar approach.

Programming languages use rules of syntax analogous to those used in English. These are used to help count SLOC.

For example:

• Many languages (e.g., C++, Pascal, Ada) use a ";" to end a single line-of-code.

• Special characters identify "comments" (e.g., a leading "--" in Ada). Though ignored by the computer, comments help make programs understandable.

Programming language format characteristics such as these form the basis for
SLOC counting. However, other considerations regarding how they are actually
interpreted apply as well.
Estimating SLOC
The following page contains an Ada code module (named "Ignition countdown") that might be used as part of the Long Range Acquisition & Targeting System (LRATS) E-SENTINEL missile system. This simplified module is part of a much larger program that activates the missile warhead.

SLOC counts can differ dramatically depending on which counting criterion is used. Some of these criteria (many other equally valid ones exist!) and perspectives on their use include:

• Counting logical lines: only code that has meaning (is “executable”) to the
computer is counted since that directly impacts performance, memory use and
error density.

• Counting all lines: includes physical lines, comments and blank lines. Since
comments improve maintainability and blank lines improve legibility, the effort
taken by programmers to do this is reflected in the SLOC count.

• Counting physical lines: Comments and blank lines are excluded. However, programmers frequently use inactive "debug" code to improve later testability; under this criterion, those lines would not be counted.

These three counting perspectives are illustrated in the following example.

Counting Logical Lines of Code includes only the lines of code that have meaning to the computer. Counting All Lines includes every line, even blank ones and comments. Counting Physical Lines excludes comments and blank lines from the count.
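The counting perspectives above can be sketched in a few lines of Python. The Ada fragment below is hypothetical, standing in for the "Ignition countdown" module, and the counter follows the conventions described earlier: "--" marks a comment and ";" ends a statement.

```python
def count_sloc(source: str, comment_prefix: str = "--") -> dict:
    """Count SLOC several ways for an Ada-style source listing."""
    lines = source.splitlines()
    blank = sum(1 for line in lines if not line.strip())
    comments = sum(1 for line in lines if line.strip().startswith(comment_prefix))
    return {
        "all": len(lines),                          # every line, comments and blanks included
        "physical": len(lines) - blank - comments,  # comments and blank lines excluded
        "logical": source.count(";"),               # crude: one ';' terminator per statement
        "blank": blank,
        "comments": comments,
    }

# Hypothetical fragment, standing in for the "Ignition countdown" module
sample = """\
-- Ignition countdown (illustrative fragment)
procedure Countdown is
begin

   Arm_Warhead;   -- hypothetical call
end Countdown;
"""
print(count_sloc(sample))
```

Even on this six-line fragment the totals diverge; on real modules the spread between counting methods can approach two-to-one.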

Many Ways to Count


The previous example illustrated three different ways to count SLOC on a
very small Ada code example. Even in this limited example, a variation of
nearly two-to-one (9 lines versus 16 lines) in SLOC count is possible.

Research done by the Software Engineering Institute (SEI) has identified over 50 ways to count SLOC.
Software Engineering Institute (SEI)
The Software Engineering Institute (SEI) is a federally funded research and development
center sponsored by the DoD. It provides leadership in advancing the practice of
software engineering to improve the quality of software-intensive systems. The SEI is
located in Pittsburgh, PA, and staffed by personnel from industry, academia, and
Government.

Choosing a Method

When choosing a SLOC counting method, it is best to remember:

• There is no single right way to count SLOC.

• Program Management Offices (PMOs) and suppliers must agree on the counting method at the onset of a project and keep it constant.

• Without clear definitions, or enforcement of the selected definitions, SLOC counts, and the measures that depend on them, will be of little value.

• Measurement of Error Density is critically dependent on the accuracy of the SLOC counts.

SLOC counts ultimately impact all three of the classic Cost, Schedule, and
Performance measures commonly used to judge project success.

Each measure is impacted by SLOC as follows:

• Cost—Software-intensive systems typically include some measure of cost expressed as dollars per Source Line-of-Code, or $/SLOC.

• Schedule—Software-intensive systems usually include a productivity measure expressed in units of SLOC per Staff Month, or SLOC/SM.

• Performance—For software-intensive systems, product quality and performance are directly related, because error-ridden software can directly impact a system's operational performance. In this regard, software quality is measured by Error Density, where software size is expressed in thousands of SLOC (KSLOC):

ERROR DENSITY = ERRORS / KSLOC
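As a sketch of how SLOC feeds the cost and schedule measures, the two ratios can be computed directly. The project figures below are hypothetical.

```python
def cost_per_sloc(total_cost_dollars: float, sloc: int) -> float:
    """Cost measure: dollars per Source Line-of-Code ($/SLOC)."""
    return total_cost_dollars / sloc

def sloc_per_staff_month(sloc: int, staff_months: float) -> float:
    """Schedule/productivity measure: SLOC per Staff Month (SLOC/SM)."""
    return sloc / staff_months

# Hypothetical project: $12M spent, 200 KSLOC delivered, 1,000 staff months
print(cost_per_sloc(12_000_000, 200_000))    # 60.0 ($/SLOC)
print(sloc_per_staff_month(200_000, 1_000))  # 200.0 (SLOC/SM)
```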

Functional Complexity

Suppose you counted the number of rivets in an aircraft. Would a greater number of
rivets make a more complex aircraft? Select the more complex aircraft.

Two images of the side of an aircraft. One has more rivets than the other.

No matter which image you choose, the feedback says "The number of rivets does not
necessarily correspond to the complexity of an aircraft! Similarly, a high SLOC count
does not always indicate a more complex software package."

Function Points
If you want to correlate a system's functional complexity to the ultimate
size of its software, another metric, called Function Points (FPs), can be
used. As opposed to SLOC, which is a low-level measure of size, Function
Points are based on what a system actually does.
Function Points (FPs):

• Provide an alternative way to calculate software size

• Estimate software size based on what the program does (analysis of its high-level requirements)

• Increase in number as the software tasks increase in complexity

The Aircraft Example


Using the previous aircraft example, a Function Point analysis would evaluate the
aircraft's capability, not its rivet count. Now, select the more complex aircraft.

Two images: a crop duster and an air superiority fighter.

No matter which image you click, the feedback says, "This approach would result in the air superiority fighter having a much higher measure of complexity over that of a crop duster, even though both might have approximately the same number of rivets."

Measuring Function Points


Function Point measures:

• Are based on a set of rules that determine a system's size and complexity

• Use the types and amounts of system-level requirements as input to the calculation

• Should be identical for two systems that perform the same tasks

Function Points are weighted sums of counts of different factors. These factors
relate to overall system requirements and can include:

• Number of Inputs

• Number of Outputs

• Logic (or Master) Files Used

• Number of Inquiries
• Number of Interfaces

These factors are estimated and then multiplied by different complexity-based weighted "adjustment" factors, resulting in an overall Function Point count for the system being measured.
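A minimal sketch of that weighted sum in Python. The requirement counts are hypothetical, and the weights are illustrative single values; the actual Function Point method assigns separate low/average/high complexity weights per factor.

```python
# Illustrative per-factor weights (the real method varies these by complexity)
weights = {"inputs": 4, "outputs": 5, "files": 10, "inquiries": 4, "interfaces": 7}

# Hypothetical requirement counts for a system being sized
counts = {"inputs": 20, "outputs": 15, "files": 6, "inquiries": 10, "interfaces": 4}

function_points = sum(counts[k] * weights[k] for k in weights)
print(function_points)  # 283
```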

Comparing SLOC to FPs


As an example comparison of Function Point versus SLOC counts, consider
what you are doing right now.

You are probably running this courseware on a computer running an operating system, such as Microsoft Windows XP®.

Consider that Microsoft Windows XP®, written in the programming language C++, has approximately:

• 40,000,000 SLOC

• 730,000 Function Points

Even though SLOC and Function Points are based on radically different
approaches, some researchers have been able to determine approximate
equivalents for two counting methodologies based on the programming
language used.

The Function Point conversion is based on empirical relationships discovered to exist between source code and Function Points in all known languages. This method is based on tables of average values. It is useful for doing retrospective studies of projects completed long ago, and for easing the transition to Function Point metrics for people who are familiar with lines-of-code metrics.

Click the image of a ruler to see a chart showing the average number of statements required in each programming language to accomplish one Function Point:

Assembler = 320, C = 128, COBOL = 107, Ada = 71, DB Languages = 40, Object Oriented = 29, Query Languages = 25, Generators = 16.
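Using the averages from the chart, a rough "backfired" SLOC estimate is just the Function Point count times the language's statements-per-FP value. This is a sketch: conversion tables vary by source, and the 500 FP figure below is hypothetical.

```python
# Average source statements per Function Point, from the chart above
stmts_per_fp = {
    "Assembler": 320, "C": 128, "COBOL": 107, "Ada": 71,
    "DB Languages": 40, "Object Oriented": 29,
    "Query Languages": 25, "Generators": 16,
}

def estimate_sloc(function_points: int, language: str) -> int:
    """Rough SLOC estimate from a Function Point count."""
    return function_points * stmts_per_fp[language]

print(estimate_sloc(500, "Ada"))  # 35500
```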

Advantages of Measuring FPs

Compared to SLOC, Function Points, which are manually computed as a weighted sum of factors, have the advantages of being:
• Language Independent: FPs do not depend on the specific computer
programming language used.

• Solution Independent: FPs are not impacted by the technology used.

• Determined Early: FPs are established early in the system life cycle and
thus useful for cost/schedule estimating.

• Understandable: FPs are an intuitively understandable concept.

Disadvantages of Measuring by FPs


Compared to SLOC, Function Points, which are manually computed as a weighted sum of factors, have the following disadvantages:

• Training: Measuring FPs requires that analysts be trained to exercise judgment.

• Manual Calculation: FPs must be counted manually, as automatic Function Point counters do not exist.

• Limited Application: Measuring FPs may not work well for computationally-intensive software as typically found in DoD embedded systems.

• Lack of Tracking Support: Unlike SLOC, FP counts do not support detailed project tracking.

SLOC vs. FP: Which is Best?

Many heated technical arguments have occurred in trying to answer the question, "Should
SLOC or Function Points be used?" The best answer, as illustrated below, is to use both.

Two robots labeled SLOC and FP boxing. The FP boxer is getting the better of the fight.

Function Points are useful during the early stages of a project, when top-level requirements
have been defined, but little is known about the detailed design. At this stage, SLOC takes a
beating! Function Points can be accurately assessed, but SLOC estimates may be
significantly in error.

The SLOC robot begins to fight back and overtake the FP robot.

SLOC makes a comeback as the software design matures and more details are defined. As
software modules are coded, estimated SLOC values can be compared to actual counts, and
adjusted accordingly. Because SLOC is a low-level measure, it enables detailed project
tracking as programming progresses.
What statement describes the SLOC method of determining software size?

It is primarily a 'low-level' measure.

Error Density Part (3)


Error Density is the most commonly-used indicator of software quality. It is
important to understand how Error Density is determined, because this quality
measure is only as accurate as the values used to calculate it, and it can be
biased in subtle ways.

Error Density measurements impact members of the Program Management Office IPT in various ways. Error Density values are used as the basis for many important project decisions, such as assessing development risks, judging contract performance, and estimating sustainment costs, to name a few.

Learning Objective

This section introduces the most commonly used software quality measure, Error
Density.

Error Density is:

• The ratio of total number of errors to software "size"

• An indicator of software quality

• Generated during software testing

• Deceptively simple in concept

Once you have completed this topic, you will be able to define Error Density and its role
as a software quality factor.

The Error Density Formula


Error Density is calculated by dividing the number of errors found during
testing by the software size. Accurate and consistent measurements of the
numerator (Number of Errors) and the denominator (Size) are:
• Essential for obtaining valid Error Density values

• Especially important to the Project Management Office (PMO) team because Error Density values can impact many functional disciplines

How is Error Density Used?


Listed below are examples of how the Project Management Office (PMO)
can use Error Density values:

• Contracting Officers: To establish criteria for an award fee or


incentive contract.

• Logisticians: To determine software support costs by estimating


errors remaining in fielded software.

• Testers: To determine readiness to start critical testing phases,


such as Operational Testing.

• Systems Engineers: To assess development schedule risks and to


estimate software and system reliability.


Counting Errors
Software errors can be counted in a variety of ways. In many Department
of Defense (DoD) projects, a Software Problem Report (SPR) or a
Software Test Report (STR) is generated to:

• Formally document software errors discovered during testing

• Rank problems by their severity into priority categories, usually from 1 (catastrophic errors) to 5 (cosmetic errors)

Error Counts and Quality


Using data from SPRs, software error counts can be plotted over time by
error category (e.g., 1 through 5). These charts can:

• Provide an overall insight into the current level of software quality

• Show possible trends in software quality, by the slope of the line

Error Density = Number of Errors / Software Size
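The formula is simple enough to sketch directly. The SPR counts below are hypothetical, grouped by the priority categories described above.

```python
def error_density(errors: int, sloc: int) -> float:
    """Error Density in errors per KSLOC: Errors / (SLOC / 1000)."""
    return errors / (sloc / 1000)

# Hypothetical SPR counts by priority category (1 = catastrophic ... 5 = cosmetic)
spr_counts = {1: 2, 2: 5, 3: 18, 4: 30, 5: 45}
total_errors = sum(spr_counts.values())      # 100 errors in total
print(error_density(total_errors, 250_000))  # 0.4 errors per KSLOC
```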

Summary
Error Density is the most commonly used measure of software quality. It
is calculated by dividing the number of errors present by the software
size. It is important to calculate Error Density values as accurately as
possible, because the PMO uses it as the basis of many project decisions.

Additionally, Error Density:

• Is a common software quality measure because it is intuitively understandable and related to the factor of "correctness."

• Is expressed as "Errors per KSLOC" or "Errors per Function Point."

• Tends to measure quality later in development, when fixes are expensive.

• Should be used with other software quality measures.

Software Quality Factors part (4)


Important research to identify dimensions of software quality beyond
Error Density was sponsored by the U.S. Air Force's Rome Air
Development Center, now the Information Directorate of Air Force
Research Laboratory, Rome NY.

Their research discovered that there were many variables making up software quality. Most software-intensive projects do not have the schedule or budget to accommodate tracking all of these variables. It is, therefore, important to identify those that are critical to satisfy system requirements, and do a good job of measuring and tracking them.

Learning Objectives

This section discusses software quality factors and their attributes. Research has been able
to identify many software quality attributes and link them to various programming
techniques. By evaluating characteristics of the software itself, this linkage provides an
indirect way to quantify software quality.

When you have completed this topic, you will be able to list and define typical information
and software quality factors and ways that they are measured.

The Attribute Perspective


One way to approach software quality is from an Attribute Perspective. The
Attribute Perspective:

• Uses a framework of quality factors

• Breaks factors into various combinations of relevant quality attributes

• Measures each individual attribute

• Yields an overall quality assessment by "rolling up" each attribute measure

The Attributes Perspective framework has three levels.

At the top level are project specific quality factors that are important to the acquirer.

The middle level includes software-oriented attributes, which technically support top-level
quality requirements.

At the bottom are technical measures that determine the degree to which quality attributes
are present.

Selecting Quality Factors


The challenge lies in determining which factors and attributes are relevant to a
project. When selecting the quality attributes to measure, keep the following in
mind:

• It is better to assess a select few than to try to assess all attributes that
have been defined.

• Consider the tradeoffs associated with a given project. For example, cost
and schedule pressures may keep you from using some attributes.

Software Quality Research

Many individuals from a variety of organizations and disciplines have conducted research on
software quality attributes.

This research, which started in the late 1970s and still continues, focuses on answering the following questions:

• What are the most relevant software quality attributes?

• How should these software quality attributes be defined?

• What is the best way to actually measure software quality?

Results of research on software quality factors and their attributes are described in a Framework Guidebook published by what was then the Rome Air Development Center. An international standard, ISO 9126, also provides similar guidance.

Many of these quality factors have a suffix of "ility," so quality attributes are often
referred to as the software quality "ilities."

Software Quality Factor Definitions

Correctness—Does the software do what I want?

Portability—Will I be able to use it on another machine? Relative effort to transfer software to another system environment.

Efficiency—Will the software run on my hardware as well as it can?

Reliability—Does the software accurately do what I want all of the time? Extent to which the software will perform without any failures within a specified time period.

Expandability—Can I add new functions to the software?

Reusability—Will I be able to reuse some of the software? Relative effort to convert a software component for use in another application.

Flexibility—Can I change it?

Survivability—If some of the system breaks, will the software continue to function?

Integrity—Is the software secure?

Testability—Can I test it?

Interoperability—Will I be able to interface the software with another system? Relative effort to couple one system with another.

Usability—Can I run the software?

Maintainability—Can I fix it?

Verifiability—Can I validate what the software does?

Embedded

Embedded software is specifically designed into (embedded in) or dedicated to a weapon system. As an integrated part of the overall system, it performs highly specific functions. Embedded software functions as an integral part of the weapon and cannot readily support other applications without some form of significant modification.

• Efficiency

• Survivability

• Reliability

Automated Information Systems (AIS)

Automated Information Systems (AIS) are defined as a combination of computer hardware and software, data, or telecommunications. They perform functions such as collecting, processing, transmitting, and displaying information.

• Portability
• Maintainability
• Usability

Command, Control, Communications and Intelligence (C3I)

Command, Control, Communications and Intelligence (C3I) Systems encompass command, control, communications, and intelligence software. These systems communicate, assimilate, coordinate, analyze, and interpret information. They also provide decision support to military commanders.

• Integrity
• Expandability
• Interoperability

Some software quality factors are "universal" and used across several categories of software-intensive systems. Other software quality factors tend to be emphasized more in one type of system because of its technical characteristics and operational requirements.
The following lists outline typical quality factors for each system category. Quality factors for Embedded Systems:
• Efficiency
• Reliability
• Survivability

Quality factors for AIS:


• Usability
• Portability
• Maintainability

Quality factors for C3I Systems:


• Integrity
• Expandability
• Interoperability

Measuring Software Quality Attributes


After selecting key software quality factors, suppliers must:

• Accurately measure the attributes that make up each factor (Unlike Error
Density, determining values for those other quality attributes can be
highly technical.)

• Make these quality determinations part of their Software Quality Assurance Program

In many cases, the specific value of a software attribute is critically dependent on the way:

• The software module is designed

• The programmer used (or chose not to use) technical features of the
language when coding the module

To measure software quality, an analyst:

• Identifies the design techniques and programming language features that contribute to the particular quality attribute

• Determines the extent to which these features have been employed in the software

• Establishes an estimated "value" for the particular software quality factor (value may combine other subjective evaluations)

Maintainability Example

Software design impacts quality factors. For example, the number of comments in the code influences the program’s maintainability: the more comments, the easier the software is to maintain. In this example, "maintainability" can be estimated by calculating the percentage of commented lines of code and assigning a rating to it.
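As an illustration of that calculation, the sketch below computes comment density and maps it to a rating. The threshold values and the rating scale are invented for the example; a real project would take them from its own quality plan.

```python
def comment_density(source_lines):
    """Fraction of non-blank source lines that carry a comment.

    Assumes a language with '#' comments; counts both full-line
    and trailing comments.
    """
    code = [ln for ln in source_lines if ln.strip()]
    if not code:
        return 0.0
    commented = [ln for ln in code if "#" in ln]
    return len(commented) / len(code)

def maintainability_rating(density):
    """Map comment density to a rating (thresholds are illustrative)."""
    if density >= 0.30:
        return "high"
    if density >= 0.15:
        return "medium"
    return "low"

sample = [
    "# compute a simple checksum",
    "total = 0",
    "for b in data:  # accumulate each byte",
    "    total += b",
]
print(maintainability_rating(comment_density(sample)))  # 2 of 4 lines commented
```

The point of the sketch is only that the attribute is mechanically measurable; how densities map to ratings remains a subjective, project-specific decision.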
Style Guides
A supplier-produced Programming Style Guide is typically used by project
programmers. Style Guides:

• Provide details of recommended and allowable programming language usage

• Should be based on coding styles that contribute best to quality attributes

The following statements are true:

• Software quality factors are broken into attributes that are then measured.

• Project tradeoffs such as cost and schedule drivers can influence choice of
quality factors.

• Unlike Error Density, determining other software quality attributes can be highly technical.

• Programming Style Guides can help to foster software product quality.

Software Quality Assurance
Software Quality Assurance (SQA) refers to the activities suppliers perform
within their software development process to ensure that a quality product is
delivered.

The Department of Defense (DoD) does not dictate that any specific SQA system be used; however, there are some key activities and processes present in all effective Software Quality Management programs.

Learning Objectives
This section presents an overview of a Software Quality Assurance (SQA) program. It
includes:

• Some definitions of Software Quality Assurance

• The key goals and activities of SQA

• Guidelines for organizing an SQA team

• An example SQA Plan format

Once you have completed this lesson, you will be able to define SQA and outline its key
processes.

What is Software Quality Assurance?


Software Quality Assurance (SQA) is the term used to identify that portion of
a developer's quality management process that incorporates key quality
activities for the software development process.

An international standard, ISO/IEC 12207, Information Technology - Software Life Cycle Processes, states that:

"Software Quality Assurance (SQA) is a process that provides adequate assurance that the products and processes as used on the project conform to their specified requirements and adhere to their established plans. To be unbiased, quality assurance needs to have organizational freedom and authority from persons directly responsible for developing the software product or executing the project."

Key Goals and Tasks


The key goals and tasks of SQA are to:

• Assure the quality of delivered software and its associated processes

• Evaluate the adequacy of development and documentation processes

• Assure that corrective actions are initiated

• Maintain documented evidence that the software meets contractual requirements

• Provide acquirer access to any review of products and activities

• Ensure that noncompliance issues that cannot be resolved within the software project are addressed by senior management
DoD Quality Assurance Goal
DoD's goal is to prevent errors, not count them.

The Acquisition, Technology and Logistics (AT&L) Knowledge Sharing System, in discussing quality for DoD systems, states that it is much less costly to prevent defects than to "inspect in" quality later, after most of the development is done.

Flexible QA Process
A Program Manager (PM) cannot mandate the specific quality processes used by
a Supplier. The Defense Acquisition System Policies state that:

"A key to success is to incorporate systems engineering/design quality into the product by defining the product or service quality requirements from the beginning and then providing the contractor with the maximum degree of flexibility to meet these requirements....the PM shall allow contractors the flexibility to define and use their preferred quality management process that meets program objectives."
SQA Organizational Structure
Depending on the size of the project and the supplier's internal organization,
SQA activities may be performed by:

• A separate staff organization that performs SQA activities for several related projects
- or -

• An integral part of the design and development staff dedicated to a specific project

However they are organized, software standards recommend that the SQA
group should have a formal reporting channel to senior management. This
reporting channel:

• Must be independent of the Project Manager and software engineering group

• Is not used for routine matters

• Should be used to report quality concerns

Software Quality Assurance Plan (SQAP)


SQA reporting channels and other specific details of the SQA processes for a
particular project are typically documented in a Supplier's internal plans.

Sometimes suppliers will use a project-specific Software Quality Assurance Plan (SQAP) to document their SQA processes.

No matter how or where a supplier's SQA processes are documented, it is essential that they are:

• Documented somewhere

• Reviewed by the acquirer

• Approved by management

• Followed by the supplier

Organization

A plan should include information about the organization responsible for the quality management program, the relationship of the quality management organization to other organizational entities (such as configuration management), and the number and skill levels of personnel who perform the software quality management activities.

Furnished Items

The Quality Assurance Plan should identify Government furnished facilities, equipment,
software, and services (including manufacturer, model number, and equipment
configuration) to be used in quality management.

Schedule

The Quality Assurance Plan should provide a detailed schedule for quality management
activities. The schedule should include activity initiation, activity completion, and personnel
responsible, as well as key development milestones such as formal reviews, audits, and
delivery of items on the contract data requirements list.

Implementation

The plan addresses all tasks to be performed by the supplier in conducting a quality
management program. It includes the procedures to be used in the quality management
program and in conducting continuous assessments of the development process. It should
also describe the tools and measures that will be used to conduct the program.

Records
The Quality Assurance Plan includes the supplier's plans for preparing, maintaining and
making available for Government review, the records of each quality management program
activity performed.

Resources

The Quality Assurance Plan will identify any subcontractors, vendors, or other resources to
be used by the supplier to fulfill the development requirements of the prime contract.

SQAP Formats
Because DoD policy is to rely on a contractor's internal quality processes,
acquirers do not specify the format of a supplier's SQA plans.

There are a variety of formats a supplier's SQAP can take; IEEE standard 730
provides one commonly used format.

IEEE Standard 730 SQA Plan Outline

I. Purpose

II. References

III. Management

IV. Documentation

V. Standards, Practices, Conventions, and Metrics

VI. Reviews and Audits

VII. Testing

VIII. Problem Reporting and Corrective Actions

IX. Tools and Techniques

X. Code, Media, and Supplies Control

XI. Records Collection, Maintenance, Retention

XII. Training

XIII. Risk Management

Key Quality Assurance Activities


The Defense Acquisition System Policies list key quality activities that make up a thorough quality management process. They include:

• Monitor, measure, analyze, control and improve processes

• Reduce product variation

• Measure/Verify product conformity

• Establish mechanisms for feedback on product performance

• Implement an effective root cause analysis and corrective action system

These quality activities apply to any type of project, not only software-intensive ones.

Software Quality and Process Maturity


The existence of a viable software quality program is one of the first criteria that
must be met by a supplier to begin to improve software process maturity.

Better software quality is a natural by-product of higher levels of software process maturity, as illustrated by the graph on the right.

High values of Process Maturity are recommended as a "Best Practice" by acquisition policies. These publications emphasize “contracting with software suppliers that have domain experience in developing comparable software systems; with successful past performance; and with a mature software development capability and process.”

It is much less costly to prevent defects than to 'inspect quality in' after
development is done.

A key quality activity is to monitor, measure, analyze, control and improve processes.

Better software quality is a natural by-product of higher levels of software process maturity.

Information and Software Quality Management: Methods and Techniques

Introduction

A variety of management and technical methods can be used to help improve software quality. These range from simple, low-cost methods in universal use to specialized, highly complex techniques that are rarely used.

The choice of methods employed on a specific project depends on the size and
complexity of the software, the amount of new code developed, system and
software risks, and available budget and time.

Many such methods are available that impact software quality. This lesson surveys
some of them.


Learning Objectives

This section introduces methods and techniques that can be used by government
and industry to help improve software quality. Once you have completed this topic,
you will be able to describe them and how they can be used to improve software
quality.


Eight Methods and Techniques

This section introduces eight methods used in industry and government to assure
quality software products.

The eight methods we will discuss are listed below:

1. Peer Reviews

2. Walkthroughs

3. Formal Inspections

4. Cleanroom

5. Formal Specifications

6. Reviews and Audits

7. IV&V

8. Developmental and Operational Testing


Method 1: Peer Reviews

Peer reviews are normally conducted by a supplier. Upon completion of a work product (requirements, design, code, test procedure), the author, after desk checking their work, asks one or more peers (other programmers with similar domain experience) to review it for errors. Changes made to the original product as a result of a peer review may or may not be subject to the configuration management process, depending on the supplier’s software development process.



Method 2: Walkthroughs

Sometimes called an "informal inspection," a walkthrough is an SQA method conducted by the supplier. Walkthroughs are performed by a group of people who have no formal training in inspection techniques. There are no formal participant roles, other than the product author, who drives the process. No formal checklists are maintained, and no follow-up is required to verify that fixes have been made.
Informal inspections provide no guarantee of future quality or efficiency.



Method 3: Formal Inspections

Formal inspections are performed on small units of requirements documentation, design documentation or actual code, prior to testing. Sometimes they are called "Fagan Inspections," named for Michael Fagan, who first developed this process while at IBM.

The formal inspection process uses checklists and requires follow-up to ensure that
the defects and errors found are corrected. This process assures fewer future
defects and leads to improved process efficiency. Metrics on the number and type
of defects found, and the time spent in inspection are collected and recorded.


Method 3: Formal Inspections, Cont.

Inspectors are formally trained and should be neutral, not involved in the product
development. The product author's role in the process is only to answer questions
and provide clarification to the inspection team. Each member of the inspection
team has a defined role.

The READER reads the material being inspected. The MODERATOR controls the
process flow of the inspection. The AUTHOR answers technical questions. The
RECORDER takes action notes.

Other participants ("Inspectors") assist in these roles and participate in reviews of materials being inspected.


Method 3: Formal Inspections, Cont.

Formal inspections are conducted by the supplier. When performed properly, a formal inspection can find 60-90% of defects prior to testing. These results are dependent on adherence to a strict process, formal training of the participants, and assignment of adequate resources to support the process.

Formal Inspections are effective in detecting errors early in the software development process, when they are the easiest and cheapest to remove. A general rule of thumb is that errors which cost $1 to fix at the requirements stage will cost approximately $10 at the design stage, $100 at the coding stage, and over $1,000 to fix if they remain undetected until Operational Testing.
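That rule of thumb amounts to roughly a tenfold cost growth per life-cycle stage. The sketch below makes the arithmetic explicit; the stage list and the flat 10x factor are simplifications of the quoted figures, not a measured cost model.

```python
STAGES = ["requirements", "design", "coding", "operational testing"]

def fix_cost(stage, base_cost=1):
    """Approximate cost to fix a defect found at `stage`, growing
    tenfold per stage after requirements (rule of thumb only)."""
    return base_cost * 10 ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage}: ${fix_cost(stage)}")
```

A defect caught in inspection at the requirements stage therefore avoids, on this rule of thumb, about a thousandfold cost penalty relative to one that survives into Operational Testing.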



Method 4: Cleanroom

The cleanroom is a theory-based, team-oriented approach to the development and verification of ultra-high reliability software systems. It is designed to improve productivity through statistical quality control. It combines practical new methods of specification, design, correctness verification, and statistical testing for certifying software quality using a process based on incremental development.

The goal of cleanroom software engineering is defect prevention, rather than defect
removal. Proof of correctness is used to prevent defects. The emphasis shifts from
removing defects from software products, to preventing the introduction of defects
into the products.



Method 4: Cleanroom, Cont.

The objectives of the cleanroom approach are to engineer software products under statistical quality control, using mathematical verification rather than debugging, and to certify quality statistically through user testing at the system level and reliability predictions (e.g., mean time between failures).

Unit testing and debugging by programmers are not part of the cleanroom methodology, because it is claimed that they compromise the correctness of the original design and introduce complex software defects from the "tunnel vision" inherent in the debugging process. Under proper conditions, the cleanroom approach has been shown to remove as many as 90% of all defects prior to initial testing.

Method 5: Formal Specifications

Formal specifications use mathematical techniques to specify requirements, design, or code. This approach allows requirements to be verified with mathematically based techniques such as proof of correctness. Additionally, such specifications can help in modeling and simulating key parts of the system.

The cleanroom approach is based on the use of such formal specifications. Because these specifications are algebraic in nature, early error detection is improved.

Because of cost, formal specifications are rarely used. Requirements must be well understood and the system must be critical enough to justify a substantial investment in time and resources. Some DoD nuclear-critical and cryptographic systems may fall into this category.



Method 6: Reviews and Audits

Periodic reviews and audits provide another way to gain insight into the quality of a
software product. They involve both the acquirer and the supplier in the process.
Audits are performed late in the development cycle, when there is an actual product
to be evaluated. As such, they are more useful as a final quality check than as an
error prevention technique.

Reviews may be formal, such as a Preliminary Design Review or Critical Design Review, or informal, such as Integrated Product Team (IPT) meetings and In-Process Reviews. Reviews provide an opportunity for the supplier to present the status of a program to the acquiring organization, and sometimes to the user community as well.

Popup Text:


Preliminary Design Review

A Preliminary Design Review (PDR) is conducted to determine whether the preliminary design is ready to be committed to detailed design.

This review is conducted for each Configuration Item (CI) or aggregate of CIs. Risk
management actions and the results of risk mitigation activities are evaluated. A
system level PDR may be conducted upon completion of all CI PDRs.

Critical Design Review

A Critical Design Review (CDR) is conducted when detailed design is complete.

For a Supplier using a waterfall approach, this is the point when the detailed design
documentation is released to fabricate, integrate and assemble hardware
qualification units and to code and integrate the software qualification units.
A system level CDR may be conducted after the CI CDRs have been completed to
review the progress of system development.


Method 6: Reviews and Audits, Cont.

Reviews and audits are agreed upon during contract negotiation. The use of
Integrated Project Teams (IPTs) may eliminate the need for many traditional formal
reviews.

Other categories of common reviews and audits, performed as part of the systems
engineering process, are listed below.

 Functional Configuration Audit

 Physical Configuration Audit

 Subsystem Reviews

 Software Specification Reviews

 Test Readiness Reviews

 Functional Reviews

 Support Reviews

 Training Reviews

 Manufacturing Reviews

Popup Text:

Functional Configuration Audit

Functional Configuration Audits (FCA) are conducted to verify a configuration item's performance against its approved and authenticated configuration documentation. This review is conducted in accordance with established configuration management procedures.
Functional Configuration Audits answer the question: Does the product do what it
was intended to do?

Physical Configuration Audit

Physical Configuration Audits (PCA) are formal evaluations of the as-built version of
a configuration item against its design documentation.

Physical Configuration Audits answer the question: Does the product as produced
conform to the design?

Subsystem Reviews

Subsystem reviews are held to assure that all requirements, including interface requirements, for the subsystem have been identified, balanced across prime mission products, and met.

These reviews allow subsystem review team members to address issues and assess progress of a subsystem or configuration item (CI).

Software Specification Reviews

A type of Subsystem Review, a Software Specification Review is conducted to evaluate Software Item (SI) requirements and operational concept. This review should have sufficient detail to ensure a complete understanding among participants on the software requirements specification and, if applicable, the completed interface requirements specification for the SI.

This review will determine whether the specifications form a satisfactory basis for
proceeding into preliminary software design.

Test Readiness Reviews


Test Readiness Reviews are conducted to:

 Evaluate completeness of the test procedures

 Assure readiness for testing

 Assure the supplier is prepared for formal testing

Test procedures are evaluated for compliance with test plans and descriptions, and
for adequacy to accomplish testing requirements.

Functional Reviews

Functional reviews include:

 Development (systems engineering)

 Support (including disposal and deployment)

 Training

 Test

 Manufacturing

These reviews assess the functional area's status in satisfying the prime mission
products, surface issues, and support the development of required functional plans
and procedures.

Support Reviews

Support reviews are conducted to evaluate:

 The completeness of logistics support requirements integration

 Interface issues

 Common vs. peculiar support equipment utilization

 Integrated diagnostics requirements

 Level of maintenance planned


Training Reviews

Training reviews can be held to evaluate training requirements and potential embedded training solutions to ensure they enhance the user’s capabilities, improve readiness, and reduce individual and collective training costs over the life of the system.

Manufacturing Reviews

Manufacturing reviews evaluate the development of manufacturing elements and processes related to the prime mission product(s).

These reviews assess manufacturing concerns, such as the need to identify high
risk/low yield manufacturing processes or materials, and the manufacturing efforts
necessary to satisfy design requirements.


Method 7: Independent Verification and Validation (IV&V)

Independent Verification and Validation (IV&V) was born during the early days of the space and missile programs. Both NASA and the Department of Defense (DoD) realized that the software developed for spacecraft and missile systems, as well as other "safety-critical" systems that had to perform correctly the first time, required special scrutiny.

For those systems which are safety-critical (where failures can cause loss of life, significant damage or security compromises), resources and time are set aside for an organization or contractor, financially and managerially independent from the software supplier, to perform several key functions for the PMO. These are to:

 Ensure that proper development steps are being followed

 Determine if the developed software satisfies its requirements


 Perform special oversight and evaluations of high-risk areas.



Method 7: IV&V, Cont.

Software IV&V is an important aspect of developing quality software. To be effective, IV&V must complement and reinforce the supplier's software engineering process, configuration management, and qualification test functions.

Verification is the iterative process of determining whether the product of certain steps in the software item (SI) development process fulfills the requirements levied by previous steps.

Validation comprises evaluation, integration and test activities carried out at the
system level to ensure that the system developed satisfies the operational
requirements of the system specification.


Method 8: Developmental and Operational Testing

Software is normally developed using a "top-down" approach. Testing and integration of software occurs from the bottom up. This approach is called the "Software V" Model.
Testing is the process of exercising or evaluating a system or system components
by manual or automated means to verify that it satisfies specified requirements or
to identify differences between expected and actual results.

Testing is typically the most labor-intensive activity performed during software development. Although it is listed here for completeness, testing alone cannot produce quality software, nor can it verify its correctness. Testing can only confirm the presence (as opposed to the absence) of software defects.
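That limitation can be seen in a small hypothetical example using Python's unittest module: the suite below passes, yet the function still fails on an input the tests never exercise, so passing tests demonstrate only that no defect was triggered by the chosen inputs.

```python
import unittest

def mean(values):
    # Hypothetical unit under test: divides by len(values), so an
    # empty list raises ZeroDivisionError -- a defect none of the
    # tests below ever exercises.
    return sum(values) / len(values)

class MeanTests(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(mean([2, 4, 6]), 4)

    def test_negative_input(self):
        self.assertEqual(mean([-1, 1]), 0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Both tests pass, but `mean([])` still crashes: testing confirmed the presence of no defects on the inputs tried, nothing more.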



Developmental Testing

The purpose of developmental testing is to ensure that the system and the software
comprising it function in accordance with various technical requirements
documents. These have a variety of names depending on the System Engineering
approach being used. Some of these technical requirements documents typically
include:

 System Requirements Specification

 System and Subsystem Specifications

 System Interface Requirements Specifications

 Software Requirements and Interface Specifications

There are various stages in the developmental test process. Some of the “lower level” tests are performed by the supplier or software developer. Depending on the test strategy and the type of system, as it is progressively integrated together in a “bottom-up” process, a government-industry team may perform integrated developmental testing at the subsystem and system level. In other cases, government agencies will perform some of these tests.

Developmental Testing, Cont.

Developmental Testing Includes:

 Software Coding and Testing

 Software Integration

 Software Qualification Testing

 System Integration

 System Qualification Testing

On the following pages we will describe each of these tests in detail.

D-Link Text:

Long Description

Developmental Testing Diagram. Five rectangles labeled from bottom to top: Software Coding and Testing, Software Integration, Software Qualification Testing, System Integration, System Qualification Testing. An arrow points from each rectangle to the one above it.


Developmental Testing, Cont.

Software Coding and Testing

Once a software configuration item (SCI) has been designed, it enters into Software
Coding and Testing.

During Software Coding and Testing, the Supplier evaluates and documents the
software code and test results for each Software Unit (SU) comprising the SCI
considering specific criteria.

The result of the Software Coding and Testing process is executable and tested
code for all the Software Units comprising a given SCI.

Popup Text:

Software Coding and Testing

Software Coding and Testing includes the following activities:

 Implementing each software unit (SU) comprising the SCI by:

 Developing the source code for each SU

 Developing the data definitions for each SU

 Preparing each SU for testing by:

 Developing test cases for each SU

 Developing benchmark test files

 Performing SU testing

 Revising and retesting each SU as required

specific criteria

Testers consider the following criteria when evaluating software code and test
results:

 Traceability to the requirements and design of the SCI

 External consistency with the requirements and design of the SCI

 Internal consistency with SU requirements

 Test coverage of SUs


 Appropriateness of coding methods and standards

 Feasibility of software integration and testing

 Feasibility of operation and maintenance


Developmental Testing, Cont.

Software Integration

After coding and testing each SU comprising the SCI, the next step is Software
Integration. The purpose of Software Integration is to ensure the SUs comprising the
SCI work together as intended.

The supplier evaluates and documents the integration plan, design, code, test, test
results, and user documentation using specific criteria.

The result of the Software Integration process is an SCI product baseline.


Popup Text:

specific criteria

Testers consider the following criteria when evaluating the integration plan, design,
code, test, test results, and user documentation:

• Traceability to the system requirements

• External consistency with the system requirements

• Internal consistency

• Test coverage of the requirements of the SCI

• Appropriateness of test standards and methods

• Conformance to expected results

• Feasibility of software qualification testing

• Feasibility of operation and maintenance


Developmental Testing, Cont.

Software Qualification Testing

The next step is to run each Software Configuration Item (SCI) through Software
Qualification Testing. This type of testing demonstrates to the acquirer that the SCI
meets the software requirements that have been allocated to it as part of the
Systems Engineering process.

The supplier evaluates and documents the design, code, test, test results, and user
documentation considering specific criteria.

Software Qualification Testing produces a qualified product baseline for each SCI.
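One of the qualification criteria, conformance to expected results, amounts to comparing recorded outcomes against the expected values in the test procedures. The following is a minimal sketch; the test-case IDs and values are hypothetical:

```python
# Expected results from the qualification test procedures (hypothetical).
expected = {"TC-01": 6, "TC-02": 0, "TC-03": 42}

# Actual results recorded during the qualification run (hypothetical).
actual = {"TC-01": 6, "TC-02": 0, "TC-03": 41}

# Any mismatch is a nonconformance that must be dispositioned
# before the SCI can be baselined.
failures = sorted(tc for tc in expected if actual.get(tc) != expected[tc])
print(failures)  # -> ['TC-03']
```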

D-Link Text:

Long Description

Developmental Testing Diagram. Five rectangles labeled from bottom to top:
Software Coding and Testing, Software Integration, Software Qualification Testing,
System Integration, System Qualification Testing. An arrow points from each
rectangle to the one above it. The Software Qualification Testing rectangle is
highlighted.


Popup Text:

specific criteria

Testers consider the following criteria when evaluating the design, code, test,
test results, and user documentation:

• Test coverage of the requirements of the SCI

• Conformance to expected results

• Feasibility of system integration and testing

• Feasibility of operation and maintenance


Developmental Testing, Cont.

System Integration

Following software qualification testing, the next step is System Integration. The
purpose of System Integration is to ensure that the software and hardware items
comprising the system work together as intended.

The supplier evaluates and documents the integrated system against specific
criteria.

The result of System Integration is a product baseline for the system that is ready
for system qualification testing.

D-Link Text:

Long Description

Developmental Testing Diagram. Five rectangles labeled from bottom to top:
Software Coding and Testing, Software Integration, Software Qualification Testing,
System Integration, System Qualification Testing. An arrow points from each
rectangle to the one above it. The System Integration rectangle is highlighted.


Popup Text:

specific criteria

Testers consider the following criteria when evaluating the integrated system:

• Test coverage of system requirements

• Appropriateness of test methods and standards

• Conformance to expected results

• Feasibility of system qualification testing

• Feasibility of operation and maintenance

Developmental Testing, Cont.

System Qualification Testing

The purpose of System Qualification Testing (SQT) is to demonstrate to the acquirer
that the product baseline produced during System Integration meets the
performance and operational requirements defined in the System Specification.
Independent testers and system users conduct the SQT on the target hardware
using live data.

The system is evaluated and documented using specific criteria.

The result of the SQT is a qualified product baseline for the system. The system is
now ready to go before the Milestone C Decision Review.

D-Link Text:

Long Description

Developmental Testing Diagram. Five rectangles labeled from bottom to top:
Software Coding and Testing, Software Integration, Software Qualification Testing,
System Integration, System Qualification Testing. An arrow points from each
rectangle to the one above it. The System Qualification Testing rectangle is
highlighted.


Popup Text:

specific criteria

Testers consider the following criteria when evaluating system qualification testing
results:

• Test coverage of the system requirements

• Conformance to expected results

• Feasibility of system integration and testing

• Feasibility of operation and maintenance


Operational Testing

Operational testing is conducted by an independent testing agency on behalf of the
system user. Each service has its own testing arm.

During operational testing, users exercise the system in a realistic environment to
try to invoke defects. Operational testing may use both live and simulated scenarios
to exercise the full range of the system. Results are then analyzed for conformance
to the operational requirements and Key Performance Parameters, such as the Net-
Ready KPP, that are articulated in various user requirements and capabilities
documents.

Popup Text:

Operational testing

Operational testing has five general objectives:

• Usability

• Effectiveness

• Software maturity

• Reliability (including system safety)

• Supportability
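Reliability results from operational testing are often summarized numerically. The following is a minimal sketch of a Mean Time Between Failures (MTBF) calculation; the operating hours and failure count are hypothetical:

```python
# Hypothetical operational test data: total operating hours and the
# number of operational mission failures observed.
operating_hours = 500.0
failures = 4

# Mean Time Between Failures, a common reliability measure,
# compared against a hypothetical threshold from the requirements.
mtbf = operating_hours / failures
print(mtbf)  # -> 125.0

threshold_hours = 100.0
print("meets threshold" if mtbf >= threshold_hours else "fails threshold")
```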


Knowledge Review (Alternate)

Match each term with its definition.

Terms

1. Developmental and Operational Testing

2. Cleanroom

3. Formal Inspection

4. Reviews and Audits

Definitions

A. The PM needs a way to periodically check on the status of the development
project.

B. A small, critical piece of software used in a DoD multi-level security system must
be error-free. A proof of correctness of the software is desired.

C. End-users of the system want to verify that the system has appropriate
functionality and is usable in its intended environment.

D. Done properly, can find 60-90% of defects prior to testing.

Answer: 1-C, 2-B, 3-D, 4-A



Knowledge Review (Alternate)

This type of testing is performed to demonstrate to the acquirer that the software
configuration item's requirements have been met in accordance with its software
requirements specification (SRS).

A. Software Coding and Testing

B. System Qualification Testing

C. Software Qualification Testing


Summary

This topic presented a range of methods and techniques that can be used to help
improve software quality. Because in some cases their use depends on the type of
system under development, not all these techniques are usable on every system.

Cleanroom techniques and Formal Specifications, in addition to their impact on
software quality, are software design methodologies as well. Because of their cost
and complexity, however, they are used on only a few high-value, critical DoD
systems. IV&V is used more often, typically for safety-critical systems.

Peer Reviews and Walkthroughs are commonly used by nearly all software
developers. Those with more mature development processes generally use the
more effective Formal Inspections, which are rigorous and require a trained cadre.
All of these techniques emphasize the detection and elimination of errors early in
the lifecycle, when they are easiest and cheapest to remove.

Reviews and audits should always be used in system and software development.
The use of Developmental and Operational Testing is universal as well.

You have reached the end of this topic. To launch the next topic, click the topic title
in the Table of Contents.