A. Concepts
Software Quality Engineering (SQE) is the process of assessing and
improving the quality of software. Software quality is often defined as
the degree to which software meets requirements for reliability,
maintainability, transportability, and similar attributes, as contrasted
with the functional, performance, and interface requirements that are
satisfied as a result of software engineering.
B. Software Qualities
Qualities for which an SQE evaluation is to be done must first be
selected and requirements set for them. Some commonly used
qualities are reliability, maintainability, transportability,
interoperability, testability, usability, reusability, traceability,
sustainability, and efficiency.
Some of the key ones are discussed below.
1. Reliability
Hardware reliability is often defined in terms of the Mean-Time-To-
Failure, or MTTF, of a given set of equipment. An analogous notion
is useful for software, although the failure mechanisms are different
and the mathematical predictions used for hardware have not yet
been usefully applied to software. Software reliability is often
defined as the extent to which a program can be expected to
perform intended functions with required precision over a given
period of time. Software reliability engineering is concerned with the
detection and correction of errors in the software and, beyond that, with
techniques to compensate for unknown software errors and for problems
in the hardware and data environments in which the software must
operate.
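
As a worked illustration of the MTTF analogy (a minimal sketch; the
failure log and its numbers are hypothetical), mean time to failure can
be estimated as total operating time divided by the number of observed
failures:

# Estimate MTTF from a hypothetical log of cumulative failure times (hours).
failure_times = [120.0, 310.0, 505.0, 640.0, 900.0]  # hours at which each failure occurred

total_operating_hours = failure_times[-1]          # observation window ends at the last failure
mttf = total_operating_hours / len(failure_times)  # 900 / 5 = 180 hours per failure

print(f"Observed failures: {len(failure_times)}")
print(f"Estimated MTTF: {mttf:.1f} hours")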
2. Maintainability
Software maintainability is defined as the ease of finding and
correcting errors in the software. It is analogous to the hardware
quality of Mean-Time-To-Repair, or MTTR. While
there is as yet no way to directly measure or predict software
maintainability, there is a significant body of knowledge about
software attributes that make software easier to maintain. These
include modularity, self (internal) documentation, code readability,
and structured coding techniques. These same attributes also
improve sustainability, the ability to make improvements to the
software.
3. Transportability
Transportability is defined as the ease of transporting a given set of
software to a new hardware and/or operating system environment.
4. Interoperability
Software interoperability is the ability of two or more software
systems to exchange information and to mutually use the
exchanged information.
5. Efficiency
Efficiency is the extent to which software uses minimum hardware
resources to perform its functions.
There are many other software qualities. Some will not be important to a
specific software system, so no activities will be performed to assess or
improve them. Maximizing one quality may cause another to decrease.
For example, increasing the efficiency of a piece of software may require
writing parts of it in assembly language, which decreases the
transportability and maintainability of the software.
C. Metrics
Metrics are quantitative values, usually computed from the design
or code, that measure the quality in question, or some attribute of
the software related to the quality. Many metrics have been
invented, and a number have been successfully used in specific
environments, but none has gained widespread acceptance.
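
For instance, a very simple size metric can be computed mechanically
from code. The sketch below (illustrative only, not any standard metric
suite) uses Python's ast module to report the length of each function,
one of the attributes often associated with maintainability:

import ast

SOURCE = '''
def parse(record):
    """Split a record into fields."""
    return record.strip().split(",")

def load(path):
    with open(path) as f:
        return [parse(line) for line in f]
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        length = node.end_lineno - node.lineno + 1  # crude size metric
        print(f"{node.name}: {length} lines")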
D. Quality Evaluations
Once some decisions have been made about the quality objectives
and software attributes, quality evaluations can be done. The intent
in an evaluation is to measure the effectiveness of a standard or
procedure in promoting the desired attributes of the software
product. For example, the design and coding standards should
undergo a quality evaluation. If modularity is desired, the standards
should clearly say so and should set standards for the size of units
or components. Since internal documentation is linked to
maintainability, the documentation standards should be clear and
require good internal documentation.
E. Nonconformance Analysis
One very useful SQE activity is an analysis of a project's
nonconformance records. The nonconformances should be analyzed
for unexpectedly high numbers of events in specific
sections or modules of code. If areas of code are found that have
had an unusually high error count (assuming it is not because the
code in question has been tested more thoroughly), then the code
should be examined. The high error count may be due to poor
quality code, an inappropriate design, or requirements that are not
well understood or defined. In any case, the analysis may indicate
changes and rework that can improve the reliability of the
completed software. In addition to code problems, the analysis may
also reveal software development or maintenance processes that
allow or cause a high proportion of errors to be introduced into the
software. If so, an evaluation of the procedures may lead to
changes, or an audit may discover that the procedures are not being
followed.
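
A first pass over such records is easy to automate. The sketch below
(the record format and the outlier rule are assumptions, not a standard)
counts nonconformances per module and flags modules whose counts
stand out:

from collections import Counter
from statistics import mean, stdev

# Hypothetical nonconformance records: (module, defect id).
records = [("parser", 101), ("parser", 102), ("parser", 103), ("parser", 104),
           ("ui", 105), ("db", 106), ("parser", 107), ("db", 108)]

counts = Counter(module for module, _ in records)
threshold = mean(counts.values()) + stdev(counts.values())  # simple outlier rule

for module, n in counts.most_common():
    flag = "  <-- examine this code" if n > threshold else ""
    print(f"{module}: {n} nonconformances{flag}")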
There are also tools that are useful for quality engineering. They
include system and software simulators, which allow the modeling
of system behavior; dynamic analyzers, which detect the portions of
the code that are used most intensively; software tools that are
used to compute metrics from code or designs; and a host of special
purpose tools that can, for example, detect all system calls to help
decide on portability limits.
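
As a crude illustration of the last kind of tool (a sketch only; real
portability analyzers are far more thorough), the following scans Python
source for direct calls into the os and subprocess modules:

import ast

SOURCE = '''
import os, subprocess

def cleanup():
    os.system("rm -rf /tmp/build")        # shell-dependent
    subprocess.run(["cmd", "/c", "dir"])  # Windows-specific
'''

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    # Report attribute calls rooted at the os or subprocess modules.
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
        if isinstance(node.func.value, ast.Name) and node.func.value.id in ("os", "subprocess"):
            print(f"line {node.lineno}: {node.func.value.id}.{node.func.attr}()")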
White Box Testing
Also known as glass box, structural, clear box, and open box testing: a
software testing technique whereby explicit knowledge of the internal
workings of the item being tested is used to select the test data. Unlike
black box testing, white box testing uses specific knowledge of
programming code to examine outputs. The test is accurate only if the
tester knows what the program is supposed to do. He or she can then see
if the program diverges from its intended goal. White box testing does not
account for errors caused by omission, and all visible code must also be
readable.
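
A minimal sketch of white-box test design (the function and tests are
hypothetical): the test data below is chosen by reading the code and
making sure each internal branch is exercised.

import unittest

def classify(n):
    # Three internal branches; white-box tests aim to exercise each one.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

class ClassifyBranchTests(unittest.TestCase):
    # One test per branch, chosen by reading the code above.
    def test_negative_branch(self):
        self.assertEqual(classify(-5), "negative")

    def test_zero_branch(self):
        self.assertEqual(classify(0), "zero")

    def test_positive_branch(self):
        self.assertEqual(classify(5), "positive")

if __name__ == "__main__":
    unittest.main()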
Black-box and white-box are test design methods. Black-box test design
treats the system as a "black-box", so it doesn't explicitly use knowledge
of the internal structure. Black-box test design is usually described as
focusing on testing functional requirements. Synonyms for black-box
include: behavioral, functional, opaque-box, and closed-box. White-box
test design allows one to peek inside the "box", and it focuses specifically
on using internal knowledge of the software to guide the selection of test
data. Synonyms for white-box include: structural, glass-box and clear-
box.
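
By contrast, here is a minimal sketch of black-box test design (the
requirement and function are hypothetical): the test data is derived from
the stated requirement alone, probing the boundary and both sides of it,
without reading the implementation.

import unittest

# Hypothetical requirement: orders of 100.00 or more receive a 10% discount.
def discounted_total(amount):
    return amount * 0.9 if amount >= 100.0 else amount

class DiscountBlackBoxTests(unittest.TestCase):
    # Test data chosen from the requirement alone: the boundary and both sides of it.
    def test_below_threshold(self):
        self.assertEqual(discounted_total(99.99), 99.99)

    def test_at_threshold(self):
        self.assertAlmostEqual(discounted_total(100.0), 90.0)

    def test_above_threshold(self):
        self.assertAlmostEqual(discounted_total(200.0), 180.0)

if __name__ == "__main__":
    unittest.main()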
While black-box and white-box are terms that are still in popular use,
many people prefer the terms "behavioral" and "structural". Behavioral
test design is slightly different from black-box test design because the use
of internal knowledge isn't strictly forbidden, but it's still discouraged. In
practice, it hasn't proven useful to rely on a single test design method;
one has to use a mixture of methods so that testing isn't hindered by the
limitations of any particular one. Some call this "gray-box" or
"translucent-box" test design, while others would prefer to stop talking
about boxes altogether.
It is important to understand that these methods are used during the test
design phase, and their influence is hard to see in the tests once they're
implemented. Note that any level of testing (unit testing, system testing,
etc.) can use any test design methods. Unit testing is usually associated
with structural test design, but this is because testers usually don't have
well-defined requirements at the unit level to validate.
Unit Testing
Benefits
The goal of unit testing is to isolate each part of the program and show
that the individual parts are correct. A unit test provides a written
contract that the piece must satisfy. This isolated testing provides three
main benefits:
Encourages change
Unit testing allows the programmer to refactor code at a later date, and
make sure the module still works correctly (regression testing). This
provides the benefit of encouraging programmers to make changes to the
code since it is easy for the programmer to check if the piece is still
working properly.
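
A minimal sketch of this benefit (names hypothetical): the same test
guards the behavior while the implementation is rewritten, so the
programmer can refactor with confidence.

def word_count(text):
    # Original implementation.
    return len(text.split())

def test_word_count():
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("two  words") == 2

test_word_count()  # passes now...

def word_count(text):  # ...and must still pass after refactoring to a generator expression
    return sum(1 for _ in text.split())

test_word_count()
print("refactor preserved behavior")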
Simplifies Integration
Unit testing helps eliminate uncertainty in the pieces themselves and
lends itself to a bottom-up testing approach: testing the parts of a
program first, and then testing the sum of its parts, makes integration
testing easier.
Documentation
Unit testing provides a sort of "living document" for the class being tested.
Clients looking to learn how to use the class can look at the unit tests to
determine how to use the class to fit their needs.
Limitations
It is important to realize that unit-testing will not catch every error in the
program. By definition, it only tests the functionality of the units
themselves. Therefore, it will not catch integration errors, performance
problems and any other system-wide issues. In addition, it may not be
trivial to anticipate all special cases of input the program unit under study
may receive in reality. Unit testing is only effective if it is used in
conjunction with other software testing activities.
Integration Testing
Purpose
Performance testing
Technology
The test result shows how the performance varies with the load, given as
number of users vs response time. Various tools, including Compuware
Corporation's QACenter Performance Edition, are available to perform such
tests. Tools in this category usually execute a suite of tests which will
emulate real users against the system. Sometimes the results can reveal
oddities, e.g., that while the average response time might be acceptable,
there are outliers of a few key transactions that take considerably longer
to complete – something that might be caused by inefficient database
queries, etc.
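
The average-versus-outlier effect is easy to see in simulation. The sketch
below (the latency distribution is invented for illustration) emulates a
service where a small fraction of requests hit a slow path, such as an
inefficient database query:

import random, statistics

def simulate_request():
    # Hypothetical service: most requests are fast, a few hit a slow path.
    base = random.uniform(0.05, 0.15)  # seconds
    return base + (2.0 if random.random() < 0.02 else 0.0)

samples = sorted(simulate_request() for _ in range(1000))
avg = statistics.mean(samples)
p99 = samples[int(len(samples) * 0.99)]

print(f"average response: {avg * 1000:.0f} ms")  # looks acceptable...
print(f"99th percentile:  {p99 * 1000:.0f} ms")  # ...but the outliers do not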
Performance specifications
Tasks to undertake
Logged
Stress Testing
Security Testing
While automated tools can help to eliminate many generic security issues,
the detection of application vulnerabilities requires independent evaluation
of your specific application's features and functions by experts. An
external security vulnerability review by Third Eye Testing will give you the
best possible confidence that your application is as secure as possible.
Installation Testing
Alpha Testing
Usability Testing
Usability testing is a means for measuring how well people can use some
human-made object (such as a web page, a computer interface, a
document, or a device) for its intended purpose, i.e. usability testing
measures the usability of the object. Usability testing focuses on a
particular object or a small set of objects, whereas general human-
computer interaction studies attempt to formulate universal principles.
Rather than showing users a rough draft and asking, "Do you understand
this?", usability testing involves watching people trying to use something
for its intended purpose. For example, when testing instructions for
assembling a toy, the test subjects should be given the instructions and a
box of parts. Instruction phrasing, illustration quality, and the toy's design
all affect the assembly process.
Beta Testing
In software development, testing is usually required before release to the
general public. In-house (alpha) testing, often performed under a
debugger or with hardware-assisted debugging to catch bugs quickly,
comes first; beta testing follows, in which versions of the software are
released to a limited outside audience for trial under real-world
conditions.
Product Testing
Product testing experts design the test process to take advantage of the
economies of scope and scale that are present in a software product.
These activities are sequenced and scheduled so that a test activity occurs
immediately following the construction activity whose output the test is
intended to validate.
Stability Testing
Acceptance Testing
Users of the system will perform these tests which, ideally, developers
have derived from the User Requirements Specification, to which the
system should conform.
Test designers will draw up a formal test plan and devise a range of
severity levels. The focus in this type of testing is less on simple
problems (spelling mistakes, cosmetic problems) or show stoppers (major
problems like the software crashing or failing to run at all).
Developers should have worked out these issues during unit testing and
integration testing. Rather, the focus is on a final verification of the
required business function and flow of the system. The test scripts will
emulate real-world usage of the system. The idea is that if the software
works as intended and without issues during a simulation of normal use, it
will work just the same in production.
Results of these tests will allow both the customers and the developers to
be confident that the system will work as intended.
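
A minimal sketch of an acceptance-style test script (the order system and
its API are hypothetical): rather than probing internals, the test walks
through a complete business flow the way a real user would.

import unittest

# Hypothetical system under test: a minimal order workflow.
class OrderSystem:
    def __init__(self):
        self.orders = {}
    def place_order(self, order_id, items):
        self.orders[order_id] = {"items": items, "status": "placed"}
    def ship_order(self, order_id):
        self.orders[order_id]["status"] = "shipped"
    def status(self, order_id):
        return self.orders[order_id]["status"]

class OrderFlowAcceptanceTest(unittest.TestCase):
    # Emulates the end-to-end business flow a real user would follow.
    def test_place_then_ship(self):
        system = OrderSystem()
        system.place_order("A-1", ["widget"])
        self.assertEqual(system.status("A-1"), "placed")
        system.ship_order("A-1")
        self.assertEqual(system.status("A-1"), "shipped")

if __name__ == "__main__":
    unittest.main()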
System Testing
System testing falls within the scope of Black box testing, and as such,
should require no knowledge of the inner design of the code or logic
(IEEE. IEEE Standard Computer Dictionary: A Compilation of IEEE
Standard Computer Glossaries. New York, NY. 1990.).
Regression Testing
Regression testing is the selective retesting of a modified system to
verify that fixes and changes have not introduced new faults and that the
system still complies with its specified requirements (IEEE. IEEE
Standard Computer Dictionary: A Compilation of IEEE Standard Computer
Glossaries. New York, NY. 1990.). Like system testing, it falls within the
scope of Black box testing and requires no knowledge of the inner design
of the code.
Fuzz testing
Fuzz testing is a software testing technique. The basic idea is to attach the
inputs of a program to a source of random data. If the program fails (for
example, by crashing, or by failing in-built code assertions), then there
are defects to correct.
The great advantage of fuzz testing is that the test design is extremely
simple, and free of preconceptions about system behavior.
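
A minimal fuzz driver is only a few lines. In the sketch below (the parser
under test is hypothetical and deliberately fragile), a program input is
attached to a source of random data until a failure appears:

import random, string, traceback

def parse_config(text):
    # Hypothetical function under test: naive "key=value" parser.
    pairs = {}
    for line in text.splitlines():
        key, value = line.split("=")  # crashes on lines without exactly one '='
        pairs[key] = value
    return pairs

random.seed(1)
for i in range(1000):
    # Attach the input to a source of random data.
    blob = "".join(random.choice(string.printable) for _ in range(random.randint(0, 40)))
    try:
        parse_config(blob)
    except Exception:
        print(f"input {i} triggered a failure:")
        print(repr(blob))
        traceback.print_exc()
        break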
Uses
Event-driven fuzz
One of the more interesting issues with real-time event handling is that
if error reporting is too verbose, simply reporting error status can itself
cause resource problems or a crash. Robust error-detection systems
report only the most significant, or most recent, error over a period of
time.
Character-driven fuzz
Database fuzz
A standard database schema is usually filled with fuzz, that is, random
data of random sizes. Some IT shops use software tools to migrate and
manipulate such databases, and often the same schema descriptions can
be used to automatically generate fuzz databases.
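
A sketch of the idea (the schema description and generation rules are
assumptions): given a mapping of column names to SQL types, random
rows of random sizes can be generated and loaded automatically.

import random, sqlite3, string

# Hypothetical schema description: column name -> SQL type.
SCHEMA = {"name": "TEXT", "age": "INTEGER", "score": "REAL"}

def random_value(sql_type):
    if sql_type == "INTEGER":
        return random.randint(-10**6, 10**6)
    if sql_type == "REAL":
        return random.uniform(-1e6, 1e6)
    size = random.randint(0, 64)  # random data of random sizes
    return "".join(random.choice(string.printable) for _ in range(size))

db = sqlite3.connect(":memory:")
cols = ", ".join(f"{name} {t}" for name, t in SCHEMA.items())
db.execute(f"CREATE TABLE fuzz ({cols})")

placeholders = ", ".join("?" for _ in SCHEMA)
for _ in range(100):
    row = tuple(random_value(t) for t in SCHEMA.values())
    db.execute(f"INSERT INTO fuzz VALUES ({placeholders})", row)

print(db.execute("SELECT COUNT(*) FROM fuzz").fetchone()[0], "fuzz rows generated")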