
Table of Contents

Introduction to the Software Development Life Cycle (SDLC)...........................................4


The SDLC Waterfall...................................................................................................4
Other SDLC Models....................................................................................................7
Spiral Lifecycle..........................................................................................................7
V-Model....................................................................................................................8
Verification and Validation..........................................................................................9
The V & V process.....................................................................................................9
Verification Vs Validation............................................................................................9
Introduction to Software Testing..................................................................................11
Different approaches to software Testing..................................................................11
Levels of Testing.....................................................................................................11
Types of Testing......................................................................................................11
Test Techniques..........................................................................................................17
Black Box Testing....................................................................................................17
White Box Testing...................................................................................................18
Code Complexity......................................................................................................25
Bug Life Cycle.............................................................................................................27
Test cases...................................................................................................................29
What’s a Test Case?.................................................................................................29
What is Quality Writing?...........................................................................................29
Key Points to Remember..........................................................................................29
Use Cases...............................................................................................................30
Writing Test Cases...................................................................................................31
Software Inspection.....................................................................................................33
Steps Involved in Inspection.....................................................................................33
Participants and their Roles in Inspection..................................................................36
Reviews..................................................................................................................38
Walkthroughs..........................................................................................................39
Benefits of Inspection..............................................................................................40
Unit Testing /Module Testing........................................................................................41
The need for Levels of Testing.................................................................................41
Objectives of Unit Testing........................................................................................41
Unit Test Procedures................................................................................................44
Tools for Unit Testing..............................................................................................45
Integration testing.......................................................................................................46
Objective of Integration Testing...............................................................................46
Advantage of Integration Testing..............................................................................46
Integration Strategies..............................................................................................47
System Testing............................................................................................................55
Why System Testing?...............................................................................................55
Who does System Testing?......................................................................................55
Types of System Tests.............................................................................................55
Exploratory Testing..................................................................................................55
Why Exploratory Testing?.........................................................................................56
Testing Process...........................................................................................................57
Fundamental Test Process........................................................................................58
Test Planning..........................................................................................................59
Test Design.............................................................................................................59
Test Execution.........................................................................................................59
Test Recording........................................................................................................59
Checking for Test Completion...................................................................................60
What's a 'test plan'?.................................................................................................61
Test plan identifier...................................................................................................61
What should be done after a bug is found?...............................................................70
What is 'configuration management'?........................................................................71



Test Automation..........................................................................................................73
Scope of Automation................................................................................................73
Tool Support for Life-Cycle testing............................................................................73
Test Automation Framework.....................................................................................75
Benefits of Automation.............................................................................................77
Limitation of Test Automation...................................................................................78
Manual vs. Automated.............................................................................................79
Testing Terms - Glossary..............................................................................................80
Points to ponder..........................................................................................................85
References................................................................................................................127
Books....................................................................................................................127
Website Links........................................................................................................127



Introduction to the Software Development Life Cycle (SDLC)

This section of the document will describe the Software Development Life Cycle (SDLC) for
small to medium database application development efforts. This chapter presents an
overview of the SDLC, alternate lifecycle models, and associated references. This chapter also
describes the internal processes that are common across all stages of the SDLC and describes
the inputs, outputs, and processes of each stage.

The SDLC Waterfall


Small to medium database software projects are generally broken down into six stages: project
planning, requirements definition, design, implementation and integration, testing and debugging,
and installation and maintenance.

The relationship of each stage to the others can be roughly described as a waterfall, where
the outputs from a specific stage serve as the initial inputs for the following stage.
To follow the waterfall model, one proceeds from one phase to the next in a purely
sequential manner. For example,
• After completing the “Project Planning” phase, one proceeds to the
“Requirements Definition” phase.
• When and only when the requirements are fully completed, one proceeds to design.
This design should be a plan for implementing the requirements given.
• When and only when the design is fully completed, an implementation of that design
is made by coders. Towards the later stages of this implementation phase, disparate
software components produced by different teams are integrated.



• After the implementation and integration phases are complete, the software product
is tested and debugged; any faults introduced in earlier phases are removed here.
• Then the software product is installed, and later maintained to introduce new
functionality and remove bugs.

Thus the waterfall model maintains that one should move to a phase only when its
preceding phase is completed and perfected. Phases of development in the waterfall model
are thus discrete, and there is no jumping back and forth or overlap between them.

The central idea behind the waterfall model is that time spent early on making sure that
requirements and design are absolutely correct pays off in economic terms (it will save
much time and effort later). One should therefore make sure that each phase is 100%
complete and absolutely correct before proceeding to the next phase of program creation.

It is argued that the waterfall model in general can be suited to software projects which are
stable (especially with unchanging requirements) and where it is possible and likely that
designers will be able to fully predict problem areas of the system and produce a correct
design before implementation is started.

The waterfall model also requires that implementers follow the well made, complete design
accurately, ensuring that the integration of the system proceeds smoothly.

The waterfall model, however, is argued by many to be a bad idea in practice, mainly because
of the belief that it is impossible to get one phase of a software product's lifecycle
"perfected" before moving on to the next phases and learning from them (or at least, the
belief that this is impossible for any non-trivial program). For example, clients may not be
aware of exactly what requirements they want before they see a working prototype and can
comment upon it; they may change their requirements constantly, and program designers
and implementers may have little control over this.

If clients change their requirements after a design is finished, that design must be modified
to accommodate the new requirements, invalidating a good deal of effort, especially if large
amounts of time have already been invested in the design.

In response to the perceived problems with the "pure" waterfall model, many modified
waterfall models have been introduced, such as Royce's final model and the sashimi model.
These models may address some or all of the criticisms of the "pure"



waterfall model. There are other alternate SDLC models such as “Spiral” and “V” which have
been explained in the later part of this chapter.

After the project is completed, the Primary Developer Representative (PDR) and Primary End-
User Representative (PER), in concert with other customer and development team personnel,
develop a list of recommendations for enhancement of the current software.

Prototypes

The software development team, to clarify requirements and/or design elements, may
generate mockups and prototypes of screens, reports, and processes. Although some of the
prototypes may appear to be very substantial, they're generally similar to a movie set:
everything looks good from the front but there's nothing in the back.

When a prototype is generated, the developer produces the minimum amount of code
necessary to clarify the requirements or design elements under consideration. No effort is
made to comply with coding standards, provide robust error management or integrate with
other database tables or modules. As a result, it is generally more expensive to retrofit a
prototype with the necessary elements to produce a production module than it is to develop
the module from scratch using the final system design document. For these reasons,
prototypes are never intended for business use, and are generally crippled in one way or
another to prevent them from being mistakenly used as production modules by end-users.

Allowed Variations

In some cases, additional information is made available to the development team that
requires changes in the outputs of previous stages. In this case, the development effort is
usually suspended until the changes can be reconciled with the current design, and the new
results are passed down the waterfall until the project reaches the point where it was
suspended.

The PER and PDR may, at their discretion, allow the development effort to continue while
previous stage deliverables are updated in cases where the impacts are minimal and strictly
limited in scope. In this case, the changes must be carefully tracked to make sure all their
impacts are appropriately handled.



Other SDLC Models

Apart from the waterfall model, other popular SDLC models are the “Spiral” model and the “V”
model, which are explained in this section.

Spiral Lifecycle

The spiral model starts with an initial pass through a standard waterfall lifecycle, using a
subset of the total requirements to develop a robust prototype. After an evaluation period,
the cycle is initiated again, adding new functionality and releasing the next prototype. This
process continues with the prototype becoming larger and larger with each iteration, hence
the “spiral.”

The Spiral model is used most often in large projects and needs constant review to stay on
target. For smaller projects, the concept of agile software development is becoming a viable
alternative. Agile software development tends to be rather more extreme in its approach
than the spiral model.



The theory is that the set of requirements is hierarchical in nature, with additional
functionality building on the first efforts. This is a sound practice for systems where the entire
problem is well defined from the start, such as modeling and simulation software. Business-
oriented database projects do not enjoy this advantage. Most of the functions in a database
solution are essentially independent of one another, although they may make use of common
data. As a result, the prototype suffers from the same flaws as the prototyping lifecycle. For
this reason, the software development teams usually decide against the use of the spiral
lifecycle for database projects.

V-Model

The V-model was originally developed from the waterfall software process model. The four
main process phases – requirements, specification, design and implementation – have a
corresponding verification and validation testing phase. Implementation of modules is tested
by unit testing, system design is tested by integration testing, system specifications are
tested by system testing and, finally, acceptance testing verifies the requirements. The V-
model gets its name from the timing of the phases. Starting from the requirements, the
system is developed one phase at a time until the lowest phase, the implementation phase, is
finished. At this stage testing begins, starting from unit testing and moving up one test level
at a time until the acceptance testing phase is completed. During the development stages,
tests for all of these levels are planned and prepared in parallel.

(Figure: the V-Model.)

The different levels in V-Model are unit tests, integration tests, system tests and acceptance
test. The unit tests and integration tests ensure that the system design is followed in the
code. The system and acceptance tests ensure that the system does what the customer
wants it to do. The test levels are planned so that each level tests different aspects of the



program and so that the testing levels are independent of each other. The traditional V-model
states that testing at a higher level is started only when the previous test level is completed.

Verification and Validation

Verification: Are we building the product right?


The software should conform to its specification. Verification is the process of evaluating a
system or component to determine whether the products of a given development phase
satisfy the conditions imposed at the start of that phase; it may also involve formal proof of
program correctness.

Validation: Are we building the right product?


The software should do what the user really requires. The process of evaluating a system or
component during or at the end of the development process to determine whether it satisfies
specified requirements.

The V & V process


V & V is a whole life-cycle process and it must be applied at each stage in the software
process. It has two principal objectives:
• The discovery of defects in a system
• The assessment of whether or not the system is usable in an operational situation.

Verification Vs Validation

Verification: Am I building the product right?
Validation: Am I building the right product?

Verification: The review of interim work steps and interim deliverables during a project to
ensure they are acceptable; determining whether the system is consistent, adheres to
standards, uses reliable techniques and prudent practices, and performs the selected
functions in the correct manner.
Validation: Determining whether the system complies with the requirements, performs the
functions for which it is intended and meets the organization's goals and user needs. It is
traditional and is performed at the end of the project.

Verification: Am I accessing the data right (in the right place; in the right way)?
Validation: Am I accessing the right data (in terms of the data required to satisfy the
requirement)?

Verification: Low level activity.
Validation: High level activity.

Verification: Performed during development on key artifacts, through walkthroughs, reviews
and inspections, mentor feedback, training, checklists and standards.
Validation: Performed after a work product is produced, against established criteria, ensuring
that the product integrates correctly into the environment.

Verification: Demonstration of consistency, completeness, and correctness of the software at
each stage and between each stage of the development life cycle.
Validation: Determination of correctness of the final software product by a development
project with respect to the user needs and requirements.



Introduction to Software Testing

Software testing is the process of devising a set of inputs to a given piece of software that
will check whether the results produced by the software are in accordance with the expected
results. Software testing identifies the correctness, completeness and quality of the developed
application. Testing can only find defects; it cannot prove that there are none.

Different approaches to software Testing

Positive Approach:

It is a process of establishing confidence that a program/system does what it is supposed to.

Negative Approach:

It is a process of executing a program or system with the intent of finding errors.

Levels of Testing

Types of Testing

The test strategy consists of a series of tests that will fully exercise the product. The
primary purpose of these tests is to uncover limitations and defects and to measure the
product's full capability. A list of a few of these tests, with a brief explanation of each, is given below.

 Sanity test
 System test
 Performance test
 Security test
 Functionality/Automated test
 Stress and Volume test
 Recovery test



 Document test
 Beta test
 User Acceptance test
 Load
 Volume
 Usability
 Reliability
 Storage
 Internationalization
 Configuration
 Compatibility
 Installation
 Scalability
 …….

Sanity Test

Sanity testing is cursory testing. It is performed whenever a cursory check is sufficient to
prove that the application is functioning according to specifications. This level of testing is a
subset of regression testing. It normally includes a set of core tests of basic GUI functionality
to demonstrate connectivity to the database, application servers, printers, etc.

System Test

System tests focus on the behavior of the product. User scenarios will be executed against
the system as well as screen mapping and error message testing. Overall, system tests will
test the integrated system and verify that it meets the requirements defined in the
requirements document.

Performance Test

Performance test will be conducted to ensure that the product’s response time meets user
expectations and does not fall outside the specified performance criteria. During these tests,
the response time will be measured under simulated heavy stress and/or volume.

Security Test

Security tests will determine how secure the product is. The tests will verify that unauthorized
user access to confidential data is prevented. It will also verify data storage security,
strength of encryption algorithms, vulnerability to hacking, etc.



Functionality/Automated Test

A suite of automated tests will be developed to test the basic functionality of the product and
perform regression testing on all areas of the system to identify and log all errors. The
automated testing tool will also assist in executing user scenarios thereby emulating several
users simultaneously.

Stress and Volume Test

The product will be subjected to high input conditions and a high volume of data simulating
peak load scenarios. The System will be stress tested based on expected load and
performance criteria specified by the customer.

Recovery Test

Recovery tests will force the system to fail in various ways (all scenarios created to resemble
real-life failures like disk failure, network failure, power failure, etc.) and verify that the
system recovers itself based on customer’s requirement.

Documentation Test

Tests are conducted to verify the accuracy of user documentation. These tests will ensure
that the documentation is a true and fair reflection of the product in full and can be easily
understood by the reader.

Beta test

Beta tests are carried out on a product before it is commercially released. This is usually the
last set of tests carried out on the system and normally involves sending the product to beta
test sites outside the company for real-world exposure.

User Acceptance Test

Once the product is ready for implementation, the Business Analysts and a team that
resembles the typical user profile uses the application to verify that it conforms to all
expectations they have of the product. These tests are carried out to confirm that the
system performs to specifications, is ready for operational use and meets user expectations.

Load Testing

Load testing generally refers to the practice of modeling the expected usage of a software
program by simulating multiple users accessing the program's services concurrently. As such,
this testing is most relevant for multi-user systems, often ones built using a client/server
model, such as web servers. However, other types of software systems can be load-tested
also. For example, a word processor or graphics editor can be forced to read an extremely



large document; or a financial package can be forced to generate a report based on several
years' worth of data.

When the load placed on the system is raised beyond normal usage patterns, in order to test
the system's response at unusually high or peak loads, it is known as Stress testing. The load
is usually so great that error conditions are the expected result, although there is a gray area
between the two domains and no clear boundary exists when an activity ceases to be a load
test and becomes a stress test.
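As a small illustration of the idea (not a production load-testing tool), the sketch below simulates
a number of concurrent users hitting a single HTTP endpoint and records each response time. The
URL and the user count are invented for the example; a real load test would add ramp-up, think
time and percentile reporting.

    # Minimal load-test sketch; endpoint and user count are hypothetical.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.com/api/health"   # hypothetical endpoint
    USERS = 50                              # simulated concurrent users

    def one_user(user_id):
        # Each simulated user makes one request and times the response.
        start = time.time()
        try:
            with urllib.request.urlopen(URL, timeout=10) as response:
                status = response.status
        except Exception as exc:
            status = f"error: {exc}"
        return user_id, status, time.time() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        for user_id, status, elapsed in pool.map(one_user, range(USERS)):
            print(f"user {user_id}: status={status}, response time={elapsed:.3f}s")

Raising the number of users or the size of the requests well beyond the expected usage pattern
moves this kind of exercise from load testing towards stress testing, as described above.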

Volume Testing

Volume Testing, as its name implies, is testing that purposely subjects a system (both
hardware and software) to a series of tests where the volume of data being processed is the
subject of the test. Such systems can be transaction processing systems capturing real-time
sales, or systems performing database updates and/or data retrieval.

Volume testing will seek to verify the physical and logical limits to a system's capacity and
ascertain whether such limits are acceptable to meet the projected capacity of the
organization’s business processing.

Usability testing

Usability testing is carried out pre-release so that any significant issues identified can be
addressed. Usability testing can be carried out at various stages of the design process. In the
early stages, however, techniques such as walkthroughs are often more appropriate.
Usability testing is not a substitute for a human-centered design process. Usability testing
comes in many flavors and should occur at different points in the development process.

Explorative testing gathers input from participants in the early stages of site development.
Based on the experience and opinions of target users, the development team can decide the
appropriate direction for the site's look and feel, navigation, and functionality.

Assessment testing occurs when the site is close to launch. Here you can get feedback on
issues that might present huge problems for users but are relatively simple to fix.

Evaluation testing can be useful to validate the success of a site subsequent to launch. A site
can be scored and compared to competitors, and this scorecard can be used to evaluate the
project's success.



Storage testing

It is used to study how memory and space are used by the program, either in resident memory
or on disk. If there are limits on these amounts, storage tests attempt to prove that the
program will exceed them.

Reliability testing

Verify the probability of failure-free operation of a computer program in a specified
environment for a specified time.

Reliability of an object is defined as the probability that it will not fail under specified
conditions, over a period of time. The specified conditions are usually taken to be fixed, while
the time is taken as an independent variable. Thus reliability is often written R (t) as a
function of time t, the probability that the object will not fail within time t.
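As a simple illustration (assuming the common constant-failure-rate model, which the text does
not specify), R(t) = e^(-lambda*t); with a failure rate of lambda = 0.001 failures per hour, the
probability of surviving 1000 hours of operation is R(1000) = e^(-1), or roughly 0.37.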

Any computer user would probably agree that most software is flawed, and the evidence for
this is that it does fail. All software flaws are designed in; the software does not break,
rather it was always broken. But unless conditions are right to excite the flaw, it will go
unnoticed -- the software will appear to work properly.

Internationalization testing

Testing related to handling foreign text and data (for example, currency) within the program. This
would include sorting, importing and exporting text and data, correct handling of currency
and date and time formats, string parsing, upper and lower case handling and so forth.

Compatibility Testing

It is the process of determining the ability of two or more systems to exchange information.
In a situation where the developed software replaces an already working program, an
investigation should be conducted to assess possible compatibility problems between the
new software and other programs or systems.

Install/uninstall testing

Testing of full, partial, or upgrade install/uninstall processes.



Scalability testing

It is a subtype of performance testing where performance requirements for response time,
throughput, and/or utilization are tested as load on the SUT (system under test) is increased
over time.

Configuration testing

Quality Monitor runs tests to automatically verify (before deployment) that all installation
elements – such as file extensions and shortcuts – are configured correctly and that all key
functionality will work in the intended environment.

For example, all of the application shortcuts are automatically listed. Each one can be
selected and run. If the shortcut launches the application as expected, a positive comment
can be logged and the test will be marked as successful.



Test Techniques

There are mainly two types of test techniques:

• Functional Testing: Black Box
• Structural Testing: White Box

Black Box Testing

Black Box Testing is testing a product without having the knowledge of the internal workings
of the item being tested. For example, when black box testing is applied to software
engineering, the tester would only know the "legal" inputs and what the expected outputs
should be, but not how the program actually arrives at those outputs. It is because of this
that black box testing can be considered testing with respect to the specifications; no other
knowledge of the program is necessary. The opposite of this would be white box (also known
as glass box) testing, where test data are derived from direct examination of the code to be
tested. For white box testing, the test cases cannot be determined until the code has
actually been written. Both of these testing techniques have advantages and disadvantages,
but when combined, they help to ensure thorough testing of the product.

Advantages of Black Box Testing


 more effective on larger units of code than white box testing
 tester needs no knowledge of implementation, including specific programming
languages
 tester and programmer are independent of each other
 tests are done from a user's point of view
 will help to expose any ambiguities or inconsistencies in the specifications
 test cases can be designed as soon as the specifications are complete

Disadvantages of Black Box Testing


 only a small number of possible inputs can actually be tested, to test every possible
input stream would take nearly forever
 without clear and concise specifications, test cases are hard to design
 there may be unnecessary repetition of test inputs if the tester is not informed of test
cases the programmer has already tried
 may leave many program paths untested
 cannot be directed toward specific segments of code which may be very complex
(and therefore more error prone)
 most testing related research has been directed towards white box testing



White Box Testing
"Bugs lurk in corners and congregate at boundaries." White box testing is far more likely to
uncover them.

White box testing is a testing technique where knowledge of the internal structure and logic
of the system is necessary to develop hypothetical test cases. It's sometimes called structural
testing or glass box testing. It is also a test case design method that uses the control
structure of the procedural design to derive test cases.

If a software development team creates a block of code that will allow the system to process
information in a certain way, a test team would verify this structurally by reading the code
and, given the system's structure, seeing if the code could work reasonably. If they felt it could,
they would plug the code into the system and run an application to structurally validate the
system.

White box tests are derived from the internal design specification or the code. The knowledge
needed for the white box test design approach often becomes available during the detailed
design phase of the development cycle.

Objectives of White Box Testing


Using white box testing methods, the software engineer can derive test cases that will

 Guarantee that all independent paths within a module have been executed at least
once

 Exercise all logical decisions on their true and false sides.

 Execute all loops at their boundaries and within their operational bounds

 Exercise internal data structures to assure their validity.

Why White Box Testing?


A reasonable question that arises in our mind is: why should we carry out white box testing on
a software product? The answer lies in the following points.

Logic errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed. Errors tend to creep into our work when we design and
implement functions, conditions and control flow that are out of the mainstream. Everyday
processing tends to be well understood, while special case processing tends to fall through the cracks.

We often believe that a logical path is not likely to be executed when, in fact, it may be
executed on a regular basis - our assumptions about the logical flow of a program and data
may lead us to make design errors that are uncovered only when the path testing



commences.

Typographical errors are random - when a program is translated into its programming
language source code, it is likely that some typing errors will occur. It is as likely that a typo
will exist on an obscure logical path as on a mainstream path.

Aspects of Code to consider


White box testing of software is predicated on a close examination of procedural detail. White
box test cases exercise specific sets of conditions and/or loops and test logical paths through
the software. The "status of the program" may be examined at various points to determine if
the expected or asserted status corresponds to the actual status.

At first glance it would seem that very thorough white box testing would lead to a 100% correct
program. All we need to do is define all logical paths, develop test cases to exercise them
and evaluate their results, i.e. generate test cases to exercise program logic exhaustively.
Unfortunately, exhaustive testing presents certain logistical problems. For even small
programs, the number of possible logical paths can be very large. Consider, for example, a
procedural design that might correspond to a 100-line Pascal program with a single loop that
may be executed no more than 20 times. There are approximately 10^14 possible paths that
may be executed!

To put this number in perspective, we assume that a magic test processor ("magic", because
no such processor exists) has been developed for exhaustive testing. The processor can
develop a test case, execute it and evaluate the result in one millisecond. Working 24 hours a
day, 365 days a year, the processor would need about 3,170 years (10^14 milliseconds is roughly
10^11 seconds) to test the program in this example. This would, undeniably, cause havoc in most
development schedules.
Exhaustive testing is impossible for large software systems. White box testing should not,
however, be dismissed as impractical. A limited number of important logical paths can be
selected and exercised, and important data structures can be probed for validity.

Test Adequacy Criteria


White box testing is very useful in achieving the test adequacy rather than designing test
cases. The test adequacy criteria are based on the "code coverage analysis" which includes
coverage of:

• Statements in a software program.

• Branches or multiple condition branches that are present in the code.

• Exercising program paths from entry to exit.

• Execution of specific path segments derived from data flow combinations (definitions
and uses of variables).



The code coverage analysis methods mentioned above are discussed in the section below.

The goal for white box testing is to ensure that internal components of a program are
working properly. A common focus is on structural elements such as statements and branches.
The tester develops test cases that exercise these structural elements to determine if defects
exist in the program structure. By exercising all of these structural elements, the tester hopes
to improve the chances of detecting defects.

The testers need a framework for deciding which structural elements to select as a focus of
testing, for choosing the appropriate test data and for deciding when the testing efforts are
adequate enough to terminate the process with confidence that the software is working
properly. The criteria can be viewed as representing minimal standards for testing a program.

The application scope of adequacy criteria also includes:

 Helping testers to select properties of a program to focus on during tests.

 Helping them to select a test data set for the program based on the selected
properties.

 Supporting testers with the development of quantitative objectives for testing.

 Indicating to testers whether or not testing can be stopped for that program.

A program is adequately tested to a given criterion if all the target structural elements have
been exercised according to the selected criterion. Using the selected adequacy criterion a
tester can terminate testing, when the target structures have been exercised. Adequacy
criteria are usually expressed as statements that depict the property or feature of interest,
and the conditions under which testing can be stopped.

In addition to statement and branch adequacy criteria, other types of program-based test
adequacy criteria in use are the path testing and loop testing wherein, the total paths and all
the loops are executed.



Code Coverage Methods
Logic-based white box test design and the use of test data adequacy/coverage concepts provide
quantitative coverage goals for the tester. As mentioned in the previous section, test adequacy
criteria and coverage analysis play an important role in testing. The major techniques
involved in code coverage analysis are:

 Statement Testing

 Branch testing

 Multiple condition Coverage

 Path testing

 Modified path testing

 Loop testing

Statement testing

In this method every source language statement in the program is executed at least once so
that no statements are missed out. Here we need to be concerned about the statements
controlled by decisions. The statement testing is usually regarded as the minimum level of
coverage in the hierarchy of code coverage and is therefore a weak technique. It is based on
the feeling that it would be absurd to release a piece of software without having executed
every statement.

In statement testing, 100% coverage may be difficult to achieve for the following reasons:

• Unreachable code may exist

• Exception handling in the code may be improper

• Missing statements also create issues

Statement coverage can be calculated as:

Statement coverage = #Statements Executed / #Total Statements

Branch testing

Branch testing is used to exercise the true and false outcomes of every decision in a software
program or a code. It is a superset of statement testing, where the chances of achieving code
coverage are higher. The disadvantage of branch testing is that it is weak in handling multiple
conditions.

Branch coverage can be found out as:

Branch coverage = #Branches Executed / #Total Branches
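As a small illustration of the difference between statement coverage and branch coverage (the
function and the test values below are invented for this example, not taken from the text):

    def apply_discount(price, is_member):
        if is_member:
            price = price * 0.9   # 10% member discount
        return price

    # A single test, apply_discount(100, True), executes every statement
    # (100% statement coverage) but exercises only the true outcome of the
    # decision, i.e. 1 of 2 branches = 50% branch coverage.
    # Adding apply_discount(100, False) exercises the false outcome as well,
    # raising branch coverage to 100%.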



Multiple condition coverage

Multiple condition coverage is a testing metric that exercises all possible combinations of
true and false outcomes of the conditions within a decision in the control structure of a program.
This is a superset of branch testing.

The primary advantage of multiple condition testing is that it tests all feasible combinations of
outcomes for each condition within a program.

The drawback of this metric is that it provides no assistance for choosing test data. Another
expensive drawback of this coverage is that a decision involving n Boolean conditions has
2^n combinations, and therefore requires 2^n test cases.

Multiple condition coverage can be calculated as:

Multiple condition coverage = #Conditions Executed / #Total Multiple Conditions
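For example (the expression is invented for illustration), a decision such as "if (a > 0 and b > 0)"
contains n = 2 Boolean conditions, so full multiple condition coverage needs 2^2 = 4 test cases, one
for each combination of outcomes: (a > 0 true, b > 0 true), (true, false), (false, true) and
(false, false). Branch coverage, by contrast, could be satisfied here with only two of these cases,
one making the whole decision true and one making it false.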

Path testing

A set of test cases by which every path in the program (in the control flow graph) is executed at
least once.

The flow graph in the Code Complexity section below shows the graphical representation of flow
of control in a program. The number of paths in this representation can be counted, and we can
clearly see that all paths are distinct. Note that the path shown by 1-2-5-6 is different from that
depicted by 1-3-4-6.

The major disadvantage of path testing is that we cannot achieve 100% path coverage, as
there are potentially infinite paths in a program; if the code is large and complex, exhaustive
path testing is not feasible.

Modified path testing

Modified path testing, as the name suggests, is a modification of existing path testing and is
based on the number of distinct paths to be executed, which is in turn based on the McCabe
complexity number. The McCabe number is given as V = E - N + 2, where E is the number of
edges in the flow graph and N is the number of nodes.

Testing criterion for modified path testing is as given below:

 Every branch is executed at least once

 At least V distinct paths must be executed

Choosing the distinct path is not easy and different people may choose different distinct
paths.

Path coverage is assessed by:

Path coverage = #Paths Executed / #Total Paths



Loop testing

Loop testing strategies focus on detecting common defects associated with loop structures
like simple loops, concatenated loops, nested loops, etc. The loop testing makes sure that all
the loops in the program have been traversed at least once during testing. Defects in these
areas are normally due to poor programming practices or inadequate reviewing.
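For a simple loop that may execute at most n times, a commonly used set of loop test cases (a
general guideline, not specific to any one project) is: skip the loop entirely; make exactly one
pass; make two passes; make a typical number m of passes (m < n); and make n - 1, n and, where
possible, n + 1 passes. Nested and concatenated loops are usually tested by applying these cases
to the innermost loop first, holding the outer loops at their minimum values, and then working
outward.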

Why Code Coverage Analysis?


Code coverage analysis is necessary to satisfy test adequacy criteria and, in practice, is also
used to set testing goals and to develop and evaluate test data. In the context of coverage
analysis, testers often refer to test adequacy criteria as "coverage criteria."

When a coverage-related testing goal is expressed as a percentage, it is often called the "degree of
coverage." The planned degree of coverage is usually specified in the test plan and then
measured when the tests are actually executed by a coverage tool. The planned degree of
coverage is usually specified as 100% if the tester wants to completely satisfy the commonly
applied test adequacy or coverage criteria.

Under some circumstances, the planned degree of coverage may be less than 100% possibly
due to the following reasons:

1. The nature of the unit

 Some statements or branches may not be reachable.

 The unit may be simple.

2. The lack of resources

 The time set aside for testing is not adequate to achieve 100% coverage.

 There are not enough trained testers to complete coverage for all units.

 There might be lack of tools to support complete coverage.

3. Other project related issues such as timing, scheduling, etc.

The concept of coverage is not only associated with white box testing. Testers also use
coverage concepts to support black box testing where a testing goal might be to exercise or
cover all functional requirements, all equivalence classes or all system features. In contrast to
black box approaches, white box based coverage goals have stronger theoretical and
practical support.

The application of coverage analysis is associated with the use of control and data models to
represent structural elements and data. The logic elements are based on the flow of control
in a code. They are:

 Program statements
 Decisions or branches (these influence the program flow of control)

 Conditions (expression that evaluate true/false and do not contain any other
true/false valued expression)

 Combinations of decisions and conditions

 Paths (node sequence in flow graphs)

All structured programs can be built from three basic primes: sequence, decision and
iteration. Primes are simply representations of the flow of control in a software program,
which can take any one of these forms. The figure below shows the graphical representation
of the primes mentioned above.

(Figure: flow graph representations of the three basic primes - sequence, condition and iteration.)

Using the concept of a prime and the ability to combine primes to develop structured code, a
control flow graph for the software under test can be developed. The tester can use the flow
graph to evaluate the code with respect to its testability, as well as to develop white box test
cases. The direction of a transfer of control depends upon the outcome of the condition.



Code Complexity

Cyclomatic Complexity
Thomas McCabe coined the term Cyclomatic complexity. Cyclomatic complexity is a software
metric that provides a quantitative measure of the logical complexity of a program.
When used in the context of the basis path testing method, the value computed for
Cyclomatic complexity defines the number of independent paths in the basis set of a program
and provides us with an upper bound for the number of tests that must be conducted to
ensure that all statements have been executed at least once. The concept of Cyclomatic
complexity can be well explained using a flow graph representation of a program as shown
below in figure.

(Figure: a flow graph with numbered nodes connected by directed edges.)
In the diagram above circles denote nodes, and arrows denote edges. Each circle represents
a processing task (one or more source code statements) and arrows represent flow of
control.

Code Complexity Value


McCabe defines a software complexity measure that is based on the Cyclomatic complexity of
a program graph for a module. Otherwise known as the code complexity number, Cyclomatic
complexity is a very useful attribute for the tester. The formula proposed by McCabe can be
applied to flow graphs where there are no disconnected components. Cyclomatic Complexity
is computed in one of the following ways:

1. The number of regions of the flow graph corresponds to the cyclomatic complexity.

2. Cyclomatic complexity, V, for a flow graph is defined as V = E - N + 2, where E is the
number of flow graph edges and N is the number of flow graph nodes.

3. Cyclomatic complexity, V, is also defined as V = P + 1, where P is the number of predicate
nodes contained in the flow graph. A predicate node is one that contains a condition; in
the figure above, the predicate node is circle 1. A worked example is given below.
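As a worked illustration (the fragment below is invented for this purpose and is not the figure from
the text), consider a small routine with one loop and one decision:

    # Hypothetical fragment used only to illustrate the three ways of computing V.
    def countdown(x):
        while x > 0:           # predicate node: loop condition
            if x % 2 == 0:     # predicate node: even/odd decision
                x = x - 2
            else:
                x = x - 1
        return x

Drawing the flow graph with one node each for the entry point, the while condition, the if
condition, the two assignments and the return gives N = 6 nodes and E = 7 edges, so
V = E - N + 2 = 7 - 6 + 2 = 3. There are P = 2 predicate nodes, so V = P + 1 = 3 as well, and the
flow graph encloses 3 regions. (The exact node count depends on how the graph is drawn, but V
comes out the same.) The basis set therefore contains 3 independent paths, for example: the loop
is skipped; the loop is entered and the even branch is taken; the loop is entered and the odd
branch is taken.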



Use of Cyclomatic Complexity
The code complexity value is useful to programmers and testers in a number of ways, which
are listed below:

• It provides an approximation of the number of test cases that must be developed for
branch coverage in a module of structured code.

• It provides an approximation of the testability of a module.

• The tester can use the value of V along with past project data to approximate the
testing time and resources required to test a software module.

• The complexity value V, along with the control flow graph, gives the tester another tool for
developing white box test cases using the concept of paths.

• It is also a measure of the number of so-called independent or distinct paths in the
graph.



Bug Life Cycle

Bug life cycles are similar to software development life cycles. At any time during the
software development life cycle errors can be made during the gathering of requirements,
requirements analysis, functional design, internal design, documentation planning, document
preparation, coding, unit testing, test planning, integration, testing, maintenance, updates,
retesting and phase-out.

The bug life cycle begins when a programmer, software developer, or architect makes a mistake
and creates an unintentional software defect, i.e. a bug, and it ends when the bug is fixed and
the bug is no longer in existence.



What should be done after a bug is found? When a bug is found, it needs to be
communicated and assigned to developers who can fix it. After the problem is resolved, fixes
should be retested.

Additionally, determinations should be made regarding requirements, software, hardware,
safety impact, etc., for regression testing to check that the fixes didn't create other problems
elsewhere. If a problem-tracking system is in place, it should encapsulate these
determinations.

A variety of commercial problem tracking/management software tools are available which
can be used by software test engineers. These tools, with the detailed input of software test
engineers, will give the team complete information so developers can understand the bug,
get an idea of its severity, reproduce it and fix it.



Test cases

What’s a Test Case?

A test case specifies the pretest state of the IUT (implementation under test) and its
environment, the test inputs or
conditions, and the expected result. The expected result specifies what the IUT should
produce from the test inputs. This specification includes messages generated by the IUT,
exceptions, returned values, and resultant state of the IUT and its environment. Test cases
may also specify initial and resulting conditions for other objects that constitute the IUT and
its environment. Some more definitions are listed below.

“Test case is a set of test inputs, execution conditions, and expected results developed for a
particular objective, such as to exercise a particular program path or to verify compliance
with a specific requirement”

“A test case is documentation specifying inputs, predicted results, and a set of execution
conditions for a test item” (as per the IEEE Standard 829-1983 definition)

“Test cases are the specific inputs that you’ll try and the procedures that you’ll follow when
you test the software”

What is Quality Writing?

Quality writing is clear, concise, well-thought-out, and grammatically and syntactically
correct. It conveys the author’s thoughts to the intended audience without additional,
unnecessary information. When someone reads a document that has been written well, he or
she is able to quickly and easily ascertain the author’s purpose for writing the document and
what key points or issues the author covered.

There are two important elements in quality writing— the way it is written and what is written.
There are particular benefits from each element on its own, but your writing will be most
effective if you pay attention to both.

Key Points to Remember

The ability to write well can provide a number of direct and indirect benefits to your testing
efforts. The following things are important to remember each time you write:
 better writing leads to better thinking
 always write in your own voice
 always keep your audience in mind



 use the tools available to you (such as spell-check, grammar-check, and reference
books)
 use the people available to you (peer reviews, technical writers' suggestions, and
marketing professionals’ suggestions)
 always review your work, particularly after several days (when possible)
 writing does get easier with practice, but it never gets easy
(There are no hard-and-fast rules in writing)

Use Cases
To write test cases, it is best if Use Cases are made available. A Use Case describes the
functional component in great detail. It forms the specification of the component. A Use
Case will contain information that will form the basis of the tests that need to be carried out for
the component, as it is the basis on which the software has been written. A Use Case
contains the following information (a brief example follows the list):

 Name. The name should implicitly express the user's intent or purpose of the
use case, such as "Enroll Student in Seminar."
 Identifier [Optional]. A unique identifier, such as "UC1701," that can be used
in other project artifacts (such as your class model) to refer to the use case.
 Description. Several sentences summarizing the use case.
 Actors [Optional]. The list of actors associated with the use case. Although this
information is contained in the use case itself, it helps to increase the
understandability of the use case when the diagram is unavailable.
 Status [Optional]. An indication of the status of the use case, typically one of:
work in progress, ready for review, passed review, or failed review.
 Frequency. How often this use case is invoked by the actor. This is often a free-
form answer such as once per each user login or once per month.
 Pre-conditions. A list of the conditions, if any, that must be met before a use
case may be invoked.
 Post-conditions. A list of the conditions, if any, that will be true after the use
case finishes successfully.
 Extended use case [Optional]. The use case that this use case extends (if
any). An extend association is a generalization relationship where an extending
use case continues the behavior of a base use case. The extending use case
accomplishes this by inserting additional action sequences into the base use-case
sequence. This is modeled using a use-case association with the <<extend>>
stereotype.



 Included use cases [Optional]. A list of the use cases this one includes. An
include association is a generalization relationship denoting the inclusion of the
behavior described by a use case within another use case. This is modeled using
a use-case association with the <<include>> stereotype. Also known as a uses
or a has-a relationship.
 Assumptions [Optional]. Any important assumptions about the domain that
you have made when writing this use case. At some point you should verify these
assumptions and evolve them either into decisions or simply into parts of the
basic course or alternate courses of action.
 Basic course of action. The main path of logic an actor follows through a use
case. Often referred to as the happy path or the main path because it describes
how the use case works when everything works as it normally should.
 Alternate courses of action. The infrequently used paths of logic in a use
case, paths that are the result of an alternate way to work, an exception, or an
error condition.
 Change history [Optional]. Details about when the use case was modified,
why, and by whom.
 Issues [Optional]. A list of issues or action items, if any, which are related to
the development of this use case.
 Decisions: A list of critical decisions, typically made by the SME pertaining to the
content of the use case. It is important to record these decisions to maintain a
knowledgebase that others can refer to.
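A brief, hypothetical example of a filled-in use case (the details are invented for illustration,
reusing the "Enroll Student in Seminar" name from the list above):

Name: Enroll Student in Seminar
Identifier: UC1701
Description: A student selects a seminar and is enrolled in it when the prerequisites are met and
a seat is available.
Actors: Student, Registrar
Frequency: Several times per student per term.
Pre-conditions: The student is logged in and is eligible to register.
Post-conditions: The student is recorded as enrolled and the seminar's seat count is reduced by one.
Basic course of action: The student searches for the seminar, the system verifies the prerequisites
and seat availability, and the enrolment is confirmed.
Alternate courses of action: Prerequisites not met; seminar already full.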

Where such use cases are not available, do not attempt to create them, as it is very difficult to
create accurate and complete Use Cases unless the design documents are available.

Writing Test Cases

To create test cases when Use Cases are not available, the best approach is to start with
creating the work-flow of the application and to end up with test cases. The sequence of
activities is as follows (a brief example test case follows this list):
a) Draw the work-flow diagram from the descriptive document that you created when
you navigated through the application.
b) For each component in the work-flow, create a list of the screens that are involved.
c) For each screen, create a list of fields that must be checked. For each field, create a
list of tests that need to be carried out.
d) Refer to the DB Schema and the DB Structures to test that all integrity checks are in
place and that all necessary fields appear on the screen and are active.



e) For each function described, create a test case that will yield only one resulted value
each. Where necessary, split the test case into more than one TC to ensure that the
total number of steps in each test case does not exceed eight (exceptions are
allowed). Ideally, “ONLY ONE RESULTED VALUE IS ALLOWED PER TEST CASE”.
f) The Test Cases must be ideally numbered in the following sequence:
999.999.999.999. The first 3 sets of numbers (3 digits each) refer to the product
center and the ID within the application. The fourth three digit number is a serial
number.
g) As far as possible, do not batch more than 50 test cases per module. Split the
module into two at a convenient place.
h) Ensure that no test case depends on the execution of a previous test case. Avoiding such
daisy-chaining means that even if one test case fails, the others can continue to run without
problems.
i) There should not be any ambiguity in the steps or in the resulted values.
j) Test cases must be written to arrive at positive results. When writing test cases for
applications where use cases are not available, negative-result test cases are also
needed.
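As an illustration of the numbering scheme from item (f), the hedged sketch below shows how a test case ID in the 999.999.999.999 format might be generated. The parameter names (productCenter, module, screen, serial) are assumptions made for this example only and are not defined in this document.

    // Hypothetical helper that formats a test case ID in the 999.999.999.999 scheme.
    // The meaning of the two middle fields is an assumption for illustration only.
    public class TestCaseId {
        static String format(int productCenter, int module, int screen, int serial) {
            return String.format("%03d.%03d.%03d.%03d", productCenter, module, screen, serial);
        }

        public static void main(String[] args) {
            // Prints 012.004.007.001
            System.out.println(format(12, 4, 7, 1));
        }
    }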

© 2006 Zeta Cyber Solutions (P) Limited Page 32


Software Inspection

Software inspection is a verification process, which is carried out in a formal, systematic and
organized manner. Verification techniques are used during all phases of software
development life cycle to ensure that the product generation is happening the "right way".
The various verification techniques other than formal inspection are reviews and
walkthroughs. These processes work directly on the work product under development. The
other verification techniques, which also ensure product quality, are management reviews
and audits. We will be focusing more on reviews, walkthroughs and formal inspections.

Steps Involved in Inspection


Inspection is a team review of a work product by peers of the author of the work product.
This is a very effective verification technique. The inspection process involves several steps,
which are outlined in the figure below.

[Figure: Inspection process flow — guided by the inspection policies and plan and by entry
criteria: checklist, invitation, preparation, inspection meeting, reporting results (to the defect
database and metric database), rework and follow-up, and exit.]

The figure given above shows the steps that are carried out during the inspection

© 2006 Zeta Cyber Solutions (P) Limited Page 33


process. The responsibility for initiating and carrying through the steps belongs to the
inspection leader or moderator, who is usually a member of the technical staff or the software
quality assurance team. The steps shown in the figure are discussed below:

 Inspection policies and plans


The inspection leader plans the inspection, sets the date, schedules the meeting, runs the
inspection meeting, appoints a recorder to record the results, and monitors the follow-up
period after the review.

 Entry criteria
The inspection process begins when the inspection pre-conditions are met as specified in the
inspection policies, procedures and plans. A personal pre-inspection should be performed
carefully by each team member, and errors, problems and issues should be noted by each
individual for each item on the list. When the actual meeting takes place, the document under
review is presented by a reader and is discussed as it is read. Attention is paid to issues
related to quality, adherence to standards, testability, traceability and satisfaction of the
user's requirements.

 Checklist
The checklist varies with the software artifact being inspected. It contains items that the
inspection participants should focus their attention on, check and evaluate. The inspection
participants address each item on the checklist. The recorder records any discrepancies,
misunderstandings, errors and ambiguities, or in general any problem associated with an item.
The completed checklist is part of the review summary document.

Reviewers use the organization's standard checklist for all work products to look for common
errors. Specific checklists are also prepared for individual work products to increase review
effectiveness before the actual review/inspection. The checklist is dynamic and improves
over time in the organization as the root causes of the common issues found during the
inspection process are analyzed.

 Invitation
The inspection leader invites each member participating in the meeting and distributes the
documents that are essential for the conduct of the meeting.

 Preparation
The key item that the inspection leader prepares is the checklist of items that serves as the
agenda for the review; it helps determine the areas of focus, the objectives, and the tactics
to be used.

© 2006 Zeta Cyber Solutions (P) Limited Page 34


 Inspection meeting
The inspection leader announces the inspection meeting and distributes the items to be
inspected, the checklist and any other auxiliary materials to the participants, usually a day or
two before the scheduled meeting. Participants must do their homework and study the items
and the checklist. The group as a whole addresses all the items in the checklist, and any
problems are recorded. The recorder documents all the findings and measurements.

 Reporting results
When the inspection meeting has been completed (all agenda items covered), the inspectors
are usually asked to sign a written document that is sometimes called a summary report.
The defect report is stored in the defect database, and inspection metrics are recorded in the
metric database.

 Rework and follow-up


The inspection process requires a formal follow-up. Rework sessions should be scheduled as
needed and monitored to ensure that all problems identified at the inspection meeting have
been addressed and resolved; this is the responsibility of the inspection leader. The inspection
process is complete only when all problems have been resolved and the item has been
re-inspected, either by the group or by the moderator.

© 2006 Zeta Cyber Solutions (P) Limited Page 35


Participants and their Roles in Inspection
The organization of the inspection team is critical for the success and efficiency of the
inspection process. The roles and responsibilities of the participants in the inspection process
are as follows:

Moderator/Inspection leader: planning; checking entry and exit criteria; co-ordinating the
meeting; preparing checklists; distributing the review documents; managing the review
meeting; issuing review reports; follow-up oversight.

Recorder/Scribe: recording and documenting problems, defects, findings and recommendations.

Author: owner of the document; presents the review item; performs any needed rework on the
reviewed item.

Inspectors: attend review-training sessions; prepare for reviews; participate in meetings;
evaluate the reviewed item; perform rework where appropriate.

© 2006 Zeta Cyber Solutions (P) Limited Page 36


Focus Areas of Software Inspection
The focus areas of an inspection depend on the work product under review. The following
section discusses the focus areas for the different work products.

Requirements specification document


In case of a requirements specification document, the following attributes can be the focus
areas:

o Usability: Effort required in learning, operating, preparing input for and interpreting the
output of the work product.

o Reliability: Extent to which the work product can be expected to perform its required
functions.

o Serviceability or maintainability: Effort required to locate and fix an error in the work
product.

o Traceability to a customer baseline: Ability to trace a work product or a software
component back to its requirements.

o Performance: Measure of efficiency of a work product.

o Stability: Ability of the work product to remain stable when a change occurs.

o Portability: Ability of the work product to adapt to new working environments.

o Understandability: Effort required to understand the work product.

o Modifiability: Ability of a software work product to sustain modifications without affecting
its other parts.

Functional design document


In this document we look at how successfully the user requirements were incorporated into the
system or the software product. An end-user point of view is adopted when reviewing the
functional design document. This also helps to uncover incompleteness in the document.

Internal design document inspection


By inspecting the internal design document, the correctness of the data structures, clarity of
the data flow, clarity of the algorithms/logic used, traceability of the document and clarity of
the interfaces between modules can be verified.

Code inspection
When software code is inspected, one can check the code's adherence to the design
specification and the constraints it is supposed to handle. We can also check its use of the language

© 2006 Zeta Cyber Solutions (P) Limited Page 37


and the correspondence of the terms in the code with the data dictionary.

Reviews
Review involves a meeting of a group of people, whose intention is to evaluate a software
related item. Reviews are a type of static testing technique that can be used to evaluate the
quality of software artifacts such as a requirements document, a test plan, a design document
or a code component.

A review group may consist of managers, clients, developers, testers and other personnel,
depending on the type of artifact under review.

Review objectives
The general goals for the reviewers are to

- Identify problem components or components in the software artifact that need
improvement.

- Identify components in the software artifact that do not need improvement.

- Identify specific errors or defects in the software artifact.

- Ensure that the artifact conforms to organizational standards.

Other review goals are informational, communicational and educational, whereby review
participants learn about the contents of the developing software artifacts, which helps them
understand the role of their own work and plan for future stages of development.

Benefits of reviewing
Reviews often represent project milestones and support the establishment of a baseline for a
software artifact. Review data can also influence test planning by helping test planners select
effective classes of tests, and may influence testing goals as well. In some cases, clients
attend the review meetings and give feedback to the development team, so reviews are also a
means of client communication. To summarize, the benefits of review programs are:

- The quality of the software is higher.

- Productivity (shorter rework time) is increased.

- There is closer adherence to project schedules (improved process control).

- Awareness of software quality issues can be increased.

- It provides opportunity to identify reusable software artifacts.

- Maintenance costs can be reduced.

© 2006 Zeta Cyber Solutions (P) Limited Page 38


- More effective test planning can be done.

- Customer satisfaction is higher.

- A more professional attitude is developed on the part of the development staff.

Reviews are characterized by their less formal nature compared to inspections. A review is
more like a peer group discussion, with no specific focus on defect identification, data
collection and analysis. Reviews do not require elaborate preparation.

Walkthroughs
Walkthroughs are a type of technical review where the producer of the review material
serves as the review leader and actually guides the progress of the review. They are
normally applied to the following documents:

- The design document

- The software code

- The data description

- The reference manuals

- The software specification documents

In the case of detailed design and code walkthroughs, test inputs may be selected, and the
review participants walk through the design or code with that set of inputs in a line-by-line
manner. The reader can compare this process to a manual execution of the code. If the
presenter gives a skilled presentation of the material, the walkthrough participants are able to
build a comprehensive mental model of the detailed design or code, evaluate its quality and
detect defects.

The primary intent of a walkthrough is to familiarize a team with a work product. Walkthroughs
are useful when one wants to introduce a complex work product to a team. For example, a
walkthrough of a complex design document can be given by the lead designer to the coding
team.

© 2006 Zeta Cyber Solutions (P) Limited Page 39


Advantages of walkthroughs
The following advantages of a walkthrough make it attractive for reviewing less critical
artifacts:

- One can eliminate the checklist and preparation step for a walkthrough.

- There is usually no mandatory requirement to produce a defect list for a walkthrough.

- There is also no formal requirement for a follow-up step.

Benefits of Inspection
The benefits of inspection can be classified as direct and indirect benefits which are as
discussed below:

Direct benefits:
- By using various inspection techniques we can improve the development productivity by
25% to 30%.

- There will be a considerable reduction in the development timescale by 35% to 50%.

- We can reduce the testing cost and in turn the time required for testing by 50% to
90%.

- It helps in moving toward a zero-defect state.

Indirect benefits:
The indirect benefits include the following:

Management benefits:
- The software inspection process gives management visibility into the relevant facts and
figures about the software engineering environment earlier in the software life cycle.

- It helps in improving the quality of the audit function.

Deadline benefits:
- Inspection gives early danger signals so that one can act appropriately and reduce the
impact on the project deadline.

Organization and people benefits:


- The quality of the work improves considerably and it becomes more maintainable when
inspection is carried out.

- Software professionals can also expect to live under less intense deadline pressure.

© 2006 Zeta Cyber Solutions (P) Limited Page 40


Unit Testing /Module Testing

The need for Levels of Testing


Execution based software testing, especially for large systems, is usually carried out at
different levels. Each of these may consist of one or more sublevels or phases. At each level
there are specific testing goals.

A unit test is a level of testing where a single component or a unit is tested. Unit testing is
also called module testing. A unit is the smallest testable software component; it can be a
function or a group of functions, a class or collection of classes, a screen or collection of
screens, or a report. It can be characterized in several ways. For example, a unit in a
typical procedure-oriented software system:

- Performs a single cohesive function

- Can be compiled separately

- Is a task in a work breakdown structure (from the manager's point of view)

- Contains code that can fit on a single page or screen

A unit is traditionally viewed as a function or procedure implemented in a procedural
programming language.

Objectives of Unit Testing


A principal goal is to detect functional and structural defects in the unit. In other words, unit
testing is done to ensure that each individual software unit functions according to its
specification. Since the software component being tested is relatively small in size and simple
in function, it is easier to design, execute, record and analyze the tests. If a defect is
revealed by the tests, it is easier to locate and repair since only one unit is under
consideration.

In many cases developers perform unit tests soon after the module is completed. Some
developers also perform an informal review of the unit.
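A minimal unit test might look like the hedged sketch below, written with JUnit (one of the frameworks listed later under Tools for Unit Testing). The Calculator class is a hypothetical unit invented for this example; the point is simply that one small unit is exercised against an expected result decided before execution.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Sketch of a unit (module) test: one small unit, one expected result.
    class CalculatorTest {

        // Hypothetical unit under test, included here so the example is self-contained.
        static class Calculator {
            int add(int a, int b) {
                return a + b;
            }
        }

        @Test
        void addReturnsTheSumOfTwoIntegers() {
            Calculator calculator = new Calculator();
            assertEquals(5, calculator.add(2, 3)); // expected result fixed in advance
        }
    }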

Faults detected by Unit testing

The categories of faults detected by unit testing are discussed below.

Interface and implementation issues

The module interface is tested to ensure that information flows properly into and out of the
program unit under test. Tests of data flow across the module interface are required
before any other test is initiated because, if data does not enter and exit properly, all other
tests are debatable.

© 2006 Zeta Cyber Solutions (P) Limited Page 41


Local data structures

The local data structures (groups of related data items) are examined to ensure that the data
stored temporarily maintains its integrity during all steps in an algorithm's execution.

Data handling errors

These include errors in handling data types and boundary conditions; the tests ensure that
data types such as characters and integers are properly checked, along with boundary value
checks.

Exception handling errors

The exceptions that arise while the program is running need to be anticipated and
handled so that the program recovers and continues to function properly. For example,
when a network disconnect occurs, the program should inform the user and
allow him to resume his activities when the network is restored.
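The hedged sketch below shows how a unit test can exercise an exception handling path. The AccountService class and its rule are assumptions made up for this example; the idea is simply that the test forces the error condition and checks that the unit fails in a controlled, recoverable way.

    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class ExceptionHandlingTest {

        // Hypothetical unit whose error handling is being tested.
        static class AccountService {
            int withdraw(int balance, int amount) {
                if (amount > balance) {
                    throw new IllegalArgumentException("amount exceeds balance");
                }
                return balance - amount;
            }
        }

        @Test
        void withdrawingMoreThanTheBalanceIsRejected() {
            AccountService service = new AccountService();
            // The error path is executed deliberately and its behavior is verified.
            assertThrows(IllegalArgumentException.class, () -> service.withdraw(100, 500));
        }
    }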

Memory related errors

These errors include reading or writing beyond array bounds and not releasing allocated
memory (which results in resource leaks at run time).

Display format errors

These include all errors related to displaying and formatting data; for example, a name getting
truncated and displayed incorrectly on the screen, or a positive number format being used for
negative values with string-formatted output functions such as the 'printf' function in C.

Independent paths

Unit tests need to unearth non-reachable code segments. All independent paths through the
control structure need to be exercised to ensure that all statements in the module are
executed at least once.
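As a hedged illustration, the two tests below together exercise both independent paths (the true and false branches) of a hypothetical discount rule; the PricingRule class and its ten-unit threshold are assumptions for this example only.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class DiscountPathsTest {

        // Hypothetical module with one decision point, i.e. two independent paths.
        static class PricingRule {
            double priceFor(int quantity, double unitPrice) {
                if (quantity >= 10) {
                    return quantity * unitPrice * 0.9; // path 1: bulk discount branch
                }
                return quantity * unitPrice;           // path 2: no-discount branch
            }
        }

        @Test
        void bulkOrderTakesTheDiscountPath() {
            assertEquals(90.0, new PricingRule().priceFor(10, 1.0), 0.0001);
        }

        @Test
        void smallOrderTakesTheNormalPath() {
            assertEquals(5.0, new PricingRule().priceFor(5, 1.0), 0.0001);
        }
    }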

Unit Test considerations

Some of the more common errors detected by unit tests are listed below.

- Misunderstood or incorrect arithmetic precedence occurring in the code or module.

- Mixed-mode operations, which can give rise to errors in the program.

- Incorrect initialization, which can also lead to issues.


© 2006 Zeta Cyber Solutions (P) Limited Page 42
- Precision inaccuracy is another issue that can be encountered during unit testing.

- The incorrect symbolic representation of an expression also causes defects.

Selective testing of execution paths is an essential task during unit tests. Test cases
should be designed to uncover erroneous computations, incorrect comparisons and improper
control flow. Path and loop testing are effective techniques for uncovering a broad array of
path errors. Tests should also uncover potential defects in:

- Comparison of different data types in the work product.

- Incorrect logical operators or precedence.

- Expectation of equality when precision error makes equality unlikely.

- Incorrect comparison of variables.

- Improper or non-existent loop termination.

- Failure to exit when divergent iteration is encountered.

- Improperly modified loop variables.

Be sure to design tests that execute every error-handling path; if we do not, the path may fail
when it is invoked. Good design dictates that error conditions be anticipated and that
error-handling paths reroute or cleanly terminate processing when an error occurs.

Among the potential errors that should be tested when error handling is evaluated are:

- The error description in the software work product is unintelligible.

- The error noted in the unit code does not correspond to the error encountered.

- The error condition causes system intervention prior to error handling.

- Exception-condition processing is incorrect in a module.

- An error description does not provide enough information to assist in locating the cause
of the error.

© 2006 Zeta Cyber Solutions (P) Limited Page 43


Unit Test Procedures
Unit testing is normally considered an adjunct to the coding step. After source-level code
has been developed, reviewed and verified for correspondence to the component-level design,
unit test case design begins.

[Figure: Unit test environment — a driver passes test cases to the module to be tested, which
calls stubs; the tests exercise the module's interface, local data structures, boundary
conditions, independent paths and error-handling paths, and the results are recorded.]

The unit test environment is illustrated in the figure above. Since the component is not a stand-
alone program, driver and/or stub software must be developed for each unit test. In most
applications the driver is nothing other than a "main program" that accepts test case data,
passes such data to the component to be tested, and prints the relevant results. Stubs serve
to replace modules that are subordinate to (called by) the component under test. A stub or
"dummy subprogram" uses the subordinate module's interface, may do minimal data
manipulation, prints verification of entry, and returns control to the module under test.

Drivers and stubs represent overhead, as they are not part of the final deliverable. A judicious
decision must be taken on whether to write many stubs and drivers or to postpone some of
the testing to the integration phase.
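The following sketch shows what a driver and a stub might look like. All of the names (ReportModule, DataSource, DataSourceStub, ReportModuleDriver) are assumptions invented for illustration: the driver is a plain main program that feeds test data to the module under test and prints the result, while the stub stands in for the subordinate module, prints a verification of entry and returns a canned value.

    // Interface of the subordinate module that the unit under test depends on.
    interface DataSource {
        String fetchRecord(int id);
    }

    // Stub ("dummy subprogram"): minimal data manipulation, prints verification
    // of entry, and returns control (and a canned value) to the module under test.
    class DataSourceStub implements DataSource {
        @Override
        public String fetchRecord(int id) {
            System.out.println("stub entered with id=" + id);
            return "RECORD-" + id;
        }
    }

    // Module under test (hypothetical).
    class ReportModule {
        private final DataSource source;

        ReportModule(DataSource source) {
            this.source = source;
        }

        String buildLine(int id) {
            return "Report: " + source.fetchRecord(id);
        }
    }

    // Driver: a simple "main program" that accepts test case data, passes it to
    // the component to be tested, and prints the relevant results.
    public class ReportModuleDriver {
        public static void main(String[] args) {
            ReportModule module = new ReportModule(new DataSourceStub());
            System.out.println(module.buildLine(42)); // expected: Report: RECORD-42
        }
    }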

Unit testing is simplified when a component with high cohesion is designed. When only one
function is addressed by a component, the number of test cases is reduced and errors can be
more easily predicted and uncovered.

© 2006 Zeta Cyber Solutions (P) Limited Page 44


Tools for Unit Testing
The following tools are available for unit testing; some of them are open source.

Coverage analyzers
These are tools that help in coverage analysis of the software work product. Some of
them are: JCover, PureCoverage, and McCabe Toolset

Test framework
Test frameworks provide one with a standard set of functionality that will take care of
common tasks specific to a domain, so that the tester can quickly develop application
specific test cases using facilities provided by the framework. Frameworks contain
specialized APIs, services, and tools. This will allow the tester to develop test tools
without having to know the underlying technologies, thereby saving time. Some of
the test frameworks available are JVerify, JUnit, CppUnit, Rational Realtime, Cantata,
and JTest

Static Analyzers
Static analyzers carry out analysis of the program without executing it. Static analysis
can be used to assess the quality of the software by making objective measurements of
attributes such as cyclomatic complexity, nesting levels, etc. Some of the commonly
used static analyzers are JStyle, QualityAnalyzer and JTest.

Memory leak detectors


The tools mentioned below detect whether any memory leak is affecting the
software program. Memory leaks happen when resources allocated dynamically
during execution are not freed after use, which results in the program consuming all the
available resources over a period of time, making the system unusable. Some of the
tools that can be used for memory leak detection are BoundsChecker, Purify and Insure++.
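Although the tools above mainly target C and C++ programs, the same defect pattern occurs in managed languages too. The hedged sketch below shows a typical leak: objects are added to a long-lived collection and never released, so memory use keeps growing. The LeakyCache class is an assumption made for illustration only.

    import java.util.ArrayList;
    import java.util.List;

    public class LeakyCache {
        // A static (long-lived) list holds references forever, so nothing added
        // to it can ever be reclaimed by the garbage collector.
        private static final List<byte[]> CACHE = new ArrayList<>();

        static void handleRequest() {
            CACHE.add(new byte[1024 * 1024]); // allocated per request, never removed
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10_000; i++) {
                handleRequest(); // memory use grows until the heap is exhausted
            }
        }
    }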

© 2006 Zeta Cyber Solutions (P) Limited Page 45


Integration testing
There is a risk that data can be lost at an interface. Similarly, one module can have an
inadvertent adverse effect on another. When several sub-functions are combined, the desired
major function may not be produced, and individually acceptable imprecision may be magnified
to unacceptable levels. Global data structures can also present problems. To uncover such
issues we need to carry out integration testing.
Integration testing is a systematic technique for constructing the program structure
while at the same time conducting tests to uncover errors associated with the interfaces.

Objective of Integration Testing


The objective of integration testing is to take unit-tested modules and build a program
structure that has been dictated by the design.
The two major goals of integration testing are
o To detect the defects that occur at the interfaces of the units.
o To assemble the individual unit into working subsystems and finally a complete
system that is ready for system test.
The interfaces are more adequately tested when each unit is finally connected to a full and
working implementation of the units it calls and those that call it. As a consequence of this
integration process, software subsystems are put together to form a complete system, and
integration testing is carried out.
With a few minor exceptions, integration test should be performed on units that have been
reviewed and successfully passed unit testing. A tester might believe erroneously that since a
unit has already been tested during unit test with a driver and stub, it does not need to be
retested in combination with other units during integration tests. However, a unit tested in
isolation may not have been tested adequately for the situation where it is combined with
other modules.
One unit at a time is integrated into a set of previously integrated modules which have
passed a set of integration tests. The interfaces and the functionality of the new unit in
combination with the previously integrated units are tested.
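A hedged sketch of such an integration test is shown below. Both classes (TaxCalculator and InvoiceService) are assumptions for illustration; the essential point is that, unlike in the unit test, no stub is used: the real collaborator is wired in, so the interface between the two units is what is actually exercised.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class InvoicePricingIntegrationTest {

        // Previously unit-tested collaborator (hypothetical).
        static class TaxCalculator {
            double taxFor(double amount) {
                return amount * 0.10;
            }
        }

        // Previously unit-tested module that depends on the collaborator (hypothetical).
        static class InvoiceService {
            private final TaxCalculator tax;

            InvoiceService(TaxCalculator tax) {
                this.tax = tax;
            }

            double total(double amount) {
                return amount + tax.taxFor(amount);
            }
        }

        @Test
        void invoiceTotalIncludesTaxFromTheRealCollaborator() {
            // No stub: the real TaxCalculator is wired in, so the interface is tested.
            InvoiceService service = new InvoiceService(new TaxCalculator());
            assertEquals(110.0, service.total(100.0), 0.0001);
        }
    }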

Advantage of Integration Testing


Integrating one unit at a time is helpful in several ways.
It keeps the number of new interfaces to be examined small, so tests can focus on these
interfaces only; experienced testers know that many defects occur at module
interfaces.
Massive failures that occur when multiple units are integrated at once can be avoided. For
developers, it allows the defect search and repair to be confined to a small, known number of
components and interfaces.

© 2006 Zeta Cyber Solutions (P) Limited Page 46


Independent subsystems can be integrated in parallel as long as required units are available.

© 2006 Zeta Cyber Solutions (P) Limited Page 47


Integration Strategies

Non Incremental Approach


This approach is also called the big bang approach. A representation of the modules in the big
bang approach is shown in the figure below.

M1

M2 M3 M4

M5 M6 M7

M8

In this approach, all the modules are combined in advance into an integrated system and the
entire program is tested as a whole. The result is usually chaos, because a large set of errors
is encountered. Correction is difficult because isolating the causes is complicated by the vast
expanse of the entire program. Once these errors are corrected, new defects pop up, and this
becomes a seemingly endless loop. To solve this problem we use another strategy,
called the incremental approach.

Incremental Approach
As the name suggests, here we assemble the units one by one in a systematic, incremental
manner. Each module is tested before it is integrated with another unit, and this continues
until all the units are integrated into one system. We can classify
the incremental approach as:
 Top down approach
 Bottom up approach
 Sandwich approach

© 2006 Zeta Cyber Solutions (P) Limited Page 48


Top down Approach
In this approach, modules are integrated by moving downward through the control hierarchy,
beginning with the main control module (the main program) and gradually coming down one
level at a time. Modules subordinate (and ultimately subordinate) to the main control module
are incorporated into the structure in either a depth-first or a breadth-first manner.
The figure below represents depth-first integration, which integrates all the modules on a
major control path of the structure. Selection of the major path is somewhat arbitrary and
depends on application-specific characteristics.
For example, selecting the left-hand path, components M1, M2 and M8 would be integrated
first (shown by the dotted line).

M1

M2 M3 M4

M5 M6 M7

M8

Advantage of top-down integration


The top-down integration strategy verifies the major control or decision points early in the test
process. In a well-factored program structure, decision-making occurs at the upper levels, so if
a major control problem exists it is recognized early.
If the depth-first approach is used, a complete function of the software may be implemented
and demonstrated early. Early demonstration of functional capability is a confidence builder.

Disadvantage of top-down Integration


Logistical problems can arise here. The most common occurs when processing at low levels in
the hierarchy is required to test the upper levels adequately; since stubs replace the low-level
modules at the beginning of top-down testing, no significant data can flow upward in the
program structure. The testers are then left with three choices:
1. Delay many tests until stubs are replaced with actual modules.
This delay causes us to lose some control over the correspondence between
specific tests and the incorporation of specific modules. This can lead to difficulty in

© 2006 Zeta Cyber Solutions (P) Limited Page 49


determining the cause of errors and tends to violate the highly constrained nature of
the top-down approach.

2. Develop stubs that perform limited functions that simulate the actual module.
Developing stubs to perform limited functions is workable, but can lead to
significant overhead as the stubs become more and more complex.
3. Integrate the software from the bottom of the hierarchy upward.
This approach, called the bottom-up approach, is discussed in the next section.

Bottom-UP INTEGRATION
As its name implies, in bottom-up integration, construction and testing begin with atomic
modules, i.e., components at the lowest levels in the program structure.
Since components are integrated from the bottom up, the processing required for the
components subordinate to a given level is always available, and the need for stubs is
eliminated.

Bottom-up integration is a technique in integration testing where modules are added to the
growing subsystem starting from the bottom or lower level. Bottom-up integration of modules
begins with testing the lowest-level modules. These modules do not call other modules.
Drivers are needed to test these modules. The next step is to integrate modules at the next
upper level of the structure. In bottom-up integration, after a module has
been tested, its driver is replaced by an actual module (the next one to be integrated). This
next module may also need a driver, and this will be the case until we reach
the highest level of the structure. Bottom-up integration follows the pattern illustrated in the
figure below:

[Figure: Bottom-up integration — components are combined into Cluster 1, Cluster 2 and
Cluster 3; each cluster is tested with a driver (D1, D2, D3, shown as dashed boxes); the
drivers are then removed and the clusters are interfaced to the higher-level modules Ma and
Mb, below the top-level module Mc.]

© 2006 Zeta Cyber Solutions (P) Limited Page 50


A cluster consists of classes or components that are related and that work together to support
a required functionality of the complete system. In the figure, components are combined to
form clusters 1, 2 and 3. Each of these clusters is tested using a driver, shown as a dashed
box. Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed
and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed
prior to integration with module Mb. This process is followed for the other modules as well.

Advantage of bottom-up integration


As integration moves upward, the need for separate test drivers lessens. In fact, if the top two
levels of the program structure are integrated top-down, the number of drivers can be
substantially reduced and the integration of clusters is greatly simplified; such an approach is
called sandwich integration.
Bottom-up integration has the advantage that the lower-level modules are usually well tested
early in the integration process, which is important if these modules are candidates for reuse.
Disadvantage of bottom-up integration
The major disadvantage of this strategy is that the program as an entity does not exist until
the last module has been added. This drawback is tempered by easier test case design
and the absence of stubs.
The upper-level modules in bottom-up integration are tested later in the integration
process and consequently may not be tested as well; if they contain critical decision-making
logic, this is risky.

Sandwich Integration
Sandwich integration is a strategy in which both top-down and bottom-up approaches are used
to integration test a program. This combined approach is useful when the program structure is
very complex and frequent use of drivers and stubs would otherwise be unavoidable; by using
the sandwich approach on some portions of the code, we can eliminate the excessive use of
drivers and stubs, thereby simplifying the integration testing process. It uses the top-down
approach for modules in the upper levels of the hierarchy and the bottom-up approach for the
lower levels.

© 2006 Zeta Cyber Solutions (P) Limited Page 51


The figure below shows the top-down and bottom-up approaches incorporated in a sandwich
model:

M1

M2 M3 M4

M5 M6 M7 M8

In the figure, the bottom-up approach is applied to modules M1, M2 and M3, while the
top-down approach is applied to M1, M4 and M8. The sandwich model has been widely used
since it combines both the top-down and bottom-up approaches.

Regression Testing
Regression tests are run every time a change is made to the software, to confirm that the
change has not broken or altered any other part of the program.
Regression testing is an important strategy for reducing side effects.
Each time a new module is added as part of integration testing, the software changes: new
data flow paths are established, new I/O may occur and new control logic is invoked. These
changes may cause problems with functions that previously worked flawlessly. In the context
of an integration test strategy, regression testing is the re-execution of some subset of
tests that have already been conducted, to ensure that changes have not propagated
unintended side effects.

In a broader context, successful tests result in the discovery of errors, and errors must be
corrected. Whenever software is corrected, some aspect of the software configuration (the
program, its documentation, etc.) is changed. Regression testing is the activity that helps to
ensure that changes do not introduce unintended behavior or additional errors.

As the integration testing proceeds the number of regression tests can also grow quite large.
Regression testing may be conducted manually or by using automated tools. It is impractical
and inefficient to re-execute every test for every program function once a change has occurred.
Therefore, the regression test suite should be designed to include only those tests that address
one or more classes of errors in each of the major program functions.

© 2006 Zeta Cyber Solutions (P) Limited Page 52


A regression test suite is the subset of tests to be executed. It contains three different classes
of test cases:
 A representative sample of tests that will exercise all software functions
 Additional tests that focus on software functions that are likely to be affected by the
change
 Tests that focus on the software components that have been changed
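One practical way to keep such a subset selectable is to tag test cases by class. The hedged sketch below uses JUnit 5 tags; the tag names and test names are assumptions made for illustration, and the actual selection of tags at run time is done through the build tool or test launcher configuration.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class RegressionSuiteExamples {

        @Tag("regression-sample")   // representative sample exercising core functions
        @Test
        void loginWithValidCredentialsSucceeds() {
            // ... test steps omitted in this sketch ...
        }

        @Tag("regression-affected") // functions likely to be affected by the change
        @Test
        void reportTotalsStillBalanceAfterTheChange() {
            // ... test steps omitted in this sketch ...
        }

        @Tag("regression-changed")  // components that have actually been changed
        @Test
        void changedComponentBehavesAsSpecified() {
            // ... test steps omitted in this sketch ...
        }
    }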

Smoke Testing

Smoke testing is an integration testing strategy that is commonly used when "shrink-
wrapped" software products are being developed. It is designed as a pacing mechanism for
time-critical projects, allowing the software team to assess its projects on a frequent basis.
The smoke testing approach encompasses the following activities:

 Software components that have been translated into code are integrated into a
"build." A build includes all data field, libraries, reusable modules and engineered
components that are required to implement one or more product components.
 A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover "show stopper" errors that
have the highest likelihood of throwing the software project behind schedule.
 The build is integrated with other builds and the entire product in its current form is
smoke tested daily. The integration approach may be top down or bottom up.

Treat the daily build as the heartbeat of the project. If there is no heartbeat then the project
is dead. The smoke test should exercise the entire system from end to end. It does not have
to be exhaustive but it should be capable of exposing major problems. The smoke test should
be thorough enough that if the build passes, you can assume that it is stable enough to be
tested more thoroughly.
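A daily smoke test can be as simple as the hedged sketch below: a handful of broad, end-to-end checks that only aim to expose show-stopper problems in the current build. The Application class and its operations are assumptions made for illustration.

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class DailyBuildSmokeTest {

        // Hypothetical facade over the assembled build.
        static class Application {
            boolean start()        { return true; }
            boolean openMainView() { return true; }
            boolean saveSample()   { return true; }
        }

        @Test
        void buildStartsAndCoreFunctionsRespond() {
            Application app = new Application();
            assertTrue(app.start(),        "build does not even start");
            assertTrue(app.openMainView(), "main view cannot be opened");
            assertTrue(app.saveSample(),   "basic save operation fails");
        }
    }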

Benefits of smoke testing


Smoke testing provides a number of benefits when it is applied on complex time critical
software engineering projects. Some of them are as discussed:
 Integration risk is minimized - Since smoke tests are conducted daily,
incompatibilities and other show-stopper errors are uncovered early, thereby reducing
the likelihood of serious schedule impact when errors are uncovered.
 The quality of the end product is improved - Smoke testing is likely to uncover
both functional errors as well as architectural and component level design defects,

© 2006 Zeta Cyber Solutions (P) Limited Page 53


which will in turn improve the quality of the product
 Error diagnosis and correction are simplified - Software that has just been
added to the build is the probable cause of a newly discovered defect. Smoke testing
makes such defect identification easier.
 Progress is easier to assess - With each build being added to the previous one the
progress can be tracked easily.

Validating Integration Testing

During integration testing it is important that, for every integrated module, the necessary
validation is carried out so that faults or defects are not injected and carried over to other
parts of the program. While integrating, we need to check several areas where issues
are likely to cluster.
As integration testing is conducted, the tester should identify issues in the following areas:
Critical modules:
Critical modules are those that have the following characteristics:
 Addresses several software requirements
 Has a high level control (resides relatively high in the program structure)
 A module which is complex or error prone (cyclomatic complexity can be used as
an indicator)
 Modules that have definite performance requirements
 Modules that use critical system resources like CPU, memory, I/O devices, etc.

Critical modules should be tested as early as possible. In addition regression tests should
focus on critical module function.
Interface integrity:
Internal and external interfaces are tested as each module or cluster is incorporated into the
structure.
Functional validity:
Tests designed to uncover functional errors associated with the system are conducted to
track the faults
Information content:
Tests designed to uncover errors associated with local or global data structures are
conducted.
Non-functional issues:
Performance - Tests designed to verify performance bounds established during
software design are conducted. Performance issues such as time requirements for a
transaction should also be subjected to tests.
Resource hogging issues - When modules are integrated, then one module may

© 2006 Zeta Cyber Solutions (P) Limited Page 54


interfere with the resources used by another module or might require resources for
its functions. This usage of additional resources like CPU, memory, external devices,
etc can cause interference and performance problems which may restrict the system
function. This contributes to resource hogging.
Scalability issues - The requirement of additional resources by various integrated
modules can cause scalability issues.

Integration Test Planning

Integration test must be planned. Planning can begin when high-level design is complete so
that the system architecture is defined. Other documents relevant to integration test planning
are the requirements document, the user manual, and usage scenarios. These documents contain
structure charts, state charts, data dictionaries, cross-reference tables, module interface
descriptions, data flow descriptions, message and event descriptions, all necessary to plan
integration tests.

The strategy for integration should be defined. For a procedure-oriented system, the order of
integration of the units should be defined; this depends on the strategy selected.
Consider the fact that the testing objectives are to assemble components into subsystems
and to demonstrate that the sub system functions properly with the integration test cases.
For object-oriented systems a working definition of a cluster or similar construct must be
described, and relevant test cases must be specified. In addition, testing resources and
schedules for integration should be included in the test plan.

As stated earlier in the section, one of the goals of integration testing is to build working
subsystems, and then combine these into the system as a whole. When planning for integration test the
planner selects subsystems to build based upon the requirements and user needs. Very often
subsystems selected for integration are prioritized. Those that represent key features, critical
features, and/or user-oriented functions may be given the highest priority. Developers may
want to show the clients that certain key subsystems have been assembled and are minimally
functional.

© 2006 Zeta Cyber Solutions (P) Limited Page 55


System Testing

Why System Testing?


System testing requires a large amount of resources. The goal is to ensure that the
system performs according to its requirements. The system test evaluates both functional
behavior as well as quality requirements such as performance, reliability, usability, security
etc. This phase of testing is especially useful for detecting external hardware and software
interface defects for example those causing race conditions, deadlocks, problems with
interrupts and exception handling, and ineffective memory usage. After system test, the
software will be turned over to the users for evaluation during acceptance testing, alpha/beta
tests. The organization would like to ensure that the quality of the software has been
measured and evaluated before users or clients are invited to use the system.

Who does System Testing?


Since system tests often require many resources, special laboratory equipment and long
testing times, a team of system testers usually performs them. The best scenario is for the
team to be part of an independent testing group. The team must do its best to find any weak
areas in the software; therefore, it is best that no developers are directly involved.

Types of System Tests


Once the system has been assembled from its component parts, many of the tests applied to it
go beyond the tests already run on the component parts and subsystems and are basically
non-functional in nature. In fact, when a system is subjected to system testing, we check the
non-functional requirements of that system.

Exploratory Testing

How does it differ from Scripted Testing?

The plainest definition of exploratory testing is test design and test execution at the same
time. This is the opposite of scripted testing (predefined test procedures, whether manual or
automated). Exploratory tests, unlike scripted tests, are not defined in advance and carried
out precisely according to plan. This may sound like a straightforward distinction, but in
practice it's murky. That's because "defined" is a spectrum. Even an otherwise elaborately
defined test procedure will leave many interesting details (such as how quickly to type on the
keyboard, or what kinds of behavior to recognize as a failure) to the discretion of the tester.
Likewise, even a free-form exploratory test session will involve tacit constraints or mandates
about what parts of the product to test, or what strategies to use. A good exploratory tester
will write down test ideas and use them in later test cycles. Such notes sometimes look a lot
like test scripts, even if they aren't. Exploratory testing is sometimes confused with "ad hoc"
testing. Ad hoc testing normally refers to a process of improvised, impromptu bug searching.

© 2006 Zeta Cyber Solutions (P) Limited Page 56


By definition, anyone can do ad hoc testing. The term "exploratory testing" was coined by Cem
Kaner in Testing Computer Software.

Balancing Exploratory Testing with Scripted Testing

To the extent that the next test we do is influenced by the result of the last test we did, we
are doing exploratory testing. We become more exploratory when we can't tell what tests
should be run, in advance of the test cycle, or when we haven't yet had the opportunity to
create those tests. If we are running scripted tests, and new information comes to light that
suggests a better test strategy, we may switch to an exploratory mode (as in the case of
discovering a new failure that requires investigation). Conversely, we take a more scripted
approach when there is little uncertainty about how we want to test, new tests are relatively
unimportant, the need for efficiency and reliability in executing those tests is worth the effort
of scripting, and when we are prepared to pay the cost of documenting and maintaining
tests. The results of exploratory testing aren't necessarily radically different than those of
scripted testing, and the two approaches to testing are fully compatible.

Why Exploratory Testing?

Recurring themes in the management of an effective exploratory test cycle are the tester, the
test strategy, test reporting and the test mission. The scripted approach to testing attempts to
mechanize the test process by taking test ideas out of a test designer's head and putting
them on paper. There's a lot of value in that way of testing. But exploratory testers take the
view that writing down test scripts and following them tends to disrupt the intellectual
processes that make testers able to find important problems quickly. The more we can make
testing intellectually rich and fluid, the more likely we will hit upon the right tests at the right
time. That's where the power of exploratory testing comes in: the richness of this process is
only limited by the breadth and depth of our imagination and our emerging insights into the
nature of the product under test.

Scripting has its place. We can imagine testing situations where efficiency and repeatability
are so important that we should script or automate them. For example, in the case where a
test platform is only intermittently available, such as a client-server project where there are
only a few configured servers available and they must be shared by testing and development.
The logistics of such a situation may dictate that we script tests carefully in advance to get
the most out of every second of limited test execution time. Exploratory testing is especially
useful in complex testing situations, when little is known about the product, or as part of
preparing a set of scripted tests. The basic rule is this: exploratory testing is called for any
time the next test you should perform is not obvious, or when you want to go beyond the
obvious.

© 2006 Zeta Cyber Solutions (P) Limited Page 57


Testing Process

The diagram above outlines the Test Process approach that will be followed.

a. Organize Project involves creating a System Test Plan, Schedule & Test Approach, and
requesting/assigning resources.
b. Design/Build System Test involves identifying Test Cycles, Test Cases, Entrance & Exit
Criteria, Expected Results, etc. In general, test conditions/expected results will be
identified by the Test Team in conjunction with the Project Business Analyst or Business
Expert. The Test Team will then identify Test Cases and the Data required. The Test
conditions are derived from the Business Design and the Transaction Requirements
Documents
c. Design/Build Test Procedures includes setting up procedures such as Error
Management systems and Status reporting, and setting up the data tables for the
Automated Testing Tool.
d. Build Test Environment includes requesting/building hardware, software and data set-
ups.
e. Execute Project Integration Test
f. Execute Operations Acceptance Test
g. Signoff - Signoff happens when all pre-defined exit criteria have been achieved.

© 2006 Zeta Cyber Solutions (P) Limited Page 58


Fundamental Test Process
Testing must be planned. Good testing requires thinking out an overall approach, designing
tests and establishing expected results for each of the test cases we choose. The
fundamental test process comprises planning, specification, execution, recording and
checking for completion. The process consists of five main stages, as the figure below
depicts.

[Figure: Test phases — Test Planning, Test Design, Test Execution, Test Recording, and
Checking for Test Completion.]

© 2006 Zeta Cyber Solutions (P) Limited Page 59


Test Planning
Test planning involves producing a document that describes your overall approach and test
objectives. Completion or exit criteria must be specified so that you know when testing (at
any stage) is complete. Plan your tests.

A tactical test plan must be developed to describe when and how testing will occur. The test
plan should provide background information on the software being tested, on the test
objectives and risks, as well as on the business functions to be tested and the specific tests to
be performed.

Entrance Criteria
 Hardware has been acquired and installed
 Software has been acquired and installed
 Test cases and test data have been identified and are available
 Updated test cases
 Complete functional testing database
 Automated test cases (scripts) for regression testing
 The Software Requirements Specification (SRS) and Test Plan have been signed off
 Technical specifications for the system are available
 Acceptance tests have been completed, with a pass rate of not less than 80%

Resumption Criteria
In the event that system testing is suspended resumption criteria will be specified and testing
will not re-commence until the software reaches these criteria.

Test Design
This involves designing test conditions and test cases using many of the automated tools out
in the market today. You produce a document that describes the tests that you will carry out.
It is important to determine the expected results prior to test execution.

Test Execution
This involves actually running the specified tests on a computer system either manually or by
using an automated Test tool like WinRunner, Rational Robot or SilkTest.

Test Recording
This involves keeping good records of the test activities that you have carried out. Versions of
the software you have tested and the test design are recorded along with the actual results
of each test.

© 2006 Zeta Cyber Solutions (P) Limited Page 60


Checking for Test Completion
This involves looking at the previously specified test completion criteria to see if they have
been met. If not, some tests may need to be rerun and, in some instances, it may be
appropriate to design some new test cases to meet a particular coverage target.

Exit Criteria
 The system has the ability to recover gracefully after failure.
 All test cases have been executed successfully.
 The system meets all of the requirements described in the Software Requirements
Specification.
 All the first and second severity bugs found during QA testing have been resolved.
 At least all high exposure minor and insignificant bugs have been fixed and a
resolution has been identified and a plan is in place to address the remaining bugs.
 100% of test cases have been executed at least once.
 All high-risk test cases (high-risk functions) have been successful executed.
 The system can be successfully installed in the pre-live and production environments
and has the ability to recover gracefully from installation problems.
 The system must be successfully configured and administered from the GUI.
 The system must co-exist with other production applications software.
 The system must successfully migrate from the prior version.
 The system must meet all stated security requirements. The system and data
resources must be protected against accidental and/or intentional modifications or
misuse.
 Should errors/bugs be encountered, fixes will be applied and included in a scheduled
release cycle dependent upon the priority level of the error. This process will continue
until an acceptable level of stability and test coverage is achieved.
 The system displays maintainability.

© 2006 Zeta Cyber Solutions (P) Limited Page 61


What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and 'how'
of product validation. It should be thorough enough to be useful but not so thorough that no
one outside the test group will read it. The following are some of the items that might be
included in a test plan, depending on the particular project:

Test plan identifier

Some type of unique company-generated number to identify this test plan, its level and the
level of software that it is related to. Preferably the test plan level will be the same as the
related software level. The number may also identify whether the test plan is a Master plan, a
Level plan, an integration plan or whichever plan level it represents. This is to assist in
coordinating software and testware versions within configuration management.

1. References
2. Introduction
3. Test Items
4. Software Risk Issues
5. Features to be Tested
6. Features not to be Tested
7. Approach
8. Item Pass/Fail Criteria
9. Suspension Criteria and Resumption Requirements
10. Test Deliverables
11. Remaining Test Tasks
12. Environmental Needs
13. Staffing and Training Needs
14. Responsibilities
15. Schedule
16. Planning Risks and Contingencies
17. Approvals
18. Glossary

© 2006 Zeta Cyber Solutions (P) Limited Page 62


References

List all documents that support this test plan. Refer to the actual version/release number of
the document as stored in the configuration management system. Do not duplicate the text
from other documents as this will reduce the viability of this document and increase the
maintenance effort. Documents that can be referenced include:

 Project Plan
 Requirements specifications
 High Level design document
 Detail design document
 Development and Test process standards
 Methodology guidelines and examples
 Corporate standards and guidelines

Introduction

State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is
essentially the executive summary part of the plan.

You may want to include any references to other plans, documents or items that contain
information relevant to this project/process. If preferable, you can create a references section
to contain all reference documents.

Identify the Scope of the plan in relation to the Software Project plan that it relates to. Other
items may include, resource and budget constraints, scope of the testing effort, how testing
relates to other evaluation activities (Analysis & Reviews), and possibly the process to be
used for change control and communication and coordination of key activities.

As this is the "Executive Summary" keep information brief and to the point.

Test items (functions)

These are things you intend to test within the scope of this test plan. Essentially, something
you will test, a list of what is to be tested. This can be developed from the software
application inventories as well as other sources of documentation and information.

This can be controlled on a local Configuration Management (CM) process if you have one.
This information includes version numbers, configuration requirements where needed,
(especially if multiple versions of the product are supported). It may also include key delivery
schedule issues for critical elements.

© 2006 Zeta Cyber Solutions (P) Limited Page 63


Remember, what you are testing is what you intend to deliver to the Client.

This section can be oriented to the level of the test plan. For higher levels it may be by
application or functional area, for lower levels it may be by program, unit, module or build.

Software risk issues

Identify what software is to be tested and what the critical areas are, such as:

1. Delivery of a third party product.


2. New version of interfacing software
3. Ability to use and understand a new package/tool, etc.
4. Extremely complex functions
5. Modifications to components with a past history of failure
6. Poorly documented modules or change requests

There are some inherent software risks such as complexity; these need to be identified.

1. Safety
2. Multiple interfaces
3. Impacts on Client
4. Government regulations and rules

Another key area of risk is a misunderstanding of the original requirements. This can occur at
the management, user and developer levels. Be aware of vague or unclear requirements and
requirements that cannot be tested.

The past history of defects (bugs) discovered during Unit testing will help identify potential
areas within the software that are risky. If the unit testing discovered a large number of
defects or a tendency towards defects in a particular area of the software, this is an
indication of potential future problems. It is the nature of defects to cluster and clump
together. If it was defect ridden earlier, it will most likely continue to be defect prone.

One good approach to define where the risks are is to have several brainstorming sessions.

Features to be tested

This is a listing of what is to be tested from the user's viewpoint of what the system does.
This is not a technical description of the software, but a USERS view of the functions.



Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High,
Medium and Low. These types of levels are understandable to a User. You should be
prepared to discuss why a particular level was chosen.

Features not to be tested

This is a listing of what is not to be tested, from both the user's viewpoint of what the system does and a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.

Identify why the feature is not to be tested; there can be any number of reasons:

 Not to be included in this release of the Software.


 Low risk, has been used before and is considered stable.
 Will be released but not tested or documented as a functional part of the release of
this version of the software.

Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the level of the
plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels
of plans. Overall rules and processes should be identified.

 Are any special tools to be used and what are they?


 Will the tool require special training?
 What metrics will be collected?
 Which level is each metric to be collected at?
 How is Configuration Management to be handled?
 How many different configurations will be tested?
 Hardware
 Software
 Combinations of HW, SW and other vendor packages
 What levels of regression testing will be done and how much at each test level?
 Will regression testing be based on severity of defects detected?
 How will elements in the requirements and design that do not make sense or are untestable be handled?

If this is a master test plan the overall project testing approach and coverage requirements
must also be identified.

Specify if there are special requirements for the testing.



 Only the full component will be tested.
 A specified segment or grouping of features/components must be tested together.

Other information that may be useful in setting the approach includes:

 MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available (a short worked example follows this list).
 SRE, Software Reliability Engineering - if this methodology is in use and if the
information is available.
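
As an illustrative aside (figures invented for the example): MTBF is the total operating time divided by the number of failures observed, so 1,000 hours of test operation with 4 failures gives an MTBF of 1,000 / 4 = 250 hours.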

Item pass/fail criteria

What are the Completion criteria for this plan? This is a critical aspect of any test plan and
should be appropriate to the level of the plan.

 At the Unit test level this could be items such as:


o All test cases completed.
o A specified percentage of cases completed with a percentage containing
some number of minor defects.
o Code coverage tool indicates all code covered.
 At the Master test plan level this could be items such as:
o All lower level plans completed.
o A specified number of plans completed without errors and a percentage with
minor defects.

These criteria may be set at the individual test case level, for a unit-level plan, or as general functional requirements for higher-level plans.
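
As a minimal sketch of how unit-test-level completion criteria like those above might be checked automatically (the 95% pass rate and the "no open major defects" rule are illustrative thresholds chosen for this example, not values mandated by the template):

def unit_test_exit_ok(total_cases, passed_cases, open_major_defects, min_pass_rate=0.95):
    """Illustrative completion check: enough cases passed and no major defects open."""
    if total_cases == 0:
        return False  # nothing has been run yet, so the criteria cannot be met
    return (passed_cases / total_cases) >= min_pass_rate and open_major_defects == 0

# Example: 190 of 200 cases passed (95%) and no major defects remain open.
print(unit_test_exit_ok(200, 190, 0))  # True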

What is the number and severity of defects located?

 Is it possible to compare this to the total number of defects? This may be impossible,
as some defects are never detected.
o A defect is something that may cause a failure, and may be acceptable to
leave in the application.
o A failure is the result of a defect as seen by the User, the system crashes,
etc.

Suspension criteria and resumption requirements

Know when to pause in a series of tests.



 If the number or type of defects reaches a point where the follow-on testing has no value, it makes no sense to continue the test; you are just wasting resources.

Specify what constitutes stoppage for a test or series of tests and what is the acceptable level
of defects that will allow the testing to proceed past the defects.

Testing after a truly fatal error will generate conditions that may be identified as defects but
are in fact ghost errors caused by the earlier defects that were ignored.

Test deliverables

What is to be delivered as part of this plan?

 Test plan document.


 Test cases.
 Test design specifications.
 Tools and their outputs.
 Simulators.
 Static and dynamic generators.
 Error logs and execution logs.
 Problem reports and corrective actions.

One thing that is not a test deliverable is the software itself; that is listed under test items and is delivered by development.

Remaining test tasks

If this is a multi-phase process or if the application is to be released in increments, there may be parts of the application that this plan does not address. These areas need to be identified
to avoid any confusion should defects be reported back on those future functions. This will
also allow the users and testers to avoid incomplete functions and prevent waste of resources
chasing non-defects.

If the project is being developed as a multi-party process, this plan may only cover a portion
of the total functions/features. This status needs to be identified so that those other areas
have plans developed for them and to avoid wasting resources tracking defects that do not
relate to this plan.

When a third party is developing the software, this section may contain descriptions of those
test tasks belonging to both the internal groups and the external groups.



Environmental needs

Are there any special requirements for this test plan, such as:

 Special hardware such as simulators, static generators etc.


 How will test data be provided? Are there special collection requirements or specific
ranges of data that must be provided?
 How much testing will be done on each component of a multi-part feature?
 Special power requirements.
 Specific versions of other supporting software.
 Restricted use of the system during testing.

Staffing and training needs

Training on the application/system

Training for any test tools to be used

What is to be tested and who is responsible for the testing and training.

Responsibilities

Who is in charge?

This issue includes all areas of the plan. Here are some examples:

 Setting risks.
 Selecting features to be tested and not tested.
 Setting overall strategy for this level of plan.
 Ensuring all required elements are in place for testing.
 Providing for resolution of scheduling conflicts, especially, if testing is done on the
production system.
 Who provides the required training?
 Who makes the critical go/no go decisions for items not covered in the test plans?

Schedule

A schedule should be based on realistic and validated estimates. If the estimates for the development of the application are inaccurate, the entire project plan will slip, and since testing is part of the overall project plan, the test schedule will slip with it.

 As we all know, the first area of a project plan to get cut when it comes to crunch
time at the end of a project is the testing. It usually comes down to the decision,



‘Let’s put something out even if it does not really work all that well’. And, as we all
know, this is usually the worst possible decision.

How slippage in the schedule will be handled should also be addressed here.

 If the users know in advance that a slippage in the development will cause a slippage
in the test and the overall delivery of the system, they just may be a little more
tolerant, if they know it’s in their interest to get a better tested application.
 By spelling out the effects here you have a chance to discuss them in advance of
their actual occurrence. You may even get the users to agree to a few defects in
advance, if the schedule slips.

At this point, all relevant milestones should be identified with their relationship to the
development process identified. This will also help in identifying and tracking potential
slippage in the schedule caused by the test process.

It is always best to tie all test dates directly to their related development activity dates. This
prevents the test team from being perceived as the cause of a delay. For example, if system
testing is to begin after delivery of the final build, then system testing begins the day after
delivery. If the delivery is late, system testing starts from the day of delivery, not on a
specific date. This is called dependent or relative dating.
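
A minimal sketch of dependent (relative) dating, assuming a hypothetical final-build delivery date; the test milestones are computed from the delivery date rather than fixed on the calendar:

from datetime import date, timedelta

final_build_delivery = date(2006, 3, 1)                       # illustrative milestone from the project plan
system_test_start = final_build_delivery + timedelta(days=1)  # begins the day after delivery
system_test_end = system_test_start + timedelta(days=14)      # planned duration is unchanged

print(system_test_start, system_test_end)
# If delivery slips by a week, only final_build_delivery changes; the test dates follow it.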

Planning risks and contingencies

What are the overall risks to the project with an emphasis on the testing process?

 Lack of personnel resources when testing is to begin.


 Lack of availability of required hardware, software, data or tools.
 Late delivery of the software, hardware or tools.
 Delays in training on the application and/or tools.
 Changes to the original requirements or designs.

Specify what will be done for various events, for example:

Requirements definition will be complete by January 1, 20XX, and, if the requirements change
after that date, the following actions will be taken:

 The test schedule and development schedule will move out an appropriate number of
days. This rarely occurs, as most projects tend to have fixed delivery dates.
 The number of tests performed will be reduced.
 The number of acceptable defects will be increased.



 These two items could lower the overall quality of the delivered product.
 Resources will be added to the test team.
 The test team will work overtime (this could affect team morale).
 The scope of the plan may be changed.
 There may be some optimization of resources. This should be avoided, if possible, for
obvious reasons.

Management is usually reluctant to accept scenarios such as the one above even though they
have seen it happen in the past.

The important thing to remember is that, if you do nothing at all, the usual result is that
testing is cut back or omitted completely, neither of which should be an acceptable option.

Approvals

Who can approve the process as complete and allow the project to proceed to the next level
(depending on the level of the plan)?

At the master test plan level, this may be all involved parties.

When determining the approval process, keep in mind who the audience is:

 The audience for a unit test level plan is different than that of an integration, system
or master level plan.
 The levels and type of knowledge at the various levels will be different as well.
 Programmers are very technical but may not have a clear understanding of the
overall business process driving the project.
 Users may have varying levels of business acumen and very little technical skills.
 Always be wary of users who claim high levels of technical skills and programmers
that claim to fully understand the business process. These types of individuals can
cause more harm than good if they do not have the skills they believe they possess.

Glossary

Used to define terms and acronyms used in the document, and testing in general, to
eliminate confusion and promote consistent communications.



What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere. If a
problem-tracking system is in place, it should encapsulate these processes. A variety of
commercial problem-tracking/management software tools are available.

The following are items to consider in the bug tracking process:

 Complete information such that developers can understand the bug, get an idea of its
severity, and reproduce it if necessary.
 Bug identifier (number, ID, etc.)
 Current bug status (e.g., 'Released for Retest', 'new', etc.)
 The application name or identifier and version
 The function, module, feature, object, screen, etc. where the bug occurred
 Environment specifics, system, platform, relevant hardware specifics
 Test case name/number/identifier
 One-line bug description
 Full bug description
 Description of steps needed to reproduce the bug if not covered by a test case or if
the developer doesn't have easy access to the test case/test script/test tool
 Names and/or descriptions of file/data/messages/etc. used in test
 File excerpts/error messages/log file excerpts/screen shots/test tool logs that would
be helpful in finding the cause of the problem
 Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
 Was the bug reproducible?
 Tester name
 Test date
 Bug reporting date
 Name of developer/group/organization the problem is assigned to
 Description of problem cause
 Description of fix
 Code section/file/module/class/method that was fixed
 Date of fix
 Application version that contains the fix
 Tester responsible for retest
 Retest date



 Retest results
 Regression testing requirements
 Tester responsible for regression tests
 Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
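
A minimal sketch of how the items above might be captured as a single bug record (the field names are illustrative; a commercial problem-tracking tool defines its own schema):

from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    bug_id: str                     # bug identifier (number, ID, etc.)
    status: str                     # e.g. 'new', 'assigned', 'released for retest'
    application: str                # application name or identifier and version
    module: str                     # function, module, feature or screen where the bug occurred
    environment: str                # system, platform, relevant hardware specifics
    summary: str                    # one-line bug description
    description: str                # full description and steps needed to reproduce
    severity: int                   # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    reported_by: str
    reported_on: str
    assigned_to: str = ""
    fix_description: str = ""
    fix_version: str = ""
    retest_result: str = ""
    regression_results: List[str] = field(default_factory=list)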

What is 'configuration management'?

Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so complex, and
run in such an interdependent environment, that complete testing can never be done.
Common factors in deciding when to stop are:

 Deadlines (release deadlines, testing deadlines, etc.)


 Test cases completed with certain percentage passed
 Test budget depleted
 Coverage of code/functionality/requirements reaches a specified point
 Bug rate falls below a certain level
 Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused.


Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. This requires judgment skills, common
sense, and experience. Considerations can include the following (a small prioritization sketch follows the list):

 Which functionality is most important to the project's intended purpose?


 Which functionality is most visible to the user?
 Which functionality has the largest safety impact?
 Which functionality has the largest financial impact on users?



 Which aspects of the application are most important to the customer?
 Which aspects of the application can be tested early in the development cycle?
 Which parts of the code are most complex, and thus most subject to errors?
 Which parts of the application were developed in rush or panic mode?
 Which aspects of similar/related previous projects caused problems?
 Which aspects of similar/related previous projects had large maintenance expenses?
 Which parts of the requirements and design are unclear or poorly thought out?
 What do the developers think are the highest-risk aspects of the application?
 What kinds of problems would cause the worst publicity?
 What kinds of problems would cause the most customer service complaints?
 What kinds of tests could easily cover multiple functionalities?
 Which tests will have the best high-risk-coverage to time-required ratio?
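
One simple way to turn the answers to such questions into a test-focus list is to rate each area for impact and likelihood of failure and rank by the product of the two. A minimal sketch (the feature names and ratings are invented for the example):

# (impact, likelihood) each rated 1 (low) to 3 (high)
features = {
    "online payment": (3, 3),
    "report export": (2, 1),
    "user preferences": (1, 2),
}

ranked = sorted(features.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (impact, likelihood) in ranked:
    print(f"{name}: risk score {impact * likelihood}")
# Testing effort is then focused on the highest-scoring areas first.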



Test Automation

Scope of Automation
Software must be tested to have confidence that it will work as it should in its intended
environment. Software testing needs to be effective at finding any defects which are there,
but it should also be efficient, performing the tests as quickly as possible.
Automating software testing can significantly reduce the effort required for adequate testing
or significantly increase the testing that can be done in limited time. Tests can be run in
minutes that would take hours to run manually. Savings as high as 80% of manual testing
effort have been achieved using automation.
At first glance it seems that automating testing is an easy task. Just buy one of the popular test execution tools, record the manual tests, and play them back whenever you want. Unfortunately, it doesn't work like that in practice. Just as there is more to software
design than knowing a programming language, there is more to automating testing than
knowing a testing tool.

Tool Support for Life-Cycle testing


The figure below summarizes the tool support available for testing at every stage of the software development life cycle. The different types of tools and their positions are as shown:

[Figure: tool support across the life cycle. Test design tools and management tools apply across all stages; performance simulator tools support the requirement specification and acceptance test stages; test execution and comparison tools and dynamic analysis tools support system and integration testing; static analysis tools, debugging tools and coverage tools support the code and unit test stages.]



Test design tools:
They help to derive test inputs or test data.

Management tools:
These include tools that assist in test planning, keeping track of what tests have been run,
etc.

Static tools:
They analyze code without execution.

Coverage tools:
They help in assessing how much of the software under test has been exercised by a set of
tests.

Debugging tools:
These are not essentially testing tools, as debugging is not a part of testing. However, they are used in testing when trying to isolate a low-level defect. They are of more use to developers.

Dynamic analysis tools:


They assess the system while the software is running. They are of great use for detecting memory leaks. A memory leak occurs when the program does not release blocks of memory when it should, so the block has leaked out of the pool of memory blocks available to the program.
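
As an illustration of the kind of check a dynamic analysis tool performs, Python's standard tracemalloc module can compare memory snapshots taken while the program runs; the leaky function below is a contrived example, not a real tool's workload:

import tracemalloc

_cache = []

def leaky_operation():
    # Contrived leak: blocks are appended but never released.
    _cache.append(bytearray(100 * 1024))

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(100):
    leaky_operation()
after = tracemalloc.take_snapshot()

for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)  # shows where the retained memory was allocated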

Performance Simulator tools:


They are tools that enable parts of a system to be tested in ways which would not be
otherwise possible in the real world.



Test Automation Framework

[Figure: test automation framework. A driver coordinates the Setup, Execute, Oracle and Cleanup phases, using stubs, simulators and emulators in place of external components, and working with the test data, the test results and the test log.]

The various components of the test automation framework are described below and sketched in code after the list:

 Driver
Drivers are tools used to control and operate the software being tested.

 Stubs
Stubs are essentially the opposite of drivers and they receive or respond to the data that the
software sends. Stubs are frequently used when software needs to communicate with
external devices.

 Simulator
Simulators are used in place of actual systems and behave like the actual system. Simulators are an excellent aid to test automation when the actual system that the software interfaces with is not available.

 Emulator
An emulator is a device that is a plug-in replacement for the real device. A PC acting as a printer, understanding the printer codes and responding to the software as though it were a printer, is an emulator. The difference between an emulator and a stub is that the stub also provides a means for a tester to view and interpret the data sent to it.



 Setup
The tasks required to be done before a test or a set of tests can be executed. Setups may be built for a single test, a test set or a test suite as per the need.

 Execute
During this phase, the test cases are run.

 Oracle
Test oracle is a mechanism to produce the predicted outcomes to compare with the actual
outcomes of the software under test.

 Test data
Data that exists (for example, in a database) before a test is executed and that is used or affected by the software under test.

 Cleanup
This task is done after a test or set of tests has finished or stopped, in order to leave the system in a clean state for the next test or set of tests. It is particularly important where a test has failed to complete.
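
Taken together, the components above can be expressed as a minimal sketch of a driver that walks each test through setup, execute, oracle and cleanup (the class and function names are illustrative, not any particular tool's API):

class AutomatedTest:
    def setup(self):
        """Prepare test data and the environment before execution."""

    def execute(self):
        """Run the test against the software under test and return the actual outcome."""
        return "actual outcome"

    def oracle(self, actual):
        """Compare the actual outcome with the predicted (expected) outcome."""
        return actual == "actual outcome"

    def cleanup(self):
        """Leave the system in a clean state for the next test."""

def run(test, log):
    test.setup()
    try:
        actual = test.execute()
        log.append("PASS" if test.oracle(actual) else "FAIL")
    finally:
        test.cleanup()  # runs even when the test fails to complete

results = []
run(AutomatedTest(), results)
print(results)  # ['PASS']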



Benefits of Automation

Test automation can enable some testing tasks to be performed far more efficiently than could ever be done by testing manually. Some of the benefits are included below.

 Run existing tests on a new version of the Program


This is the most obvious task, particularly in an environment where many programs are
frequently modified. Given that the tests already exist and have been automated to run on an earlier version of the program, it should be possible to select the tests and initiate their execution with just a few minutes of manual effort.

 Run more tests more often


A clear benefit of automation is the ability to run more tests in less time and, therefore, to make it possible to run them more often. This will lead to greater confidence in the system.

 Perform tests which would be difficult or impossible to do manually


When testing manually, expected outcomes typically include the obvious things that are visible to the tester. However, there are attributes that should be tested which are not easy to verify manually. For example, a GUI object may trigger some event that does not produce an immediate output. A test execution tool may be able to check that the event has been triggered, which would otherwise not be possible to check without using a tool.

 Better use of resources


Automating menial and boring tasks, such as repeatedly entering the same test inputs, gives greater accuracy as well as improved staff morale, and frees skilled testers to put more effort into designing better test cases to be run.
There will also be some tests which are best done manually. The testers can do a better job of manual testing if there are fewer tests to be run manually. Machines which would otherwise lie idle overnight or at weekends can be used to run automated tests.

 Consistency and repeatability of tests


Tests that are repeated automatically will be repeated exactly every time (at least the inputs
will be, the output may differ due to timing). This gives a level of consistency to the tests,
which is very difficult to achieve manually.
The same tests can be executed on different configurations using different operating systems
or databases giving consistency of cross platform quality for multi-platform products.



 Reuse of Tests
The effort put into deciding what to test, designing the tests and building the tests can be
distributed over many executions of those tests. Tests which will be reused are worth spending time on to make sure that they are reliable.

 Increased confidence
Knowing that an extensive set of automated tests has run successfully, there can be greater confidence when the system is released, provided that the tests being run are good tests.

The other benefits of test automation are


 It helps in reducing risks
 It is fast
 Helps in providing better estimation of the test effort
 It boosts the morale of the tester
 It enhances volume testing
 It helps in efficient regression testing

Limitations of Test Automation


Automated testing is not a panacea and there are limitations to what it can accomplish. Some
of the limitations of test automation are

 Does not replace manual testing


It is not possible, and sometimes not desirable, to automate all testing activities or all tests.
Tests that should probably not be automated include:
 Tests that are run only rarely
 When the software is very volatile
 Tests where the results is easily verified by humans, for example, the
suitability of a color scheme

 Manual tests find more defects than automated tests


A test is most likely to reveal a defect the first time it is run. If a test case is to be automated, it first needs to be run and checked to make sure it is correct; there is little point in automating defective tests. So if the software being tested has defects that the test case can reveal, they will usually be revealed during that first, manual run rather than by later automated runs.

 Greater reliance on quality of tests


A tool can only identify differences between the actual and expected outcomes; that is, it helps in making a comparison. When tests are run, the tool will tell you whether each test has passed or failed, when in fact the actual outcomes have only matched your expected outcomes. It is therefore all the more important to be confident of the quality of the tests that are to be automated.

 Test automation does not improve effectiveness


Automating a set of tests does not make them more effective than those same tests run manually. Automation can eventually improve the efficiency of the tests, i.e. how much they cost to run and how long they take to run. It can also affect how easily test failures are analyzed and resolved.

 Tools have no imagination


A tool is only software; it follows instructions to execute a set of test cases. A human tester will perform the same tasks differently, effectively and creatively. When unexpected events happen that are not part of the planned sequence of test cases, a human tester can identify them easily.

Manual vs. Automated

Some writers believe that test automation is so expensive relative to its value that it should
be used sparingly. Others, such as advocates of agile development, recommend automating
100% of all tests. A challenge with automation is that automated testing requires automated
test oracles (an oracle is a mechanism or principle by which a problem in the software can be
recognized). Such tools have value in load testing software (by signing on to an application
with hundreds or thousands of instances simultaneously), or in checking for intermittent
errors in software. The success of automated software testing depends on complete and
comprehensive test planning. Software development strategies such as test-driven
development are highly compatible with the idea of devoting a large part of an organization's
testing resources to automated testing. Many large software organizations perform
automated testing. Some have developed their own automated testing environments
specifically for internal development, and not for resale. The debate is still on.



Testing Terms - Glossary

Acceptance test: Formal tests conducted to determine whether or not a system satisfies its
acceptance criteria and to enable the customer to determine whether or not to accept a
system. This particular kind of testing is performed with the STW/Regression suite of tools.
Back-to-back testing: For software subject to parallel implementation, back-to-back
testing is the execution of a test on the similar implementations and comparing the results.
Basis paths: The set of non-iterative paths.
Black Box testing: A test method where the tester views the program as a black box, that
is the test is completely unconcerned about the internal behavior and structure of the
program. Rather the tester is only interested in finding circumstances in which the program
does not behave according to its specifications. Test data are derived solely from the
specifications without taking advantage of knowledge of the internal structure of the
program. Black-box testing is performed with the STW/Regression suite of tools.
Bottom-up testing: Testing starts with lower level units. Driver units must be created for
units not yet completed, each time a new higher level unit is added to those already tested.
Again a set of units may be added to the software system at the time, and for enhancements
the software system may be complete before the bottom up tests starts. The test plan must
reflect the approach, though. The STW/Coverage suite of tools supports this type of testing.
Built-in testing: Any hardware or software device which is part of an equipment, subsystem or system and which is used for the purpose of testing that equipment, subsystem or system.
Byte mask: A differencing mask used by EXDIFF that specifies to disregard differences
based on byte counts.
C0 coverage: The number of statements in a module that are exercised, divided by the total number of statements present in the module, expressed as a percentage.
C1 coverage: The percentage of logical branches exercised in a test as compared with the
total number of logical branches known in a program.
Call graph: The function call tree capability of S-TCAT. This utility shows caller-callee
relationship of a program. It helps the user to determine which function calls need to be
tested further.
Call pair: A connection between two functions in which one function "calls" (references)
another function.
Complexity: A relative measurement of the ``degree of internal complexity'' of a software
system, expressed possibly in terms of some algorithmic complexity measure.
Component: A part of a software system smaller than the entire system but larger than an
element.



Control statement: A statement that involves some predicate operation. For example: an if
statement or a while statement.
Correctness proof: A mathematical process which demonstrates the consistency between a
set of assertions about a program and the properties of the program, when executed in a
known environment.
Coverage testing: Coverage testing is concerned with the degree to which test cases
exercise or cover the logic (source code) of the software module or unit. It is also a measure
of coverage of code lines, code branches and code branch combinations.
Cyclomatic number: A number which assesses program complexity according to a program's flow of control. A program's flow of control is based on the number and arrangement of decision statements within the code. The cyclomatic number of a flow graph can be calculated as v(G) = E - N + 2, where E is the number of edges and N is the number of nodes in the graph; equivalently, it is the number of decision statements plus one.
Decision statement: A decision statement in a module is one in which an evaluation of
some predicate is made, which (potentially) affects the subsequent execution behavior of the
module.
Defect: Any difference between program specifications and actual program behavior of any kind, whether critical or not; whatever is reported as causing any kind of software problem.
Directed graph: A directed graph consists of a set of nodes which are interconnected with
oriented arcs. An arbitrary directed graph may have many entry nodes and many exit nodes.
A program directed graph has only one entry and one exit node.
End-to-end testing: Test activity aimed at proving the correct implementation of a required
function at a level where the entire hardware/software chain involved in the execution of the
function is available.
Error: A difference between program behavior and specification that renders the program
results unacceptable. See Defect.
Flow graph: The oriented diagram composed of nodes and edges with arrows that shows the flow of control in a program. It is also called a flow chart or a directed graph.
Functional specifications: Set of behavioral and performance requirements which in
aggregate determine the functional properties of a software system.
Functional test cases: A set of test case data sets for software which are derived from
structural test cases.
Infeasible path: A logical branch sequence is logically impossible if there is no collection of settings of the input space, relative to the first branch in the sequence, which permits the sequence to execute.
Inspection/review: A process of systematically studying and inspecting programs in order
to identify certain types of errors, usually accomplished by human rather than mechanical
means.



Instrumentation: The first step in analyzing test coverage is to instrument the source code.
Instrumentation modifies the source code so that special markers are positioned at every
logical branch or call-pair or path. Later, during program execution of the instrumented source code, these markers will be tracked and counted to provide data for coverage reports.
Integration Testing: Exposes faults during the process of integration of software
components or software units and it is specifically aimed at exposing faults in their
interactions. The integration approach could be bottom-up (using drivers), top-down (using
stubs) or a mixture of the two. The bottom up is the recommended approach.
Interface: The informational boundary between two software systems, software system
components, elements, or modules.
Module: A module is a separately invocable element of a software system. Similar terms are
procedure, function, or program.
Regression Testing: Testing which is performed after making a functional improvement or
repair of the software. Its purpose is to determine if the change has regressed other aspects
of the software. As a general principle, software unit tests are fully repeated if a module is
modified, and additional tests which expose the fault removed, are added to the test set. The
software unit will then be re-integrated and integration testing repeated.
S0 coverage: The percentage of modules that are invoked at least once during a test or
during a set of tests.
S1 coverage: The percentage of call-pairs exercised in a test as compared with the total
number of call-pairs known in a program. By definition the S1 value for a module which has
no call pairs is 100% if the module has been called at least once and 0% otherwise.
Software sub-system: A part of a software system, but one which includes many modules.
Intermediate between module and system.
Software system: A collection of modules, possibly organized into components and
subsystems, which solves some problem or performs some task.
Spaghetti code: A program whose control structure is so entangled by a surfeit of GOTO's that its flow graph resembles a bowl of spaghetti.
Statement complexity: A complexity value assigned to each statement which is based on
(1) the statement type, and (2) the total length of postfix representations of expressions
within the statement (if any). The statement complexity values are intended to represent an
approximation to potential execution time.
Static analysis: The process of analyzing a program without executing it. This may involve
wide range of analyses. The STW/Advisor suite of tools performs static analyses.
System Testing: Verifies that the total software system satisfies all of its functional, quality
attribute and operational requirements in simulated or real hardware environment. It
primarily demonstrates that the software system does fulfill requirements specified in the
requirements specification during exposure to the anticipated environmental conditions. All



testing objectives relevant to specific requirements should be included during the software
system testing. Software system testing is mainly based on black-box methods. The
STW/Coverage suite of tools supports this type of testing.
Test: A [unit] test of a single module consists of (1) a collection of settings for the inputs of
the module, and (2) exactly one invocation of the module. A unit test may or may not include the effect of other modules which are invoked by the module undergoing testing.
Test Bed: See Test Harness.
Test coverage measure: A measure of the testing coverage achieved as the result of one unit test, usually expressed as a percentage of the number of logical branches within a module traversed in the test.
Test data set: A specific set of values for variables in the communication space of a module
which are used in a test.
Test harness: A tool that supports automated testing of a module or small group of
modules.
Test object, object under test: The central object on which testing attention is focused.
Test path: A test path is a specific (sequence) set of logical branches which is traversed as
the result of a unit test operation on a set of test case data. A module can have many test
paths.
Test purpose: The free-text description indicating the objective of a test, which is usually
specified in the source clause of a SMARTS ATS file.
Test stub: A testing stub is a module which simulates the operations of a module which is
invoked within a test. The testing stub can replace the real module for testing purposes.
Test target: The current module (system testing) or the current logical branch (unit testing)
upon which testing effort is focused.
Test target selector: A function which identifies a recommended next testing target.
Testability: A design characteristic which allows the status (operable, inoperable, or degraded) of a system or any of its subsystems to be confidently determined in a timely fashion. Testability attempts to qualify those attributes of system designs which facilitate detection and isolation of faults that affect system performance.
Testing: Testing is the execution of a system in a real or simulated environment with the
intent of finding faults.
Testing Techniques: Can be used in order to obtain a structured and efficient testing which
covers the testing objectives during the different phases in the software life cycle.
Top-Down Testing: The testing starts with the main program. The main program becomes
the test harness and the subordinated units are added as they are completed and testing
continues. Stubs must be created for units not yet completed. This type of testing results in



retesting of higher level units when lower level units are added. The adding of new units one
by one should not be taken too literally. Sometimes a collection of units will be included
simultaneously, and the whole set of units will serve as test harness for each unit test. Each
unit is tested according to a unit test plan, with a top-down strategy.
Unit Testing: Unit testing is meant to expose faults on each software unit as soon as this is
available regardless of its interaction with other units. The unit is exercised against its
detailed design and by ensuring that a defined logic coverage is performed. These are informal tests at module level, done by the software development team, which are necessary to check that the coded software modules reflect the requirements and design for that module. White-box oriented testing in combination with at least one black box
method is used.
Unreachability: A statement (or logical branch) is unreachable if there is no logically
obtainable set of input-space settings which can cause the statement (or logical branch) to be
traversed.
Validation: The process of evaluating software at the end of the software development
process to ensure compliance with software requirements. The techniques for validation are
testing, inspection and reviewing.
Verification: The process of determining whether or not the products of a given phase of
the software development cycle meet the implementation steps and can be traced to the
incoming objectives established during the previous phase. The techniques for verification are
testing, inspection and reviewing.
White-box testing: A test method where the tester views the internal behavior and
structure of the program. The testing strategy permits one to examine the internal structure
of the program. In using this strategy, the tester derives test data from an examination of the
program's logic without neglecting the requirements in the specification. The goal of this test
method is to achieve a high-test coverage, which is examination of as much of the
statements, branches, paths as possible.



Points to ponder

1. What testing approaches can you tell me about?


A: Each of the followings represents a different testing approach: black box testing, white
box testing, unit testing, incremental testing, integration testing, functional testing, system
testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load
testing, performance testing, usability testing, install/uninstall testing, recovery testing,
security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance
testing, comparison testing, alpha testing, beta testing, and mutation testing.

2. What is stress testing?


A: Stress testing is testing that investigates the behavior of software (and hardware) under
extraordinary operating conditions. For example, when a web server is stress tested, testing
aims to find out how many users can be on-line, at the same time, without crashing the
server. Stress testing tests the stability of a given system or entity. Stress testing tests
something beyond its normal operational capacity, in order to observe any negative results.

3. What is load testing?


A: Load testing simulates the expected usage of a software program, by simulating multiple
users that access the program's services concurrently. Load testing is most useful and most
relevant for multi-user systems, client/server models, including web servers. For example, the
load placed on the system is increased above normal usage patterns, in order to test the
system's response at peak loads.
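
A minimal sketch of the idea, assuming a hypothetical endpoint URL; a real load-testing tool would also ramp the load, coordinate many client machines and record far more detail:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.test/login"  # hypothetical endpoint under test

def one_request(_):
    start = time.time()
    with urlopen(URL, timeout=10) as response:
        response.read()
    return time.time() - start

# Simulate 50 concurrent users issuing 500 requests in total.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(one_request, range(500)))

print(f"average response time: {sum(timings) / len(timings):.3f}s")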

4. What is the difference between stress testing and load testing?


A: Load testing generally stops short of stress testing. During stress testing, the load is so
great that the expected results are errors, though there is gray area in between stress testing
and load testing. Load testing is a blanket term that is used in many different ways across the
professional software testing community. The term, load testing, is often used synonymously
with stress testing, performance testing, reliability testing, and volume testing.

5. What is the difference between performance testing and load testing?


A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally
stops short of stress testing. During stress testing, the load is so great that errors are the
expected results, though there is gray area in between stress testing and load testing.



6. What is the difference between reliability testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally
stops short of stress testing. During stress testing, the load is so great that errors are the
expected results, though there is gray area in between stress testing and load testing.

7. What is automated testing?


A: Automated testing is a testing approach in which tests are executed and checked by software tools under a formally specified and controlled process, rather than being performed manually.

8. What is the difference between volume testing and load testing?


A: Load testing is a blanket term that is used in many different ways across the professional
software testing community. The term, load testing, is often used synonymously with stress
testing, performance testing, reliability testing, and volume testing. Load testing generally
stops short of stress testing. During stress testing, the load is so great that errors are the
expected results, though there is gray area in between stress testing and load testing.

9. What is incremental testing?


A: Incremental testing is partial testing of an incomplete product. The goal of incremental
testing is to provide an early feedback to software developers.

10. What is software testing?


A: Software testing is a process that identifies the correctness, completeness, and quality of
software. Actually, testing cannot establish the correctness of software. It can find defects,
but cannot prove there are no defects.

11. What is alpha testing?


A: Alpha testing is final testing before the software is released to the general public. First,
(and this is called the first phase of alpha testing), the software is tested by in-house
developers. They use either debugger software, or hardware assisted debuggers. The goal is
to catch bugs quickly. Then, (and this is called second stage of alpha testing), the software is
handed over to software QA staff for additional testing in an environment that is similar to
the intended use.



12. What is beta testing?
A: Following alpha testing, "beta versions" of the software are released to a group of people,
and limited public tests are performed, so that further testing can ensure the product has few
bugs. Other times, beta versions are made available to the general public, in order to receive
as much feedback as possible. The goal is to benefit the maximum number of future users.

13. What is the difference between alpha and beta testing?


A: Alpha testing is performed by in-house developers and software QA personnel. Beta
testing is performed by the public, a few select prospective customers, or the general public.

14. What is gamma testing?


A: Gamma testing is testing of software that has all the required features, but it did not go
through all the in-house quality checks. Cynics tend to refer to software releases as "gamma
testing".

15. What is boundary value analysis?


A: Boundary value analysis is a technique for test data selection. A test engineer chooses
values that lie along data extremes. Boundary values include maximum, minimum, just inside
boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for
all values in between. An effective way to test code is to exercise it at its natural boundaries.
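
A minimal sketch of boundary value analysis, assuming the pytest framework is available; the accepts_quantity function and its 1-100 valid range are hypothetical, and the test data sits on and just beyond each boundary:

import pytest

def accepts_quantity(qty):
    # Hypothetical function under test: quantities from 1 to 100 are valid.
    return 1 <= qty <= 100

@pytest.mark.parametrize("qty,expected", [
    (0, False), (1, True), (2, True),        # lower boundary and its neighbours
    (99, True), (100, True), (101, False),   # upper boundary and its neighbours
])
def test_quantity_boundaries(qty, expected):
    assert accepts_quantity(qty) == expected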

16. What is ad hoc testing?


A: Ad hoc testing is a testing approach; it is the least formal testing approach.

17. What is clear box testing?


A: Clear box testing is the same as white box testing. It is a testing approach that examines
the application's program structure, and derives test cases from the application's program
logic.

18. What is glass box testing?


A: Glass box testing is the same as white box testing. It is a testing approach that examines
the application's program structure, and derives test cases from the application's program
logic.



19. What is open box testing?
A: Open box testing is the same as white box testing. It is a testing approach that examines
the application's program structure, and derives test cases from the application's program
logic.

20. What is black box testing?


A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

21. What is functional testing?


A: Functional testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

22. What is closed box testing?


A: Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

23. What is bottom-up testing?


A: Bottom-up testing is a technique for integration testing. A test engineer creates and uses
test drivers for components that have not yet been developed, because, with bottom-up
testing, low-level components are tested first. The objective of bottom-up testing is to call
low-level components, for testing purposes.

24. What is software quality?


A: The quality of the software does vary widely from system to system. Some common
quality attributes are stability, usability, reliability, portability, and maintainability. See quality
standard ISO 9126 for more information on this subject.

25. What is software fault?


A: A software fault is a hidden programming error. A software fault is an error in the
correctness of the semantics of a computer program.

26. What is software failure?


A: Software failure occurs when the software does not do what the user expects to see.



27. What is the difference between a software fault and software failure?
A: A software failure occurs when the software does not do what the user expects to see.
Software faults, on the other hand, are hidden programming errors. Software faults become
software failures only when the exact computation conditions are met, and the faulty portion
of the code is executed on the CPU. This can occur during normal usage. Other times it
occurs when the software is ported to a different hardware platform, when the software is ported to a different compiler, or when the software gets extended.

28. Who is a test engineer?


A: We, test engineers, are engineers who specialize in testing. We create test cases,
procedures, scripts and generate data. We execute test procedures and scripts, analyze
standards of measurements, and evaluate results of system/integration/regression testing.

29. Who is a QA engineer?


A: QA engineers are test engineer, but they do more than just testing. Good QA engineers
understand the entire software development process and how it fits into the business
approach and the goals of the organization. Communication skills and the ability to
understand various sides of issues are important. A QA engineer is successful if people listen
to him, if people use his tests, if people think that he's useful, and if he's happy doing his
work. I would love to see QA departments staffed with experienced software developers who
coach development teams to write better code. But I've never seen it. Instead of coaching,
QA engineers tend to be process people.

30. How do test case templates look like?


A: Software test cases are documents that describe inputs, actions, or events and their
expected results, in order to determine if all features of an application are working correctly.
A software test case template is, for example, a 6-column table, where column 1 is the "Test
case ID number", column 2 is the "Test case name", column 3 is the "Test objective", column
4 is the "Test conditions/setup", column 5 is the "Input data requirements/steps", and column
6 is the "Expected results". All documents should be written to a certain standard and
template. Standards and templates maintain document uniformity. It also helps in learning
where information is located, making it easier for a user to find what they want. Lastly, with
standards and templates, information will not be accidentally omitted from a document.
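
For illustration, one row of such a six-column template could look like this (all values are invented):

test_case = {
    "Test case ID number": "TC-042",
    "Test case name": "Login with valid credentials",
    "Test objective": "Verify that a registered user can log in",
    "Test conditions/setup": "User account 'jdoe' exists and is active",
    "Input data requirements/steps": "Open the login page; enter user id and password; click Login",
    "Expected results": "Home page is displayed with the user's name",
}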

31. What is the role of the test engineer?


A: We, test engineers, speed up the work of the development staff, and reduce the risk of
your company's legal liability. We also give your company the evidence that the software is
correct and operates properly. We, test engineers, improve problem tracking and reporting,



maximize the value of the software, and the value of the devices that use it. We, test
engineers, assure the successful launch of the product by discovering bugs and design flaws,
before users get discouraged, before shareholders lose their cool and before employees get
bogged down. We, test engineers, help the work of the software development staff, so the
development team can devote its time to build up the product. We, test engineers, promote
continual improvement. We provide documentation required by FDA, FAA, other regulatory
agencies, and your customers. We, test engineers, save your company money by discovering defects EARLY in the design process, before failures occur in production or in the field. We save the reputation
of your company by discovering bugs and design flaws, before bugs and design flaws
damage the reputation of your company.

32. What are the QA engineer's responsibilities?


A: Let's say, an engineer is hired for a small software company's QA role, and there is no QA
team. Should he take responsibility to set up a QA infrastructure/process, testing and quality
of the entire product? No, because taking this responsibility is a classic trap that QA people
get caught in. Why? Because QA engineers cannot assure quality and QA departments cannot
create quality. What we CAN do is to detect lack of quality, and prevent low-quality products
from going out the door. What is the solution? We need to drop the QA label, and tell the
developers that they are responsible for the quality of their own work. The problem is,
sometimes, as soon as the developers learn that there is a test department, they will slack off
on their testing. We need to offer to help with quality assessment, only.

33. What metrics can be used in for software development?


A: Metrics refer to statistical process control. The idea of statistical process control is a great
one, but it has only a limited use in software development. On the negative side, statistical
process control works only with processes that are sufficiently well defined AND unvaried, so
that they can be analyzed in terms of statistics. The problem is, most software development
projects are NOT sufficiently well defined and NOT sufficiently unvaried. On the positive side,
one CAN use statistics. Statistics are excellent tools that project managers can use. Statistics
can be used, for example, to determine when to stop testing, i.e. test cases completed with
certain percentage passed, or when bug rate falls below a certain level. But, if these are
project management tools, why should we label them quality assurance tools?

34. What is role of the QA engineer?


A: The QA Engineer's function is to use the system much like real users would, find all the
bugs, find ways to replicate the bugs, submit bug reports to the developers, and to provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.



35. What metrics can be used for bug tracking?
A: Metrics that can be used for bug tracking include the total number of bugs, total number
of bugs that have been fixed, number of new bugs per week, and number of fixes per week.
Other metrics in quality assurance include...
McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC),
module design complexity metric (iv(G)), essential complexity metric (ev (G)), pathological
complexity metric (pv (G)), design complexity metric (S0), integration complexity metric (S1),
object integration complexity metric (OS1), global data complexity metric (gdv(G)), data
complexity metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data reference metric (TDR), maintenance severity metric (maint_severity), data
reference severity metric (DR_severity), data complexity severity metric (DV_severity),
global data severity metric (gdv_severity).

36. What metrics can be used for bug tracking? (Cont'd...)


McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB),
access to public data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL),
number of roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G)
(MAXEV), and hierarchy quality (QUAL).
Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods
(LOCM), number of children (NOC), response for a class (RFC), and weighted methods per class
(WMC).
Halstead software metrics: program length, program volume, program level and
program difficulty, intelligent content, programming effort, error estimate, and programming
time.
Line count software metrics: lines of code, lines of comment, lines of mixed code and
comments, and lines left blank.
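Of these, the McCabe cyclomatic complexity metric v(G) is the one most often computed by hand. For a control-flow graph it is v(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch (the example graph sizes below are made up):

# Minimal sketch: McCabe cyclomatic complexity, v(G) = E - N + 2P.
def cyclomatic_complexity(edges, nodes, connected_components=1):
    return edges - nodes + 2 * connected_components

# Example: a function whose control-flow graph has 9 edges and 8 nodes.
print(cyclomatic_complexity(9, 8))   # 3, i.e. three linearly independent paths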

37. How do you perform integration testing?


A: First, unit testing has to be completed. Upon completion of unit testing, integration testing
begins. Integration testing is black box testing. The purpose of integration testing is to
ensure distinct components of the application still work in accordance with customer
requirements. Test cases are developed with the express purpose of exercising the interfaces
between the components. This activity is carried out by the test team. Integration testing is
considered complete, when actual results and expected results are either in line or
differences are explainable/acceptable based on client input.

38. What metrics are used for test report generation?


A: Metrics that can be used for test report generation include...



McCabe metrics: Cyclomatic complexity metric (v(G)), Actual complexity metric (AC),
Module design complexity metric (iv(G)), Essential complexity metric (ev(G)), Pathological
complexity metric (pv(G)), design complexity metric (S0), Integration complexity metric (S1),
Object integration complexity metric (OS1), Global data complexity metric (gdv(G)), Data
complexity metric (DV), Tested data complexity metric (TDV), Data reference metric (DR),
Tested data reference metric (TDR), Maintenance severity metric (maint_severity), Data
reference severity metric (DR_severity), Data complexity severity metric (DV_severity), Global
data severity metric (gdv_severity).
McCabe object oriented software metrics: Encapsulation percent public data (PCTPUB),
and Access to public data (PUBDATA), Polymorphism percent of unoverloaded calls
(PCTCALL), Number of roots (ROOTCNT), Fan-in (FANIN), quality maximum v(G) (MAXV),
Maximum ev(G) (MAXEV), and Hierarchy quality(QUAL).
Other object oriented software metrics: Depth (DEPTH), Lack of cohesion of methods
(LOCM), Number of children (NOC), Response for a class (RFC), and Weighted methods per class
(WMC).
Halstead software metrics: Program length, Program volume, Program level and
Program difficulty, Intelligent content, Programming effort, Error estimate, and Programming
time.
Line count software metrics: Lines of code, Lines of comment, Lines of mixed code and
comments, and Lines left blank.

39. What is the "bug life cycle"?


A: Bug life cycles are similar to software development life cycles. At any time during the
software development life cycle errors can be made during the gathering of requirements,
requirements analysis, functional design, internal design, documentation planning, document
preparation, coding, unit testing, test planning, integration, testing, maintenance, updates,
retesting and phase-out. The bug life cycle begins when a programmer, software developer, or
architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends
when the bug is fixed and no longer exists. What should be done after a
bug is found? When a bug is found, it needs to be communicated and assigned to developers
that can fix it. After the problem is resolved, fixes should be retested. Additionally,
determinations should be made regarding requirements, software, hardware, safety impact,
etc., for regression testing to check the fixes didn't create other problems elsewhere. If a
problem-tracking system is in place, it should encapsulate these determinations. A variety of
commercial, problem tracking/management software tools are available. These tools, with the
detailed input of software test engineers, will give the team complete information so
developers can understand the bug, get an idea of its severity, reproduce it and fix it.



40. What is integration testing?
A: Integration testing is black box testing. The purpose of integration testing is to ensure
distinct components of the application still work in accordance with customer requirements.
Test cases are developed with the express purpose of exercising the interfaces between the
components. This activity is carried out by the test team. Integration testing is considered
complete, when actual results and expected results are either in line or differences are
explainable / acceptable, based on client input.

41. What do test plan templates look like?


A: The test plan document template describes the objectives, scope, approach and focus of a
software testing effort. Test document templates are often in the form of documents that are
divided into sections and subsections. One example of this template is a 4-section document,
where section 1 is the "Test Objective", section 2 is the "Scope of Testing", and section 3 is
the "Test Approach", and section 4 is the "Focus of the Testing Effort". All documents should
be written to a certain standard and template. Standards and templates maintain document
uniformity. They also help readers learn where information is located, making it easier for a user
to find what they want. With standards and templates, information will not be accidentally
omitted from a document. Once this document has been reviewed against the standard,
improvements and/or additions can be recommended.

42. What is a software project test plan?


A: A software project test plan is a document that describes the objectives, scope, approach
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the why and how of
product validation. It should be thorough enough to be useful, but not so thorough that no
one outside the test group will be able to read it.

43. When do you choose automated testing?


A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But
for small projects, the time needed to learn and implement the automated testing tools is
usually not worthwhile. Automated testing tools sometimes do not make testing easier. One
problem with automated testing tools is that if there are continual changes to the product
being tested, the recordings have to be changed so often, that it becomes a very time-
consuming task to continuously update the scripts. Another problem with such tools is the
interpretation of the results (screens, data, logs, etc.), which can also be a time-consuming task.



44. What's the ratio between developers and testers?
A: This ratio is not a fixed one, but depends on what phase of the software development life
cycle the project is in. When a product is first conceived, organized, and developed, this ratio
tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the
software is near the end of alpha testing, this ratio tends to be 1:1 or 1:2, in favor of testers.

45. What is your role in your current organization if you are a QA Engineer?
A: The QA Engineer's function is to use the system much like real users would, find all the
bugs, find ways to replicate the bugs, submit bug reports to the developers, and to provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

46. What software tools are in demand these days?


A: There is no good answer to this question. The answer to this question can and will change
from day to day. What is in demand today is not necessarily in demand tomorrow. To give
you some recent examples, some of the software tools on end clients' lists of requirements
include LabView, LoadRunner, Rational Tools, WinRunner and QTP. But, as a general rule of
thumb, there are many, many other items on their lists, depending on the end client, their
needs and preferences. It is worth repeating... the answer to this question can and will
change from one day to the next. What is in demand today will not likely be in demand
tomorrow.

47. What is software configuration management?


A: Software Configuration management (SCM) relates to Configuration Management (CM).
SCM is the control, and the recording of, changes that are made to the software and
documentation throughout the software development life cycle (SDLC). SCM covers the tools
and processes used to control, coordinate and track code, requirements, documentation,
problems, change requests, designs, tools, compilers, libraries, patches, and changes made
to them, and to keep track of who makes the changes. We, test engineers, have experience
with a full range of CM tools and concepts, and can easily adapt to an organization's software
tool and process needs.

48. What are some of the software configuration management tools?


A: Software configuration management tools include Rational ClearCase, DOORS, PVCS, CVS;
and there are many others. Rational ClearCase is a popular software tool, made by Rational
Software, for revision control of source code. DOORS or "Dynamic Object Oriented
Requirements System" is a requirements version control software tool. CVS, or "Concurrent
Version System", is a popular, open source version control system to keep track of changes in
documents associated with software projects. CVS enables several, often distant, developers



to work together on the same source code. PVCS is a document version control tool, a
competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility
that compares the difference between two text files.

49. Which of these roles are the best and most popular?
A: In testing, Tester roles tend to be the most popular. The less popular roles include the
roles of System Administrator, Test/QA Team Lead, and Test/QA Managers.

50. What other roles are in testing?


A: Depending on the organization, the following roles are more or less standard on most
testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System
Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test
Configuration Managers. Depending on the project, one person can, and often does, wear more
than one hat. For instance, Test Engineers often wear the hat of Technical Analyst, Test Build
Manager and Test Configuration Manager as well.

51. What's the difference between priority and severity?


A: The simple answer is, "Priority is about scheduling, and severity is about standards." The
complex answer is, "Priority means something is afforded or deserves prior attention; a
precedence established by order of importance (or urgency). Severity is the state or quality of
being severe; severe implies adherence to rigorous standards or high principles and often
suggests harshness; severe is marked by or requires strict adherence to rigorous standards or
high principles, e.g. a severe code of behavior."

52. What's the difference between efficient and effective?


A: "Efficient" means having a high ratio of output to input; working or producing with a
minimum of waste. For example, "An efficient test engineer wastes no time", or "An efficient
engine saves gas". "Effective" on the other hand means producing or capable of producing an
intended result, or having a striking effect. For example, "For automated testing,
WinRunner/QTP is more effective than an oscilloscope", or "For rapid long distance
transportation, the jet engine is more effective than a witch's broomstick".

53. What is the difference between verification and validation?


A: Verification takes place before validation, and not vice versa. Verification evaluates
documents, plans, code, requirements, and specifications. Validation, on the other hand,
evaluates the product itself. The inputs of verification are checklists, issues lists, walk-throughs
and inspection meetings, reviews and meetings. The input of validation, on the other hand, is
the actual testing of an actual product. The output of verification is a nearly perfect set of



documents, plans, specifications, and requirements. The output of validation, on
the other hand, is a nearly perfect, actual product.

54. What is documentation change management?


A: Documentation change management is part of configuration management (CM). CM
covers the tools and processes used to control, coordinate and track code, requirements,
documentation, problems, change requests, designs, tools, compilers, libraries, patches,
changes made to them and who makes the changes.

55. What is up time?


A: "Up time" is the time period when a system is operational and in service. Up time is the
sum of busy time and idle time. For example, if, out of 168 hours, a system has been busy
for 50 hours, idle for 110 hours, and down for 8 hours, then the busy time is 50 hours, idle
time is 110 hours, and up time is (110 + 50 =) 160 hours.
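Expressed as a small sketch using the same figures:

# Minimal sketch: up time is the sum of busy time and idle time.
busy_hours, idle_hours, down_hours = 50, 110, 8
up_time = busy_hours + idle_hours            # 160 hours
total_hours = up_time + down_hours           # 168 hours
print(up_time, total_hours)                  # 160 168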

56. What is upwardly compatible software?


A: Upwardly compatible software is compatible with a later or more complex version of itself.
For example, upwardly compatible software is able to handle files created by a later version
of itself.

57. What is upward compression?


A: In software design, upward compression means a form of demodularization, in which a
subordinate module is copied into the body of a superior module.

58. What is usability?


A: Usability means ease of use; the ease with which a user can learn to operate, prepare
inputs for, and interpret the outputs of a software product.

59. What is user documentation?


A: User documentation is a document that describes the way a software product or system
should be used to obtain the desired results.

60. What is a user manual?


A: User manual is a document that presents information necessary to employ software or a
system to obtain the desired results. Typically, what is described are system and component
capabilities, limitations, options, permitted inputs, expected outputs, error messages, and
special instructions.



61. What is the difference between user documentation and user manual?
A: When a distinction is made between those who operate a computer system and those who use it
for its intended purpose, separate user documentation and user manuals are created. Operators get
user documentation, and users get user manuals.

62. What is user friendly software?


A: A computer program is user friendly when it is designed with ease of use as one of the
primary objectives of its design.

63. What is a user friendly document?


A: A document is user friendly when it is designed with ease of use as one of the primary
objectives of its design.

64. What is a user guide?


A: User guide is the same as the user manual. It is a document that presents information
necessary to employ a system or component to obtain the desired results. Typically, what is
described are system and component capabilities, limitations, options, permitted inputs,
expected outputs, error messages, and special instructions.

65. What is user interface?


A: User interface is the interface between a human user and a computer system. It enables
the passage of information between a human user and hardware or software components of
a computer system.

66. What is a utility?


A: Utility is a software tool designed to perform some frequently used support function. For
example: program to print files.

67. What is utilization?


A: Utilization is the ratio of time a system is busy, divided by the time it is available.
Utilization is a useful measure in evaluating computer performance.
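A minimal sketch, reusing the figures from the up-time example above:

# Minimal sketch: utilization = busy time / available (up) time.
busy_hours, up_hours = 50, 160
utilization = busy_hours / up_hours
print(f"{utilization:.2%}")                  # 31.25%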

68. What is V&V?


A: V&V is an acronym for verification and validation.

69. What is variable trace?


A: Variable trace is a record of the names and values of variables accessed and changed
during the execution of a computer program.



70. What is value trace?
A: Value trace is the same as variable trace. It is a record of the names and values of variables
accessed and changed during the execution of a computer program.

71. What is a variable?


A: Variables are data items whose values can change. One example is a variable we've
named "capacitor_voltage_10000", where "capacitor_value_10000" can be any whole number
between -10000 and +10000. Keep in mind, there are local and global variables.

72. What is a variant?


A: Variants are versions of a program. Variants result from the application of software
diversity.

73. What is verification and validation (V&V)?


A: Verification and validation (V&V) is a process that helps to determine if the software
requirements are complete, correct; and if the software of each development phase fulfills
the requirements and conditions imposed by the previous phase; and if the final software
complies with the applicable software requirements.

74. What is a software version?


A: A software version is an initial release (or re-release) of software, associated with a
complete compilation (or recompilation) of the software.

75. What is a document version?


A: A document version is an initial release (or a complete re-release) of a document, as
opposed to a revision resulting from issuing change pages to a previous release.

76. What is VDD?


A: VDD is an acronym. It stands for "version description document".

77. What is a version description document (VDD)?


A: Version description document (VDD) is a document that accompanies and identifies a
given version of a software product. Typically, the VDD includes a description and
identification of the software, identification of changes incorporated into this version, and
installation and operating information unique to this version of the software.



78. What is a vertical microinstruction?
A: A vertical microinstruction is a microinstruction that specifies one of a sequence of
operations needed to carry out a machine language instruction. Vertical microinstructions are
short, 12 to 24 bit instructions. They're called vertical because they are normally listed
vertically on a page. These 12 to 24 bit microinstructions are required to carry
out a single machine language instruction. Besides vertical microinstructions, there are
also horizontal and diagonal microinstructions.

79. What is a virtual address?


A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations.
They allow those locations to be accessed as though they were part of the main storage.

80. What is virtual memory?


A: Virtual memory relates to virtual storage. In virtual storage, portions of a user's program
and data are placed in auxiliary storage, and the operating system automatically swaps them
in and out of main storage as needed.

81. What is virtual storage?


A: Virtual storage is a storage allocation technique, in which auxiliary storage can be
addressed as though it was part of main storage. Portions of a user's program and data are
placed in auxiliary storage, and the operating system automatically swaps them in and out of
main storage as needed.

82. What is a waiver?


A: Waivers are authorizations to accept software that has been submitted for inspection,
found to depart from specified requirements, but is nevertheless considered suitable for use
"as is", or after rework by an approved method.

83. What is the waterfall model?


A: Waterfall is a model of the software development process in which the concept phase,
requirements phase, design phase, implementation phase, test phase, installation phase, and
checkout phase are performed in that order, possibly with some overlap, but with little or no
iteration.

84. What are the phases of the software development process?


A: Software development process consists of the concept phase, requirements phase, design
phase, implementation phase, test phase, installation phase, and checkout phase.



85. What models are used in software development?
A: In software development process the following models are used: waterfall model,
incremental development model, rapid prototyping model, and spiral model.

86. What is SDLC?


A: SDLC is an acronym. It stands for "software development life cycle".

87. What is the difference between system testing and integration testing?
A: System testing is high-level testing, and integration testing is lower-level testing.
Integration testing is completed first, not the system testing. In other words, upon
completion of integration testing, system testing is started, and not vice versa. For integration
testing, test cases are developed with the express purpose of exercising the interfaces
between the components. For system testing, on the other hand, the complete system is
configured in a controlled environment, and test cases are developed to simulate real life
scenarios that occur in a simulated real life test environment. The purpose of integration
testing is to ensure distinct components of the application still work in accordance with
customer requirements. The purpose of system testing, on the other hand, is to validate an
application's accuracy and completeness in performing the functions as designed, and to test
all functions of the system that are required in real life.

88. What are the parameters of performance testing?


A: Performance testing verifies loads, volumes, and response times, as defined by
requirements. Performance testing is a part of system testing, but it is also a distinct level of
testing. The term 'performance testing' is often used synonymously with stress testing, load
testing, reliability testing, and volume testing.

89. What is disaster recovery testing?


A: Disaster recovery testing is testing how well the system recovers from disasters, crashes,
hardware failures, or other catastrophic problems.

90. How do you conduct peer reviews?


A: Peer reviews, sometimes called PDRs, are formal meetings, more formalized than a walk-
through, and typically consist of 3-10 people, including the test lead, the task lead (the author of
whatever is being reviewed) and a facilitator (to make notes). The subject of the PDR is
typically a code block, release, feature, or document. The purpose of the PDR is to find
problems and see what is missing, not to fix anything. The result of the meeting is
documented in a written report. Attendees should prepare for PDRs by reading through
documents, before the meeting starts; most problems are found during this preparation. Why



are PDRs so useful? Because PDRs are cost-effective methods of ensuring quality, because
bug prevention is more cost effective than bug detection.

91. How do you check the security of your application?


A: To check the security of an application, we can use security/penetration testing.
Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage. This type of testing usually requires
sophisticated testing techniques.

92. When testing the password field, what is your focus?


A: When testing the password field, one needs to verify that passwords are encrypted.
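A minimal sketch of such a check (the stored value below is an invented example; a real test would read it from the application's user store):

# Minimal sketch: verify that the persisted password field is not plain text.
def check_password_not_plaintext(plaintext, stored_value):
    assert stored_value != plaintext, "Password appears to be stored in plain text"
    assert plaintext not in stored_value, "Plain-text password embedded in stored value"

# Example usage against an invented stored hash value:
check_password_not_plaintext("S3cret!", "5f4dcc3b5aa765d61d8327deb882cf99")
print("Password field check passed")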

93. What is the objective of regression testing?


A: The objective of regression testing is to test that the fixes have not created any other
problems elsewhere. In other words, the objective is to ensure the software has remained
intact. A baseline set of data and scripts is maintained and executed, to verify that changes
introduced during the release have not "undone" any previous code. Expected results from
the baseline are compared to results of the software under test. All discrepancies are
highlighted and accounted for, before testing proceeds to the next level.
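A minimal sketch of that baseline comparison (the test names and result format are invented for illustration):

# Minimal sketch: compare current results against the stored baseline and flag discrepancies.
baseline = {"login_valid_user": "PASS", "login_bad_password": "FAIL", "checkout_total": "PASS"}
current  = {"login_valid_user": "PASS", "login_bad_password": "PASS", "checkout_total": "PASS"}

discrepancies = {name: (baseline[name], current.get(name))
                 for name in baseline if current.get(name) != baseline[name]}
print(discrepancies)   # {'login_bad_password': ('FAIL', 'PASS')} -- must be accounted for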

94. What stage of bug fixing is the most cost effective?


A: Bug prevention, i.e. inspections, PDRs, and walk-throughs, is more cost effective than bug
detection.

95. What can you tell about white box testing?


A: White box testing is a testing approach that examines the application's program structure,
and derives test cases from the application's program logic. Clear box testing, glass box
testing, and open box testing are all white box types of testing.

96. What black box testing types can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal software
design or code. Black box testing is based on requirements and functionality. Functional
testing is a black box type of testing geared to the functional requirements of an application.
System testing, acceptance testing, closed box testing, and integration testing are also black
box types of testing.



97. Is regression testing performed manually?
A: It depends on the initial testing approach. If the initial testing approach was manual
testing, then the regression testing is normally performed manually. Conversely, if the initial
testing approach was automated testing, then the regression testing is normally performed
by automated testing.

98. What is your view of software QA/testing?


A: Software QA/testing is easy, if requirements are solid, clear, complete, detailed, cohesive,
attainable and testable, and if schedules are realistic, and if there is good communication.
Software QA/testing is a piece of cake, if project schedules are realistic, if adequate time is
allowed for planning, design, testing, bug fixing, re-testing, changes, and documentation.
Software QA/testing is relatively easy, if testing is started early on, and if fixes or changes are
re-tested, and if sufficient time is planned for both testing and bug fixing. Software
QA/testing is easy, if new features are avoided, and if one sticks to initial requirements as
much as possible.

99. How can I be a good tester?


A: We, good testers, take the customers' point of view. We are tactful and diplomatic. We
have a "test to break" attitude, a strong desire for quality, an attention to detail, and good
communication skills, both oral and written. Previous software development experience is
also helpful as it provides a deeper understanding of the software development process.

100. What is the difference between software bug and software defect?
A: A 'software bug' is a nonspecific term that means an inexplicable defect, error, flaw,
mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g.
software defect and software failure, are more specific. While there are many who believe the
term 'bug' is a reference to insects that caused malfunctions in early electromechanical
computers (1950-1970), the term 'bug' had been a part of engineering jargon for many
decades before the 1950s; even the great inventor, Thomas Edison (1847-1931), wrote about
a 'bug' in one of his letters.

101. When is a process repeatable?


A: If we use detailed and well-written processes and procedures, we ensure the correct steps
are being executed. This facilitates a successful completion of a task. This is a way we also
ensure a process is repeatable.



102. What is test methodology?
A: One test methodology is a three-step process. “Creating a test strategy”, “creating a test
plan/design” and “executing tests”. This methodology can be used and molded to your
organization's needs. Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his customers' applications.

103. What does a Test Strategy Document contain?


A: The test strategy document is a formal description of how a software product will be
tested. A test strategy is developed for all levels of testing, as required. The test team
analyzes the requirements, writes the test strategy and reviews the plan with the project
team. The test plan may include test cases, conditions, the test environment, and a list of
related tasks, pass/fail criteria and risk assessment. Additional sections in the test strategy
document include: A description of the required hardware and software components,
including test tools. This information comes from the test environment, including test tool
data. A description of roles and responsibilities of the resources required for the test and
schedule constraints. This information comes from man-hours and schedules. Testing
methodology: This is based on known standards, functional and technical requirements of the
application. This information comes from requirements, change requests, and technical and
functional design documents, as well as from requirements that the system cannot provide, e.g. system
limitations.

104. What is monkey testing?


A: "Monkey testing" is random testing performed by automated testing tools. These
automated testing tools are considered "monkeys", if they work at random. We call them
"monkeys" because it is widely believed, if we allow six monkeys to pound on six typewriters
at random, for a million years, they will recreate all the works of Isaac Asimov. There are
"smart monkeys" and "dumb monkeys". "Smart monkeys" are valuable for load and stress
testing, and will find a significant number of bugs, but they're also very expensive to develop.
"Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic
testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be
hangs and crashes, i.e. the bugs you least want to have in your software product. "Monkey
testing" can be valuable, but they should not be your only testing.

105. What is stochastic testing?


A: Stochastic testing is the same as "monkey testing", but stochastic testing is a more
technical sounding name for the same testing process. Stochastic testing is black box testing,
random testing, performed by automated testing tools. Stochastic testing is a series of



random tests over time. The software under test typically passes the individual tests, but our
goal is to see if it can pass a large series of the individual tests.

106. What is mutation testing?


A: In mutation testing, we create mutant software and try to make the mutant software fail, and
thus demonstrate the adequacy of our test case. When we create a set of mutant programs,
each mutant differs from the original software by one mutation, i.e. one single
syntax change made to one of its program statements; in other words, each mutant contains
only one single fault.
When we apply test cases to the original software and to the mutant software, we evaluate if
our test case is adequate. Our test case is inadequate, if both the original software and all
mutant software generate the same output. Our test case is adequate, if our test case
detects faults or if at least one mutant software generates a different output than does the
original software for our test case.
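A minimal, hand-rolled sketch of the idea (real mutation tools generate the mutants automatically; here one mutant is written out by hand):

# Minimal sketch: an original function, a mutant that differs by one syntax change,
# and two test cases -- one inadequate, one adequate (it "kills" the mutant).
def original_is_adult(age):
    return age >= 18

def mutant_is_adult(age):
    return age > 18                        # single mutation: ">=" changed to ">"

weak_test     = lambda f: f(21) is True    # both versions pass -> the test is inadequate
adequate_test = lambda f: f(18) is True    # only the original passes -> the mutant is killed

print(weak_test(original_is_adult), weak_test(mutant_is_adult))          # True True
print(adequate_test(original_is_adult), adequate_test(mutant_is_adult))  # True False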

107. What is PDR?


A: PDR is an acronym. In the world of software QA/testing, it stands for "peer design
review", or "peer review".

108. What is good about PDRs?


A: PDRs are informal meetings. PDRs make perfect sense, because they're for the mutual
benefit of you and your end client. Your end client requires a PDR, because they work on a
product, and want to come up with the very best possible design and documentation. Your
end client requires you to have a PDR, because when you organize a PDR, you invite and
assemble the end client's best experts and encourage them to voice their concerns as to what
should or should not go into the design and documentation, and why. When you're a
developer, designer, author, or writer, it's also to your advantage to come up with the best
possible design and documentation. Therefore you want to embrace the idea of the PDR,
because holding a PDR gives you a significant opportunity to invite and assemble the end
client's best experts and make them work for you for one hour, for your own benefit. To
come up with the best possible design and documentation, you want to encourage your end
client's experts to speak up and voice their concerns as to what should or should not go into
your design and documentation, and why.

109. Why is it that my company requires a PDR?


A: Your Company requires a PDR, because your company wants to be the owner of the very
best possible design and documentation. Your company requires a PDR, because when you
organize a PDR, you invite, assemble and encourage the company's best experts to voice



their concerns as to what should or should not go into your design and documentation, and
why. Remember, PDRs are not about you, but about design and documentation. Please don't
be negative; please do not assume your company is finding fault with your work, or
distrusting you in any way. There is a 90+ per cent probability your company wants you, likes
you and trusts you, because you're a specialist, and because your company hired you after a
long and careful selection process.
Your company requires a PDR, because PDRs are useful and constructive. Just about
everyone - even corporate chief executive officers (CEOs) - attend PDRs from time to time.
When a corporate CEO attends a PDR, he has to listen for "feedback" from shareholders.
When a CEO attends a PDR, the meeting is called the "annual shareholders' meeting".

110. Give me a list of ten good things about PDRs!


A:
1. PDRs are easy, because all your meeting attendees are your coworkers and
friends.
2. PDRs do produce results. With the help of your meeting attendees, PDRs help
you produce better designs and better documents than the ones you could come
up with, without the help of your meeting attendees.
3. Preparation for PDRs helps a lot, but, in the worst case, if you had no time to
read every page of every document, it's still OK for you to show up at the PDR.
4. It's technical expertise that counts the most, but many times you can influence
your group just as much, or even more so, if you're dominant or have good
acting skills.
5. PDRs are easy, because, even at the best and biggest companies, you can
dominate the meeting by being either very negative, or very bright and wise.
6. It is easy to deliver gentle suggestions and constructive criticism. The brightest
and wisest meeting attendees are usually gentle on you; they deliver gentle
suggestions that are constructive, not destructive.
7. You get many-many chances to express your ideas, every time a meeting
attendee asks you to justify why you wrote what you wrote.
8. PDRs are effective, because there is no need to wait for anything or anyone;
because the attendees make decisions quickly (as to what errors are in your
document). There is no confusion either, because all the group's
recommendations are clearly written down for you by the PDR's facilitator.
9. Your work goes faster, because the group itself is an independent decision
making authority. Your work gets done faster, because the group's decisions are
subject to neither oversight nor supervision.



10. At PDRs, your meeting attendees are the very best experts anyone can find, and
they work for you, for FREE!

111. What are the Exit criteria?


A: "Exit criteria" is a checklist, sometimes known as the "PDR sign-off sheet", i.e. a list of
peer design review related tasks that have to be done by the facilitator or other attendees of
the PDR, during or near the conclusion of the PDR. By having a checklist, and by going
through a checklist, the facilitator can...
1. Verify that the attendees have inspected all the relevant documents and reports, and
2. Verify that all suggestions and recommendations for each issue have been recorded, and
3. Verify that all relevant facts of the meeting have been recorded.
The facilitator's checklist includes the following questions:
1. "Have we inspected all the relevant documents, code blocks, or products?"
2. "Have we completed all the required checklists?"
3. "Have I recorded all the facts relevant to this peer review?"
4. "Does anyone have any additional suggestions, recommendations, or comments?"
5. "What is the outcome of this peer review?" At the end of the peer review, the facilitator
asks the attendees of the peer review to make a decision as to the outcome of the peer
review. I.e., "What is our consensus?" "Are we accepting the design (or document or code)?"
Or, "Are we accepting it with minor modifications?” Or "Are we accepting it, after it is
modified, and approved through e-mails to the participants?" Or "Do we want another peer
review?" This is a phase, during which the attendees of the PDR work as a committee, and
the committee's decision is final.

112. What are the Entry criteria?


A: The entry criteria consist of a checklist, or a combination of checklists, that includes the
"developer's checklist", the "testing checklist", and the "PDR checklist". Checklists are lists of tasks
that have to be done by developers, testers, or the facilitator, at or before the start of the
peer review. Using these checklists, before the start of the peer review, the developer, tester
and facilitator can determine if all the documents, reports, code blocks or software products
are ready to be reviewed, and if the peer review's attendees are prepared to inspect them.
The facilitator can ask the peer review's attendees if they have been able to prepare for the
peer review, and if they're not well prepared, the facilitator can send them back to their
desks, and even ask the task lead to reschedule the peer review. The facilitator's script for
the entry criteria includes the following questions:
1. Are all the required attendees present at the peer review?
2. Have all the attendees received all the relevant documents and reports?
3. Are all the attendees well prepared for this peer review?



4. Have all the preceding life cycle activities been concluded?
5. Are there any changes to the baseline?

113. What are the parameters of peer reviews?


A: By definition, parameters are values on which something else depends. Peer reviews
depend on the attendance and active participation of several key people; usually the
facilitator, task lead, test lead, and at least one additional reviewer. The attendance of these
four people is usually required for the approval of the PDR. According to company policy,
depending on your company, other participants are often invited, but generally not required
for approval. Peer reviews depend on the facilitator, sometimes known as the moderator,
who controls the meeting, keeps the meeting on schedule, and records all suggestions from
all attendees. Peer reviews greatly depend on the developer, also known as the designer,
author, or task lead, usually a software engineer, who is most familiar with the project, and
most likely able to answer any questions or address any concerns that may come up during
the peer review. Peer reviews greatly depend on the tester, also known as test lead, or bench
test person -- usually another software engineer -- who is also familiar with the project, and
most likely able to answer any questions or address any concerns that may come up during
the peer review. Peer reviews greatly depend on the participation of additional reviewers and
additional attendees who often make specific suggestions and recommendations, and ask the
largest number of questions.

114. What types of review meetings can you tell me about?


A: Of review meetings, peer design reviews are the most common. Peer design reviews are
so common that they tend to replace both inspections and walk-throughs. Peer design
reviews can be classified according to the 'subject' of the review. I.e., "Is this a document
review, design review, or code review?" Peer design reviews can be classified according to
the 'role' you play at the meeting. I.e., "Are you the task lead, test lead, facilitator,
moderator, or additional reviewer?" Peer design reviews can be classified according to the
'job title' of the attendees. I.e., "Is this a meeting of peers, managers, systems engineers, or
system integration testers?" Peer design reviews can be classified according to what is being
reviewed at the meeting. I.e., "Are we reviewing the work of a developer, tester, engineer, or
technical document writer?" Peer design reviews can be classified according to the 'objective'
of the review. I.e., "Is this document for the file cabinets of our company, or that of the
government (e.g. the FAA or FDA)?" PDRs of government documents tend to attract the
attention of managers, and the meeting quickly becomes a meeting of managers.



115. How can I shift my focus and area of work from QC to QA?
A:
1. Focus on your strengths, skills, and abilities! Realize that there are MANY similarities
between Quality Control and Quality Assurance! Realize that you have MANY transferable
skills!
2. Make a plan! Develop a belief that getting a job in QA is easy! HR professionals cannot tell
the difference between quality control and quality assurance! HR professionals tend to
respond to keywords (i.e. QC and QA), without knowing the exact meaning of those
keywords!
3. Make it a reality! Invest your time! Get some hands-on experience! Do some QA work! Do
any QA work, even if, for a few months, you get paid a little less than usual! Your goals,
beliefs, enthusiasm, and action will make a huge difference in your life!
4. Read all you can, and that includes reading product pamphlets, manuals, books,
information on the Internet, and whatever information you can lay your hands on! If there is
a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to do QA work,
with little or no outside help!

116. What techniques and tools can enable me to migrate from QC to QA?
A: Refer to the answer to question #115 above.

117. What is the difference between build and release?


A: Builds and releases are similar, because both builds and releases are end products of
software development processes. Builds and releases are similar, because both builds and
releases help developers and QA teams to deliver reliable software. Build means a version of
software, typically one that is still in testing. Usually a version number is given to a released
product, but, sometimes, a build number is used instead.
Difference #1: Builds refer to software that is still in testing, release refers to software that
is usually no longer in testing.
Difference #2: Builds occur more frequently; releases occur less frequently.
Difference #3: Versions are based on builds, and not vice versa. Builds, or usually a series
of builds, are generated first, as often as one build every morning, depending on the
company, and then every release is based on a build, or several builds, i.e. the accumulated
code of several builds.

118. What is CMM?


A: CMM is an acronym that stands for Capability Maturity Model. The idea behind the CMM is that, in
future efforts to develop and test software, concepts and experience alone do not always



point us in the right direction, and therefore we should develop processes, and then refine
those processes. There are five CMM levels, of which Level 5 is the highest;
CMM Level 1 is called "Initial".
CMM Level 2 is called "Repeatable".
CMM Level 3 is called "Defined".
CMM Level 4 is called "Managed".
CMM Level 5 is called "Optimized".
There are not many Level 5 companies; most companies hardly need to be. Within the United States,
fewer than 8% of software companies are rated CMM Level 4 or higher. The U.S.
government requires that all companies with federal government contracts maintain a
minimum of a CMM Level 3 assessment. CMM assessments take two weeks. They're
conducted by a nine-member team led by an SEI-certified lead assessor.

119. What are CMM levels and their definitions?


A: There are five CMM levels of which level 5 is the highest.
• CMM level 1 is called "initial". The software process is at CMM level 1, if it is an ad
hoc process. At CMM level 1, few processes are defined, and success, in general,
depends on individual effort and heroism.
• CMM level 2 is called "repeatable". The software process is at CMM level 2, if the
subject company has some basic project management processes, in order to track
cost, schedule, and functionality. Software processes are at CMM level 2, if necessary
processes are in place, in order to repeat earlier successes on projects with similar
applications. Software processes are at CMM level 2, if there are requirements
management, project planning, project tracking, subcontract management, QA, and
configuration management.
• CMM level 3 is called "defined". The software process is at CMM level 3, if the
software process is documented, standardized, and integrated into a standard
software process for the subject company. The software process is at CMM level 3, if
all projects use approved, tailored versions of the company's standard software
process for developing and maintaining software. Software processes are at CMM
level 3, if there are process definition, training programs, process focus, integrated
software management, software product engineering, inter group coordination, and
peer reviews.
• CMM level 4 is called "managed". The software process is at CMM level 4, if the
subject company collects detailed data on the software process and product quality,
and if both the software process and the software products are quantitatively
understood and controlled. Software processes are at CMM level 4, if there is
software quality management (SQM) and quantitative process management.

• CMM level 5 is called "optimized". The software process is at CMM level 5, if there is
continuous process improvement, if there is quantitative feedback from the process
and from piloting innovative ideas and technologies. Software processes are at CMM
level 5, if there are process change management, defect prevention, and technology
change management.

120. What is the difference between bug and defect in software testing?
A: In software testing, the difference between bug and defect is small, and depends on your
company. For some companies, bug and defect are synonymous, while others believe bug is
a subset of defect. Generally speaking, we, software test engineers, discover BOTH bugs and
defects, before bugs and defects damage the reputation of our company. We, QA engineers,
use the software much like real users would, to find BOTH bugs and defects, to find ways to
replicate BOTH bugs and defects, to submit bug reports to the developers, and to provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
Therefore, we, software engineers, do not differentiate between bugs and defects. In our bug
reports, we include BOTH bugs and defects, and any differences between them are minor.
Difference number one: In bug reports, the defects are usually easier to describe. Difference
number two: In bug reports, it is usually easier to write the descriptions on how to replicate
the defects. Defects tend to require brief explanations only.

121. What is grey box testing?


A: Grey box testing is a software testing technique that uses a combination of black box
testing and white box testing. Gray box testing is not black box testing, because the tester
does know some of the internal workings of the software under test.

In grey box testing, the tester applies a limited number of test cases to the internal workings
of the software under test. In the remaining part of the grey box testing, one takes a black
box approach in applying inputs to the software under test and observing the outputs. Gray
box testing is a powerful idea. The concept is simple; if one knows something about how the
product works on the inside, one can test it better, even from the outside.

Grey box testing is not to be confused with white box testing; i.e. a testing approach that
attempts to cover the internals of the product in detail. Grey box testing is a test strategy
based partly on internals. The testing approach is known as gray box testing, when one does
have some knowledge, but not the full knowledge of the internals of the product one is
testing. In gray box testing, just as in black box testing, you test from the outside of a
product, just as you do with black box, but you make better-informed testing choices because



you're better informed, because you know how the underlying software components operate
and interact.

122. What is the difference between version and release?


A: Both version and release indicate a particular point in the software development life cycle,
or in the lifecycle of a document. The two terms, version and release, are similar (i.e. mean
pretty much the same thing), but there are minor differences between them. Version means
a VARIATION of an earlier, or original, type; for example, "I've downloaded the latest version
of the software from the Internet. The latest version number is 3.3." Release, on the other
hand, is the ACT OR INSTANCE of issuing something for publication, use, or distribution.
Release is something thus released. For example: "A new release of a software program."

123. What is data integrity?


A: Data integrity is one of the six fundamental components of information security. Data
integrity is the completeness, soundness, and wholeness of the data that also complies with
the intention of the creators of the data. In databases, important data -- including customer
information, order data, and pricing tables -- may be stored. In databases, data integrity
is achieved by preventing accidental, or deliberate, or unauthorized insertion, or modification,
or destruction of data.

124. How do you test data integrity?


A: Data integrity testing should verify the completeness, soundness, and wholeness of the
stored data. Testing should be performed on a regular basis, because important data can and
will change over time. Data integrity tests include the followings:
1. Verify that you can create, modify, and delete any data in tables.
2. Verify that sets of radio buttons represent fixed sets of values.
3. Verify that a blank value can be retrieved from the database.
4. Verify that, when a particular set of data is saved to the database, each value gets saved
fully, and the truncation of strings and rounding of numeric values do not occur.
5. Verify that the default values are saved in the database, if the user input is not specified.
6. Verify compatibility with old data, old hardware, versions of operating systems, and
interfaces with other software.
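A minimal sketch of check number 4 above, using an in-memory SQLite table (the table and column names are invented for illustration):

# Minimal sketch: verify that saved values round-trip without truncation or rounding.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, price REAL)")

customer, price = "A customer name that must not be truncated", 19.99
conn.execute("INSERT INTO orders VALUES (?, ?)", (customer, price))

stored_customer, stored_price = conn.execute("SELECT customer, price FROM orders").fetchone()
assert stored_customer == customer, "String was truncated or altered"
assert stored_price == price, "Numeric value was rounded or altered"
print("Round-trip check passed")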

125. What is data validity?


A: Data validity is the correctness and reasonableness of data. Reasonableness of data
means, for example, account numbers falling within a range, numeric data being all digits,
dates having a valid month, day and year, and correct spelling of proper names. Data validity errors are
probably the most common, and the most difficult to detect, data-related errors. What causes



data validity errors? Data validity errors are usually caused by incorrect data entries, when a
large volume of data is entered in a short period of time. For example, 12/25/2005 is entered
as 13/25/2005 by mistake. This date is therefore invalid. How can you reduce data validity
errors? Use simple field validation rules. Technique 1: If the date field in a database uses the
MM/DD/YYYY format, then use a program with the following two data validation rules: "MM
should not exceed 12, and DD should not exceed 31". Technique 2: If the original figures do
not seem to match the ones in the database, then use a program to validate data fields.
Compare the sum of the numbers in the database data field to the original sum of numbers
from the source. If there is a difference between the figures, it is an indication of an error in
at least one data element.
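A minimal sketch of Technique 1 and Technique 2 (the control total and field values are invented for illustration):

# Minimal sketch of Technique 1: simple field validation for MM/DD/YYYY dates.
def is_valid_date_field(date_text):
    parts = date_text.split("/")
    if len(parts) != 3 or not all(part.isdigit() for part in parts):
        return False
    mm, dd, yyyy = (int(part) for part in parts)
    return 1 <= mm <= 12 and 1 <= dd <= 31 and len(parts[2]) == 4

print(is_valid_date_field("12/25/2005"))   # True
print(is_valid_date_field("13/25/2005"))   # False -- the month exceeds 12

# Minimal sketch of Technique 2: compare the database sum with the source control total.
source_total = 1050.00
database_values = [500.00, 300.00, 200.00]
if abs(sum(database_values) - source_total) > 0.005:
    print("Control totals differ -- at least one data element is in error")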

126. What is the difference between data validity and data integrity?
A:
• Difference number one: Data validity is about the correctness and reasonableness of
data, while data integrity is about the completeness, soundness, and wholeness of
the data that also complies with the intention of the creators of the data.

• Difference number two: Data validity errors are more common, while data integrity
errors are less common.

• Difference number three: Errors in data validity are caused by HUMANS -- usually data
entry personnel -- who enter, for example, 13/25/2005 by mistake, while errors in data
integrity are caused by BUGS in computer programs that, for example, cause the
overwriting of some of the data in the database, when one attempts to retrieve a blank
value from the database.

127. Tell me about 'Test Director'.


A: Made by Mercury Interactive, 'Test Director' is a single browser-based application that
streamlines the software QA process. It is a software tool that helps software QA
professionals to gather requirements, to plan, schedule and run tests, and to manage and
track defects/issues/bugs. Test Director’s Requirements Manager links test cases to
requirements, ensures traceability, and calculates what percentage of the requirements are
covered by tests, how many of these tests have been run, and how many have passed or
failed. As to planning, test plans can be created, or imported, for both manual and automated
tests. The test plans can then be reused, shared, and preserved. As to running tests, the Test
Director’s Test Lab Manager allows you to schedule tests to run unattended, or run even
overnight. The Test Director’s Defect Manager supports the entire bug life cycle, from initial
problem detection through fixing the defect, and verifying the fix. Additionally, the Test
Director can create customizable graphs and reports, including test execution reports and
release status assessments.



128. What is structural testing?
A: Structural testing is also known as clear box testing or glass box testing. Structural testing is
a way to test software with knowledge of the internal workings of the code being tested.
Structural testing is white box testing, not black box testing, since black boxes are considered
opaque and do not permit visibility into the code.

129. What is the difference between static and dynamic testing?


A: The differences between static and dynamic testing are as follows:
• #1: Static testing is about prevention, dynamic testing is about cure.
• #2: The static tools offer greater marginal benefits.
• #3: Static testing is many times more cost-effective than dynamic testing.
• #4: Static testing beats dynamic testing by a wide margin.
• #5: Static testing is more effective!
• #6: Static testing gives you comprehensive diagnostics for your code.
• #7: Static testing achieves 100% statement coverage in a relatively short time, while
dynamic testing often achieves less than 50% statement coverage, because dynamic
testing finds bugs only in parts of the code that are actually executed.
• #8: Dynamic testing usually takes longer than static testing. Dynamic testing may
involve running several test cases, each of which may take longer than compilation.
• #9: Dynamic testing finds fewer bugs than static testing.
• #10: Static testing can be done before compilation, while dynamic testing can take
place only after compilation and linking.
• #11: Static testing can find all of the following that dynamic testing cannot find:
syntax errors, code that is hard to maintain, code that is hard to test, code that does
not conform to coding standards, and ANSI violations.

130. What testing tools should I use?


A: Ideally, you should use both static and dynamic testing tools. To maximize software
reliability, you should use both static and dynamic techniques, supported by appropriate
static and dynamic testing tools. Static and dynamic testing are complementary. Static and
dynamic testing find different classes of bugs. Some bugs are detectable only by static
testing, some only by dynamic.
Dynamic testing does detect some errors that static testing misses. To eliminate as many
errors as possible, both static and dynamic testing should be used. All this static testing (i.e.
testing for syntax errors, testing for code that is hard to maintain, testing for code that is
hard to test, testing for code that does not conform to coding standards, and testing for ANSI
violations) takes place before compilation.



Static testing takes roughly as long as compilation and checks every statement you have
written.

131. Why should I use static testing techniques?


A: You should use static testing techniques because static testing is a bargain, compared to
dynamic testing. Static testing is up to 100 times more effective. Even in selective testing,
static testing may be up to 10 times more effective. The most pessimistic estimates suggest a
factor of 4. Since static testing is faster and achieves 100% coverage, the unit cost of
detecting these bugs by static testing is many times lower than by dynamic testing.
About half of the bugs detectable by dynamic testing can be detected earlier by static
testing. If you use neither static nor dynamic test tools, the static tools offer greater marginal
benefits. If urgent deadlines loom on the horizon, the use of dynamic testing tools can be
omitted, but tool-supported static testing should never be omitted.

132. What is the definition of top down design?


A: Top down design progresses from simple design to detailed design. Top down design
solves problems by breaking them down into smaller, easier-to-solve subproblems. Top down
design creates solutions to these smaller problems, and then tests them using test drivers. In
other words, top down design starts the design process with the main module or system, and
then progresses down to lower level modules and subsystems. To put it differently, top down
design looks at the whole system, and then explodes it into subsystems, or smaller parts. A
systems engineer or systems analyst determines what the top level objectives are, and how
they can be met. He then divides the system into subsystems, i.e. breaks the whole system
into logical, manageable-size modules, and deals with them individually.

133. How can I be effective and efficient when I do black box testing of
e-commerce web sites?
A: When you're doing black box testing of e-commerce web sites, you're most efficient and
effective when you're testing the sites' Visual Appeal, Contents, and Home Pages. When you
want to be effective and efficient, you need to verify that the site is well planned. Verify that
the site is customer-friendly. Verify that the choices of colors are attractive. Verify that the
choices of fonts are attractive. Verify that the site's audio is customer friendly. Verify that the
site's video is attractive. Verify that the choice of graphics is attractive. Verify that every page
of the site is displayed properly on all the popular browsers. Verify the authenticity of facts.
Ensure the site provides reliable and consistent information. Test the site for appearance.
Test the site for grammatical and spelling errors. Test the site for visual appeal, choice of
browsers, consistency of font size, download time, broken links, missing links, incorrect links,
and browser compatibility. Test each toolbar, each menu item, every window, every field
prompt, every pop-up text, and every error message. Test every page of the site for left and
right justifications, every shortcut key, each control, each push button, every radio button,
and each item on every drop-down menu. Test each list box, and each help menu item. Also
check whether the command buttons are grayed out when they're not in use.
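
Several of the checks above, such as looking for broken, missing, or incorrect links, can be
automated. The following is a minimal, illustrative Python sketch using only the standard
library; the URL is a placeholder, and a real link checker would need more error handling.

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag on the page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def find_broken_links(page_url, timeout=10):
    """Returns the links on page_url that fail to load."""
    html = urlopen(page_url, timeout=timeout).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for link in collector.links:
        target = urljoin(page_url, link)
        if not target.startswith(("http://", "https://")):
            continue                      # skip mailto:, javascript:, etc.
        try:
            urlopen(target, timeout=timeout)   # raises HTTPError on 4xx/5xx
        except (HTTPError, URLError):
            broken.append(target)
    return broken


if __name__ == "__main__":
    # Placeholder URL for illustration only.
    for bad in find_broken_links("http://www.example.com/"):
        print("broken link:", bad)
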

134. What is the difference between top down and bottom up design?
A: Top down design proceeds from the abstract (entity) to the concrete (design), while
bottom up design proceeds from the concrete (design) to the abstract (entity). Top
down design is most often used in designing brand new systems, while bottom up design is
sometimes used when one is reverse engineering a design; i.e. when one is trying to figure
out what somebody else designed in an existing system. Bottom up design begins the design
with the lowest level modules or subsystems, and progresses upward to the main program,
module, or subsystem. With bottom up design, a structure chart is necessary to determine
the order of execution, and the development of drivers is necessary to complete the bottom
up approach. Top down design, on the other hand, begins the design with the main or top
level module, and progresses downward to the lowest level modules or subsystems. Real life
sometimes is a combination of top down design and bottom up design. For instance, data
modeling sessions tend to be iterative, bouncing back and forth between top down and
bottom up modes, as the need arises.

135. What is the definition of bottom up design?


A: Bottom up design begins the design at the lowest level modules or subsystems, and
progresses upward to the design of the main program, main module, or main subsystem. To
determine the order of execution, a structure chart is needed, and, to complete the bottom
up design, the development of drivers is needed. In software design - assuming that the data
you start with is a pretty good model of what you're trying to do - bottom up design
generally starts with the known data (e.g. customer lists, order forms), then the data is
broken into chunks (i.e. entities) appropriate for planning a relational database. This process
reveals what relationships the entities have, and what the entities' attributes are. In software
design, bottom up design doesn't only mean writing the program in a different order, but
there is more to it. When you design bottom up, you often end up with a different program.
Instead of a single, monolithic program, you get a larger language, with more abstract
operators, and a smaller program written in it. Once you abstract out the parts which are
merely utilities, what is left is a much shorter program. The higher you build up the language,
the shorter the distance you have to travel down to it from the top. Bottom up design makes it
easy to reuse code blocks. For example, many of the utilities you write for one program are
also useful for programs you have to write later. Bottom up design also makes programs
easier to read.



136. What is smoke testing?
A: Smoke testing is a relatively simple check to see whether the product "smokes" when it
runs. Smoke testing is also known as ad hoc testing, i.e. testing without a formal test plan.
With many projects, smoke testing is carried out in addition to formal testing. If smoke
testing is carried out by a skilled tester, it can often find problems that are not caught during
regular testing. Sometimes, if testing occurs very early or very late in the software
development cycle, this can be the only kind of testing that can be performed. Smoke tests
are, by definition, not exhaustive, but, over time, you can increase your coverage of smoke
testing. A common practice at Microsoft, and some other software companies, is the daily
build and smoke test process. This means, every file is compiled, linked, and combined into
an executable file every single day, and then the software is smoke tested. Smoke testing
minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis,
and improves morale. Smoke testing does not have to be exhaustive, but should expose any
major problems. Smoke testing should be thorough enough that, if it passes, the tester can
assume the product is stable enough to be tested more thoroughly. Without smoke testing,
the daily build is just a time wasting exercise. Smoke testing is the sentry that guards against
any errors in development and future problems during integration.
At first, smoke testing might be the testing of something that is easy to test. Then as the
system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or
more.

137. What is the difference between monkey testing and smoke testing?
• Difference#1: Monkey testing is random testing, while smoke testing is a nonrandom
check to see whether the product "smokes" when it runs. Smoke testing is
nonrandom testing that deliberately exercises the entire system from end to end,
with the goal of exposing any major problems.
• Difference#2: Monkey testing is performed by automated testing tools. On the other
hand, smoke testing, more often than not, is a manual check to see whether the
product "smokes" when it runs.
• Difference#3: Monkey testing is performed by "monkeys", while smoke testing is
performed by skilled testers (to see whether the product "smokes" when it runs).
• Difference#4: "Smart monkeys" are valuable for load and stress testing, but not very
valuable for smoke testing, because they are too expensive for it.
• Difference#5: "Dumb monkeys" are inexpensive to develop and are able to do some
basic testing, but, if we use them for smoke testing, they find few bugs.

• Difference#6: Monkey testing is not thorough testing, but smoke testing is
thorough enough that, if the build passes, one can assume that the program is stable
enough to be tested more thoroughly.
• Difference#7: Monkey testing does not evolve. Smoke testing, on the other hand,
evolves as the system evolves, from something simple to something more thorough.
• Difference#8: Monkey testing takes "six monkeys" and a "million years" to run.
Smoke testing, on the other hand, takes much less time to run, i.e. anywhere from a
few seconds to a couple of hours.

138. Tell me about the process of daily builds and smoke tests
A: The idea behind the process of daily builds and smoke tests is to build the product every
day, and test it every day. The software development process at Microsoft and many other
software companies requires daily builds and smoke tests. According to their process, every
day, every single file has to be compiled, linked, and combined into an executable program.
And, then, the program has to be "smoke tested". Smoke testing is a relatively simple check
to see whether the product "smokes" when it runs. You should add revisions to the build only
when it makes sense to do so. You should establish a Build Group, and build *daily*; set
your *own standard* for what constitutes "breaking the build", create a penalty for
breaking the build, and check for broken builds *every day*. In addition to the daily builds,
you should smoke test the builds, and smoke test them *daily*. You should make the smoke
test evolve as the system evolves. You should build and smoke test *daily*, even when the
project is under pressure. Think about the many benefits of this process! The process of daily
builds and smoke tests minimizes the integration risk, reduces the risk of low quality,
supports easier defect diagnosis, improves morale, enforces discipline, and keeps pressure-
cooker projects on track. If you build and smoke test *daily*, success will come, even when
you're working on large projects!
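
As an illustration only, the following Python sketch shows the shape of such a daily
build-and-smoke-test driver. The build command, the smoke-test script name, and the
"breaking the build" rule are hypothetical placeholders; a real build group would adapt
them to its own project and add notification of the team.

import subprocess
import sys
from datetime import date

BUILD_CMD = ["make", "clean", "all"]            # hypothetical build command
SMOKE_CMD = [sys.executable, "smoke_tests.py"]  # hypothetical smoke-test script


def run(step_name, command):
    """Runs one step; prints a dated status line and returns True on success."""
    result = subprocess.run(command, capture_output=True, text=True)
    status = "OK" if result.returncode == 0 else "BROKEN"
    print(f"{date.today()} {step_name}: {status}")
    if result.returncode != 0:
        print(result.stdout)
        print(result.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    # "Breaking the build" here simply means either step fails; a real build
    # group would also notify the team and record the result.
    if not (run("build", BUILD_CMD) and run("smoke test", SMOKE_CMD)):
        sys.exit(1)
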

139. What is the purpose of test strategy?


A:
• Reason#1: The number one reason for writing a test strategy document is to have a
signed, sealed, and delivered (i.e. approved) document that includes a written
testing methodology, test plan, and test cases.
• Reason#2: Having a test strategy satisfies one important step in the software
testing process.
• Reason#3: The test strategy document tells us how the software product will be
tested.
• Reason#4: The creation of a test strategy document presents an opportunity to
review the test plan with the project team.

• Reason#5: The test strategy document describes the roles, responsibilities, and
resources required for the test, as well as schedule constraints.
• Reason#6: When we create a test strategy document, we have to put into writing
any testing issues requiring resolution (and usually this means additional negotiation
at the project management level).
• Reason#7: The test strategy is decided first, before lower level decisions are made
on the test plan, test design, and other testing issues.

140. What do you mean by 'the process is repeatable'?


A: A process is repeatable, whenever we have the necessary processes in place, in order to
repeat earlier successes on projects with similar applications. A process is repeatable, if we
use detailed and well-written processes and procedures. A process is repeatable, if we ensure
that the correct steps are executed. When the correct steps are executed, we facilitate a
successful completion of the task. Documentation is critical. A software process is repeatable,
if there are requirements management, project planning, project tracking, subcontract
management, QA, and configuration management. Both QA processes and practices should
be documented, so that they are repeatable. Specifications, designs, business rules,
inspection reports, configurations, code changes, test plans, test cases, bug reports, user
manuals should all be documented, so that they are repeatable. Document files should be
well organized. There should be a system for easily finding and obtaining documents, and
determining what document has a particular piece of information. We should use
documentation change management, if possible.

141. What is the purpose of a test plan?


A:
• Reason#1: We create a test plan because preparing it helps us to think through
the efforts needed to validate the acceptability of a software product.
• Reason#2: We create a test plan because it can and will help people outside the test
group to understand the why and how of product validation.
• Reason#3: We create a test plan because, in regulated environments, we have to
have a written test plan.
• Reason#4: We create a test plan because the general testing process includes the
creation of a test plan.
• Reason#5: We create a test plan because we want a document that describes the
objectives, scope, approach and focus of the software testing effort.
• Reason#6: We create a test plan because it includes test cases, conditions, the test
environment, a list of related tasks, pass/fail criteria, and risk assessment.
• Reason#7: We create a test plan because one of the outputs of creating a test
strategy is an approved and signed off test plan document.

• Reason#8: We create a test plan because the software testing methodology is a three
step process, and one of the steps is the creation of a test plan.
• Reason#9: We create a test plan because we want an opportunity to review the test
plan with the project team.
• Reason#10: We create a test plan document because test plans should be
documented, so that they are repeatable.

142. Give me one test case that catches all the bugs!
A: If there is a "magic bullet", i.e. the one test case that has a good possibility to catch ALL
the bugs, or at least the most important bugs, it is a challenge to find it, because test cases
depend on requirements; requirements depend on what customers need; and customers can
have great many different needs. As software systems are getting increasingly complex, it is
increasingly more challenging to write test cases. It is true that there are ways to create
"minimal test cases" which can greatly simplify the test steps to be executed. But, writing
such test cases is time consuming, and project deadlines often prevent us from going that
route. Often the lack of enough time for testing is the reason for bugs to occur in the field.
However, even with ample time to catch the "most important bugs", bugs still surface with
amazing spontaneity. The challenge is, developers do not seem to know how to avoid
providing the many opportunities for bugs to hide, and testers do not seem to know where
the bugs are hiding.

143. What is the difference between a test plan and a test scenario?
A:
Difference#1: A test plan is a document that describes the scope, approach, resources, and
schedule of intended testing activities, while a test scenario is a document that describes
both typical and atypical situations that may occur in the use of an application.
Difference#2: Test plans define the scope, approach, resources, and schedule of the
intended testing activities, while test procedures define test conditions, data to be used for
testing, and expected results, including database updates, file outputs, and report results.
Difference#3: A test plan is a description of the scope, approach, resources, and schedule of
intended testing activities, while a test scenario is a description of test cases that ensure that
a business process flow, applicable to the customer, is tested from end to end.

144. What is a test scenario?


A: The terms "test scenario" and "test case" are often used synonymously. Test scenarios are
test cases, or test scripts, and the sequence in which they are to be executed. Test scenarios
are test cases that ensure that business process flows are tested from end to end. Test
scenarios are independent tests, or a series of tests, that follow each other, where each of
them is dependent upon the output of the previous one. Test scenarios are prepared by
reviewing functional requirements, and preparing logical groups of functions that can be
further broken into test procedures. Test scenarios are designed to represent both typical and
unusual situations that may occur in the application. Test engineers define unit test
requirements and unit test scenarios. Test engineers also execute unit test scenarios. It is the
test team that, with assistance of developers and clients, develops test scenarios for
integration and system testing. Test scenarios are executed through the use of test
procedures or scripts. Test procedures or scripts define a series of steps necessary to perform
one or more test scenarios. Test procedures or scripts may cover multiple test scenarios.

145. Give me some sample test cases you would write!


A: For instance, if one of the requirements is, "Brake lights shall be on, when the brake pedal
is depressed", then, based on this one simple requirement, for starters, I would write all of
the following test cases:
Test case#1: "Inputs: The headlights are on. The brake pedal is depressed. Expected result:
The brake lights are on. Verify that the brake lights are on, when the brake pedal is
depressed."
Test case#2: "Inputs: The left turn lights are on. The brake pedal is depressed. Expected
result: The brake lights are on. Verify that the brake lights are on, when the brake pedal is
depressed."
Test case#3: "Inputs: The right turn lights are on. The brake pedal is depressed.
Expected result: The brake lights are on. Verify that the brake lights are on, when the brake
pedal is depressed."
As you might have guessed, in the work place, in real life, requirements are more complex
than this one; and, just to verify this one, simple requirement, there is a need for many more
test cases.
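
As an illustration, the three test cases above can also be written as automated unit tests.
The BrakeLightController class below is a hypothetical stand-in for whatever interface the
real system exposes; only the requirement itself comes from the example, so treat this as a
sketch, not as the actual system under test.

import unittest


class BrakeLightController:
    """Hypothetical model: brake lights follow the brake pedal, regardless of other lights."""

    def __init__(self):
        self.headlights_on = False
        self.left_turn_on = False
        self.right_turn_on = False
        self.brake_pedal_depressed = False

    def brake_lights_on(self):
        return self.brake_pedal_depressed


class BrakeLightTests(unittest.TestCase):
    def test_brake_lights_with_headlights_on(self):
        ctrl = BrakeLightController()
        ctrl.headlights_on = True
        ctrl.brake_pedal_depressed = True
        self.assertTrue(ctrl.brake_lights_on())

    def test_brake_lights_with_left_turn_signal_on(self):
        ctrl = BrakeLightController()
        ctrl.left_turn_on = True
        ctrl.brake_pedal_depressed = True
        self.assertTrue(ctrl.brake_lights_on())

    def test_brake_lights_with_right_turn_signal_on(self):
        ctrl = BrakeLightController()
        ctrl.right_turn_on = True
        ctrl.brake_pedal_depressed = True
        self.assertTrue(ctrl.brake_lights_on())


if __name__ == "__main__":
    unittest.main()
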

146. How do you write test cases?


A: When I write test cases, I concentrate on one requirement at a time. Then, based on that
one requirement, I come up with several real life scenarios that are likely to occur in the use
of the application by end users. When I write test cases, I describe the inputs, action, or
event, and their expected results, in order to determine if a feature of an application is
working correctly. To make the test case complete, I also add particulars, e.g. test case
identifiers, test case names, objectives, test conditions (or setups), input data requirements
(or steps), and expected results. If I have a choice, I prefer writing test cases as early as
possible in the development life cycle. Why? Because, as a side benefit of writing test cases,
many times I am able to find problems in the requirements or design of an application. And,
because the process of developing test cases makes me completely think through the
operation of the application. You can learn to write test cases.

147. What is a parameter?


A: A parameter is an item of information - such as a name, a number, or a selected option -
that is passed to a program by a user or another program. By definition, a parameter is a
value on which something else depends. Any desired numerical value may be given as a
parameter. We use parameters when we want to allow a specified range of variables. We use
parameters when we want to differentiate behavior or pass input data to computer programs
or their subprograms. Thus, when we are testing, the parameters of the test can be varied to
produce different results, because parameters do affect the operation of the program
receiving them. Example 1: We use a parameter, such as temperature that defines a system.
In this definition, it is temperature that defines the system and determines its behavior.
Example 2: In the definition of function f(x) = x + 10, x is a parameter. In this definition, x
defines the f(x) function and determines its behavior. Thus, when we are testing, x can be
varied to make f(x) produce different values, because the value of x does affect the value of
f(x). When parameters are passed to a function subroutine, they are called arguments.
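
A minimal Python illustration of the f(x) = x + 10 example: x is the parameter, and varying
it produces different results from the same function.

def f(x):
    """x is the parameter; the value passed in each call is the argument."""
    return x + 10


for x in (0, 5, -10):
    print(f"f({x}) = {f(x)}")   # f(0) = 10, f(5) = 15, f(-10) = 0
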

148. What is a constant?


A: In software or software testing, a constant is a meaningful name that represents a
number, or string, that does not change. Constants are variables whose value remains the
same, i.e. constant, throughout the execution of a program. Why do developers use
constants? Because if we have code that contains constant values that keep reappearing, or,
if we have code that depends on certain numbers that are difficult to remember, we can
improve both the readability and maintainability of our code, by using constants. To give you
an example, let's suppose we declare a constant and we call it Pi. We set it to 3.14159265
and use it throughout our code. Constants, such as Pi, as the name implies, store values that
remain constant throughout the execution of our program. Keep in mind that, unlike variables
which can be read from and written to, constants are read-only variables. Although constants
resemble variables, we cannot modify or assign new values to them, as we can to variables.
But we can make constants public, or private. We can also specify what data type they are.
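
A minimal Python illustration of the Pi example. Note that Python has no enforced constant
keyword; by convention, an all-caps module-level name signals that the value should not be
changed. Other languages (e.g. those with a const or Const keyword) enforce this directly.

PI = 3.14159265          # meaningful name instead of a repeated "magic number"


def circle_area(radius):
    return PI * radius * radius


def circle_circumference(radius):
    return 2 * PI * radius


print(circle_area(2.0))            # 12.5663706
print(circle_circumference(3.0))   # 18.8495559
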

149. What is a requirements test matrix?


A: The requirements test matrix is a project management tool for tracking and managing
testing efforts, based on requirements, throughout the project's life cycle. The requirements
test matrix is a table, where requirement descriptions are put in the rows of the table, and
the descriptions of testing efforts are put in the column headers of the same table. The
requirements test matrix is similar to the requirements traceability matrix, which is a
representation of user requirements aligned against system functionality. The requirements
traceability matrix ensures that all user requirements are addressed by the system integration
team and implemented in the system integration effort. The requirements test matrix is a
representation of user requirements aligned against system testing. Similarly to the
requirements traceability matrix, the requirements test matrix ensures that all user
requirements are addressed by the system test team and implemented in the system testing
effort.
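
As an illustration, a requirements test matrix can be sketched as a simple data structure
that maps each requirement (row) to the test cases covering it (columns), which makes
coverage gaps easy to spot. The requirement IDs and test names below are made up.

# Hypothetical requirements and test cases, for illustration only.
requirements_test_matrix = {
    "REQ-001 User can log in":              ["TC-01 valid login", "TC-02 invalid password"],
    "REQ-002 User can reset password":      ["TC-03 reset via email"],
    "REQ-003 Session times out after 15 m": [],   # not yet covered by any test
}

for requirement, tests in requirements_test_matrix.items():
    status = ", ".join(tests) if tests else "NO TEST COVERAGE"
    print(f"{requirement}: {status}")
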

150. What is reliability testing?


A: Reliability testing is designing reliability test cases, using accelerated reliability techniques
(e.g. step-stress, test/analyze/fix, and continuously increasing stress testing techniques),
and testing units or systems to failure, in order to obtain raw failure time data for product
life analysis. The purpose of reliability testing is to determine product reliability, and to
determine whether the software meets the customer's reliability requirements. In the system
test phase, or after the software is fully developed, one reliability testing technique we use is
a test/analyze/fix technique, where we couple reliability testing with the removal of faults.
When we identify a failure, we send the software back to the developers, for repair. The
developers build a new version of the software, and then we run another test iteration. We track
failure intensity (e.g. failures per transaction, or failures per hour) in order to guide our test
process, and to determine the feasibility of the software release, and to determine whether
the software meets the customer's reliability requirements.

151. Give me an example on reliability testing.


A: For example, our products are defibrillators. From direct contact with customers during the
requirements gathering phase, our sales team learns that a large hospital wants to purchase
defibrillators with the assurance that 99 out of every 100 shocks will be delivered properly. In
this example, the fact that our defibrillator is able to run for 250 hours without any failure, in
order to demonstrate the reliability, is irrelevant to these customers. In order to test for
reliability we need to translate terminology that is meaningful to the customers into
equivalent delivery units, such as the number of shocks. We describe the customer needs in a
quantifiable manner, using the customer’s terminology. For example, our goal of quantified
reliability testing becomes as follows: Our defibrillator will be considered sufficiently reliable if
10 (or fewer) failures occur from 1,000 shocks. Then, for example, we use a test/analyze/fix
technique, and couple reliability testing with the removal of errors. When we identify a failed
delivery of a shock, we send the software back to the developers, for repair. The developers
build a new version of the software, and then we deliver another 1,000 shocks into dummy
resistor loads. We track failure intensity (i.e. number of failures per 1,000 shocks) in order to
guide our reliability testing, and to determine the feasibility of the software release, and to
determine whether the software meets our customers' reliability requirements.
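
A minimal sketch of the failure-intensity calculation described above, using the illustrative
goal of at most 10 failures per 1,000 shocks; the figures are made up for the example.

FAILURE_THRESHOLD_PER_1000 = 10   # agreed reliability goal from the example above


def failure_intensity(failures, shocks_delivered):
    """Failures per 1,000 shocks for one test iteration."""
    return failures * 1000.0 / shocks_delivered


def meets_reliability_goal(failures, shocks_delivered):
    return failure_intensity(failures, shocks_delivered) <= FAILURE_THRESHOLD_PER_1000


# One test/analyze/fix iteration: 1,000 shocks into dummy loads, 7 failures observed.
print(failure_intensity(7, 1000))        # 7.0 failures per 1,000 shocks
print(meets_reliability_goal(7, 1000))   # True: within the agreed threshold
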

152. What is verification?


A: Verification ensures the product is designed to deliver all functionality to the customer; it
typically involves reviews and meetings to evaluate documents, plans, code, requirements
and specifications; this can be done with checklists, issues lists, and walk-through and
inspection meetings.

153. What is validation?


A: Validation ensures that functionality, as defined in requirements, is the intended behavior
of the product; validation typically involves actual testing and takes place after Verifications
are completed.

154. What is a walk-through?


A: A walk-through is an informal meeting for evaluation or informational purposes. A walk-
through is also a process at an abstract level. It's the process of inspecting software code by
following paths through the code (as determined by input conditions and choices made along
the way). The purpose of code walk-through is to ensure the code fits the purpose. Walk-
through also offers opportunities to assess an individual's or team's competency.

155. What is an inspection?


A: An inspection is a formal meeting, more formalized than a walk-through, and typically
consists of 3-10 people, including a moderator, a reader, the author of whatever is being
reviewed, and a recorder (who takes notes). The subject of the inspection is
typically a document, such as a requirements document or a test plan. The purpose of an
inspection is to find problems and see what is missing, not to fix anything. The result of the
meeting should be documented in a written report. Attendees should prepare for this type of
meeting by reading through the document, before the meeting starts; most problems are
found during this preparation. Preparation for inspections is difficult, but is one of the most
cost effective methods of ensuring quality, since bug prevention is more cost effective than
bug detection.

156. What is quality?


A: Quality software is software that is reasonably bug-free, delivered on time and within
budget, meets requirements and expectations and is maintainable. However, quality is a
subjective term. Quality depends on who the customer is and their overall influence in the
scheme of things. Customers of a software development project include end-users, customer
acceptance test engineers, testers, customer contract officers, customer management, the
development organization's management, salespeople, software
engineers, stockholders and accountants. Each type of customer will have his or her own
slant on quality. The accounting department might define quality in terms of profits, while an
end-user might define quality as user friendly and bug free.

157. What is good code?


A: Good code is code that works, is free of bugs, and is readable and maintainable.
Organizations usually have coding standards all developers should adhere to, but every
programmer and software engineer has different ideas about what is best and what are too
many or too few rules. We need to keep in mind that excessive use of rules can stifle both
productivity and creativity. Peer reviews and code analysis tools can be used to check for
problems and enforce standards.

158. What is good design?


A: Design could mean too many things, but often refers to functional design or internal
design. Good functional design is indicated by software whose functionality can be traced back to
customer and end-user requirements. Good internal design is indicated by software code
whose overall structure is clear, understandable, easily modifiable and maintainable; is robust
with sufficient error handling and status logging capability; and works correctly when
implemented.

159. What is software life cycle?


A: The software life cycle begins when a software product is first conceived and ends when it is
no longer in use. It includes phases like initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document
preparation, integration, testing, maintenance, updates, retesting and phase-out.

160. What is the role of documentation in QA?


A: Documentation plays a critical role in QA. QA practices should be documented, so that
they are repeatable. Specifications, designs, business rules, inspection reports,
configurations, code changes, test plans, test cases, bug reports, user manuals should all be
documented. Ideally, there should be a system for easily finding and obtaining documents
and determining what document will have a particular piece of information. Use
documentation change management, if possible.



161. Why are there so many software bugs?
A: Generally speaking, there are bugs in software because of unclear requirements, software
complexity, programming errors, changes in requirements, errors made in bug tracking, time
pressure, poorly documented code and/or bugs in tools used in software development. There
are unclear software requirements because there is miscommunication as to what the
software should or shouldn't do. All of the following contribute to the exponential growth in
software and system complexity: Windows interfaces, client-server and distributed
applications, data communications, enormous relational databases and the sheer size of
applications. Programming errors occur because programmers and software engineers, like
everyone else, can make mistakes. As to changing requirements, in some fast-changing
business environments, continuously modified requirements are a fact of life. Sometimes
customers do not understand the effects of changes, or understand them but request them
anyway. And the changes require redesign of the software, rescheduling of resources and
some of the work already completed has to be redone or discarded, and hardware
requirements can be affected, too. Bug tracking can itself introduce errors, because keeping
track of a large number of changes is complex. Time pressures can cause problems,
because scheduling of software projects is not easy and it often requires a lot of guesswork
and when deadlines loom and the crunch comes, mistakes will be made. Code documentation
is tough to maintain and it is also tough to modify code that is poorly documented. The result
is bugs. Sometimes there is no incentive for programmers and software engineers to
document their code and write clearly documented, understandable code. Sometimes
developers get kudos for quickly turning out code, or programmers and software engineers
feel they cannot have job security if everyone can understand the code they write, or they
believe if the code was hard to write, it should be hard to read. Software development tools,
including visual tools, class libraries, compilers, scripting tools, can introduce their own bugs.
There are other cases where the tools are poorly documented, which can create additional
bugs.

162. Give me five common problems that occur during software development.
A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new
features after development is underway, and poor communication.
1. Requirements are poorly written when they are unclear, incomplete, too general,
or not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed into too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good until
customers complain or the system crashes.
4. It is extremely common that new features are added after development is underway.

5. Miscommunication either means the developers don't know what is needed, or customers
have unrealistic expectations and therefore problems are guaranteed.

163. What is a backward compatible design?


A: The design is backward compatible, if the design continues to work with earlier versions of
a language, program, code, or software. When the design is backward compatible, the
signals or data that had to be changed did not break the existing code. For instance, our
mythical web designer decides that the fun of using Javascript and Flash is more important
than backward compatible design, or, he decides that he doesn't have the resources to
maintain multiple styles of backward compatible web design. This decision of his will
inconvenience some users, because some of the earlier versions of Internet Explorer and
Netscape will not display his web pages properly, as there are some serious improvements in
the newer versions of Internet Explorer and Netscape that make the older versions of these
browsers incompatible with, for example, DHTML. This is when we say, "This design doesn't
continue to work with earlier versions of browser software. Therefore our mythical designer's
web design is not backward compatible". On the other hand, if the same mythical web
designer decides that backward compatibility is more important than fun, or, if he decides
that he has the resources to maintain multiple styles of backward compatible code, then no
user will be inconvenienced. No one will be inconvenienced, even when Microsoft and
Netscape make some serious improvements in their web browsers. This is when we can say,
"Our mythical web designer's design is backward compatible".



