


The process consisting of all life cycle activities, both static and dynamic, concerned with
planning, preparation and evaluation of software products and related work products,
to determine that they satisfy specified requirements, to demonstrate that they are fit
for purpose and to detect defects.


 Testing is context dependent
 Exhaustive testing is impossible
 Early testing
 Defect clustering
 Pesticide paradox
 Testing shows the presence of defects
 Absence-of-errors fallacy


Error:- A human action that produces an incorrect result.

Defect (bug, fault):- A flaw in a component or system that can cause the
component or the system to fail to perform its required function. A defect,
if encountered during execution, may cause a failure of the component
or system.

Failure:- Deviation of the component or system from its expected delivery, service or result.


Defects in software, systems or documents may result in failures, but not all defects do so.
It is not just defects that give rise to failures. Failures can also be caused by:

 Environmental conditions, e.g. a radiation burst
 Human error in interacting with the software, e.g. wrong input entered or output
being misinterpreted.

 Malicious damage: someone deliberately trying to cause a failure in a system.

When we think about what might go wrong, we have to consider defects and failures
arising from:

 Errors in specification, design and implementation of the software and system
 Errors in use of the system
 Environmental conditions
 Intentional damage
 Potential consequences of earlier errors, intentional damage, defects and failures.

What is the Cost of defects?

 The cost of finding and fixing defects rises considerably across the life cycle.
 If an error is made and the consequent defect is detected in the requirements at the
specification stage, it is relatively cheap to find and fix: the specification
can be corrected and re-issued.
 If a defect is detected at the design stage, the design can be corrected and
re-issued with relatively little expense.
 If a defect is introduced in the requirements specification and it is not detected
until acceptance testing, or even once the system has been implemented, then
it will be much more expensive to fix.

Testing and Quality

 Testing can give confidence in the quality of the software if it finds few or no defects.

 Testing helps us to measure the quality of the software in terms of the
number of defects found, the tests run and the system covered by the tests.
Quality : The degree to which a component , system or process meets specified
requirements or user or customer needs and expectations.

Validation: Is it the right specification?
Verification: Is the system correct to the specification?
How much testing is enough? (Test principle: Exhaustive testing is impossible)

 Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
 Pressures on a project include time and budget, as well as pressure to deliver a
technical solution that meets customer needs.

 Customers and project managers will want to spend an amount on testing that
produces a return on investment for them.

 The return on investment comes from preventing failures after release that would be costly.

 Assessing and managing risk is one of the most important testing activities.

 How much testing is enough depends on the level of risk: the technical and business
risks related to the product and the project.

Detecting defects:

Testing helps us understand the risks associated with putting the software into
operation.

Fixing defects improves the quality of the product. Identifying defects has
another benefit: it helps to improve the development process, so that
fewer mistakes are made in future work.

When can we meet our test objectives? (Test principle: Early testing)

 Finding defects
 Gaining confidence in, and providing information about, the level of quality
 Preventing defects

Benefits of early testing
Early test design and review activities find defects early on, when they are
cheap to find and fix.

Focusing on defects can help us plan our tests (Testing principle: Defect clustering).
The main focus of reviews and other static tests is to carry out testing as early as possible,
finding and fixing defects more cheaply and preventing defects from appearing
at later stages of the project. These activities help us find out about defects earlier
and identify potential clusters.

Debugging: The process of finding, analyzing and removing the causes of failures in software.


Test planning:
 Determine the scope and risks and identify the objectives of testing.
 Determine the test approach (techniques, test items, coverage, testware).
 Implement the test policy and the test strategy.
 Determine the required resources.
 Schedule test analysis and design tasks, test implementation, execution and evaluation.
 Determine the exit criteria.

Test control:
 Measure and analyze the results of reviews and testing.
 Monitor and document progress, test coverage and exit criteria.
 Provide information on testing.
 Initiate corrective actions.
 Make decisions.

Test analysis and design:
 Review the test basis.
 Identify test conditions based on analysis of the test items and their specifications.
 Design the tests.
 Evaluate the testability of the requirements and system.
 Design the test environment set-up and identify any required infrastructure
and tools.


Test implementation and execution:
 Develop and prioritize the test cases.
 Create test suites from the test cases for efficient test execution.
 Execute the test suites and individual test cases.
 Log the outcome of test execution and record the identities and versions of the
software under test, test tools and testware.
 Compare the actual results with the expected results.
 Repeat test activities as a result of action taken for each discrepancy.

Evaluating exit criteria and reporting:
 Check test logs against the exit criteria specified in the test planning.
 Assess whether more tests are needed or whether the exit criteria specified should be changed.
 Write a test summary report for stakeholders.

Test closure activities:
• Check which planned deliverables were actually delivered and ensure all incident reports
have been resolved through defect repair or deferral.
• Finalize and archive testware, such as scripts, the test environment and infrastructure.
• Hand over testware to the maintenance team.
• Evaluate the testing and analyze the lessons learned for future projects.

We need to be careful when we are reviewing and when we are testing:

 Communicate findings on the product in a neutral, fact-focused way, without criticizing
the person who created it.

 Explain that by knowing about the problem now we can work around it or fix it, so that
the delivered system is better for the customer.
 Start with collaboration rather than battles. Remind everyone of the common goal
of a better quality system.

In every development life cycle, part of testing is focused on VERIFICATION
testing and part on VALIDATION testing.

VERIFICATION: To determine whether the deliverable meets the requirements. Is the deliverable
built according to the specification?

VALIDATION: To determine whether it meets the user needs. Is the deliverable
fit for purpose?


The Waterfall model was one of the earliest models to be designed. It has a
natural timeline where tasks are executed in a sequential fashion.
The drawbacks of this model are that it is difficult to get feedback passed backwards
up the waterfall, and that there are difficulties if we need to carry out numerous
iterations for a particular phase.

The V-Model was developed to address the problems experienced using
the traditional Waterfall approach. The V-Model provides guidance that
testing needs to begin as early as possible in the life cycle.

The typical V-Model uses four test levels:

Component testing
Integration testing
System testing
Acceptance testing
Iterative life cycles
 A common feature of iterative approaches is that the delivery is divided into
increments or builds, with each increment adding new functionality.
 The initial increment will contain the infrastructure required to support the
initial build functionality.

 The increment produced by an iteration may be tested at several levels
as part of its development.

Testing within a life cycle model
In summary, whichever life cycle model is being used, there are several
characteristics of good testing:

 For every development activity there is a corresponding testing activity.
 Each test level has test objectives specific to that level.
 The analysis and design of tests for a given test level should begin during
the corresponding development activity.

 Testers should be involved in reviewing documents as soon as drafts are
available in the development life cycle.

Test Levels

Component Testing:
Also known as unit, module or program testing, it tests software items (modules,
programs, objects) that are separately testable.
Component testing may include testing of functionality and of specific non-functional
characteristics, such as resource behavior (e.g. memory leaks), performance or
robustness, as well as structural testing.

One approach in component testing, used in Extreme Programming (XP), is to
prepare and automate test cases before coding. This is called a test-first approach
or test-driven development.
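As a minimal sketch of the test-first idea (not taken from any particular XP project), the Python example below writes the tests before the code; `leap_year` is a hypothetical function invented purely for illustration.

```python
import unittest

# Test-first: these test cases are written and automated before the
# production code exists. `leap_year` is a hypothetical example function.
class TestLeapYear(unittest.TestCase):
    def test_ordinary_leap_year(self):
        self.assertTrue(leap_year(1996))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_400_years_is_leap(self):
        self.assertTrue(leap_year(2000))

# Only now is the production code written, with the goal of making the
# tests above pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Run with `python -m unittest`: the tests fail until the production code is written, then pass.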

Integration Testing:
Integration testing tests interfaces between components, interactions with different
parts of a system (such as the operating system, file system and hardware), or
interfaces between systems.

There may be more than one level of integration testing, and it may be carried
out on test objects of varying size:
 Component integration testing tests the interactions between software components
and is done after component testing.
 System integration testing tests the interactions between different systems and
may be done after system testing.

‘Big-Bang’ integration testing
At one extreme, all components or systems are integrated simultaneously, after
which everything is tested as a whole. Big-Bang testing has the advantage that
everything is finished before integration testing starts. Its disadvantages are that
it is time-consuming and that it is difficult to trace the cause of failures.

Different approaches to integration:

 Top-down approach
 Bottom-up approach
 Functional incremental
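The top-down approach can be sketched as follows: a higher-level component is integration tested while a lower-level component that is not yet ready is replaced by a stub. All class and method names below are hypothetical, invented for illustration.

```python
# Top-down integration sketch: the high-level ReportService is tested
# first, with the not-yet-integrated database layer replaced by a stub.

class DatabaseStub:
    """Stands in for the real database component during integration testing."""
    def fetch_orders(self, customer_id):
        # Returns fixed, known data so the higher-level logic can be tested.
        return [{"id": 1, "total": 100.0}, {"id": 2, "total": 50.0}]

class ReportService:
    def __init__(self, db):
        self.db = db  # the real database component is plugged in later

    def total_spent(self, customer_id):
        return sum(order["total"] for order in self.db.fetch_orders(customer_id))

service = ReportService(DatabaseStub())
print(service.total_spent(customer_id=42))  # 150.0
```

When the real database layer is ready, the stub is swapped out and the same tests are re-run against the fully integrated pair.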

System Testing:
System testing is concerned with the behavior of the whole system/product as
defined by the scope of a development project or product.

System testing should investigate both Functional and non-functional requirements
of the system.

System testing requires a controlled test environment with regard to, amongst
other things, control of software versions, testware and test data.

Acceptance Testing:
The goal of acceptance testing is to establish confidence in the system.
Acceptance testing focuses on a validation type of testing, to determine whether
the system is fit for purpose. Finding defects should not be the main focus
of acceptance testing.

Acceptance testing may occur at more than just a single level.

 A Commercial Off-The-Shelf (COTS) software product may be acceptance tested
when it is installed or integrated.
 Acceptance testing of the usability of a component may be done during
component testing.
 Acceptance testing of a new functional enhancement may come before system testing.
Different types of acceptance testing:
 Operational acceptance test (testing of backup/restore, disaster recovery).
 Compliance acceptance test (testing performed against regulations, such as
legal or safety regulations).
 Contract acceptance test (performed against a contract’s acceptance criteria for
producing custom-developed software).

Two stages of acceptance tests:
 Alpha testing: tests take place at the developer’s site.
 Beta testing: tests take place at the customer’s site (under real-world working conditions).

Test Types:
Testing of function:
Functional testing considers the specified behavior and is often referred to as
black-box testing.

Functional testing can, based upon ISO 9126, be done focusing on suitability,
interoperability, security, accuracy and compliance.

Testing functionality can be done from two perspectives:
 Requirements-based testing uses a specification of the functional requirements
for the system as the basis for designing tests.

 Business-process-based testing uses knowledge of the business processes, which
describe the scenarios involved in the day-to-day business use of the system.
Use cases are a very useful basis for test cases from a business perspective.

Testing of software product characteristics (Non-functional testing)
Non-functional testing includes performance testing, load testing, stress testing,
usability testing, maintainability testing, reliability testing and portability testing.

The ISO 9126 standard defines six quality characteristics and their subdivision
into sub-characteristics:

Functionality: suitability, accuracy, interoperability, security and compliance.
Reliability: maturity (robustness), fault-tolerance, recoverability and compliance.
Usability: understandability, learnability, operability, attractiveness and compliance.
Efficiency: time behavior, resource utilization and compliance.
Maintainability: analyzability, changeability, stability, testability and compliance.
Portability: adaptability, installability, co-existence, replaceability and compliance.

Testing software structure/architecture (structural testing)

Structural testing is often referred to as ‘white-box’ or ‘glass-box’ testing because we are
interested in what is happening ‘inside the box’.

Structural testing is most often used as a way of measuring the thoroughness
of testing through the coverage of a set of structural elements or coverage items.

Testing related to changes:

Confirmation testing (re-testing):
When a test fails and we determine that the cause of the failure is a software
defect, the defect is reported, and we can expect a new version of the software
in which the defect has been fixed. We will need to execute the test again to confirm
that the defect has indeed been fixed. This is known as confirmation testing.

Regression testing
Testing of a previously tested program following modification to ensure that
defects have not been introduced or uncovered in unchanged areas of the
software as a result of the changes made. It is performed when the software or its
environment is changed.

Maintenance Testing:
Modification of a software product after delivery to correct defects, to improve
performance or other attributes or to adapt the product to the modified
environment .

Impact analysis and regression testing:
Usually maintenance testing will consist of two parts:
 Testing the changes
 Regression tests to show that the rest of the system has not been affected
by the maintenance work.

Impact analysis: the assessment of changes to the layers of development
documentation, test documentation and components,
in order to implement a given change to specified requirements.
Triggers for maintenance testing:-
Maintenance testing is triggered by modifications, migration or retirement of the system.

Modifications include planned enhancement changes, corrective and emergency changes,
and changes of environment (planned operating system or database upgrades, or patches to
newly discovered vulnerabilities of the operating system).

Migration should include operational testing of the new environment, as well as of the changed software.

Retirement of the system:
This may include the testing of data migration or archiving, if long data-retention
periods are required.

Planned Modification:
Types of Planned Modification:
 Perfective modification: supplying new functions or enhancing performance.
 Adaptive modification: adapting software to environmental changes such as
new hardware, new systems software or new legislation.
 Corrective modification: deferrable corrections of defects.

On average, planned modification represents over 90% of maintenance work on
software systems.


During static testing, software work products are examined manually, or with a
set of tools, but not executed.

Types of defects that can be detected during static testing:

 Deviations from standards
 Missing requirements
 Design defects
 Non-maintainable code
 Inconsistent interface specifications

The use of static testing brings various advantages, through reviews of software products:
 Static testing can start early in the life cycle, so early feedback on quality
issues can be established.
 By detecting defects at an early stage, rework costs are most often
relatively low, giving relatively cheap improvement of software quality.
 Rework effort is substantially reduced, and development productivity figures are likely to increase.
 Evaluation by a team has the additional advantage that there is an
exchange of information between the participants.
 Static tests contribute to an increased awareness of quality issues.

Phases of a formal review:
Planning:

In this phase the entry check is carried out, to ensure that the reviewers’ time is
not wasted on a document that is not ready for review.

Within reviews the following focuses can be identified:
♦ Focus on higher-level documents, e.g. does the design comply with the requirements?
♦ Focus on standards, e.g. internal consistency, clarity, naming conventions, templates.
♦ Focus on related documents at the same level, e.g. interfaces between software functions.
♦ Focus on usage, e.g. for testability or maintainability.

Kick-off:
The goal of this meeting is to get everybody on the same wavelength regarding
the document under review and to commit to the time that will be spent
on checking the document. The relationships between the document under
review and the other documents (sources) are explained. Role assignments,
checking rate, the pages to be checked and process changes are discussed in this meeting.

Preparation:
The individual participants identify defects, questions and comments, according
to their understanding of the document and their role. Using checklists in this phase
can make reviews more effective and efficient. A critical success factor for a
thorough preparation is the number of pages checked per hour. By collecting data
and measuring the review process, company-specific criteria for checking rate
and document size can be set.

Review meeting:
This meeting typically consists of the following elements:
Logging phase: the focus is on logging as many defects as possible within
a certain timeframe.
Discussion phase: participants can bring forward their comments and reasoning.
The moderator ensures that all discussed items have an outcome
by the end of the meeting.
Decision phase: a decision on the document under review has to be made
by the participants, based on formal exit criteria.

Types of review:

♦ Walkthrough
♦ Inspection
♦ Technical review

Walkthrough: A step-by-step presentation by the author of a document in
order to gather information and to establish a common understanding of
its contents. A walkthrough is especially useful for higher-level documents
such as requirements, specifications and architectural documents.

Key characteristics of a walkthrough:

♦ The meeting is led by the author; often a separate scribe is present.
♦ Scenarios and dry runs may be used to validate the content.
♦ Separate pre-meeting preparation for reviewers is optional.

Technical review: A peer group discussion activity focused on achieving
consensus on the technical approach to be taken. There is little or no
focus on defect identification on the basis of referenced documents, intended
readership and rules.

Key characteristics:
♦ It is a documented defect-detection process that involves peers and technical experts.
♦ It is often performed as a peer review without management participation.
♦ It is led by a trained moderator, but possibly also by a technical expert.
♦ A separate preparation is carried out during which the product is examined and
defects are found.
♦ Formal characteristics such as the use of checklists and a logging list or issue log are optional.

Inspection: the most formal review type. The document under inspection
is prepared and checked thoroughly by the reviewers before the meeting,
comparing the work product with its sources and other referenced documents,
and using rules and checklists.

Key Characteristics:
 It is led by a trained moderator.
 It uses defined roles during the process.
 It involves peers examining the product.
 Rules and checklists are used during the preparation phase.
 A separate preparation is carried out during which the product is examined and
defects are found.
 The defects found are documented in a logging list or issue log.
 A formal follow-up is carried out by the moderator, applying exit criteria.
 Optionally, a causal analysis step is introduced to address process
improvement issues and learn from the defects found.
 Metrics are gathered and analyzed to optimize the process.

Static Analysis Tools:
Static analysis tools are typically used by developers before and sometimes
during component and integration testing, and by designers during software modelling.

 Static analysis is an examination of requirements, design or code without
actually executing the software artifact being examined.
 Static analysis is ideally performed before the types of formal review described above.
 Static analysis is unrelated to dynamic properties of the requirements, design
and code, such as test coverage.
 The goal of static analysis is to find defects, whether or not they may cause
failures. As with reviews, static analysis finds defects rather than failures.
Coding standards:
A coding standard consists of a set of programming rules. Checking a coding
standard is best left to a static analysis tool rather than to human reviewers.
There are three main causes for this:
 The number of rules in a coding standard is usually so large that nobody can
remember them all.
 Some context-sensitive rules that demand a review of several files are
very hard for a human being to check.
 If people spend time checking coding standards in reviews, that will distract
them from other defects.
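As a toy sketch of such an automated check, the Python fragment below tests two invented rules (a maximum line length and snake_case variable names); the rules and the regular expression are illustrative assumptions, and real projects would rely on an established linter.

```python
import re

# Two illustrative coding-standard rules, checked mechanically:
# 1. lines longer than 79 characters, 2. assigned names not in snake_case.
MAX_LINE = 79
ASSIGN = re.compile(r"^\s*([A-Za-z_]\w*)\s*=")

def check_source(lines):
    findings = []
    for number, line in enumerate(lines, start=1):
        if len(line.rstrip("\n")) > MAX_LINE:
            findings.append((number, "line too long"))
        match = ASSIGN.match(line)
        if match and not re.fullmatch(r"[a-z_][a-z0-9_]*", match.group(1)):
            findings.append((number, "name not snake_case"))
    return findings

source = ["TotalCount = 0\n", "total = 1\n"]
print(check_source(source))  # [(1, 'name not snake_case')]
```

Even this crude check never forgets a rule and never tires, which is the point being made above.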
Code metrics:
Structural attributes of code include comment frequency, depth of nesting,
cyclomatic number and number of lines of code.
Complexity metrics:
Complexity metrics identify high-risk, complex areas.
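The cyclomatic number can be approximated as one plus the number of decision points in the code. The Python sketch below uses the standard ast module to do this; it is a simplification of what real metrics tools compute, intended only to illustrate the idea.

```python
import ast

# A rough cyclomatic-complexity estimate: one plus the number of decision
# points (if / for / while / and / or / except) found in the source.
DECISIONS = (ast.If, ast.For, ast.While, ast.And, ast.Or, ast.ExceptHandler)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

code = """
def grade(score):
    if score >= 90:
        return 'A'
    elif score >= 75:
        return 'B'
    return 'C'
"""
print(cyclomatic_complexity(code))  # 3
```

Functions scoring well above the team's agreed threshold would be flagged as high-risk, complex areas worth extra review and testing.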
Code structure:
There are several aspects of code structure:
 Control flow structure
 Data flow structure
 Data structure

Control flow analysis can also be used to identify unreachable (dead) code.
In summary, the value of static analysis lies especially in:
 Early detection of defects prior to test execution.
 Early warning about suspicious aspects of the code, design or requirements.
 Identification of defects not easily found by dynamic testing.
 Improved maintainability of code and design, provided that engineers work according to
documented standards and rules.
 Prevention of defects, provided that engineers are willing to learn from their
errors and continuous improvement is practiced.

Test conditions, test cases and test procedures (or scripts) are prepared according
to the test documentation standard IEEE 829.

Test condition: an item or event of a component or system that could be verified by
one or more test cases.

Test design technique: a procedure used to derive and/or select test cases.

Traceability: the ability to identify related items in documentation and
software, such as requirements with associated tests.


The IEEE 829 test design specification contains:

Test design specification identifier
Features to be tested
Approach refinements
Feature pass/fail criteria
Test identification

Test Case Specification:
A document specifying a set of test cases (inputs, test actions, expected results and
execution preconditions) for a test item.

Test Oracle:
A source used to determine expected results, which are compared with the actual
results of the software under test.
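A minimal sketch of the oracle idea: here Python's built-in sorted serves as the oracle providing expected results, against which a hypothetical implementation under test (bubble_sort, invented for illustration) is compared.

```python
# Implementation under test (hypothetical example).
def bubble_sort(items):
    data = list(items)
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

test_input = [3, 1, 2]
expected = sorted(test_input)      # the oracle supplies the expected result
actual = bubble_sort(test_input)   # the software under test supplies the actual result
assert actual == expected, f"expected {expected}, got {actual}"
```

In practice the oracle may be a specification, a trusted reference implementation, or an expert's judgment; the principle of comparing expected against actual results is the same.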

The IEEE 829 test case specification contains:

Test case specification identifier
Test item
Input specification
Output specification
Environmental needs
Special procedural requirements
Inter-case dependencies

TEST IMPLEMENTATION:- Specifying test procedures or test scripts :
Test procedure specification: a document specifying the sequence of actions for the
execution of a test. Also known as a test script or manual test script.

Test script: Commonly used to refer to a test procedure specification, especially an
automated one.


The IEEE 829 test procedure specification contains:

Test procedure specification identifier
Special requirements
Procedure steps

Specification-based (black-box) techniques: a procedure to derive and/or select test
cases based on an analysis of the specification, either functional or non-functional,
of a system, without reference to its internal structure.

Structure-based (white-box) techniques: a procedure to derive and/or select test cases
based on an analysis of the internal structure of a component or system.

Experience-based techniques: people’s knowledge, skills and background are a prime
contributor to the test conditions and test cases.


Equivalence partitioning
Boundary value analysis
Decision tables
State transition testing

Equivalence partitioning: a black-box test design technique in which test cases are
designed to exercise representatives from equivalence partitions. In principle, test
cases are designed to cover each partition at least once.

Boundary value analysis: testing of an input or output value which is on the edge of an
equivalence partition, or at the smallest incremental distance on either side of
that edge.
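A small sketch combining both techniques, for an invented rule that ages 18 to 65 inclusive are accepted (the rule, values and function name are illustrative assumptions):

```python
# Rule under test (hypothetical): ages 18-65 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition
# (below range, in range, above range).
partition_values = {"below": 10, "valid": 40, "above": 80}
assert not is_eligible(partition_values["below"])
assert is_eligible(partition_values["valid"])
assert not is_eligible(partition_values["above"])

# Boundary value analysis: each edge, and one step either side of it.
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert is_eligible(age) is expected
```

Three partition tests plus four boundary tests cover the rule far more economically than trying every possible age.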

BS-7925 -2 standard for software component testing.

Decision tables are more focused on business logic and business rules, and are a good way
to deal with combinations of things. This technique is also referred to as a ‘cause-effect’ table.
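A sketch of decision table testing for an invented discount rule with two conditions, giving four combinations; the rule, rates and names are illustrative assumptions:

```python
from itertools import product

# Hypothetical business rule: members get a 10% discount,
# large orders get a further 5%; the conditions combine.
def discount(is_member, large_order):
    rate = 0            # percentage, kept as an integer
    if is_member:
        rate += 10
    if large_order:
        rate += 5
    return rate

# Each row of the decision table is one combination of condition values
# together with the expected action (the discount rate).
table = {
    (True, True): 15,
    (True, False): 10,
    (False, True): 5,
    (False, False): 0,
}
for conditions in product([True, False], repeat=2):
    assert discount(*conditions) == table[conditions]
```

With n binary conditions the full table has 2^n rows; in practice impossible or equivalent combinations are pruned to keep the test set manageable.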

State transition testing: a technique in which test cases are designed to exercise valid
and invalid state transitions, where some aspect of the system can be
described in what is called a ‘finite state machine’. A finite state machine is often
shown as a state diagram.

Four basic parts of the state transition model:

The states that the software may occupy;
The transitions from one state to another;
The events that cause a transition;
The actions that result from a transition.
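These four parts can be sketched as a table-driven finite state machine, here for an invented door control; the states, events and names are illustrative assumptions, with one valid-transition test and one invalid-transition test:

```python
# States and events of a hypothetical door; the table holds the valid
# transitions (state, event) -> next state.
TRANSITIONS = {
    ("closed", "open_cmd"): "open",
    ("open", "close_cmd"): "closed",
    ("closed", "lock_cmd"): "locked",
    ("locked", "unlock_cmd"): "closed",
}

def next_state(state, event):
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event} in state {state}")
    return TRANSITIONS[key]

# Valid transition: opening a closed door succeeds.
assert next_state("closed", "open_cmd") == "open"

# Invalid transition: opening a locked door must be rejected.
try:
    next_state("locked", "open_cmd")
except ValueError:
    pass
else:
    raise AssertionError("invalid transition was accepted")
```

Test cases are then chosen to cover every valid transition at least once, plus a selection of invalid ones to confirm they are rejected.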

Use case testing: a technique in which test cases are designed to execute user scenarios.

Actor- a user of the system.

Each use case describes the interactions the actor has with the system in order
to achieve the specific task.

Error guessing is a technique that should always be used as a complement to
other, more formal techniques.

Exploratory testing: a test design technique where the tester actively controls the
design of the tests as those tests are performed, and uses information gained
while testing to design new and better tests.