
SOFTWARE QUALITY ASSURANCE – TESTING

By Dr. Muhammad Ali Memon
TESTING OVERVIEW [9.1]
• Testing is the process of finding differences between the specified (expected) and the observed (existing) system behavior.
• Goal: design tests that will systematically find defects.
  → The aim is to break the system (make it fail).
• Testing is usually done by developers who were not involved in the system's implementation.
• To test a system effectively, the tester must have a detailed understanding of the whole system.
  → Not a job for novices.
• It is impossible to completely test a nontrivial system.
  → Systems are often deployed without being completely tested.
Who Tests the Software?

Developer:
– understands the system
– but will test "gently"
– and is driven by "delivery"

Independent tester:
– must learn about the system
– but will attempt to break it
– and is driven by quality
TESTING — VERIFICATION & VALIDATION
• Verification is the process of making sure that we have built the product right (i.e., it meets its stated requirements).
  → Most of the testing workflow is targeted at verification.
• Validation is the process of making sure that we have built the right product (i.e., it is fit for its purpose).
  → Acceptance tests deal mainly with validation.
• Testing verifies the results of implementation by testing each build, as well as final versions of the system, by:
  – planning the tests required for each iteration
  – designing and implementing the tests by creating:
    · test cases that specify what to test
    · test procedures that specify how to test
    · test components to automate the testing, if possible


TESTING ARTIFACTS & WORKERS

[Diagram: the testing workers (test engineer, component engineer, integration tester, system tester) and the artifacts each is responsible for: test model, test case, test procedure, test plan, test evaluation, test component and defect.]


TESTING ARTIFACTS
• test case – specifies one way of testing the system: what to test, with which input or result, and under what conditions
• test model – a collection of test cases, test procedures and test components that describe how executable components (e.g., builds) are tested by integration and system tests
• test procedure – specifies how to perform one or several test cases, or parts of them
• test plan – describes the testing strategies, resources and schedule
• test evaluation – the results of the testing efforts
• test component – automates one or several test procedures, or parts of them (sometimes called test drivers, test harnesses or test scripts)
• defect – a system anomaly (i.e., a bug)
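As a concrete illustration (a sketch with an invented Account class, not an artifact from the course), Python's standard unittest module shows how these artifacts map onto code: each test method is a test case (what to test, with which input and expected result), its body encodes the test procedure (how to test), and the script as a whole acts as a test component that automates execution.

```python
import unittest

class Account:
    """Hypothetical module under test: a minimal bank account."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class DepositTests(unittest.TestCase):
    # Test case: what to test (deposit), with which input (100)
    # and which expected result (balance == 100).
    def test_deposit_increases_balance(self):
        account = Account()              # test procedure: set up,
        account.deposit(100)             # exercise, and then check
        self.assertEqual(account.balance, 100)

    # Test case: an invalid input must be rejected.
    def test_negative_deposit_rejected(self):
        account = Account()
        with self.assertRaises(ValueError):
            account.deposit(-5)

if __name__ == "__main__":
    unittest.main()  # the script doubles as a test component/driver
```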
Test Team
• Testers: devise test plans; design, organize and run tests
• Analysts: assist testers; provide guidance on the verification process as well as on the expected form of the test results
• Designers: assist testers; provide guidance on testing components and the software architecture
• Users: provide feedback to the development team
Causes of Software Faults

[Figure: causes of software faults; the diagram is not reproduced in this transcript.]
System Testing Process

Unit & Integration Testing objective: to make sure that the program code implements the design correctly.

System Testing objective: to ensure that the software does what the customer wants it to do.
Unit Testing

[Figure: the software engineer applies test cases to the module to be tested and examines the results.]
Integration Testing Strategies

Options:
• the "big bang" approach
• an incremental construction strategy
Regression Testing

• Check for defects propagated to other modules by changes made to the existing program:
– a representative sample of existing test cases is used to exercise all software functions
– additional test cases focus on software functions likely to be affected by the change
– further test cases focus on the changed software components
• The primary objective of regression testing is to ensure that all bug-free features stay that way.
• Bugs that have been fixed once should not turn up again in subsequent program versions (a sketch of such a "pinning" test follows below).
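As a minimal sketch (with a hypothetical parse_price function and an invented past bug), a regression test can pin a previously fixed defect so that it fails loudly if the bug ever returns in a later version:

```python
import unittest

def parse_price(text):
    """Hypothetical function that once mishandled surrounding whitespace."""
    return float(text.strip())

class RegressionTests(unittest.TestCase):
    def test_price_with_surrounding_whitespace(self):
        # Pins a previously fixed defect: earlier versions raised
        # ValueError on input with leading/trailing spaces. If the
        # bug resurfaces, this test fails again.
        self.assertEqual(parse_price("  19.99 "), 19.99)

if __name__ == "__main__":
    unittest.main()
```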
Stress Testing

• Test the system under extreme conditions (i.e., beyond the limits of normal use).
• Create test cases that demand resources in abnormal quantity, frequency, or volume (one dimension is sketched after this list):
– low memory conditions
– disk faults (read/write failures, full disk, file corruption, etc.)
– network faults
– unusually high number of requests
– unusually large requests or files
– unusually high data rates (what happens if the network suddenly becomes ten times faster?)
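A minimal sketch of one stress dimension, an unusually high number of concurrent requests, assuming a hypothetical handle_request function standing in for the real system entry point:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def handle_request(i):
    """Hypothetical system entry point; replace with the real interface."""
    return f"response {i}"

def stress_many_requests(total=10_000, workers=200):
    # Fire far more concurrent requests than normal use would produce
    # and check that every one of them still completes successfully.
    failures = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(handle_request, i) for i in range(total)]
        for future in as_completed(futures):
            try:
                future.result(timeout=30)
            except Exception:
                failures += 1
    assert failures == 0, f"{failures} of {total} requests failed under load"

if __name__ == "__main__":
    stress_many_requests()
```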


Usability Testing

• Is the user interface intuitive, easy to use, organized, logical?
• Does it frustrate users?
• Are common tasks simple to do?
• Does it conform to platform-specific conventions?
• Get real users to sit down and use the software to perform some tasks.
• Watch them performing the tasks, noting things that seem to give them trouble.
Security Testing

• Any system that manages sensitive information or performs sensitive functions may become a target for intrusion (i.e., hackers).
• How feasible is it to break into the system?
• Learn the techniques used by hackers.
• Try whatever attacks you can think of.
• Hire a security expert to break into the system.


Configuration Testing

• Test on all required hardware configurations:
– CPU, memory, disk, graphics card, network card, etc.
• Test on all required operating systems and versions thereof.
– Virtualization technologies such as VMware and Virtual PC are very helpful for this.


Compatibility Testing

• Test to make sure the program is compatible with other programs it is supposed to work with.
• Ex: Can Word 12.0 load files created with Word 11.0?
• Ex: "Save As… Word, WordPerfect, PDF, HTML, Plain Text"
• Ex: "This program is compatible with Internet Explorer and Firefox"
• Test all compatibility requirements.
Documentation Testing

• Test all instructions given in the documentation to ensure their completeness and accuracy.
• For example, "How To ..." instructions are sometimes not updated to reflect changes in the user interface.
• Test user documentation on real users to ensure it is clear and complete.


Deployment Testing

The software is installed on a target system and tested with:
• various hardware configurations
• various supported OS versions and service packs
• various versions of third-party components (e.g., database servers, web servers, etc.)
• various configurations of resident software
System Testing Process

Function Testing: the system must perform the functions specified in the requirements.
Performance Testing: the system must satisfy the security, precision, load and speed constraints specified in the requirements.
Acceptance Testing: customers try the system (in the lab) to make sure that the system built is the system they requested.
Deployment Testing: the software is deployed and tested in the production environment.
Performance Testing

Load / Stress Testing: a large number of users or requests
Volume Testing: large quantities of data
Security Testing: test access, authentication, data safety
Timing Testing: for control and event-driven systems (a timing check is sketched below)
Quality / Precision Testing: verify the result accuracy (where applicable)
Recovery Testing: verify the recovery-from-failure process (where applicable)
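A minimal sketch of a timing test, assuming a hypothetical process_event handler and an invented 50 ms response-time constraint:

```python
import time

def process_event(event):
    """Hypothetical event handler under test."""
    return {"handled": event}

def test_event_handled_within_deadline():
    # Invented constraint: each event must be handled within 50 ms.
    deadline_seconds = 0.050
    start = time.perf_counter()
    process_event({"type": "sensor_reading", "value": 42})
    elapsed = time.perf_counter() - start
    assert elapsed < deadline_seconds, (
        f"event took {elapsed * 1000:.1f} ms, exceeding the 50 ms constraint"
    )

if __name__ == "__main__":
    test_event_handled_within_deadline()
    print("timing constraint satisfied")
```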
Acceptance Testing

The purpose is to enable customers and users to determine if the system built really meets their needs and expectations.

Benchmarking: a predetermined set of test cases corresponding to typical usage conditions is executed against the system
Pilot Testing: users employ the software as a small-scale experiment or in a controlled environment
Alpha Testing: pre-release closed / in-house user testing
Beta Testing: pre-release public user testing
Parallel Testing: old and new software are used together and the old software is gradually phased out

Acceptance testing uncovers requirement discrepancies and helps users find out what they really want (hopefully not at the developer's expense!).
Safety-Critical Systems

Safe means free from accident or loss.

Hazard: a system state that, together with the right conditions, can lead to an accident.
Failure Mode: a situation that can lead to a hazard.

We can build a fault tree to trace known failure modes to unknown effects / hazards (a tiny fault-tree sketch follows).
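A tiny sketch of a fault tree as code, with invented failure modes: leaf nodes are basic failure modes, AND/OR gates combine them, and evaluating the tree shows which combinations of failures reach the top-level hazard.

```python
# Minimal fault-tree sketch with invented events. Leaves are basic
# failure modes; AND/OR gates combine them into the top-level hazard.

def evt(name):
    """Leaf node: true when the named failure mode has occurred."""
    return lambda occurred: name in occurred

def AND(*children):
    return lambda occurred: all(child(occurred) for child in children)

def OR(*children):
    return lambda occurred: any(child(occurred) for child in children)

# Hypothetical hazard: the pump overheats if the coolant supply fails
# AND either the temperature sensor or the alarm circuit fails.
hazard = AND(evt("coolant failure"),
             OR(evt("sensor failure"), evt("alarm failure")))

print(hazard({"coolant failure", "sensor failure"}))  # True: hazard reachable
print(hazard({"sensor failure"}))                     # False: coolant still OK
```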
Safety-Critical System Issues

Remember Murphy's Laws:
– if a fault can occur, it will
– if a user can make a mistake, he will
– the least probable fault causes are not impossible, so they must still be considered

* A 100% reliable system is not necessarily safe or secure!
* Budget with testing in mind.
Hazard and Operability Studies

HAZOPS involves structured analysis to anticipate system hazards and to suggest means to avoid or deal with them.

During testing we must select test cases to exercise each failure mode to make sure that the system reacts in a safe manner.
Cleanroom

A method devised by IBM for developing high-quality software with a high-productivity team.

Principles:
1) Software must be certified with respect to the specifications (rather than tested).
2) Software must be zero-fault.
Cleanroom Process

1) The software is specified as a black box, then refined as a state box, then refined as a clear box.
2) The box structure encourages analysts to find flaws and omissions early in the project life cycle (when it is cheaper and faster to fix them).
3) The clear-box specification is converted into an intended function using natural language or mathematical notation.
4) For each function a correctness theorem is devised and proven.

* Unit testing is not necessary or even permitted!
* Errors found by statistical testing tend to be simple mistakes that are easy to fix, as the cleanroom process eliminates deeper problems of principle.

IBM claims an order-of-magnitude fault reduction (e.g., 3 faults per 1000 lines of code).
Source & Version Control

Open Source: CVS flavors
Microsoft: Visual SourceSafe, TFS

• Each version is stored separately
• Delta changes
• Modification comments
• Revision history including users

Do not use conditional compilation for tasks other than debugging, tracing or checked builds (a small sketch of such a debug-only guard follows).
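Conditional compilation as such belongs to compiled languages; as a rough Python analogue (a sketch only, with a hypothetical transfer function), the built-in __debug__ constant can guard debug-only tracing that is stripped when running optimized (python -O), matching the "debugging, tracing or checked builds" exception above:

```python
import logging

def transfer(account, amount):
    """Hypothetical operation; the guard below is debug-only tracing."""
    if __debug__:  # False under `python -O`, like a release build
        logging.debug("transfer(account=%r, amount=%r)", account, amount)
    account["balance"] = account.get("balance", 0) + amount
    return account

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    print(transfer({"balance": 100}, 25))
```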
Automated Testing

• Automated testing tools run test suites and report the results without manual intervention.
• Important points:
– test automation has to be learned
– testing tools are no replacement for testing staff
– not everything can be tested, even with automation tools
– automated tests must be continuously updated
– automating tests costs time and money
