CTS Software Testing Notes
Cognizant Technology Solutions
Confidential
Table of Contents
1 INTRODUCTION TO SOFTWARE
1.1 EVOLUTION OF THE SOFTWARE TESTING DISCIPLINE
1.2 THE TESTING PROCESS AND THE SOFTWARE TESTING LIFE CYCLE
1.3 BROAD CATEGORIES OF TESTING
1.4 WIDELY EMPLOYED TYPES OF TESTING
1.5 THE TESTING TECHNIQUES
1.6 CHAPTER SUMMARY
2 BLACK BOX AND WHITE BOX TESTING
2.1 INTRODUCTION
2.2 BLACK BOX TESTING
2.3 TESTING STRATEGIES/TECHNIQUES
2.4 BLACK BOX TESTING METHODS
2.5 BLACK BOX VS. WHITE BOX
2.6 WHITE BOX TESTING
3 GUI TESTING
3.1 SECTION 1 - WINDOWS COMPLIANCE TESTING
3.2 SECTION 2 - SCREEN VALIDATION CHECKLIST
3.3 SPECIFIC FIELD TESTS
3.4 VALIDATION TESTING - STANDARD ACTIONS
4 REGRESSION TESTING
4.1 WHAT IS REGRESSION TESTING
4.2 TEST EXECUTION
4.3 CHANGE REQUEST
4.4 BUG TRACKING
4.5 TRACEABILITY MATRIX
5 PHASES OF TESTING
5.1 INTRODUCTION
5.2 TYPES AND PHASES OF TESTING
5.3 THE V-MODEL
6 INTEGRATION TESTING
6.1 GENERALIZATION OF MODULE TESTING CRITERIA
7 ACCEPTANCE TESTING
7.1 INTRODUCTION TO ACCEPTANCE TESTING
7.2 FACTORS INFLUENCING ACCEPTANCE TESTING
7.3 CONCLUSION
8 SYSTEM TESTING
8.1 INTRODUCTION TO SYSTEM TESTING
8.2 NEED FOR SYSTEM TESTING
15 DEFECT MANAGEMENT
15.1 DEFECT
15.2 DEFECT FUNDAMENTALS
15.3 DEFECT TRACKING
15.4 DEFECT CLASSIFICATION
15.5 DEFECT REPORTING GUIDELINES
16 AUTOMATION
16.1 WHY AUTOMATE THE TESTING PROCESS?
16.2 AUTOMATION LIFE CYCLE
16.3 PREPARING THE TEST ENVIRONMENT
16.4 AUTOMATION METHODS
17 GENERAL AUTOMATION TOOL COMPARISON
17.1 FUNCTIONAL TEST TOOL MATRIX
17.2 RECORD AND PLAYBACK
17.3 WEB TESTING
17.4 DATABASE TESTS
17.5 DATA FUNCTIONS
17.6 OBJECT MAPPING
17.7 IMAGE TESTING
17.8 TEST/ERROR RECOVERY
17.9 OBJECT NAME MAP
17.10 OBJECT IDENTITY TOOL
17.11 EXTENSIBLE LANGUAGE
17.12 ENVIRONMENT SUPPORT
17.13 INTEGRATION
17.14 COST
17.15 EASE OF USE
17.16 SUPPORT
17.17 OBJECT TESTS
17.18 MATRIX
17.19 MATRIX SCORE
18 SAMPLE TEST AUTOMATION TOOL
18.1 RATIONAL SUITE OF TOOLS
18.2 RATIONAL ADMINISTRATOR
18.3 RATIONAL ROBOT
18.4 ROBOT LOGIN WINDOW
18.5 RATIONAL ROBOT MAIN WINDOW - GUI SCRIPT
18.6 RECORD AND PLAYBACK OPTIONS
18.7 VERIFICATION POINTS
18.8 ABOUT SQABASIC HEADER FILES
18.9 ADDING DECLARATIONS TO THE GLOBAL HEADER FILE
18.10 INSERTING A COMMENT INTO A GUI SCRIPT
18.11 ABOUT DATA POOLS
18.12 DEBUG MENU
18.13 COMPILING THE SCRIPT
18.14 COMPILATION ERRORS
1 Introduction to Software
1.1 Evolution of the Software Testing discipline
The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term software engineering was first used at a 1968 NATO workshop in West Germany, which focused on the growing software crisis. Thus we see that the software crisis (quality, reliability, high costs, etc.) started way back when most of today's software testers were not even born!
The attitude towards software testing underwent a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. In the 1960s, when compilers were developed, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster and cost-effective software. There has also been a growing interest in software safety, protection and security, and hence an increased acceptance of testing as a technical discipline and also a career choice!
Now, to answer "What is testing?", we can go by the famous definition of Myers, which says, "Testing is the process of executing a program with the intent of finding errors." Testing throughout the software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study till implementation, testing needs to be done at every phase. The V-Model of the Software Testing Life Cycle, shown below alongside the Software Development Life Cycle, indicates the various phases or levels of testing.
[Figure: SDLC - STLC V-Model, pairing development phases with test levels: Requirement Study with Production Verification Testing, High Level Design with User Acceptance Testing, Low Level Design with System Testing, and Unit Testing with Integration Testing at the base of the V.]
System Testing: Testing the software for the required specifications on the intended
hardware
Acceptance Testing: Formal testing conducted to determine whether or not a system
satisfies its acceptance criteria, which enables a customer to determine whether to
accept the system or not.
Performance Testing: To evaluate the time taken or response time of the system to perform its required functions, in comparison with the specified performance requirements
Stress Testing: To evaluate a system beyond the limits of the specified requirements or system resources (such as disk space, memory, processor utilization) to ensure the system does not break unexpectedly
Load Testing: Load Testing, a subset of stress testing, verifies that a web site can
handle a particular number of concurrent users while maintaining acceptable response
times
Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer
Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.
This chapter covered the introduction and basics of software testing, the broad categories and widely employed types of testing, and the testing techniques.
2 Black Box and White Box Testing
Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box.
White-box test design allows one to peek inside the "box", and it focuses specifically on
using internal knowledge of the software to guide the selection of test data. It is used to
detect errors by means of execution-oriented test cases. Synonyms for white-box include:
structural, glass-box and clear-box.
While black-box and white-box are terms that are still in popular use, many people prefer
the terms "behavioral" and "structural". Behavioral test design is slightly different from
black-box test design because the use of internal knowledge isn't strictly forbidden, but
it's still discouraged. In practice, it hasn't proven useful to use a single test design
method. One has to use a mixture of different methods so that they aren't hindered by
the limitations of a particular one. Some call this "gray-box" or "translucent-box" test
design, but others wish we'd stop talking about boxes altogether!!!
- Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guess work by the tester as to the methods of the function
- Data outside of the specified input range should be tested to check the robustness of the program
- Boundary cases should be tested (top and bottom of specified range) to make sure the highest and lowest allowable inputs produce proper output
- The number zero should be tested when numerical data is to be input
- Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems
- Crash testing should be performed to see what it takes to bring the system down
- Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests, to avoid repetition and to aid in the software maintenance
- Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing
- Finite state machine models can be used as a guide to design functional tests
According to Beizer the following is a general order by which tests should be designed:
1. Clean tests against requirements.
2. Additional structural tests for branch coverage, as needed.
3. Additional tests for data-flow coverage as needed.
4. Domain tests not covered by the above.
5. Special techniques as appropriate: syntax, loop, state, etc.
6. Any dirty tests not covered by the above.
2.4.2 Graph-Based Testing Methods
Black-box methods based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph:
- Transaction flow testing (nodes represent steps in some transaction and links represent logical connections between steps that need to be validated)
- Finite state modeling (nodes represent user observable states of the software and links represent transitions between states)
- Data flow modeling (nodes are data objects and links are transformations from one data object to another)
- Timing modeling (nodes are program objects and links are sequential connections between these objects; link weights are required execution times)
Equivalence Partitioning
Black-box technique that divides the input domain into classes of data from which test cases can be derived. An ideal test case uncovers a class of errors that might otherwise require many arbitrary test cases to be executed before a general error is observed.
Equivalence class guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined
4. If an input condition is Boolean, one valid and one invalid equivalence class is defined
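To make guideline 1 concrete, here is a minimal sketch (JUnit 4) that derives one test per equivalence class for a hypothetical isValidAge routine accepting ages 18 to 60: one valid class and two invalid classes.

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class AgeFieldEquivalenceTest {
        // Hypothetical unit under test: accepts ages in the range 18..60.
        static boolean isValidAge(int age) {
            return age >= 18 && age <= 60;
        }

        @Test public void validClass()        { assertTrue(isValidAge(35)); }  // 18..60
        @Test public void invalidClassBelow() { assertFalse(isValidAge(10)); } // < 18
        @Test public void invalidClassAbove() { assertFalse(isValidAge(75)); } // > 60
    }

One representative value per class is enough; any other member of the same class should behave identically.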
Boundary Value Analysis
Black-box technique that focuses on the boundaries of the input domain rather than its center.
BVA guidelines:
1. If an input condition specifies a range bounded by values a and b, test cases should include a and b, and values just above and just below a and b
2. If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum values
3. Apply guidelines 1 and 2 to output conditions; test cases should be designed to produce the minimum and maximum output reports
4. If internal program data structures have boundaries (e.g. size limitations), be certain to test the boundaries
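Continuing the same hypothetical 18-60 range, a minimal sketch of guideline 1: exercise a and b themselves plus the values just above and just below each.

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class AgeFieldBoundaryTest {
        static boolean isValidAge(int age) { return age >= 18 && age <= 60; }

        @Test public void lowerBoundary() {
            assertFalse(isValidAge(17)); // just below a
            assertTrue(isValidAge(18));  // a itself
            assertTrue(isValidAge(19));  // just above a
        }

        @Test public void upperBoundary() {
            assertTrue(isValidAge(59));  // just below b
            assertTrue(isValidAge(60));  // b itself
            assertFalse(isValidAge(61)); // just above b
        }
    }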
Orthogonal Array Testing
Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage. The focus is on categories of faulty logic likely to be present in the software component (without examining the code).
Priorities for assessing tests using an orthogonal array:
1. Detect and isolate all single mode faults
2. Detect all double mode faults
3. Detect multimode faults
Limitations of black box testing:
- Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
- Without clear and concise specifications, test cases are hard to design
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
Let's use this system to understand and clarify the characteristics of black box and white box testing.
People: Who does the testing?
Some people know how software works (developers) and others just use it (users).
Accordingly, any testing by users or other non-developers is sometimes called black box
testing. Developer testing is called white box testing. The distinction here is based on
what the person knows or can understand.
Coverage: What is tested?
If we draw the box around the system as a whole, black box testing becomes another
name for system testing. And testing the units inside the box becomes white box testing.
This is one way to think about coverage. Another is to contrast testing that aims to cover
all the requirements with testing that aims to cover all the code. These are the two most
commonly used coverage criteria. Both are supported by extensive literature and
commercial tools. Requirements-based testing could be called black box because it
makes sure that all the customer requirements have been verified. Code-based testing is
often called white box because it makes sure that all the code (the statements, paths,
or decisions) is exercised.
Risks: Why are you testing?
Sometimes testing is targeted at particular risks. Boundary testing and other attack-based
techniques are targeted at common coding errors. Effective security testing also requires
a detailed understanding of the code and the system architecture. Thus, these
techniques might be classified as white box. Another set of risks concerns whether the
software will actually provide value to users. Usability testing focuses on this risk, and
could be termed black box.
Activities: How do you test?
A common distinction is made between behavioral test design, which defines tests based
on functional requirements, and structural test design, which defines tests based on the
code itself. These are two design approaches. Since behavioral testing is based on
external functional definition, it is often called "black box", while structural testing, based on the code internals, is called "white box". Indeed, this is probably the most commonly cited definition for black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) to black box testing, and maps code inspection (static testing) to white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and to tools that facilitate applying inputs and capturing outputs (most notably GUI capture/replay tools) as black box tools. Testing is then categorized based on the types of tools used.
Evaluation: How do you know if you've found a bug?
There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. These contrast with black box techniques that simply look at the official outputs of a program.
White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product both black and white box testing are required.
White box testing is much more expensive than black box testing. It requires the source
code to be produced before the tests can be planned and is much more laborious in the
determination of suitable input data and the determination if the software is or is not
correct. The advice given is to start test planning with a black box test approach as soon
as the specification is available. White box planning should commence as soon as all
black box tests have been successfully passed, with the production of flowgraphs and
determination of paths. The paths should then be checked against the black box test plan
and any additional required test runs determined and applied.
The consequences of test failure at this stage may be very expensive. A failure of a white box test may result in a change which requires all black box testing to be repeated and the re-determination of the white box paths.
To conclude: apart from the above described analytical methods of both glass and black box testing, there are further constructive means to guarantee high quality software end products. Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and last but not least the involvement of users in both software development and testing procedures.
Summary:
Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risk); or behavioral testing or capture/replay automation (activities). White box testing, on the other hand, can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); structural testing, inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation).
Basis path testing derives a basis set of execution paths from the code's control flow. Test cases that exercise the basis set will execute every statement at least once. Note that unstructured loops are not to be tested; rather, they are redesigned.
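A small illustration with a hypothetical classify routine: two decisions give a cyclomatic complexity of 3, so a basis set of three paths executes every statement (run with java -ea to enable assertions).

    public class ClassifyBasisPathTest {
        // Hypothetical unit under test: two decisions, so v(G) = 2 + 1 = 3.
        static int classify(int x) {
            if (x < 0) return -1;       // decision 1
            else if (x == 0) return 0;  // decision 2
            else return 1;
        }

        public static void main(String[] args) {
            // One test per basis path; together they execute every statement.
            assert classify(-5) == -1 : "path: decision 1 true";
            assert classify(0)  ==  0 : "path: decision 2 true";
            assert classify(7)  ==  1 : "path: both decisions false";
            System.out.println("basis set passed");
        }
    }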
2 Design by Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:
- Conditions that the client must meet before a method is invoked.
- Conditions that a method must meet after it executes.
- Assertions that a method must satisfy at specific points of its execution.
Tools that check DbC contracts at runtime, such as JContract [http://www.parasoft.com/products/jtract/index.htm], are used to perform this function.
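The sketch below shows the flavor of a contract using plain Java assertions rather than a contract-checking tool; the Account class and its contract are hypothetical.

    public class Account {
        private long balance; // invariant: balance >= 0 (in cents)

        // Contract, DbC style:
        //   precondition:  amount > 0 && amount <= balance
        //   postcondition: balance == old(balance) - amount
        public void withdraw(long amount) {
            assert amount > 0 && amount <= balance : "precondition violated";
            long old = balance;
            balance -= amount;
            assert balance == old - amount && balance >= 0 : "postcondition violated";
        }
    }

A runtime contract checker automates exactly these checks from the comment-level specification, so the contract and the code cannot silently drift apart.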
3 Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that are consuming the majority of the CPU time so that problems may be tracked down to improve performance.
These include the use of the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK. Third party tools such as JaViz [http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this function.
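Short of running a full profiler, a crude first pass is to time a suspect routine directly; a minimal sketch (hotRoutine is hypothetical):

    public class CrudeTimer {
        public static void main(String[] args) {
            long start = System.nanoTime();
            hotRoutine(); // routine suspected of consuming most of the CPU time
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("hotRoutine took " + elapsedMs + " ms");
        }

        static void hotRoutine() {
            double acc = 0;
            for (int i = 1; i < 5_000_000; i++) acc += Math.sqrt(i); // CPU-bound work
            System.out.println(acc);
        }
    }

A profiler does the same job across the whole call tree, without hand-inserted timers.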
4 Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete fail-over and operating on error-causing test vectors. Proper error recovery, notification and logging are checked against references to validate program design.
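A minimal sketch (JUnit 4) of driving a routine with error-causing test vectors and checking that each is rejected gracefully with a meaningful notification; parseQuantity and its behavior are hypothetical.

    import static org.junit.Assert.*;
    import org.junit.Test;

    public class ErrorHandlingTest {
        // Hypothetical unit under test: rejects bad input with a descriptive exception.
        static int parseQuantity(String s) {
            try {
                int q = Integer.parseInt(s);
                if (q < 0) throw new IllegalArgumentException("negative quantity: " + q);
                return q;
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("not a number: " + s, e);
            }
        }

        @Test public void errorVectorsAreRejectedGracefully() {
            String[] errorVectors = { "abc", "", "-1", "2147483648" };
            for (String bad : errorVectors) {
                try {
                    parseQuantity(bad);
                    fail("expected rejection for: " + bad);
                } catch (IllegalArgumentException expected) {
                    assertNotNull(expected.getMessage()); // notification carries detail
                }
            }
        }
    }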
5 Transactions
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Each of the individual properties is tested individually against a reference data set.
Transactions are checked thoroughly for partial/complete commits and rollbacks encompassing databases and other XA-compliant transaction processors.
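A minimal JDBC sketch of the atomicity check: the debit and credit must commit together or roll back together; the account table and column names are hypothetical.

    import java.sql.*;

    public class TransferAtomicityCheck {
        static void transfer(Connection con, int from, int to, long cents) throws SQLException {
            con.setAutoCommit(false);
            try (PreparedStatement debit = con.prepareStatement(
                     "UPDATE account SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = con.prepareStatement(
                     "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, cents);
                debit.setInt(2, from);
                debit.executeUpdate();
                credit.setLong(1, cents);
                credit.setInt(2, to);
                credit.executeUpdate();
                con.commit();   // complete commit: both rows changed, or...
            } catch (SQLException e) {
                con.rollback(); // ...partial work is undone
                throw e;
            }
        }
    }

A test would run this against a reference data set, force a failure between the two updates, and verify that neither row changed.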
Disadvantages of White Box Testing
- Expensive
- Cases omitted in the code could be missed out.
3 GUI Testing
Fields which can never be updated should be displayed with black text on a gray background with a black label. All text should be left justified, followed by a colon tight to it. In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status. List boxes always have a white background with black text, whether they are disabled or not. All others are gray.
In general, double-clicking is not essential; everything can be done using both the mouse and the keyboard. All tab buttons should have a distinct letter.
Spacing should be compatible with the existing windows spacing (Word etc.). Items should be in alphabetical order, with the exception of blank/none, which is at the top or the bottom of the list box. A drop-down with an item selected should display the list with the selected item on the top. Make sure only one space appears between items and there isn't a blank line at the bottom.
1. Does a failure of validation on every field cause a sensible user error message?
2. Is the user required to fix entries which have failed validation tests?
3. Have any fields got multiple validation rules and if so are all rules being applied?
4. If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field) is the invalid entry identified and highlighted correctly with an error message?
5. Is validation consistently applied at screen level unless specifically required at field level?
6. For all numeric fields check whether negative numbers can and should be able to be entered.
7. For all numeric fields check the minimum and maximum values and also some mid-range values allowable.
8. For all character/alphanumeric fields check the field to ensure that there is a character limit specified and that this limit is exactly correct for the specified database size (a sketch of such a check follows this list).
9. Do all mandatory fields require user input? If any of the database columns don't allow null values then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional then check whether null values are allowed in this field.)
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tab's to another application does this have any impact on the screen upon return to the application?
16. Do all the fields' edit boxes indicate the number of characters they will hold by their length? e.g. a 30 character field should be a lot longer on screen.
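As a sketch of the character-limit check in item 8, the screen field's limit can be compared with the database column size via JDBC metadata; the table name, column name and the 30-character limit are all hypothetical.

    import java.sql.*;

    public class FieldLengthCheck {
        public static void main(String[] args) throws Exception {
            int screenFieldLimit = 30; // limit enforced by the hypothetical screen field
            try (Connection con = DriverManager.getConnection(args[0]);
                 ResultSet rs = con.getMetaData()
                                   .getColumns(null, null, "CUSTOMER", "NAME")) {
                if (rs.next()) {
                    int columnSize = rs.getInt("COLUMN_SIZE");
                    System.out.println(screenFieldLimit == columnSize
                        ? "PASS: field limit matches column size (" + columnSize + ")"
                        : "FAIL: field allows " + screenFieldLimit
                            + " characters but column holds " + columnSize);
                }
            }
        }
    }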
1. Are the screen and field colors adjusted correctly for read-only mode?
2. Should a read-only mode be provided for this screen?
3. Are all fields and controls disabled in read-only mode?
4. Can the screen be accessed from the previous screen/menu/toolbar in read-only mode?
5. Can all screens available from this screen be accessed in read-only mode?
6. Check that no validation is performed in read-only mode.
6. In drop down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen.
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present; i.e. make sure they don't act on the screen behind the current screen.
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command
buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names
meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and same font &
font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have
duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default value
(command button, or other object) which is invoked when the Enter key is
pressed - and NOT the Cancel or Close button
20. Assure that focus is set to an object/button, which makes sense according to the
function of the window/dialog box.
21. Assure that all option button (and radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names
meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do
not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("group boxes").
26. Assure that the Tab key sequence, which traverses the screens, does so in a
logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals
are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color
and highlighting (the application should not dictate the desktop background
characteristics).
30. Assure that the screen/window does not have a cluttered appearance.
31. Ctrl + F6 opens next tab within tabbed window
32. Shift + Ctrl + F6 opens previous tab within tabbed window
33. Tabbing will open next tab within tabbed window if on last field of current tab
34. Tabbing will go onto the 'Continue' button if on last field of last tab within tabbed
window
35. Tabbing will go onto the next editable field in the window
36. Banner style & size & display exact same as existing windows
37. If 8 or fewer options in a list box, display all options on open of list box - there should be no need to scroll
38. Errors on continue will cause the user to be returned to the tab and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it)
39. Pressing continue while on the first tab of a tabbed window (assuming all fields
filled correctly) will not open all the tabs.
40. On open of tab focus will be on first editable field
41. All fonts to be the same
42. Alt+F4 will close the tabbed window and return you to main screen or previous
screen (as appropriate), generating "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button
44. Ensure all fields are disabled in read-only mode
45. Progress messages on load of tabbed screens
46. Return operates the Continue button
47. If retrieve on load of tabbed window fails window should not open
Key   No Modifier   Shift             CTRL                            ALT
F1    Help          Enter Help Mode   N/A                             N/A
F2    N/A           N/A               N/A                             N/A
F3    N/A           N/A               N/A                             N/A
F4    N/A           N/A               Close Document / Child window   Close Application
F5    N/A           N/A               N/A                             N/A
F6    N/A           N/A               N/A                             N/A
F7    N/A           N/A               N/A                             N/A
F8    N/A           N/A               N/A                             N/A
F9    N/A           N/A               N/A                             N/A
F10   N/A           N/A               N/A                             N/A
Tab   N/A           N/A               Move to next open Document or Child window (adding SHIFT reverses the order of movement)   Switch to previously used application (holding down the ALT key displays all open applications)
Alt   N/A           N/A               N/A                             N/A
Key Combination   Function
CTRL + Z          Undo
CTRL + X          Cut
CTRL + C          Copy
CTRL + V          Paste
CTRL + N          New
CTRL + O          Open
CTRL + P          Print
CTRL + S          Save
CTRL + B          Bold*
CTRL + I          Italic*
CTRL + U          Underline*

* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.
4 Regression Testing
4.1 What is Regression Testing?
Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is that changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.
The selective retesting of a software system that has been modified ensures that any bugs have been fixed, that no other previously working functions have failed as a result of the reparations, and that newly added features have not created problems with previous versions of the software. Also referred to as verification testing.
It is a quality control measure to ensure that the newly modified code still
complies with its specified requirements and that unmodified code has not been
affected by the maintenance activity.
- The sanity cycle checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.
- The normal cycle tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.
- The advanced cycle tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth), and also test advanced options in the application (depth).
- The regression cycle tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.
With Manual Test Execution you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status.
During Automated Test Execution you create a batch of tests and launch the entire batch at once. The testing tool runs the tests one at a time, then imports results, providing outcome summaries for each test.
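A minimal sketch of launching such a batch with JUnit 4 and printing an outcome summary; the three test classes are hypothetical placeholders for your suite.

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;

    public class RegressionBatch {
        public static void main(String[] args) {
            // Launch the entire batch at once; the runner executes one test at a time.
            Result r = JUnitCore.runClasses(
                LoginTest.class, SearchTest.class, CheckoutTest.class);
            System.out.println("Ran " + r.getRunCount() + " tests, "
                + r.getFailureCount() + " failed, in " + r.getRunTime() + " ms");
        }
    }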
Bugs can be detected and reported by engineers, testers, and end-users in all phases of the testing process. Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.
First you report New bugs to the database, and provide all necessary information to reproduce, fix, and follow up the bug.
Software developers fix the Open bugs and assign them the status Fixed.
QA personnel test a new build of the application. If a bug does not reoccur, it
is Closed. If a bug is detected again, it is reopened.
Communication is an essential part of bug tracking; all members of the development and quality assurance team must be well informed in order to ensure that bug information is up to date and that the most important problems are addressed.
The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.
A traceability matrix verifies that each requirement is traced to a product tested to meet the requirement. Below is a simple traceability matrix structure; there can be more things included in a traceability matrix than shown here. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.
Traceability ensures completeness, that all lower level requirements derive from
higher level requirements, and that all higher level requirements are allocated to
lower level requirements. Traceability is also used in managing change and
provides the basis for test planning.
SAMPLE TRACEABILITY MATRIX
A traceability matrix is a report from the requirements database or repository. The
examples below show traceability between user and system requirements. User
requirement identifiers begin with "U" and system requirements with "S."
Tracing S12 to its source makes it clear this requirement is erroneous: it must be
eliminated, rewritten, or the traceability corrected.
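An illustrative matrix of this shape (identifiers other than S12 are hypothetical):

    User Requirement   System Requirement(s)   Test Case(s)
    U1                 S1, S2                  TC-01, TC-02
    U2                 S3                      TC-03
    (no source)        S12                     (trace must be corrected)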
5 Phases of Testing
5.1 Introduction
The primary objective of the testing effort is to determine the conformance to requirements specified in the contracted documents. The integration of this code with the internal code is an important objective. The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, and can be used in any stage that tests the system as a whole (system testing, acceptance testing, unit testing, installation, etc.).
QA Documents:
- Requirement Checklist
- Design Checklist
- Functional Checklist
- Unit Test Case Documents
- Integration Test Case Documents
- System Test Case Documents
- Regression Test Case Documents
- Performance Test Case Documents
- User Acceptance Test Case Documents
[Figure: V-Model - Requirements pairs with Acceptance Testing, Specification with System Testing, Architecture with Integration Testing, Detailed Design with Unit Testing, and Coding at the base.]
[Figure: life-cycle phases (Requirement Study, Functional Specification, Architecture Design, Detailed Design, Coding) mapped to their test documents (Requirement Checklist, Unit/Integration/System Test Cases, Regression Test Cases, Performance Test Cases, User Acceptance Test Cases) and reviews (Requirements Review, Specification Review, Architecture Review, Design Review, Code Walkthrough), with Regression Rounds 1-3 following Unit, Integration and System Testing.]
6 Integration Testing
One of the most significant aspects of a software development project is the integration
strategy. Integration may be performed all at once, top-down, bottom-up, critical piece
first, or by first integrating functional subsystems and then integrating the subsystems in
separate phases using any of the basic strategies. In general, the larger the project, the
more important the integration strategy.
Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has been traditionally limited to "black box" techniques.
Large systems may require many integration phases, beginning with assembling modules
into low-level subsystems, then assembling subsystems into larger subsystems, and
finally assembling the highest level subsystems into the complete system.
To be most effective, an integration testing technique should fit well with the overall
integration strategy. In a multi-phase integration, testing at each phase helps detect
errors early and keep the system under control. Performing only cursory testing at early
integration phases and then applying a more rigorous criterion for the final stage is really
just a variant of the high-risk "big bang" approach. However, performing rigorous testing
of the entire software involved in each integration phase involves a lot of wasteful
duplication of effort across phases. The key is to leverage the overall integration structure
to allow rigorous testing at each phase while minimizing duplication of effort.
It is important to understand the relationship between module testing and integration
testing. In one view, modules are rigorously tested in isolation using stubs and drivers
before any integration is attempted. Then, integration testing concentrates entirely on
module interactions, assuming that the details within each module are accurate. At the
other extreme, module and integration testing can be combined, verifying the details of
each module's implementation in an integration context. Many projects compromise,
combining module testing with the lowest level of subsystem integration testing, and then
performing pure integration testing at higher levels. Each of these views of integration
testing may be appropriate for any given project, so an integration testing method should
be flexible enough to accommodate them all.
The design reduction technique identifies the control structures that are involved with module calls, so that it is possible to exercise them independently during integration testing. The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing. Figure 7-2 shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that
function call ("black dot") nodes cannot be reduced. The remaining rules work together to
eliminate the parts of the flow graph that are not involved with module calls. The
sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of
this rule removes one node and one edge from the flow graph, it leaves the cyclomatic
complexity unchanged. However, it does simplify the graph so that the other rules can be
applied. The repetitive rule eliminates top-test loops that are not involved with module
calls. The conditional rule eliminates conditional statements that do not contain calls in
their bodies. The looping rule eliminates bottom-test loops that are not involved with
module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly-structured code it may be hard to distinguish the "top" of the loop from the "bottom." For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit. Since the
repetitive, conditional, and looping rules each remove one edge from the flow graph, they
each reduce cyclomatic complexity by one.
Rules 1 through 4 are intended to be applied iteratively until none of them can be applied,
at which point the design reduction is complete. By this process, even very complex logic
can be eliminated as long as it does not involve any module calls.
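A small sketch of the rules applied to a hypothetical module: the loop survives reduction because its body contains a call, while the non-call assignments and the call-free conditional are eliminated.

    public class DesignReductionExample {
        static final int LIMIT = 100;
        static void audit(int v)  { /* module call */ }
        static void report(int t) { /* module call */ }

        static void process(int[] items) {
            int total = 0;                            // sequential rule: removed
            for (int i = 0; i < items.length; i++) {  // kept: body contains a call
                total += items[i];                    // sequential rule: removed
                audit(items[i]);                      // call rule: call nodes are kept
            }
            if (total > LIMIT) {                      // conditional rule: no call in
                total = LIMIT;                        //   its body, so it is removed
            }
            report(total);                            // call rule: kept
        }
        // Reduced flow graph: entry -> loop around audit() -> report() -> exit.
        // Integration tests then only need to drive the loop zero and one-or-more times.
    }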
Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and
it is important to limit the corresponding stages of testing as well. Hierarchical design is
most effective when the coupling among sibling components decreases as the
component size increases, which simplifies the derivation of data sets that test
interactions among components. The remainder of this section extends the integration
testing techniques of structured testing to handle the general case of incremental
integration, including support for hierarchical design. The key principle is to test just the
interaction among components at each integration stage, avoiding redundant testing of
previously integrated sub-components.
7 Acceptance Testing
7.1 Introduction to Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine
whether a system satisfies its acceptance criteria and thus whether the customer should
accept the system.
The main types of software testing are:
Component.
Interface.
System.
Acceptance.
Release.
Acceptance Testing checks the system against the "Requirements". It is similar to
systems testing in that the whole system is checked but the important difference is the
change in focus:
Systems Testing checks that the system that was specified has been delivered.
Acceptance Testing checks that the system delivers what was requested.
The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment.
The forms of the tests may follow those in system testing, but at all times they are
informed by the business needs.
The test procedures that lead to formal 'acceptance' of new or changed systems. User
Acceptance Testing is a critical phase of any 'systems' project and requires significant
participation by the 'End Users'. To be of real use, an Acceptance Test Plan should be
developed in order to plan precisely, and in detail, the means by which 'Acceptance' will
be achieved. The final part of the UAT can also include a parallel run to prove the system
against the current system.
- Critical Problem: testing can continue but we cannot go into production (live) with this problem.
- Major Problem: testing can continue but going live with this feature will cause severe disruption to business processes in live operation.
- Medium Problem: testing can continue and the system is likely to go live with only minimal departure from agreed business processes.
- Minor Problem: both testing and live operations may progress. This problem should be corrected, but little or no changes to business processes are envisaged.
- Cosmetic Problem: e.g. colours, fonts, pitch size. However, if such features are key to the business requirements they will warrant a higher severity level.
The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problems in severity level 1 receive priority response and that all testing will cease until such level 1 problems are resolved.
Caution: even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem into its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, it is strongly advised that a range of examples is agreed in advance to ensure that there are no fundamental areas of disagreement; or, if there are, that these will be known in advance and your organisation is forewarned.
Finally, it is crucial to agree the Criteria for Acceptance. Because no system is entirely
fault free, it must be agreed between End User and vendor, the maximum number of
acceptable 'outstandings' in any particular category. Again, prior consideration of this is
advisable.
N.B. In some cases, users may agree to accept ('sign off') the system subject to a range
of conditions. These conditions need to be analysed as they may, perhaps
unintentionally, seek additional functionality which could be classified as scope creep. In
any event, any and all fixes from the software developers, must be subjected to rigorous
System Testing and, where appropriate Regression Testing.
7.3 Conclusion
Hence the goal of acceptance testing is to verify the overall quality, correct operation, scalability, completeness, usability, portability, and robustness of the functional components supplied by the software system.
8 System Testing
8.1 Introduction to System Testing
For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reductions in risk and development costs, and improved time to market for new systems.
Systems with software components and software-intensive systems are more and more complex every day. Industry sectors such as telecom, automotive, railway, aeronautical and space are good examples. It is often agreed that testing is essential to manufacture reliable products. However, the validation process does not often receive the required attention. Moreover, the validation process is close to other activities such as conformance, acceptance and qualification testing.
The difference between function testing and system testing is that now the focus is on the whole application and its environment. Therefore the program has to be complete. This does not mean that single functions of the whole program are tested again, because this would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again includes the question, "Did we build the right product?" and not just, "Did we build the product right?"
However, system testing does not only deal with this more economical problem; it also contains some aspects that are oriented on the word "system". This means that those tests should be done in the environment for which the program was designed, like a multiuser network or whatever. Even security guidelines have to be included. Once again, it is beyond doubt that this test cannot be done completely; nevertheless, while this is one of the most incomplete test methods, it is one of the most important.
A number of time-domain software reliability models attempt to predict the growth of a
system's reliability during the system test phase of the development life cycle. In this
paper we examine the results of applying several types of Poisson-process models to the
development of a large system for which system test was performed in two parallel
tracks, using different strategies for test data selection.
We will test that the functionality of your systems meets your specifications, integrating with whichever type of development methodology you are applying. We test for errors that users are likely to make as they interact with the application, as well as your application's ability to trap errors gracefully. These techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino or games testing.
System Testing is more than just functional testing, however, and can, when appropriate, also encompass many other types of testing, such as:
o security
o load/stress
o performance
o browser compatibility
o localisation
Benefits include:
- reduction of costs
- increased productivity
- reduced commercial risk
These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity.
Your test strategy must take into consideration the risks to your organisation, commercial and technical. You will have a personal interest in the success of your project, in which case it is only human for your objectivity to be compromised.
8.5 Conclusion
Hence the System Test phase should begin once modules are integrated enough to perform tests in a whole system environment. System testing can occur in parallel with integration test, especially with the top-down method.
9 Unit Testing
This does not mean that methods can get away with not being tested. The programmer should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that form their application.
[Figure: unit test harness - a driver invokes the module under test with test cases; unit testing precedes the acceptance, maintenance and regression phases.]
Unit test cases typically cover:
- Performance errors
- Logic errors
- Screen functionalities
- Field dependencies
- Auto generation
- Algorithms and computations
- Normal and abnormal terminations
- Specific business rules, if any
ADVANTAGE:
Simplicity without the problems of statement coverage.
DISADVANTAGE:
This measure ignores branches within Boolean expressions which occur due to short-circuit operators.
Method for Condition Coverage:
- Test every condition (sub-expression) in the decision for true/false
- Select a unique set of test cases
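A minimal sketch of why branch coverage can miss sub-expressions under short-circuit evaluation, with a test set chosen for condition coverage; the eligible routine is hypothetical (run with java -ea).

    public class ConditionCoverageExample {
        // One decision, two sub-conditions joined by a short-circuit operator:
        // when age >= 65 is true, 'member' is never evaluated.
        static boolean eligible(int age, boolean member) {
            return age >= 65 || member;
        }

        public static void main(String[] args) {
            // Branch coverage needs only one true and one false outcome, e.g.
            // (70, false) and (30, false), which never evaluates member == true.
            // Condition coverage exercises each sub-condition both ways:
            assert  eligible(70, false); // age condition true
            assert  eligible(30, true);  // age false, member true
            assert !eligible(30, false); // both conditions false
            System.out.println("condition coverage set passed");
        }
    }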
9.4 Conclusion
10 Test Strategy
10.1 Introduction
This document provides a better insight into test strategy and its methodology.
It is the role of test management to ensure that new or modified service products meet
the business requirements for which they have been developed or enhanced.
The Testing strategy should define the objectives of all test stages and the techniques
that apply. The testing strategy also forms the basis for the creation of a standardized
documentation set, and facilitates communication of the test process and its implications
outside of the test discipline. Any test support tools introduced should be aligned with,
and in support of, the test strategy. Test Approach and Test Architecture are other names
for the Test Strategy.
Test management is also concerned with both test resource and test environment
management.
Often, responsibility for testing and commissioning is buried deep within the supply chain as a
sub-contract of a sub-contract. It is possible to gain greater control of this process and
the associated risk through the use of specialists such as Systems Integration who can
be appointed as part of the professional team.
The time necessary for testing and commissioning will vary from project to project
depending upon the complexity of the systems and services that have been installed. The
Project Sponsor should ensure that the professional team and the contractor consider
realistically how much time is needed.
Fitness for purpose checklist:
Is there a documented testing strategy that defines the objectives of all test
stages and the techniques that may apply, e.g. non-functional testing and the
associated techniques such as performance, stress and security etc?
Does the test plan prescribe the approach to be taken for intended test activities,
identifying:
the items to be tested,
the testing to be performed,
test schedules,
resource and facility requirements,
reporting requirements,
evaluation criteria,
risks requiring contingency measures?
Are test processes and practices reviewed regularly to assure that the testing
processes continue to meet specific business needs?
For example, e-commerce testing may involve new user interfaces and a business focus
on usability may mean that the organization must review its testing strategies.
Top-down
Bottom-up
Thread testing
Stress testing
Back-to-back testing
[Figure: origin of defects - analysis and design errors 64%, coding errors 36%]
Test Factor: The risk or issue that needs to be addressed as part of the test
strategy. The strategy will select those factors that need to be addressed in the
testing of a specific application system.
Test Phase: The phase of the systems development life cycle in which testing
will occur.
Not all the test factors will be applicable to all software systems. The
development team will need to select and rank the test factors for the specific
software systems being developed.
The test phases will vary based on the testing methodology used. For example, the
test phases in a traditional waterfall life cycle methodology will be much
different from the phases in a Rapid Application Development methodology.
[Matrix: test factors (rows) against test phases - Requirements, Design, Build, Dynamic Test, Integrate, Maintain]
The Test Strategy will need to be customized for any specific software system.
The applicable test factors are listed against the phases in which the testing
must occur.
Four test steps must be followed to develop a customized test strategy:
1. Select and rank test factors.
2. Identify the system development phases.
3. Identify the business risks associated with the system under development.
4. Place the risks in the matrix.
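As an illustration only (the factor and phase names below are assumptions drawn from the matrix above), the customized strategy can be sketched as a factors-by-phases grid with risks placed in the relevant cells:

    # Hypothetical test-factor / test-phase matrix: rows are ranked test
    # factors, columns are development phases, cells hold business risks.
    phases = ["Requirements", "Design", "Build", "Dynamic Test",
              "Integrate", "Maintain"]
    factors = ["Correctness", "Reliability", "Performance"]

    matrix = {(f, p): [] for f in factors for p in phases}
    matrix[("Correctness", "Requirements")].append("Ambiguous business rules")
    matrix[("Performance", "Dynamic Test")].append("Slow response at peak load")

    for (factor, phase), risks in sorted(matrix.items()):
        if risks:  # print only the cells where a risk has been placed
            print(f"{factor} / {phase}: {', '.join(risks)}")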
10.7 Conclusion:
The Test Strategy should be developed in accordance with the business risks associated
with the software, and the test team should acquire and study it when developing the
test tactics. The Test Strategy must address those risks and present a process that can
reduce them; the test process accordingly focuses on the risks and thereby establishes
the objectives for the test process.
11 TEST PLAN
11.1 What is a Test Plan?
A Test Plan can be defined as a document that describes the scope, approach,
resources and schedule of intended test activities. It identifies test items, the
features to be tested, the testing tasks, who will do each task, and any risks
requiring contingency planning.
The main purpose of preparing a Test Plan is that everyone concerned with the
project is in sync with regard to the scope, responsibilities, deadlines and
deliverables for the project. It is in this respect that reviews and a sign-off are very
important, since they mean that everyone is in agreement on the contents of the test
plan, and this also helps in case of any dispute during the course of the project
(especially between the developers and the testers).
A Test Plan typically includes the following sections:
1. Purpose
2. Scope
3. Test Approach
4. Entry Criteria
5. Resources
6. Tasks / Responsibilities
7. Exit Criteria
8. Schedules / Milestones
9. Hardware / Software Requirements
10. Risks & Mitigation Plans
11. Tools to be used
12. Deliverables
13. References
a. Procedures
b. Templates
c. Standards/Guidelines
14. Annexure
15. Sign-Off
Scope
This section should talk about the areas of the application which are to be tested by the
QA team and specify those areas which are definitely out of scope (screens, database,
mainframe processes etc).
Test Approach
This would contain details on how the testing is to be performed and whether any specific
strategy is to be followed (including configuration management).
Entry Criteria
This section explains the various steps to be performed before the start of a test (i.e.)
pre-requisites. For example: Timely environment set up, starting the web server / app
server, successful implementation of the latest build etc.
Resources
This section should list out the people who would be involved in the project and their
designation etc.
Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the
various members in the project.
Exit criteria
Contains tasks like bringing down the system / server, restoring system to pre-test
environment, database refresh etc.
Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met
in the course of the project.
Hardware / Software Requirements
This section would contain the details of PCs / servers required (with the configuration) to
install the application or perform the testing; specific software that needs to be installed
on the systems to get the application running or to connect to the database; connectivity
related issues etc.
Risks & Mitigation Plans
This section should list out all the possible risks that can arise during the testing and the
mitigation plans that the QA team plans to implement in case the risk actually turns into
reality.
Tools to be used
This would list out the testing tools or utilities (if any) that are to be used in the project
(e.g.) WinRunner, Test Director, PCOM, WinSQL.
Deliverables
This section contains the various deliverables that are due to the client at various points
of time (i.e.) daily, weekly, start of the project, end of the project etc. These could include
Test Plans, Test Procedure, Test Matrices, Status Reports, Test Scripts etc. Templates for
all these could also be attached.
References
Procedures
Templates (Client Specific or otherwise)
Standards / Guidelines (e.g.) QView
Project related documents (RSD, ADD, FSD etc)
Annexure
This could contain embedded documents or links to documents which have been / will be
used in the course of testing (e.g.) templates used for reports, test cases etc.
Referenced documents can also be attached here.
Sign-Off
This should contain the mutual agreement between the client and the QA team with both
leads / managers signing off their agreement on the Test Plan.
Configuration data can dictate control flow, data manipulation, presentation and user
interface. A system can be configured to fit several business models, work (almost)
seamlessly with a variety of cooperative systems and provide tailored experiences to a
host of different users. A business may look to an application's configurability to allow
it to keep up with the market without being slowed by the development process; an
individual may look for a personalized experience from commonly available
software.
FUNCTIONAL TESTING SUFFERS IF DATA IS POOR
Tests with poor data may not describe the business model effectively; they may be hard
to maintain or require lengthy and difficult setup. They may obscure problems or avoid
them altogether. Poor data tends to result in poor tests that take longer to execute.
GOOD DATA IS VITAL TO RELIABLE TEST RESULTS
An important goal of functional testing is to allow the test to be repeated with the same
result, and varied to allow diagnosis. Without this, it is hard to communicate problems to
coders, and it can become difficult to have confidence in the QA team's results, whether
they are good or bad. Good data allows diagnosis and effective reporting, and allows
tests to be repeated with confidence.
GOOD DATA CAN HELP TESTING STAY ON SCHEDULE
An easily comprehensible and well-understood dataset is a tool to help communication.
Good data can greatly assist in speedy diagnosis and rapid re-testing. Regression
testing and automated test maintenance can be made speedier and easier by using good
data, while an elegantly-chosen dataset can often allow new tests without the overhead
of new data.
A formal test plan is a document that provides and records important information about a
test project, for example:
project and quality assumptions
project background information
resources
schedule & timeline
entry and exit criteria
test milestones
tests to be performed
In order to ensure consistency when measuring the results, the tests should be
independently monitored. This task would normally be carried out by a nominated
member of the Business Recovery Team or a member of the Business Continuity
Planning Team.
This section of the BCP will contain the names of the persons nominated to monitor the
testing process throughout the organization. It will also contain a list of the duties to be
undertaken by the monitoring staff.
Prepare Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the
tests. This feedback will hopefully enable weaknesses within the Business Recovery
Process to be identified and eliminated. Completion of feedback forms should be
mandatory for all persons participating in the testing process. The forms should be
completed either during the tests (to record a specific issue) or as soon after finishing as
practical. This will enable observations and comments to be recorded whilst the event is
still fresh in the person's mind.
This section of the BCP should contain a template for a Feedback Questionnaire.
Prepare Budget for Testing Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared
and approved. The 'Preparing for a Possible Emergency' Phase of the BCP process will
involve the identification and implementation of strategies for back up and recovery of
data files or a part of a business process. It is inevitable that these back up and recovery
processes will involve additional costs. Critical parts of the business process such as the
IT systems, may require particularly expensive back up strategies to be implemented.
Where the costs are significant they should be approved separately with a specific
detailed budget for the establishment costs and the ongoing maintenance costs.
This section of the BCP will contain a list of the testing phase activities and a cost for
each. It should be noted whenever part of the costs is already incorporated within the
organization's overall budgeting process.
This section of the BCP is to contain a list of each business process with a test schedule
and information on the simulated conditions being used. The testing co-ordination and
monitoring staff will endeavor to ensure that the simulated environments are maintained
throughout the testing process in a realistic manner.
Test Accuracy of Employee and Vendor Emergency Contact Numbers
During the testing process the accuracy of employee and vendor emergency contact
information is to be re-confirmed. All contact numbers are to be validated for all involved
employees. This is particularly important for management and key employees who are
critical to the success of the recovery process. This activity will usually be handled by the
HRM Department or Division.
Where, in the event of an emergency occurring outside of normal business hours, a large
number of persons are to be contacted, a hierarchical process could be used whereby
one person contacts five others. This process must have safety features incorporated to
ensure that if one person is not contactable for any reason then this is notified to a
nominated controller. This will enable alternative contact routes to be used.
Assess Test Results
Prepare a full assessment of the test results for each business process. The following
questions may be appropriate:
Were the objectives of the Business Recovery Process and the testing process met? If not,
provide further comment.
Were simulated conditions reasonably "authentic"? If not, provide further comment.
Was the test data representative? If not, provide further comment.
Did the tests proceed without any problems? If not, provide further comment.
What were the main comments received in the feedback questionnaires?
Each test should be assessed as either fully satisfactory, adequate or requiring further
testing.
Training Staff in the Business Recovery Process
All staff should be trained in the business recovery process. This is particularly important
when the procedures are significantly different from those pertaining to normal
operations. This training may be integrated with the training phase or handled separately.
The training should be carefully planned and delivered on a structured basis. The training
should be assessed to verify that it has achieved its objectives and is relevant for the
procedures involved.
Training may be delivered either using in-house resources or external resources
depending upon available skills and related costs.
Managing the Training Process
For the BCP training phase to be successful it has to be both well managed and
structured. It will be necessary to identify the objective and scope for the training, what
specific training is required, who needs it and a budget prepared for the additional costs
associated with this phase.
Develop Objectives and Scope of Training
The objectives and scope of the BCP training activities are to be clearly stated within the
plan.
The BCP should contain a description of the objectives and scope of the training phase.
This will enable the training to be consistent and organized in a manner where the results
can be measured, and the training fine tuned, as appropriate.
This section of the BCP will contain a list of the training phase activities and a cost for
each. It should be noted whenever part of the costs is already incorporated within the
organization's overall budgeting process.
Assessing the Training
The individual BCP training programmes and the overall BCP training process should be
assessed to ensure their effectiveness and applicability. This information will be gathered
from the trainers and also the trainees through the completion of feedback
questionnaires.
Feedback Questionnaires
Assess Feedback
Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the
training programmes. This feedback will enable weaknesses within the Business
Recovery Process, or the training, to be identified and eliminated. Completion of
feedback forms should be mandatory for all persons participating in the training process.
The forms should be completed either during the training (to record a specific issue) or as
soon after finishing as practical. This will enable observations and comments to be
recorded whilst the event is still fresh in the person's mind.
This section of the BCP should contain a template for a Feedback Questionnaire for the
training phase.
Assess Feedback
The completed questionnaires from the trainees plus the feedback from the trainers
should be assessed. Identified weaknesses should be notified to the BCP Team Leader
and the process strengthened accordingly.
The key issues raised by the trainees should be noted and consideration given to
whether the findings are critical to the process or not. If there are a significant number of
negative issues raised then consideration should be given to possible re-training once the
training materials, or the process, have been improved.
This section of the BCP will contain a format for assessing the training feedback.
Keeping the Plan Up-to-date
Changes to most organizations occur all the time. Products and services change, and so
do their methods of delivery. The increase in technology-based processes over the
past ten years, and particularly within the last five, has significantly increased the level
of dependency upon the availability of systems and information for the business to
function effectively. These changes are likely to continue, and probably the only certainty
is that the pace of change will continue to increase.
pace with these changes in order for it to be of use in the event of a disruptive
emergency. This chapter deals with updating the plan and the managed process which
should be applied to this updating activity.
Maintaining the BCP
It is necessary for the BCP updating process to be properly structured and controlled.
Whenever changes are made to the BCP they are to be fully tested and appropriate
amendments should be made to the training materials. This will involve the use of
formalized change control procedures under the control of the BCP Team Leader.
Change Controls for Updating the Plan
It is recommended that formal change controls are implemented to cover any changes
required to the BCP. This is necessary due to the level of complexity contained within the
BCP. A Change Request Form / Change Order is to be prepared and approved in
respect of each proposed change to the BCP.
This section of the BCP will contain a Change Request Form / Change Order to be used
for all such changes to the BCP.
Responsibilities for Maintenance of Each Part of the Plan
Each part of the plan will be allocated to a member of the BCP Team or a Senior
Manager within the organization, who will be charged with responsibility for updating and
maintaining the plan. The BCP Team Leader will remain in overall control of the BCP but
business unit heads will need to keep their own sections of the BCP up to date at all
times. Similarly, HRM Department will be responsible to ensure that all emergency
contact numbers for staff are kept up to date. It is important that the relevant BCP
coordinator and the Business Recovery Team are kept fully informed regarding any
approved changes to the plan.
Test All Changes to Plan
The BCP Team will nominate one or more persons who will be responsible for coordinating all the testing processes and for ensuring that all changes to the plan are
properly tested. Whenever changes are made or proposed to the BCP, the BCP Testing
Co-ordinator will be notified. The BCP Testing Co-ordinator will then be responsible for
notifying all affected units and for arranging for any further testing activities.
This section of the BCP contains a draft communication from the BCP Co-ordinator to
affected business units and contains information about the changes which require testing
or re-testing.
Advise Person Responsible for BCP Training
A member of the BCP Team will be given responsibility for co-ordinating all training
activities (BCP Training Co-ordinator). The BCP Team Leader will notify the BCP Training
Co-ordinator of all approved changes to the BCP in order that the training materials can
be updated. An assessment should be made on whether the change necessitates any retraining activities.
Problems which can be caused by Poor Test Data
Most testers are familiar with the problems that can be caused by poor data. The
following list details the most common problems familiar to the author. Most projects
experience these problems at some stage - recognizing them early can allow their effects
to be mitigated.
Unreliable test results.
Running the same test twice produces inconsistent results. This can be a symptom of an
uncontrolled environment, unrecognized database corruption, or of a failure to recognize
all the data that is influential on the system.
Degradation of test data over time.
Program faults can introduce inconsistency or corruption into a database. If not spotted at
the time of generation, they can cause hard-to-diagnose failures that may be apparently
unrelated to the original fault. Restoring the data to a clean set gets rid of the symptom,
but the original fault is undiagnosed and can carry on into live operation and perhaps
future releases. Furthermore, as the data is restored, evidence of the fault is lost.
Increased test maintenance cost
If each test has its own data, the cost of test maintenance is
correspondingly increased.
Performance Testing Process & Methodology
77 -
If that data is itself hard to understand or manipulate, the cost increases further.
Reduced flexibility in test execution
If datasets are large or hard to set up, some tests may be excluded from a
test run.
If the datasets are poorly constructed, it may not be time-effective to construct
further data to support investigatory tests.
Obscure results and bug reports
Without clearly comprehensible data, testers stand a greater chance
of missing important diagnostic features of a failure, or indeed of missing the failure
entirely. Most reports make reference to the input data and the actual and expected
results. Poor data can make these reports hard to understand.
Larger proportion of problems can be traced to poor data
A proportion of all failures logged will be found, after further analysis, not to be faults at
all. Data can play a significant role in these failures. Poor data will cause more of these
problems.
Less time spent hunting bugs
The more time spent doing unproductive testing or ineffective test maintenance, the less
time spent testing.
Confusion between developers, testers and business
Each of these groups has different data requirements. A failure to understand each others
data can lead to ongoing confusion.
Requirements problems can be hidden in inadequate data
It is important to consider inputs and outputs of a process for requirements modeling.
Inadequate data can lead to ambiguous or incomplete requirements.
Simpler to make test mistakes
Everybody makes mistakes. Confusing or over-large datasets can make data selection
mistakes more common.
Unwieldy volumes of data
Small datasets can be manipulated more easily than large datasets. A few datasets are
easier to manage than many datasets.
Business data not representatively tested
Test requirements, particularly in configuration data, often don't reflect the way the
system will be used in practice. While this may arguably lead to broad testing for a variety
of purposes, it can be hard for the business or the end users to feel confidence in the test
effort if they feel distanced from it.
Inability to spot data corruption caused by bugs
A few well-known datasets can be checked more easily than a large number
of complex datasets, and may lend themselves to automated testing / sanity checks.
A readily understandable dataset can allow straightforward diagnosis; a complex dataset
will positively hinder diagnosis.
Poor database/environment integrity
If a large number of testers, or tests, share the same dataset, they can influence and
corrupt each other's results as they change the data in the system. This can not only
cause false results, but can also lead to database integrity problems and data corruption.
This can make portions of the application untestable for many testers simultaneously.
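One way to automate such sanity checks is to fingerprint a well-known dataset and compare the fingerprint before and after a test run; a sketch (the database path and table name are hypothetical):

    import hashlib
    import sqlite3

    # Fingerprint a known dataset so corruption introduced by a test run
    # shows up as a changed row count or checksum.
    def dataset_fingerprint(db_path, table):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
        conn.close()
        return len(rows), hashlib.sha256(repr(rows).encode()).hexdigest()

    # before = dataset_fingerprint("test.db", "customers")
    # ... run the test suite ...
    # after = dataset_fingerprint("test.db", "customers")
    # assert before == after, "test data was modified or corrupted"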
Partitioning
Partitions allow data access to be controlled, reducing uncontrolled changes in the data.
Partitions can be used independently; data use in one area will have no effect on the
results of tests in another.
Data can be safely and effectively partitioned by machine / database / application
instance, although this partitioning can introduce configuration management problems in
software version, machine setup, environmental data and data load/reload. A useful and
basic way to start with partitions is to set up not a single environment for each test or
tester, but three environments shared by many users, allowing different kinds of data use.
These three have the following characteristics:
Safe area
Data is often used to communicate and illustrate problems to coders and to the business.
However, there is generally no mandate for outside groups to understand the format or
requirements of test data.
Giving some meaning to the data that can be referred to directly can help with improving
mutual understanding.
Clarity helps because it:
Improves communication within and outside the team
Reduces test errors caused by using the wrong data
Allows another way of doing sanity checks for corrupted or inconsistent data
Helps when checking data after input
Helps in selecting data for investigative tests
12.6 Conclusion
Data can be influential on the quality of testing. Well-planned data can allow flexibility and
help reduce the cost of test maintenance. Common data problems can be avoided or
reduced with preparation and automation. Effective testing of setup data is a necessary
part of system testing, and good data can be used as a tool to enable and improve
communication throughout the project.
The following points summarize the actions that can influence the quality of the data and
the effectiveness of its usage:
Plan the data for maintenance and flexibility
Know your data, and make its structure and content transparent
Use the data to improve understanding throughout testing and
the business
Test setup data as you would test functionality
Field Requirements:
Field / Instructions for Entering Data
Name of Software Tested: Put the name of the software or subsystem tested.
Problem Description: Write a brief narrative description of the variance uncovered
from expectations.
Statement of Conditions: Put the results of the actual processing that occurred here.
Statement of Criteria: Put what the testers believe was the expected result from
processing.
Effect of Deviation: If this can be estimated, the testers should indicate what they believe
the impact or effect of the problem will be on computer processing.
Cause of Problem: The testers should indicate what they believe is the cause of the
problem, if known. If the testers are unable to do this, the work paper will be given to the
development team and they should indicate the cause of the problem.
Location of the Problem: The testers should document where the problem occurred as
closely as possible.
Recommended Action: The testers should indicate any recommended action they
believe would be helpful to the project team. If not approved, the alternate action
should be listed or the reason for not following the recommended action should be
documented.
[Work paper template listing the above fields: Name of the S/W Tested, Problem Description, Statement of Condition, Statement of Criteria, Effect of Deviation, Cause of Problem, Location of the Problem, Recommended Action]
Defect
This category includes a description of the individual defects uncovered during the
testing process. This description includes, but is not limited to:
Date the defect was uncovered
Name of the defect
Location of the defect
Severity of the defect
Type of defect
How the defect was uncovered (test data / test script)
The Test Logs should add to this information in the form of where the defect originated,
when it was corrected, and when it was entered for retest.
program or from the individual keyboard or keypad software at any time. Individual
Reports include all of the following information.
Status Report
Word Processing Tests or Keypad Tests
Basic Skills Tests or Data Entry Tests
Progress Graph
Game Scores
Test Report for each test
Test Director:
Test Report Standards - Defining the components that should be included in a test
report.
Statistical Analysis - Ability to draw statistically valid conclusions from quantitative test
results.
Testing Data used for metrics
Testers are typically responsible for reporting their test status at regular intervals.
The following measurements generated during testing are applicable:
Total number of tests
Number of Tests executed to date
Number of tests executed successfully to date
Data concerning software defects includes:
Total number of defects corrected in each activity
Performance Testing Process & Methodology
89 -
13.2.2 Conclusion
The test logs obtained from test execution, and finally the test reports, should be
designed to accomplish the following objectives:
Provide information to the customer on whether the system should be placed into
production and, if so, the potential consequences and appropriate actions to minimize
those consequences.
Serve two long-term objectives: one for the project and the other for the information
technology function.
The project can use the test report to trace problems in the event the application
malfunctions in production. Knowing which functions have been correctly tested
and which ones still contain defects can assist in taking corrective actions.
The data can also be used to analyze the developmental process to make
changes to prevent defects from occurring in the future.
These defect-prone components identify tasks/steps that, if improved, could
eliminate or minimize the occurrence of high-frequency defects in the future.
14 Test Report
A Test Report is a document that is prepared once the testing of a software product is
complete and the delivery is to be made to the customer. This document would contain a
summary of the entire project and would have to be presented in a way that any person
who has not worked on the project would also get a good overview of the testing effort.
2. Test Details
This section would contain the Test Approach, Types of Testing conducted, Test
Environment and Tools Used.
Test Approach: This would discuss the strategy followed for executing the
project. This could include information on how coordination was achieved
between Onsite and Offshore teams, any innovative methods used for
automation or for reducing repetitive workload on the testers, how information
and daily / weekly deliverables were delivered to the client etc.
Types of Testing Conducted: This section would mention any specific types of
testing performed (i.e.) Functional, Compatibility, Performance, Usability etc
along with related specifications.
Test Environment: This would contain information on the Hardware and
Software requirements for the project (i.e.) server configuration, client machine
configuration, specific software installations required etc.
Tools Used: This section would include information on any tools that were used
for testing the project. They could be functional or performance testing
automation tools, defect management tools, project tracking tools or any other
tools which made the testing work easier.
3. Metrics
This section would include details on total number of test cases executed in the
course of the project, number of defects found etc. Calculations like defects
found per test case or number of test cases executed per day per person etc
would also be entered in this section. This can be used in calculating the
efficiency of the testing effort.
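The calculations mentioned above are simple ratios; for instance (the figures below are invented purely for illustration):

    # Hypothetical project figures used to derive the metrics above.
    total_test_cases = 400
    executed = 380
    defects_found = 95
    testers = 5
    working_days = 19

    print(f"Execution progress: {executed / total_test_cases:.0%}")
    print(f"Defects per test case: {defects_found / executed:.2f}")
    print(f"Test cases per person per day: "
          f"{executed / (testers * working_days):.1f}")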
4. Test Results
This section is similar to the Metrics section, but is more for showcasing the
salient features of the testing effort. In case many defects have been logged for
the project, graphs can be generated accordingly and depicted in this section.
The graphs can be for Defects per build, Defects based on severity, Defects
based on Status (i.e.) how many were fixed and how many rejected etc.
5. Test Deliverables
This section would include links to the various documents prepared in the course
of the testing project (i.e.) Test Plan, Test Procedures, Test Logs, Release Report
etc.
Performance Testing Process & Methodology
93 -
6. Recommendations
This section would include any recommendations from the QA team to the client
on the product tested. It could also mention the list of known defects which have
been logged by QA but not yet fixed by the development team so that they can
be taken care of in the next release of the application.
15 Defect Management
15.1 Defect
A mismatch in the application and its specification is a defect. A software error is present
when the program does not do what its end user expects it to do.
The Project Lead of the development team will review the defect and set it to one
of the following statuses:
Open: Accepts the bug and assigns it to a developer.
Invalid Bug: The reported bug is not a valid one as per the requirements/design.
As Designed: This is intended functionality as per the requirements/design.
Deferred: This will be an enhancement.
Duplicate: The bug has already been reported.
Document: Once a defect is set to any of the above statuses apart from Open, and the
testing team does not agree with the development team, it is set to Document status.
Once the development team has started working on the defect, the status is set to
WIP (Work in Progress); or, if the development team is waiting for a go-ahead or
some technical feedback, they will set it to Dev Waiting.
After the development team has fixed the defect, the status is set to FIXED,
which means the defect is ready to re-test.
If, on re-testing the defect, the defect still exists, the status is set to
REOPENED, which will follow the same cycle as an open defect.
If the fixed defect satisfies the requirements/passes the test case, it is set to
Closed.
Critical: The problem prevents further processing and testing. The Development Team
must be informed immediately and they need to take corrective action immediately.
High: The problem affects selected processing to a significant degree, making it
inoperable, causing data loss, or could cause a user to make an incorrect decision or
entry. The Development Team must be informed that day, and they need to take
corrective action within 0 - 24 hours.
Medium: The problem affects selected processing, but has a work-around that allows
continued processing and testing. No data loss is suffered. These may be cosmetic
problems that hamper usability or divulge client-specific information. The Development
Team must be informed within 24 hours, and they need to take corrective action within
24 - 48 hours.
Low: The problem is cosmetic, and/or does not affect further processing and testing.
The Development Team must be informed within 48 hours, and they need to take
corrective action within 48 - 96 hours.
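These notification and correction windows can be encoded directly in a defect tracker. A sketch (the names of the two most severe levels are assumed, since the source table labels only Medium and Low):

    # Severity levels mapped to notification / corrective-action windows
    # in hours, per the table above (0 means act immediately).
    SEVERITY_SLA = {
        "Critical": {"notify_within": 0, "fix_within": 0},
        "High": {"notify_within": 8, "fix_within": 24},   # "that day" / 0-24h
        "Medium": {"notify_within": 24, "fix_within": 48},
        "Low": {"notify_within": 48, "fix_within": 96},
    }

    for level, sla in SEVERITY_SLA.items():
        print(f"{level}: notify within {sla['notify_within']}h, "
              f"fix within {sla['fix_within']}h")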
Bug reports need to do more than just describe the bug. They have to give
developers something to work with so that they can successfully reproduce the
problem.
In most cases, the more correct information given, the better. The report should
explain exactly how to reproduce the problem and exactly what the problem is.
The basic items in a report are as follows:
Version: The version of the product in which the problem was found.
Product: If you are developing more than one product, identify the product in question.
Data: The exact data used or entered when the problem occurred.
Steps: List the steps taken to recreate the bug. Include all proper menu names;
don't abbreviate and don't assume anything.
After you've finished writing down the steps, follow them - make sure you've
included everything you type and do to get to the problem. If there are parameters,
list them. If you have to enter any data, supply the exact data entered. Go through
the process again and see if there are any steps that can be removed. When you
report the steps, they should be the clearest steps to recreating the bug.
15.5.1 Summary
A bug report is a case against a product. In order to work, it must supply all the
information needed not only to identify the problem, but also what is needed to
fix it.
It is not enough to say that something is wrong. The report must also say what the
system should be doing.
The report should be written in clear concise steps, so that someone who has
never seen the system can follow the steps and reproduce the problem. It should
include information about the product, including the version number and what data
was used.
The more organized the information provided, the better the report will be.
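A report carrying those items can be captured as a simple record; a sketch in which the field names mirror the list above and the values are invented:

    # Minimal bug-report record mirroring the items discussed above.
    bug_report = {
        "product": "Payroll",        # identify the product, if more than one
        "version": "2.3.1",
        "data": {"employee_id": "E1042", "gross": 5000, "deductions": 850},
        "steps": [                   # exact, re-runnable steps
            "Open Payroll > Print Checks",
            "Select employee E1042",
            "Click Print",
        ],
        "expected": "Check amount is 4150 (net)",   # what the system should do
        "actual": "Check amount is 5000 (gross)",   # what it actually did
    }

    for item, value in bug_report.items():
        print(f"{item}: {value}")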
16 Automation
What is Automation
Automated testing is automating the manual testing process currently in use. Many
automated testing tools can replicate the activity of a large number of users (and their
associated transactions) using a single computer. Therefore, load/stress testing using
automated methods requires only a fraction of the computer hardware that would be
necessary to complete a manual test. Imagine performing a load test on a typical
distributed client/server application on which 50 concurrent users were planned.
To do the testing manually, 50 application users employing 50 PCs with associated
software, an available network, and a cadre of coordinators to relay instructions to the
users would be required. With an automated scenario, the entire test operation could be
created on a single machine having the ability to run and rerun the test as necessary, at
night or on weekends, without having to assemble an army of end users. As another
example, imagine the same application used by hundreds or thousands of users. It is
easy to see why manual methods for load/stress testing are expensive and a logistical
nightmare.
regulations, as well as being required to document their quality assurance efforts for all
parts of their systems.
Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The
tests created for testing Internet or intranet-based applications should be portable across
browsers, and should automatically adjust for different load times and performance
levels.
Performance Testing Process & Methodology
104 -
Ease of Use
Testing tools should be engineered to be usable by non-programmers and application
end-users. With much of the testing responsibility shifting from the development staff to
the departmental level, a testing tool that requires programming skills is unusable by
most organizations. Even if programmers are responsible for testing, the testing tool itself
should have a short learning curve.
A robust testing tool should support testing with a variety of user interfaces and create
simple-to-manage, easy-to-modify tests. Test component reusability should be a
cornerstone of the product architecture.
The selected testing solution should allow users to perform meaningful load and
performance tests to accurately measure system performance. It should also provide test
results in an easy-to-understand reporting format.
Test Planning
Careful planning is the key to any successful process. To guarantee the best possible
result from an automated testing program, those evaluating test automation should
consider these fundamental planning steps. The time invested in detailed planning
significantly improves the benefits resulting from test automation.
Begin the automated testing process by defining exactly what tasks your application
software should accomplish in terms of the actual business activities of the end-user.
The definition of these tasks, or business requirements, defines the high-level, functional
requirements of the software system in question. These business requirements should be
defined in such a way as to make it abundantly clear whether the software system
correctly (or incorrectly) performs the necessary business functions. For example, a
business requirement for a payroll application might be to calculate a salary, or to print
a salary check.
A test case identifies the specific input values that will be sent to the application, the
procedures for applying those inputs, and the expected application values for the
procedure being tested. A proper test case will include the following key components:
Test Case Name(s): Each test case must have a unique name, so that the results of
these test elements can be traced and analyzed.
Test Case Prerequisites: Identify set-up or testing criteria that must be established
before a test can be successfully executed.
Test Case Execution Order: Specify any relationships, run orders and dependencies
that might exist between test cases.
Test Procedures: Identify the application steps necessary to complete the test case.
Input Values: This section of the test case identifies the values to be supplied to the
application as input including, if necessary, the action to be completed.
Expected Results: Document all screen identifier(s) and expected value(s) that must be
verified as part of the test. These expected results will be used to measure the
acceptance criteria, and therefore the ultimate success of the test.
Test Data Sources: Take note of the sources for extracting test data if it is not included
in the test case.
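Those components map naturally onto a structured record; a sketch (the names and sample values are illustrative only):

    from dataclasses import dataclass, field

    # A test case record carrying the key components listed above.
    @dataclass
    class TestCase:
        name: str                                      # unique, traceable name
        prerequisites: list = field(default_factory=list)
        run_after: list = field(default_factory=list)  # execution-order dependencies
        procedure: list = field(default_factory=list)  # application steps
        inputs: dict = field(default_factory=dict)
        expected: dict = field(default_factory=dict)   # screen ids -> expected values
        data_sources: list = field(default_factory=list)

    tc = TestCase(
        name="PAY-001_calculate_salary",
        prerequisites=["payroll database restored to baseline"],
        procedure=["open employee record", "enter hours", "run calculation"],
        inputs={"hours": 160, "rate": 25.0},
        expected={"salary_field": 4000.0},
        data_sources=["employees.csv"],
    )
    print(tc.name, "->", tc.expected)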
Inputs to the Test Design and Construction Process
Test Case Documentation Standards
Test Case Naming Standards
Approved Test Plan
Business Process Documentation
Business Process Flow
Test Data sources
Outputs from the Test Design and Construction Process
Revised Test Plan
Test Procedures for each Test Case
Test Case(s) for each application function described in the test plan
Procedures for test set up, test execution and restoration
Test-ready application
Tool settings
Playback options
Script execution
Result analysis
Defect management
Are there functions to tell me when the page has finished loading?
Can I tell the test tool to wait until an image appears?
Can I test whether links are valid or not?
Can I test web-based objects' functions, such as whether they are enabled, whether they contain data, etc.?
Are there facilities that will allow me to programmatically look for objects of a certain
type on a web page or locate a specific object?
Can I extract data from the web page itself? E.g. the title? A hidden form element?
With client/server testing the target customer is usually well defined: you know what
network operating system you will be using, the applications, and so on. On the web it
is far different. A person may be connecting from the USA or Africa, they may be
disabled, they may use various browsers, and the screen resolution on their computers
will be different. They will speak different languages, will have fast connections and slow
connections, connect using Mac, Linux or Windows, etc. So the cost to set up a test
environment is usually greater than for a client/server test, where the environment is
fairly well defined.
Can the tool interface with files, spreadsheets, etc. to create and extract data? Can you
randomise the access to that data? Is the data access truly random? This functionality is
normally more important than database tests, as the databases will usually have their own
interface for running queries. However, applications (except for manual input) do not
usually provide facilities for bulk data input.
The added benefit (as I have found) is that this functionality can be used for a production
reason, e.g. for the aforementioned bulk data input sometimes carried out in data
migration or application upgrades.
These functions are also very important as you move from the record/playback
phase to data-driven and then framework testing. Data-driven tests are tests that replace
hard-coded names, addresses, numbers, etc. with variables supplied from an external
source, usually a CSV (comma-separated values) file, spreadsheet or database.
Frameworks are usually the ultimate goal in deploying automation test tools.
Frameworks provide an interface to all the applications under test by exposing a
suitable list of functions, databases, etc. This allows an inexperienced tester/user to
run tests by just providing the test framework with known commands/variables. A test
framework has parallels to software frameworks, where you develop an encapsulation
layer of software (the framework) around the applications, databases, etc. and expose
functions, classes, methods, etc. that are used to call the underlying applications,
return data, input data, and so on.
However, doing this requires a lot of time, skilled resources and money to facilitate the
first two.
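A minimal data-driven sketch, assuming a hypothetical login() call standing in for the application under test and an in-memory CSV standing in for the external data source:

    import csv
    import io

    # In practice the rows would come from a CSV file, spreadsheet or
    # database; an in-memory CSV keeps the sketch self-contained.
    TEST_DATA = io.StringIO(
        "username,password,expected\n"
        "alice,secret1,ok\n"
        "bob,wrong,denied\n"
    )

    def login(username, password):   # stand-in for the application call
        return "ok" if password == "secret1" else "denied"

    # The same scripted steps run once per data row instead of once per
    # hard-coded value -- the essence of data-driven testing.
    for row in csv.DictReader(TEST_DATA):
        result = login(row["username"], row["password"])
        print("PASS" if result == row["expected"] else "FAIL", row["username"])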
Pushbuttons
Checkboxes
Radio buttons
List views
Edit boxes
Combo boxes
If you have a custom object that behaves like one of these, are you able to map it (tell
the test tool that the custom control behaves like the standard control)? Does it
support all the standard control's methods? Can you add the custom control to its
own class of control?
17.11 Extensible Language
Here is a question that you will hear time and time again in automation forums: how
do I get {insert test tool name here} to do such and such? There will be one of four
answers:
I don't know
It can't do it
It can do it using function X, Y or Z
It can't in the standard language, but you can do it like this
What we are concerned with in this section is the last answer, e.g. if the standard test
language does not support it, can I create a DLL or extend the language in some way
to do it? This is usually an advanced topic and is not encountered until the trained
tester has been using the tool for at least 6 - 12 months. However, when this is
encountered, the tool should support language extension. If via DLLs, then the tester
must have knowledge of a traditional development language, e.g. C, C++ or VB. For
instance, if I wanted to extend a tool that could use DLLs created by VB, I would need
to have Visual Basic, then open, say, an ActiveX DLL project, create a class containing
various methods (similar to functions), and then make a DLL file. I would register it on
the machine, then reference that DLL from the test tool, calling the methods according
to their specification. This will sound a lot clearer as you go on in the tools, and this
document will be updated to include advanced topics like this in extending the tools'
capabilities.
Some tools provide extension by allowing you to create user-defined functions,
methods, classes, etc., but these are normally a mixture of the already supported data
types, functions, etc. rather than extending the tool beyond its released functionality.
Because this is an advanced topic I have not taken ease of use into account, as
those people who have got to this level should have already exhausted the current
capabilities of the tools, want to use external functions like Win32 API functions and
so on, and should have a good grasp of programming.
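As a rough analogy (Windows-only, and using Python's ctypes rather than a vendor test language), calling an exported function from an existing DLL looks like this:

    import ctypes
    import sys

    # Analogy only: load a DLL and call an exported function, the way a
    # test tool extended via DLLs would. GetTickCount is a standard
    # kernel32 export returning milliseconds since boot.
    if sys.platform == "win32":
        kernel32 = ctypes.WinDLL("kernel32")
        kernel32.GetTickCount.restype = ctypes.c_uint32
        print("Uptime (ms):", kernel32.GetTickCount())
    else:
        print("Windows-only sketch; on other platforms use ctypes.CDLL "
              "with a shared library.")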
17.12 Environment Support
How many environments does the tool support out of the box? Does it support the latest
Java release? What about Oracle, PowerBuilder, WAP, etc.? Most tools can interface to
unsupported environments if the developers in that environment provide classes, DLLs,
etc. that expose some of the application's details, but whether a developer will, or has
time to, do this is another question.
Ultimately this is the most important part of automation: environment support. If the
tool does not support your environment/application, then you are in trouble, and in
most cases you will need to revert to manually testing the application (more
shelfware).
17.13 Integration
How well does the tool integrate with other tools? This is becoming more and more
important. Does the tool allow you to run it from various test management suites?
Can you raise a bug directly from the tool and feed the information gathered from
your test logs into it? Does it integrate with products like Word, Excel or requirements
management tools?
When managing large test projects with an automation team greater than five and
testers totalling more than ten, the management aspect and the tool's integration
move further up the importance ladder. An example could be a major bank that wants to
redesign its workflow management system to allow faster processing of customer
queries. The anticipated requirements for the new workflow software number in the
thousands. To test these requirements, 40,000 test cases have been identified, 20,000
of which can be automated. How do I manage this? This is where a test management
tool comes in really handy.
Also, how do I manage the bugs raised as a result of automation testing?
Integration becomes very important, rather than having separate systems that don't
share data, which may require duplication of information.
The companies that will score higher here are those that provide tools outside the
testing arena, as they can build integration with their other products; so when it has
come down to the wire on some projects, we have gone with the tool that integrated
with the products we already had.
17.14 Cost
In my opinion cost is the least significant factor in this matrix. Why? Because all the tools
are similar in price, except Visual Test, which is at least five times cheaper than the rest;
but as you will see from the matrix, there is a reason. Although very functional, it does not
provide the range of facilities that the other tools do.
Prices typically range from $2,900 - $5,000 (depending on quantity bought,
packages, etc.) in the US and around £2,900 - £5,000 in the UK for the base tools
included in this document.
So, knowing the tools will all cost a similar price, it is usually a case of which one will
do the job rather than which is the cheapest.
Visual Test, I believe, will prove to be a bigger hit as it expands its functional range; it
was not that long ago that it did not support web-based testing.
The prices are kept this high because they can be. All the tools are roughly the same
price, and the volume of sales is low relative to, say, a fully blown programming
language IDE like JBuilder or Visual C++, which are far more function-rich and
flexible than any of the test tools.
On top of the above prices you usually pay an additional maintenance fee of between
10 and 20%. There are not many applications I know of that cost this much per license,
not even some very advanced operating systems. However, it is all a matter of supply:
the bigger the supply, the lower the price, as you can spread the development costs
more. I do not anticipate a move in prices upwards, though, as this seems to be
the price the market will tolerate.
Visual Test also provides a free runtime license.
17.15 Ease of Use
This section is very subjective, but I have used testers (my guinea pigs) of various
levels and got them from scratch to use each of the tools. In more cases than not
they have agreed on which was the easiest to use (initially). Obviously this can
change as the tester becomes more experienced and issues such as extensibility,
script maintenance, integration, data-driven tests, etc. become important. However,
this score is based on the productivity that can be gained in, say, the first three months,
when those issues are not such a big concern.
Ease of use includes out-of-the-box functions, debugging facilities, layout on screen,
help files and user manuals.
17.16 Support
In the UK this can be a problem, as most of the test tool vendors are based in the
USA with satellite branches in the UK.
Just from my own experience and that of the testers I know in the UK, we have found
Mercury to be the best for support, then Compuware, Rational and last Segue.
Having said that, you can find a lot of resources for Segue on the Internet,
including a forum at www.betasoft.com that can provide most of the answers rather
than ringing the support line.
On their websites Segue and Mercury provide much useful user- and vendor-
contributed material.
I have also included various other criteria like the availability of skilled resources,
online resources, validity of responses from the helpdesk, speed of responses and
similar.
17.17 Object Tests
Now, presuming the tool of choice does work with the application you wish to test,
what services does it provide for testing object properties?
Can it validate several properties at once? Can it validate several objects at once?
Can you set object properties to capture the application state?
This should form the bulk of your verification as far as the automation process is
concerned, so I have looked at the tools' facilities on client/server as well as web-
based applications.
17.18 Matrix
Object Tests
Support
Ease of use
Cost
Integration
Environment support
Extensible Language
Test/Error recovery
Image testing
Object Mapping
Data functions
Database tests
Web Testing
What follows after the matrix is a tool-by-tool comparison under the appropriate
headings (as listed above), so that the user can get a feel for the tools' functionality
side by side.
Each category in the matrix is given a rating of 1 - 5: 1 = excellent support for this
functionality; 2 = good support, but lacking, or another tool provides more effective
support; 3 = basic support only; 4 = supported only by use of an API call or third-party
add-in, but not included in the general test tool / below average; 5 = no support.
WinRunner
QA Run
Silk Test
Visual Test
Robot
17.19 Matrix score
Win Runner = 24
QARun = 25
SilkTest = 24
Visual Test = 39
Robot = 24
Click Finish.
In the Configure Project window displayed, click the Create button. To manage
requirements assets, connect to RequisitePro; to manage test assets, create an associated
test data store; and for defect management, connect to a ClearQuest database.
Once the Create button in the Configure Project window is chosen, the Create Test
Data Store window will be displayed. Accept the default path and click the OK button.
A confirmation window indicates that the test data store was successfully created;
click OK to close it.
Click OK in the Configure Project window, and now your first Rational project is
ready to play with.
Perform full functional testing. Record and play back scripts that navigate through
your application and test the state of objects through verification points.
Perform full performance testing. Use Robot and TestManager together to record and
play back scripts that help you determine whether a multi-client system is performing
within user-defined standards under varying loads.
Create and edit scripts using the SQABasic, VB, and VU scripting environments. The
Robot editor provides color-coded commands with keyword Help for powerful
integrated programming during script development.
Test applications developed with IDEs such as Visual Basic, Oracle Forms,
PowerBuilder, HTML, and Java. Test objects even if they are not visible in the
application's interface.
The Object-Oriented Recording technology in Robot lets you generate scripts quickly by
simply running and using the application-under-test. Robot uses Object-Oriented Recording
to identify objects by their internal object names, not by screen coordinates. If objects change
locations or their text changes, Robot still finds them on playback.
The Object Testing technology in Robot lets you test any object in the application-under-test,
including the object's properties and data. You can test standard Windows objects and
IDE-specific objects, whether they are visible in the interface or hidden.
Once logged in, you will see the Robot window. Go to File -> New -> Script.
In the screen displayed, enter the name of the script, say First Script, by which
the script will be referred to from now on, and a description (not mandatory). The type
of the script is GUI for functional testing and VU for performance testing.
The GUI Script window displays GUI scripts that you are currently recording,
editing, or debugging. It has two panes:
Asset pane (left): Lists the names of all verification points and low-level scripts for
this script.
Script pane (right): Displays the script.
Build: Displays compilation results for all scripts compiled in the last operation. Line
numbers are enclosed in parentheses to indicate lines in the script with warnings and
errors.
Console: Displays messages that you send with the SQAConsoleWrite command.
Also displays certain system messages from Robot.
In this window we can set general options in the General tab, like identification of lists
and menus, and recording think time.
Web Browser tab: mention the browser type, IE or Netscape.
Robot Window: how Robot should be displayed during recording, and hotkey details.
Object Recognition Order: the order in which recording is to happen.
For example, select a preference in the Object order preference list.
If you will be testing C++ applications, change the object order preference to C++
Recognition Order.
18.6.1 Playback options
Go to Tools-> Playback options to set the options needed while running the script.
These options help you handle unexpected windows during playback, configure error
recovery, set the time-out period, and manage the log and log data.
A verification point is stored in the project and is always associated with a script. When you
create a verification point, its name appears in the Asset (left) pane of the Script window. The
verification point script command, which always begins with Result =, appears in the Script
(right) pane.
Because verification points are assets of a script, if you delete a script, Robot also deletes all
of its associated verification points.
You can easily copy verification points to other scripts if you want to reuse them.
The verification point types include:
Alphanumeric: captures and compares alphabetic or numeric values.
Clipboard: captures and compares alphanumeric data that has been copied to the Clipboard.
File Comparison
File Existence
Menu
Module Existence
Object Data
Object Properties
Region Image
Web Site Compare
Web Site Scan
Window Existence
Window Image
If recording, click the Display GUI Insert Toolbar button on the GUI Record toolbar.
If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on
the Standard toolbar.
Robot inserts the comment into the script (in green by default), preceded by a single
quotation mark.
Each virtual tester that runs the script can send realistic data (which can include
unique data) to the server.
A single virtual tester that performs the same transaction multiple times can send
realistic data to the server in each transaction.
You must add datapool commands to GUI scripts manually while editing the script in
Robot. Robot adds datapool commands to VU scripts automatically.
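To make the idea concrete, here is a minimal conceptual sketch in Python (not actual SQABasic or VU syntax; the file name and column names are hypothetical): each fetch returns the next row of test data, so every virtual tester, or every repetition of a transaction, can send different but realistic values to the server.

import csv
import itertools

# Load the datapool once; each row holds one set of values for a transaction.
with open("customers.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Cycle through the rows so a tester repeating the same transaction gets
# fresh data each time, wrapping around when the pool is exhausted.
datapool = itertools.cycle(rows)

def next_transaction_data():
    row = next(datapool)
    return row["name"], row["account_id"]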
18.12 Debug menu
The Debug menu has the following commands:
Go
Go Until Cursor
Animate
Pause
Stop
Set or Clear Breakpoints
Clear All Breakpoints
Step Over
Step Into
Step Out
Note: The Debug menu commands are for use with GUI scripts only.
18.14 Compilation errors
After the script is created and compiled and all errors are fixed, it can be executed.
The results are then analyzed in TestManager: in the Results tab you can see the
stored results, and from TestManager you can determine the start and end times of
the script.
20 Supported environments
20.1 Operating system
WinNT 4.0 with Service Pack 5
Win2000
WinXP (Rational 2002)
Win98
Win95 with Service Pack 1
20.2 Protocols
Oracle
SQL Server
HTTP
Sybase
Tuxedo
SAP
PeopleSoft
www.rational.com
21 Performance Testing
Performance testing measures the performance characteristics of an application. Its
main objective is to demonstrate that the system meets requirements for transaction
throughput and response times simultaneously, while processing the required
transaction volumes against a production-size database in real time. The main
deliverables from such a test, prior to execution, are automated test scripts and an
infrastructure to be used to execute automated tests for extended periods.
During a design or redesign of a module or a part of the system, more than one
alternative may present itself. In such cases, the evaluation of a design alternative is
the prime mover for an analysis.
Post-deployment realities create a need for tuning the existing system. A
systematic approach like performance analysis is essential to extract maximum
benefit from an existing system.
Identification of bottlenecks in a system is more of an effort at troubleshooting:
it helps to focus improvement efforts on the components that most affect overall
system response.
As the user base grows, the cost of failure becomes increasingly unbearable. To
increase confidence and to provide advance warning of potential problems under
load conditions, analysis must be done to forecast performance under load.
Stable system
A test team attempting to construct a performance test of a system whose software is of
poor quality is unlikely to be successful. If the software crashes regularly, it will probably
not withstand the relatively minor stress of repeated use. Testers will not be able to record
scripts in the first instance, or may not be able to execute a test for a reasonable length of
time before the software, middleware or operating systems crash.
Realistic test environment
The test environment should ideally be the production environment or a close simulation
and be dedicated to the performance test team for the duration of the test. Often this is
not possible. However, for the results of the test to be realistic, the test environment
should be comparable to the actual production environment. Even with an environment
which is somewhat different from the production environment, it should still be possible to
interpret the results obtained using a model of the system to predict, with some
confidence, the behavior of the target environment. A test environment which bears no
similarity to the actual production environment may be useful for finding obscure errors in
the code, but is, however, useless for a performance test.
Data volumes that would exist after a specified period of live running complete the
load profile. Typically, data volumes estimated to exist after one year's use of the
system are used, but two-year volumes or greater might be used in some
circumstances, depending on the business application.
[Figure: Performance testing life cycle. Requirement Collection leads to Test Plan
Preparation (deliverable: Test Plan), Test Design Preparation (deliverable: Test Design),
Scripting (deliverable: Test Scripts), Test Execution, and Test Analysis (deliverable:
Preliminary Report, an internal deliverable). If the performance goal is not reached, the
cycle repeats; once it is reached, Preparation of Reports produces the Final Report.]
Work items:
Understand the system and application model.
Server-side and client-side hardware and software requirements.
Browser emulation and automation tool selection.
Decide on the type and mode of testing.
Operational inputs: time of testing, client- and server-side parameters.
22.1.2 Deliverables
Deliverable: Requirement Collection
Sample: RequirementCollection.doc
Work items:
Hardware and software details.
Test data.
Transaction traversal that is to be tested, with sleep times.
Periodic status updates to the client.
22.2.1 Deliverables
Deliverable: Test Plan
Sample: TestPlan.doc
Activity: Test Design Generation
Work items:
Hardware and software requirements, including the server components, the load
generators used, etc.
Setting up the monitoring servers.
Setting up the data.
Preparing all the necessary folders for saving the results once the test is over.
Pre-test and post-test procedures.
22.3.1 Deliverables
Deliverable: Test Design
Sample: TestDesign.doc
Work items:
Browse through the application and record the transactions with the tool.
Parameterization, error checks, and validations.
Run the script for a single user to check the validity of the scripts.
22.4.1 Deliverables
Deliverable: Test Scripts
Sample: Sample Script.doc
Work items:
Starting the pre-test procedure scripts, which include start scripts for server
monitoring.
Modification of automated scripts if necessary.
Test result analysis.
Report preparation for every cycle.
22.5.1 Deliverables
Deliverable: Test Execution
Samples: Time Sheet.doc, Run Logs.doc
Work items:
Analyzing the run results and preparing the preliminary report.
22.6.1 Deliverables
Deliverable: Test Analysis
Sample: Preliminary Report.doc
Work items:
Preparation of the final report.
22.7.1 Deliverables
Deliverable: Final Report
Sample: Final Report.doc
Establish incremental performance goals throughout the product development cycle.
All the members of the team should agree that a performance issue is not just a
bug; it is a software architectural problem.
Performance testing of Web services and applications is paramount to ensuring an excellent
customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides
Web server performance analysis; the tool can also assess Internet Server Application
Programming Interface and Active Server Pages (ISAPI/ASP) applications.
Creating an automated test suite to measure performance is time-consuming and labor-
intensive. Therefore, it is important to define concrete performance goals. Without defined
performance goals or requirements, testers must guess, without a clear purpose, at how to
instrument tests to best measure various response times.
The performance tests should not be used to find functionality-type bugs. Design the
performance test suite to measure response times and not to identify bugs in the product.
Design the build verification test (BVT) suite to ensure that no new bugs are injected into the
build that would prevent the performance test suite from successfully completing.
The performance tests should not be modified casually. Significant changes to the
performance test suite skew or invalidate all previous data. Therefore, keep the
performance test suite fairly static throughout the product development cycle. If the design or
requirements change and you must modify a test, perturb only one variable at a time for each
build.
Strive to achieve the majority of the performance goals early in the product development
cycle. Achieving performance goals early helps to ensure that the ship date is met, because
a product rarely ships if it does not meet its performance goals. You should also reuse
automated performance tests: they can often be reused in many other automated test
suites. For example, incorporate the performance test suite into the stress test suite to
validate stress scenarios and to identify potential performance issues under different
stress conditions.
Tests are capturing secondary metrics when the instrumented tests have nothing to do with
measuring clear and established performance goals. Although secondary metrics look good
on wall charts and in reports, if the data is not going to be used in a meaningful way to make
improvements in the engineering cycle, it is probably wasted data. Ensure that you know
what you are measuring and why.
Testing for most applications will be automated; the tool used will be the one
specified in the requirement specification. The tools used for performance testing are
LoadRunner 6.5 and WebLoad 4.5x.
23 Tools
23.1 LoadRunner 6.5
LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems.
LoadRunner enables you to test your system under controlled and peak load conditions. To
generate load, LoadRunner runs thousands of Virtual Users that are distributed over a
network. Using a minimum of hardware resources, these Virtual Users provide consistent,
repeatable, and measurable load to exercise your client/server system just as real users
would. LoadRunner's in-depth reports and graphs provide the information that you need to
evaluate the performance of your client/server system.
WebSizr, WebCorder (Technovations): http://www.technovations.com/home.htm.
Runs on Win95/98, Windows NT, Windows 2000. The WebSizr load testing tool
supports authentication, cookies, and redirects. Notes: downloadable; 30-day
evaluation period.
Purpose: explains the value and focus of the test, along with some simple
background information that might be helpful during testing.
Constraints: details any constraints and values that should not be exceeded during
testing.
Time estimate: a rough estimate of the amount of time that the test may take to
complete.
Type of workload: in order to properly achieve the goals of the test, each test
requires a certain type of workload. This methodology specification provides
information on the appropriate script of pages or transactions for the user.
Methodology: a list of suggested steps to take in order to assess the system under
test.
What to look for: contains information on behaviors, issues and errors to pay
attention to during and after the test.
24 Performance Metrics
The common metrics selected/used during performance testing are listed below.
Response time.
Turnaround time: the time between the submission of a batch job and the
completion of its output.
Stretch factor: the ratio of the response time with concurrent users to the
response time with a single user.
Throughput: rate (requests per unit of time). Examples:
Jobs per second
Requests per second
Millions of Instructions Per Second (MIPS)
Millions of Floating-Point Operations Per Second (MFLOPS)
Packets Per Second (PPS)
Bits per second (bps)
Transactions Per Second (TPS)
Capacity:
Nominal capacity: maximum achievable throughput under ideal workload
conditions, e.g., bandwidth in bits per second. The response time at maximum
throughput is too high.
Efficiency: the ratio of usable capacity to nominal capacity. Alternatively, the ratio
of the performance of an n-processor system to that of a one-processor system is
its efficiency.
Utilization: the fraction of time the resource is busy servicing requests; for
memory, the average fraction used.
As tests are executed, metrics such as response times for transactions, HTTP requests per
second, throughput, etc. should be collected. It is also important to monitor and collect
statistics such as CPU utilization, memory, disk space and network usage on individual web,
application and database servers, and to make sure those numbers recede as load decreases.
Cognizant has built custom monitoring tools to collect the statistics. Third party monitoring
tools are also used based on the requirement.
Running Vusers
Hits per Second
Throughput
HTTP Status Code
HTTP responses per Second
Pages downloaded per Second
Transaction response time
Bandwidth Utilization
Network delay time
Network Segment delay time
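As a rough illustration of how a few of these metrics relate, the sketch below (plain Python with made-up sample values) derives the average and 90th-percentile response times, the throughput, and the stretch factor from raw response-time samples collected during a run:

# Response times (in seconds) collected for one transaction during a run.
samples = [0.42, 0.51, 0.38, 0.95, 0.47, 1.20, 0.55]
interval = 60.0  # length of the measurement interval, in seconds

samples.sort()
average = sum(samples) / len(samples)
p90 = samples[int(0.9 * (len(samples) - 1))]  # simple 90th-percentile estimate
throughput = len(samples) / interval          # transactions per second

baseline = 0.38                      # response time measured with a single user
stretch_factor = average / baseline  # ratio defined in the metrics above

print(f"avg={average:.2f}s p90={p90:.2f}s tps={throughput:.3f} stretch={stretch_factor:.2f}")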
24.4 Conclusion
Performance testing is an independent discipline and involves all the same phases as the
mainstream testing life cycle, i.e. strategy, plan, design, execution, analysis and reporting.
Without the rigor described in this paper, executing performance testing does not yield
anything more than finding more defects in the system. However, if executed systematically
with appropriate planning, performance testing can unearth issues that cannot otherwise be
found through mainstream testing. It is very typical of the project manager to be overtaken by
time and resource pressures, leading to not enough budget being allocated for performance
testing, the consequences of which could be disastrous to the final system. There is another
flip side to the coin.
However, there is an important point to be noted here. Before testing the system for
performance requirements, the system should have been architected and designed for
meeting the required performance goals. If not, it may be too late in the software
development cycle to correct serious performance issues.
Web-enabled applications and infrastructures must be able to execute evolving business
processes with speed and precision while sustaining high volumes of changing and
unpredictable user audiences. Load testing gives the greatest line of defense against poor
performance and accommodates complementary strategies for performance management
and monitoring of a production environment. The discipline helps businesses succeed in
leveraging Web technologies to their best advantage, enabling new business opportunities,
lowering transaction costs and strengthening profitability. Fortunately, robust and viable
solutions exist to help fend off disasters that result from poor performance. Automated load
testing tools and services are available to meet the critical need of measuring and optimizing
complex and dynamic application and infrastructure performance. Once these solutions are
properly adopted and utilized, leveraging an ongoing, lifecycle-focused approach, businesses
can begin to take charge and leverage information technology assets to their competitive
advantage. By continuously testing and monitoring the performance of critical software
applications, business can confidently and proactively execute strategic corporate initiatives
for the benefit of shareholders and customers alike.
25 Load Testing
Load testing is the creation of a simulated load on a real computer system by using virtual
users who submit work as real users would at real client workstations, thus testing the
system's ability to support such a workload.
Testing of critical web applications during development and before deployment should
include functional testing to confirm conformance to the specifications, performance testing
to check that the application offers acceptable response times, and load testing to see what
hardware or software configuration will be required to provide acceptable response times
and handle the load that will be created by the real users of the system.
Finally, the test tool that supports load testing should itself be evaluated, by
determining its multithreading capabilities and its ability to create the required number
of virtual users with minimal resource consumption and a maximal virtual-user count.
26.3 Settings
Run-time settings define the way scripts should be run in order to accurately emulate
real users. Settings can configure the number of concurrent connections, the test run
time, whether to follow HTTP redirects, etc. System response times can also vary
based on the connection speed, so throttling bandwidth can emulate dial-up
connections at varying modem speeds (28.8 Kbps, 56.6 Kbps, T1 (1.54 Mbps), etc.).
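A minimal sketch of such settings (plain Python with hypothetical names, not LoadRunner or WebLoad syntax) shows how a configured line speed can be turned into an artificial delay, so that a test client on a fast LAN behaves like a dial-up user:

import time

settings = {
    "concurrent_connections": 50,
    "test_run_time_sec": 600,
    "follow_http_redirects": True,
    "bandwidth_bps": 28_800,  # emulate 28.8 Kbps; try 56_600 or 1_540_000 (T1)
}

def throttled_receive(payload: bytes) -> bytes:
    # Sleep long enough that the payload appears to arrive at the configured
    # line speed rather than at full LAN speed.
    time.sleep(len(payload) * 8 / settings["bandwidth_bps"])
    return payload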
26.6 Conclusion
Load testing is the measure of an entire Web application's ability to sustain a number
of simultaneous users and transactions, while maintaining adequate response times. It
is the only way to accurately test the end-to-end performance of a Web site prior to
going live.
Two common methods for implementing this load testing process are manual and
automated testing.
Manual testing would involve real testers driving the application by hand and collating
the results themselves, which is hard to repeat consistently and prone to error.
With automated load testing tools, tests can be easily rerun any number of times and
the results can be reported automatically. In this way, automated testing tools provide
a more cost-effective and efficient solution than their manual counterparts. Plus, they
minimize the risk of human error during testing.
27 Stress Testing
27.1 Introduction to Stress Testing
This testing is accomplished through reviews (product requirements, software functional
requirements, software designs, code, test plans, etc.), unit testing, system testing (also
known as functional testing), expert user testing (like beta testing but in-house), smoke tests,
etc. All these testing activities are important and each plays an essential role in the overall
effort but, none of these specifically look for problems like memory and resource
management. Further, these testing activities do little to quantify the robustness of the
application or determine what may happen under abnormal circumstances. We try to fill this
gap in testing by using stress testing.
Stress testing can imply many different types of testing depending upon the audience. Even
in literature on software testing, stress testing is often confused with load testing and/or
volume testing. For our purposes, we define stress testing as performing random
operational sequences at larger than normal volumes, at faster than normal speeds
and for longer than normal periods of time as a method to accelerate the rate of finding
defects and verify the robustness of our product.
Stress testing in its simplest form is any test that repeats a set of actions over and over with
the purpose of breaking the product. The system is put through its paces to find where it
may fail. As a first step, you can take a common set of actions for your system and keep
repeating them in an attempt to break the system. Adding some randomization to these steps
will help find more defects. How long can your application stay functioning doing this
operation repeatedly? To help you reproduce your failures one of the most important things
to remember to do is to log everything as you proceed. You need to know what exactly was
happening when the system failed. Did the system lock up with 100 attempts or 100,000
attempts?[1]
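A bare-bones version of that loop might look like the following Python sketch; the three actions are hypothetical stand-ins for operations in your own application. The random seed is logged first so that a failing run can be replayed exactly, and every step is logged before it executes so the last log line identifies the failing action.

import logging
import random

logging.basicConfig(filename="stress.log", level=logging.INFO)

def open_file(): pass    # hypothetical application actions; replace with
def edit_record(): pass  # real operations against the system under test
def save_file(): pass

actions = [open_file, edit_record, save_file]

seed = random.randrange(2**32)
random.seed(seed)
logging.info("stress run starting, seed=%d", seed)

for attempt in range(1, 100_001):
    action = random.choice(actions)
    logging.info("attempt %d: %s", attempt, action.__name__)
    action()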
Note that there are many other types of testing which have not been mentioned above, for
example, risk-based testing, random testing, security testing, etc. We have found, and others
seem to agree, that it is best to review what needs to be tested, pick the multiple testing
types that will provide the best coverage for the product to be tested, and then master those
testing types, rather than trying to implement every testing type.
Some of the defects that we have been able to catch with stress testing that have not been
found in any other way are memory leaks, deadlocks, software asserts, and configuration
conflicts. For more details about these types of defects or how we were able to detect them,
refer to the section Typical Defects Found by Stress Testing.
Table 1 provides a summary of some of the strengths and weaknesses that we have found
with stress testing.
Table 1: Stress Testing Strengths and Weaknesses
Strengths:
Finds defects that no other type of test would find.
Using randomization increases coverage.
Tests the robustness of the application.
With automated stress testing, the stress test is performed under computer control. The
stress test tool is implemented to determine the application's configuration, to execute all
valid command sequences in a random order, and to perform data logging. Since the stress
test is automated, it becomes easy to execute multiple stress tests simultaneously across
more than one product at the same time.
Depending on how the stress inputs are configured, stress testing can do both positive and
negative testing. Positive testing is when only valid parameters are provided to the device
under test, whereas negative testing provides both valid and invalid parameters to the device
as a way of trying to break the system under abnormal circumstances. For example, if a valid
input is in the range 0 to 59 seconds, positive testing would test 0 to 59, and negative testing
would also try values just outside that range, such as -1 and 60.
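Continuing the seconds example, a small illustrative sketch of the two input sets in Python:

# Positive testing: only valid values for a seconds field.
positive_inputs = list(range(0, 60))  # 0 through 59

# Negative testing: the valid values plus deliberately invalid ones just
# outside the boundaries, to probe abnormal circumstances.
negative_inputs = positive_inputs + [-1, 60]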
Even though there are clearly advantages to automated stress testing, it still has its
disadvantages. For example, we have found that each time the product application changes,
we most likely need to change the stress tool (or, more commonly, commands need to be
added to or deleted from the input command set). Also, if the input command set changes,
then the output command sequence also changes, given the pseudo-randomization.
Table 2 provides a summary of some of these advantages and disadvantages that we have
found with automated stress testing.
Table 2: Automated Stress Testing Advantages and Disadvantages
Advantages:
Automated stress testing is performed under computer control.
Capability to test all product application command sequences.
Multiple product applications can be supported by one stress tool.
Uses randomization to increase coverage; tests vary with new seed values.
Repeatability of commands and parameters helps reproduce problems or verify that
existing problems have been resolved.
Informative log files facilitate investigation of problems.
To take advantage of automated stress testing, our challenge then was to create an
automated stress test tool that would:
1. Simulate user interaction for long periods of time (since it is computer controlled we
can exercise the product more than a user can).
2. Provide as much randomization of command sequences to the product as possible to
improve test coverage over the entire set of possible features/commands.
3. Continuously log the sequence of events so that issues can be reliably reproduced
after a system failure.
4. Record the memory in use over time to allow memory management analysis.
5. Stress the resource and memory management features of the system.
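Point 4 above can be sketched briefly. The Python fragment below uses the third-party psutil package (the sampling interval and output file are arbitrary choices) to sample the resident memory of the application under test, so that a leak shows up as steady growth over a long run:

import time
import psutil  # third-party package: pip install psutil

def record_memory(pid: int, interval_sec: float = 5.0, path: str = "memory.csv"):
    """Append one resident-set-size sample per interval until the process exits."""
    proc = psutil.Process(pid)
    with open(path, "a") as out:
        while proc.is_running():
            out.write(f"{time.time()},{proc.memory_info().rss}\n")
            out.flush()
            time.sleep(interval_sec)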
2) Graphical User Interfaces (GUIs): interfaces that use the Windows model to allow
the user direct control over the device; individual windows and controls may or may
not be visible and/or active depending on the state of the device.
For additional complexity, other variations of the automated stress test can be performed.
For example, the stress test can vary the rate at which commands are sent to the interface,
the stress test can send the commands across multiple interfaces simultaneously, (if the
product supports it), or the stress test can send multiple commands at the same time.
[Figure: Automated stress test setup. The stress test tool reads an input file, drives the
device under test (DUT), logs the command sequence, and logs the test results.]
When a failure occurs, continue to add additional data to the defect description. Eventually,
over time, you will be able to detect a pattern, isolate the root cause and resolve the defect.
Some defects just seem to be unreproducible, especially those that reside around
page faults, but overall, we know that the robustness of our applications increases
proportionally with the amount of time that the stress test can run uninterrupted.
Probably the most basic form of test coverage is to measure which procedures were
and were not executed during the test suite. This simple statistic is typically available
from execution profiling tools, whose job is really to measure performance
bottlenecks. If the execution time in some procedures is zero, you need to write new
tests that hit those procedures. But this measure of test coverage is so coarse-grained
that it is not very practical.
28.4 Line-Level Test Coverage
The basic measure of a dedicated test coverage tool is tracking which lines of code
are executed, and which are not. This result is often presented in a summary at the
procedure, file, or project level giving a percentage of the code that was executed. A
large project that achieved 90% code coverage might be considered a well-tested
product.
Typically the line coverage information is also presented at the source code level,
allowing you to see exactly which lines of code were executed and which were not.
This, of course, is often the key to writing more tests that will increase coverage: By
studying the unexecuted code, you can see exactly what functionality has not been
tested.
28.5 Condition Coverage and Other Measures
It's easy to find cases where line coverage doesn't really tell the whole story. For
example, consider a block of code that is skipped under certain conditions (e.g., a
statement in an if clause). If that code is shown as executed, you don't know whether
you have tested the case when it is skipped. You need condition coverage to know.
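A tiny Python illustration (the function is hypothetical): the first assertion alone executes every line of apply_discount, so line coverage reports 100%, yet the case where the if clause is skipped has never run; condition (branch) coverage would flag the missing second case.

def apply_discount(total, is_member):
    if is_member:
        total = total * 0.9  # the single statement in the if clause
    return total

# This one test executes every line, giving 100% line coverage.
assert apply_discount(100.0, True) == 90.0

# Only this second test exercises the skipped-branch case that
# condition coverage would report as untested.
assert apply_discount(100.0, False) == 100.0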
There are many other test coverage measures. However, most available code coverage
tools do not provide much beyond basic line coverage. In theory, you should have
more. But in practice, if you achieve 95+% line coverage and still have time and
budget to commit to further testing improvements, it is an enviable commitment to
quality!
28.6.1 Source-Level Instrumentation
Some products add probes at the source level. They analyze the source code as
written, and add additional code (such as calls to a code coverage runtime) that will
record where the program reached.
Such a tool may not actually generate new source files with the additional code. Some
products, for example, intercept the compiler after parsing but before code generation
to insert the changes they need.
One drawback of this technique is the need to modify the build process. A separate
version, namely a code coverage version, needs to be maintained in addition to other
versions such as debug (unoptimized) and release (optimized).
Proponents claim this technique can provide higher levels of code coverage
measurement (condition coverage, etc.) than other forms of instrumentation. This type
of instrumentation is dependent on programming language -- the provider of the tool
must explicitly choose which languages to support. But it can be somewhat
independent of operating environment (processor, OS, or virtual machine).
28.6.2 Executable Instrumentation
Probes can also be added to a completed executable file. The tool will analyze the
existing executable, and then create a new, instrumented one.
This type of instrumentation is independent of programming language. However, it is
dependent on operating environment -- the provider of the tool must explicitly choose
which processors or virtual machines to support.
28.6.3 Runtime Instrumentation
Probes need not be added until the program is actually run. The probes exist only in
the in-memory copy of the executable file; the file itself is not modified. The same
executable file used for product release testing should be used for code coverage.
Because the file is not modified in any way, just executing it will not automatically
start code coverage (as it would with the other methods of instrumentation). Instead,
the code coverage tool must start program execution directly or indirectly.
Alternatively, the code coverage tool will add a tiny bit of instrumentation to the
executable. This new code will wake up and connect to a waiting coverage tool
whenever the program executes. This added code does not affect the size or
performance of the executable, and does nothing if the coverage tool is not waiting.
Like Executable Instrumentation, Runtime Instrumentation is independent of
programming language but dependent on operating environment.
Representative code coverage tools (tool: platforms; languages):
BullseyeCoverage: Win32, Unix; C/C++
CompuWare DevPartner: Win32; C/C++, Java, VB
Win32, Unix; C/C++, Java, VB
Software Research TCAT: Win32, Unix; C/C++, Java
Testwell CTC++: Win32, Unix; C/C++
Paterson Technology LiveCoverage: Win32; C/C++, VB
Coverage analysis is a structural testing technique that helps eliminate gaps in a test
suite. It helps most in the absence of a detailed, up-to-date requirements specification.
Each project must choose a minimum percent coverage for release criteria based on
available testing resources and the importance of preventing post-release failures.
Clearly, safety-critical software should have a high goal. We must set a higher
coverage goal for unit testing than for system testing since a failure in lower-level
code may affect multiple high-level callers.
Simple (1-3)
Average (4-7)
Complex (> 8)
The test cases for a particular requirement are classified into Simple, Average and
Complex based on the following four factors:
Factor: Simple / Average / Complex
Total transactions: < 2 / 3-6 / > 6
Interface with other test cases: 0 / < 3 / > 3
Number of verification points: < 2 / 3-8 / > 8
Baseline test data: Not Required / Required / Required
Test Case Type: Complexity Adjustment Weight Factor
Simple: 2 (A)
Average: 4 (B)
Complex: 8 (C)
Total Test Case Points:
No. of Simple requirements in the project: Number * Adjustment factor A = R1
No. of Average requirements in the project: Number * Adjustment factor B = R2
No. of Complex requirements in the project: Number * Adjustment factor C = R3
Total Test Case Points = R1 + R2 + R3
From the breakup of the complexity of requirements done in the first step, we can get
the number of simple, average and complex test case types. By multiplying the
number of requirements by its corresponding adjustment factor, we get the simple,
average and complex test case points. Summing up the three results, we arrive at the
count of Total Test Case Points.
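For illustration, suppose a hypothetical project has 20 simple, 10 average and 5 complex
requirements. Then R1 = 20 * 2 = 40, R2 = 10 * 4 = 40 and R3 = 5 * 8 = 40, giving
Total Test Case Points = 40 + 40 + 40 = 120.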