Chapter 6 :Tool Support for Testing
6.1 Overview
When people discuss testing tools they invariably think of automated testing tools, and in particular capture/replay tools. However, the market changes all the time, and this module is intended to give you a flavor of the many different types of testing tool available. There is also a discussion of how to select and implement a testing tool for your organization. Remember the golden rule: if you automate a mess, you'll get automated chaos. Choose tools wisely!
6.2 Objectives
After completing this module you will be able to:
» Name up to thirteen different types of testing tools.
» Explain which tools are in common use today and why.
» Understand when test automation tools are appropriate and when they are not.
» Describe in outline a tool selection process.
6.3 Types of CAST tools
There are numerous types of computer-aided software testing (CAST) tools, and these are briefly described below.
Requirements testing tools
provide automated support for the verification and validation of requirements models, such as consistency checking and animation.
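As a toy illustration of the consistency-checking idea, the sketch below flags requirements that depend on identifiers never defined in the model. The record layout and field names (`id`, `depends_on`) are invented for the example, not taken from any real tool:

```python
def check_consistency(requirements):
    """Toy consistency check: report dependencies on undefined requirements."""
    ids = {r["id"] for r in requirements}
    return [(r["id"], dep) for r in requirements
            for dep in r.get("depends_on", []) if dep not in ids]

reqs = [
    {"id": "R1"},
    {"id": "R2", "depends_on": ["R1"]},
    {"id": "R3", "depends_on": ["R9"]},  # R9 is never defined
]
print(check_consistency(reqs))  # [('R3', 'R9')]
```

A real requirements tool performs far richer checks (type consistency, animation of behavior), but the principle is the same: mechanical verification of the model against itself.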
Static analysis tools
provide information about the quality of the software by examining the code, rather than by running test cases through the code. Static analysis tools usually give objective measurements of various characteristics of the software, such as cyclomatic complexity and other quality metrics.
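A minimal sketch of what such a tool computes, using Python's standard `ast` module: cyclomatic complexity approximated as one plus the number of decision points found by walking the syntax tree, without ever executing the code under analysis:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                 ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # 3: one path plus two if-decisions
```

Commercial static analyzers compute many such metrics at once, but all of them share this shape: measure the code's structure, not its runtime behavior.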
Test design tools
generate test cases from a specification that must normally be held in a CASE tool repository, or from formally specified requirements held in the tool itself. Some tools generate test cases from an analysis of the code.
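One classic generation technique such tools apply is boundary-value analysis. As a hedged sketch (the range spec here is invented for the example), the function below derives test inputs from a specified valid range by probing each edge:

```python
def boundary_values(lo, hi):
    """Boundary-value analysis: inputs just below, on, and just above each edge."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# Hypothetical spec: valid ages are 18..65 inclusive
print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```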
Test data preparation tools
enable data to be selected from existing databases or created, generated, manipulated and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.
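In miniature, a data preparation step looks like the sketch below: generate reproducible synthetic records (the column names are invented for the example) and emit them in a format the system under test can consume, here CSV via the standard library:

```python
import csv
import io
import random

def make_test_rows(n, seed=0):
    """Generate n reproducible synthetic customer records (seeded for repeatability)."""
    random.seed(seed)
    return [{"id": i, "name": f"user{i}", "age": random.randint(18, 65)}
            for i in range(n)]

rows = make_test_rows(3)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "age"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Seeding the generator matters: regression tests need the same data on every run.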
Character-based test running tools
provide test capture and replay facilities for dumb-terminal based applications. The tools simulate user-entered terminal keystrokes and capture screen responses for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.
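The replay half of capture/replay can be sketched in a few lines: feed each recorded keystroke sequence to the application, capture the resulting screen, and diff it against the expected screen. The `demo_app` below is a stand-in invented for the example, not a real terminal application:

```python
def run_script(app, script):
    """Replay recorded keystrokes and diff actual screens against expected ones."""
    failures = []
    for keys, expected_screen in script:
        actual = app(keys)
        if actual != expected_screen:
            failures.append((keys, expected_screen, actual))
    return failures

# Hypothetical app: echoes input upper-cased, as a one-line terminal screen
def demo_app(keys):
    return f"> {keys.upper()}"

script = [("hello", "> HELLO"), ("quit", "> QUIT")]
print(run_script(demo_app, script))  # [] means the regression run passed
```

Real tools add timing control, screen-region masking and a full scripting language, but the replay-and-compare loop is the core.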
GUI test running tools
provide test capture and replay facilities for WIMP interface based applications. The tools simulate mouse movement, button clicks and keyboard inputs, and can recognize GUI objects such as windows, fields, buttons and other controls. Object states and bitmap images can be captured for later comparison. Test procedures are normally captured in a programmable script language; data, test cases and expected results may be held in separate test repositories. These tools are most often used to automate regression testing.
Test harnesses and drivers
are used to execute software under test, which may not have a user interface, or to run groups of existing automated test scripts, which can be controlled by the tester. Some commercially available tools exist, but custom-written programs also fall into this category. Simulators are used to support tests where code or other systems are either unavailable or impracticable to use (e.g. testing software to cope with nuclear meltdowns).
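A custom-written driver of the kind described is often only a few lines. The sketch below exercises a UI-less unit (the `interest` function is invented for the example) with a table of inputs and expected outputs:

```python
def interest(principal, rate, years):
    """Unit under test: compound interest, no user interface of its own."""
    return round(principal * (1 + rate) ** years, 2)

def harness(cases):
    """Minimal test driver: feed inputs, compare outputs, report pass/fail."""
    results = []
    for args, expected in cases:
        actual = interest(*args)
        results.append((args, actual == expected))
    return results

cases = [((1000, 0.05, 2), 1102.50), ((500, 0.0, 3), 500.00)]
for args, ok in harness(cases):
    print(args, "PASS" if ok else "FAIL")
```

Frameworks such as unit-test libraries generalize exactly this loop, adding discovery, fixtures and reporting.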
Performance test tools
have two main facilities: load generation and test transaction measurement. Load generation is done either by driving the application through its user interface or by test drivers, which simulate the load the application generates on its architecture; records of the numbers of transactions executed are logged. When driving the application through its user interface, response time measurements are taken for selected transactions and logged. Performance testing tools normally provide reports based on test logs, and graphs of load against response times.
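Both facilities appear in the sketch below: concurrent threads generate load while each transaction's response time is logged for later analysis. The `transaction` function is a stand-in assumed for the example; a real driver would issue network requests:

```python
import statistics
import threading
import time

def transaction():
    """Simulated transaction under test (stand-in for a real request)."""
    time.sleep(0.01)

def load_test(n_users, per_user):
    """Run n_users concurrent virtual users; log every response time."""
    timings = []
    lock = threading.Lock()
    def user():
        for _ in range(per_user):
            start = time.perf_counter()
            transaction()
            with lock:
                timings.append(time.perf_counter() - start)
    threads = [threading.Thread(target=user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timings

timings = load_test(n_users=5, per_user=4)
print(len(timings), "transactions, median",
      round(statistics.median(timings) * 1000, 1), "ms")
```

The logged timings are exactly the raw material from which a performance tool draws its load-versus-response-time graphs.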
Dynamic analysis tools
provide run-time information on the state of executing software. These tools are most commonly used to monitor the allocation, use and de-allocation of memory, and to flag memory leaks, unassigned pointers, pointer arithmetic and other errors that are difficult to find 'statically'.
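Python ships a small dynamic-analysis facility of this kind, `tracemalloc`, which monitors allocations at run time. The sketch below uses it to observe memory that is allocated and never released:

```python
import tracemalloc

tracemalloc.start()
# ~1 MB of allocations that are never released while we measure
leaky = [bytearray(10_000) for _ in range(100)]
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current allocation: {current / 1024:.0f} KiB")
```

Tools like Valgrind do the same job for C and C++ at the level of raw pointers, where the errors listed above actually live.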
Debugging tools
are mainly used by programmers to reproduce bugs and investigate the state of programs. Debuggers enable programmers to execute programs line by line, to halt the program at any program statement, and to set and examine program variables.
Comparison tools
are used to detect differences between actual results and expected results. Standalone comparison tools normally deal with a range of file or database formats. Test running tools usually have built-in comparators that deal with character screens, GUI objects or bitmap images. These tools often have filtering or masking capabilities, whereby they can 'ignore' rows or columns of data or areas on screens.
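Masking is the interesting part, so the sketch below focuses on it: rows are compared after stripping out columns declared volatile, such as a timestamp that legitimately differs between runs (the sample rows are invented for the example):

```python
def compare(actual_rows, expected_rows, masked_cols=()):
    """Compare row lists, 'ignoring' masked (volatile) columns by index."""
    def strip(row):
        return tuple(v for i, v in enumerate(row) if i not in masked_cols)
    return [(a, e) for a, e in zip(actual_rows, expected_rows)
            if strip(a) != strip(e)]

actual   = [("order-1", "2008-11-27 10:02:11", "shipped")]
expected = [("order-1", "2008-11-26 09:00:00", "shipped")]
# With the timestamp column masked, the rows match
print(compare(actual, expected, masked_cols={1}))  # []
```

Without the mask the same comparison reports a spurious failure, which is precisely why production comparators offer this facility.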
Test management tools
may have several capabilities. Testware management is concerned with the creation, management and control of test documentation, e.g. test plans, specifications and results. Some tools support the project management aspects of testing, for example scheduling of tests, logging of results and management of incidents raised during testing. Incident management tools may also have workflow-oriented facilities to track and control the allocation, correction and retesting of incidents. Most test management tools provide extensive reporting and analysis facilities.
Coverage measurement (or analysis) tools
provide objective measures of structural test coverage when tests are executed. Programs to be tested are instrumented before compilation. Instrumentation code dynamically captures coverage data in a log file without affecting the functionality of the program under test. After execution, the log file is analysed and coverage statistics generated. Most tools provide statistics on the most common coverage measures, such as statement or branch coverage.
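In interpreted languages the "instrumentation" can be a tracing hook rather than a compile step. The sketch below uses Python's `sys.settrace` to record which lines of a function actually execute, exposing the untested branch:

```python
import sys

def measure_coverage(func, *args):
    """Record which lines of func execute: a toy statement-coverage monitor."""
    executed = set()
    code = func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            # Store line numbers relative to the def line for readability
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def absolute(x):
    if x < 0:
        return -x
    return x

lines = measure_coverage(absolute, 5)
print(sorted(lines))  # the 'return -x' line never runs for a positive input
```

A positive input covers the `if` test and the final `return` but misses the negative branch, the exact gap a coverage report is meant to reveal.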
6.4 Tool selection and implementation
There are many test activities which can be automated, and test execution tools are not necessarily the first or only choice. Identify the test activities where tool support could be of benefit, and prioritize the areas of most importance.

Fit with your test process may be more important than choosing the tool with the most features, both in deciding whether you need a tool and in which one you choose. The benefits of tools usually depend on a systematic and disciplined test process. If testing is chaotic, tools may not be useful and may hinder testing. You must have a good process now, or recognize that your process must improve in parallel with tool implementation. The ease with which CAST tools can be implemented might be called 'CAST readiness'.

Tools may have interesting features, but may not necessarily be available on your platforms, e.g. 'works on 15 flavors of Unix, but not yours...'. Some tools, e.g. performance testing tools, require their own hardware, so the cost of procuring this hardware should be a consideration in your cost-benefit analysis. If you already have tools, you may need to consider the level and usefulness of integration with other tools, e.g. you may want a test execution tool to integrate with your existing test management tool (or vice versa). Some vendors offer integrated toolkits, e.g. test execution, test management and performance-testing bundles. Integration between some tools may bring major benefits; in other cases the level of integration is cosmetic only.