Increasing the Effectiveness of Automated Testing
Shaun Smith and Gerard Meszaros
ClearStream Consulting Inc.
1200 250 6th Avenue SW
Calgary, Alberta, Canada T2P 3H7
1-403-264-5840
{shaun|gerard}@clrstream.com
ABSTRACT
This paper describes techniques that can be used to reduce the execution time and maintenance cost of the automated regression test suites that are used to drive development in eXtreme Programming (XP). They are important because developers are more likely to write and run tests, thus getting valuable feedback, if testing is as painless as possible. Test execution time can be reduced by using an in-memory database to eliminate the latency introduced by accessing a disk-based database and/or file system. This paper also describes how the effort of test development can be reduced through the use of a framework that simplifies setup and teardown of test fixtures.
Keywords
XP, Automated Testing, Database, Object/Relational Mapping, Test-first Development, Business System, Information System
1 INTRODUCTION
Automated testing is a useful practice in the toolkit of every developer, whether they are doing XP or more "traditional" forms of software development. The main drawing card is the ability to rerun tests whenever you want reassurance that things are working as required. However, for tests to provide this valuable feedback to a developer they have to run relatively quickly. On a series of Java projects building business systems, we have found that running large numbers of JUnit tests against a relational database is normally too slow to provide the kind of rapid feedback developers need. As a result we have developed a collection of techniques, practices, and technologies that together allow us to obtain the rapid feedback developers need even when a relational database underlies the system under construction.
2 DEVELOPER TESTING ISSUES
XP is heavily reliant on testing. Tests tell us whether we have completed the implementation of some required functionality and whether we have broken anything as a result. On an XP project without extensive design documentation, the tests, along with the code, become a major means of communication. It is not just a case of letting "the code speak to you". The tests must speak to you as well. And they should be listened to over and over again.
Slow Test Execution
Having established that we rely so heavily on tests to guide development and report on progress, it is no surprise that we want to be able to run tests frequently for feedback. We have found that it is essential for tests to execute very quickly; our target is under 30 seconds for the typical test run.

A purely economic argument is compelling enough by itself. Assuming a developer runs the test suite every 10 minutes while developing, they will have run the suite 24 times in a single 4-hour programming session. A 1-minute increase in test execution time increases development time by 10% (24 minutes)! But the result of slow tests is even more insidious than the economic argument insinuates.

If test execution is too slow, developers are more likely to put off testing, and that delays the feedback. The preferred model of development is the making of a series of small changes, each time running the appropriate test suite to assess the impact of the change. If tests run slowly, developers will likely make a number of small changes and then take a "test timeout" to run the tests. The impact of this delayed use of tests is twofold. First, debugging becomes more difficult because if tests fail after a series of changes, identifying the guilty change is difficult. Developers may even forget they made some changes. Murphy's Law says that this forgotten change will most likely be the source of the failing tests. Second, if tests take so long to run that a developer leaves her desk while a test suite runs, her train of thought may be broken. This routine of starting a test suite and then getting up for a stretch and a chat was observed regularly on one project. The lengthy test suite execution time was because of the large number of tests and the relational database access required by each test. This combination became a recurring pattern on a number of projects.
Expensive Test Development
Given the test-first approach taken by XP, the cost of writing and maintaining tests becomes a critical issue. Unit tests can usually be kept fairly simple, but functional tests often require large amounts of fixture setup to do even limited testing. On several projects, we found that the functional tests were taking large amounts of time to write and needed considerable rework every time we added new functionality. The complex tests were hard to understand in part because they contained too much necessary setup. We tried sharing previously setup objects across tests, but we also found it difficult to write tests that did not have unintended interactions with each other.
3 POSSIBLE SOLUTIONS
Business systems need access to enterprise data. In traditional business systems, queries are made against databases for such data. In testing such systems, we have experienced slow test execution due to the latency of queries and updates caused by disk I/O. The time to execute a single query or update may only be a fraction of a second, but we have often had to do several or many such queries and updates in each test as test fixtures are set up, tests are performed, and test fixtures are torn down.
Running Fewer Tests
One way to reduce the time required to obtain useful feedback is to limit the number of tests that are run. The JUnit Cookbook describes how to organize your tests so that subsets can be run. Unfortunately, this strategy suffers from the need for developers to choose an appropriate subset of the tests to use. If they choose poorly, they may be in for a nasty surprise when they run the full test suite just before integration. Nevertheless, there is value in these techniques. The key challenge is to know which tests to run at a particular time.
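For illustration, a suite along the following lines uses the standard JUnit suite() convention described in the JUnit Cookbook to carve out a fast subset that a developer can run between edits, leaving slower database-backed tests to the full pre-integration run. The test class names here are hypothetical, not taken from the paper.

import junit.framework.Test;
import junit.framework.TestSuite;

// Hypothetical "fast feedback" subset; AccountTest and OrderCalculationTest
// are assumed in-memory unit test classes, not classes from the paper.
public class FastTests {
    public static Test suite() {
        TestSuite suite = new TestSuite("Fast developer-feedback tests");
        suite.addTest(new TestSuite(AccountTest.class));
        suite.addTest(new TestSuite(OrderCalculationTest.class));
        // Database-backed functional tests are deliberately left to the full suite.
        return suite;
    }
}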
Faster Execution Environment
Another approach would be to increase test execution speed by using faster hardware, dedicated IP subnets, more/better indices for the database, etc. However, these techniques tend to yield percentage increases, while we needed guaranteed improvements measured in orders of magnitude.
In-Memory Testing
Since the problem preventing the running of many tests is the time required to query or update a database, the ideal solution is to somehow avoid having to do this as much as possible. We have replaced the relational database with a simple in-memory "object database". To obtain rapid feedback from test suites, developers run against the in-memory database while coding. The development process focuses on the use of in-memory testing during development and then switches to database testing before a task is integrated and considered complete.

While our experiences are with relational databases, this approach could also be used to speed up testing with object databases or simple file-based persistence.
4 IN-MEMORY TESTING
The challenges for running tests faster by running them in memory are:

1. How do you eliminate the database access from the business logic? This requires Separation of Persistence from Business Logic, including Object Queries.
2. How do you know which tests can run in memory? We use standard JUnit test packaging conventions and aggregate tests capable of being run in memory in a separate test suite from those that require a database.
3. How do you specify whether they should be run in memory on a particular test run? This requires Dynamic Test Adaptation.
4. How do you deal with configuration-specific behavior? This requires Environment Plug-ins.
Separation of Persistence from Business Logic
Switching back and forth between testing in-memory and against a database is only possible if application code and tests are unaware of the source of objects.

We have been building business systems using a "Business Object Framework" for about five years in both Smalltalk and Java. The objective of this framework is to move all the "computer science" out of the business logic and into the infrastructure. We have moved most of the "plumbing" into service provider objects and abstract classes from which business objects can inherit all the technical behavior. The technical infrastructure incorporates the TOPLink Object/Relational (O/R) mapping framework. TOPLink is primarily responsible for converting database data into objects and vice versa. It eliminates the code needed to implement these data/object conversions (which means no SQL in application code) and it does automatic "faulting" into memory of objects reached by traversing relationships between objects. This eliminates the need for having explicit "reads" in the business logic.

In essence, TOPLink makes a JDBC-compliant data source (such as a relational database) look like an object database. Our infrastructure makes persistence automatic, which leaves developers to focus on the business objects in an application and not on the persistent storage of those objects. By removing all knowledge of persistence from application code, it also makes in-memory testing possible. This approach is described in more detail in [4].
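As a rough sketch of the idea (this is not the paper's actual framework code; the Home interface, domain classes, and method names are assumptions), business logic depends only on an abstraction for obtaining objects, so the concrete implementation bound at runtime can be TOPLink against a relational database or an in-memory store:

import java.math.BigDecimal;
import java.util.Iterator;

// Hypothetical sketch: the service never touches SQL or JDBC. CustomerHome,
// Customer, and Invoice are assumed domain types, not classes from the paper.
interface CustomerHome {
    Customer findByNumber(String customerNumber);
}

class InvoicingService {
    private final CustomerHome customers;

    InvoicingService(CustomerHome customers) {
        this.customers = customers;
    }

    BigDecimal outstandingBalance(String customerNumber) {
        Customer customer = customers.findByNumber(customerNumber);
        // With a database-backed Home, traversing this relationship would be
        // "faulted" in by the O/R layer; with an in-memory Home the objects
        // are simply already there. The business logic cannot tell the difference.
        BigDecimal total = BigDecimal.ZERO;
        for (Iterator it = customer.getUnpaidInvoices().iterator(); it.hasNext();) {
            Invoice invoice = (Invoice) it.next();
            total = total.add(invoice.getAmount());
        }
        return total;
    }
}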
Querying (Finding objects by their attributes)
If applications access relational databases, querying using table and column names, then replacing a relational database with an in-memory object database becomes problematic because an in-memory database contains objects, not tables. How do you hide the different sources of objects from the application, especially when you need to search for specific objects based on the values of their attributes?
Solution: Object Queries. All queries are performed against the objects and their attributes, not against the underlying database tables and columns. TOPLink's query facility constructs the corresponding SQL table/column query for queries specified using objects and attributes. Application code is unaware of where and how objects are stored, which provides for the swapping of the "where and how" between an in-memory and a relational database.

Our initial approach to querying was to move all database queries into "finder" methods on Home (class or factory) objects. But to support both in-memory and database querying we had to provide two implementations of these finder methods: one that used TOPLink's query facility against a relational database and the other that used the API of the in-memory database to perform the query. We quickly tired of having to implement queries twice and have now implemented support for the evaluation of TOPLink's object/attribute queries against our in-memory database. With this technology, the same query can be used to find objects in our in-memory object database or translated into SQL to be sent to a relational database.
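The shape of the idea can be sketched as follows (a simplified stand-in, not TOPLink's real query API): the query is described once in terms of a class and its attributes, and each environment evaluates it in its own way.

import java.util.List;

// Hypothetical sketch of an object/attribute query; not TOPLink's actual classes.
class AttributeEqualsQuery {
    final Class type;        // e.g. Customer.class
    final String attribute;  // e.g. "name"
    final Object value;      // e.g. "Acme"

    AttributeEqualsQuery(Class type, String attribute, Object value) {
        this.type = type;
        this.attribute = attribute;
        this.value = value;
    }
}

// Each environment interprets the same query differently: a database-backed
// store translates it into SQL (as TOPLink does), while an in-memory store
// walks its objects and compares attribute values, e.g. reflectively.
interface ObjectStore {
    List find(AttributeEqualsQuery query);
}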
Configuration Specific Behavior
When testing in memory, how do you handle functional features of databases like stored procedures, triggers, and sequences?
Solution: The functional features of databases can be implemented in memory by Environment Plug-ins (Strategy [3]). Each function provided by the database has a pair of plug-ins. When testing with a database, the database version of a plug-in simply passes the request to the database for fulfillment. The in-memory version of the plug-in emulates the behavior (side effects) of the database plug-in.

In a relational database, a sequence table may be used to generate unique primary key values. The in-memory version of the plug-in keeps a counter that is incremented each time another unique key is requested.

Stored procedures can be particularly problematic when building an interface to an existing (legacy) database. On one project, we had to create an account whose state was initialized by a stored procedure. During in-memory testing, the state was not initialized, so business rules based on the account's state would fail. We could have added code to set the state during the account object initialization, but we did not want to have any code specific to testing in the production system. We were able to avoid this using an InMemoryAccountInitializationStrategy that performed the required initialization during in-memory testing and a NullObject [5] that did nothing when the database's stored procedure initialized the state.

Because it is possible that an in-memory plug-in behaves differently than the database functionality it replaces, it is still necessary to run the tests against the database at some point. In practice, we require a full database test before changes are permitted to be integrated.
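A minimal sketch of such a plug-in pair for key generation might look like this (the interface and class names are illustrative assumptions, not the paper's code):

import java.util.HashMap;
import java.util.Map;

// Hypothetical Environment Plug-in for primary key generation.
interface KeySequencePlugin {
    long nextKey(String sequenceName);
}

// In-memory version: emulates the database sequence's side effect with a
// simple per-sequence counter. The database version (omitted here) would
// simply pass the request through to the real sequence table.
class InMemoryKeySequencePlugin implements KeySequencePlugin {
    private final Map counters = new HashMap();

    public synchronized long nextKey(String sequenceName) {
        Long current = (Long) counters.get(sequenceName);
        long next = (current == null) ? 1 : current.longValue() + 1;
        counters.put(sequenceName, Long.valueOf(next));
        return next;
    }
}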
Environment Configuration
How and when do you set up the test environment with the appropriate Configuration Specific Behavior? How do you decide which environment to use for this test run?

Solution: Dynamic Test Adaptation. We use Test Decorators to specify whether the test environment should be in-memory or database. Using this technique we can choose to run a test in memory for maximum speed or we can run the test against the database for full accuracy. One hitch is the fact that one can choose to run all the tests, just the tests for a package, just the tests for a class, or just a single test method. To ensure that the tests run in the right environment regardless of how they are invoked, we have added methods to our TestCase base class that push the decorators down to the individual test instance level.

On a recent project we created two Java packages: one containing tests that could only be run in memory (because the O/R mappings were not yet complete), and one for multi-modal tests (tests that could be run either in memory or against a database). As the project progressed, tests were moved from the in-memory test package to the multi-modal test package. Combining this organizational scheme with dynamic test adaptation, we were able to run the multi-modal tests against either the in-memory or the relational database.
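For illustration, a decorator along these lines, built on JUnit's standard junit.extensions.TestSetup, could switch a wrapped suite into the in-memory environment (the Environment class and its methods are assumptions, not the paper's actual code):

import junit.extensions.TestSetup;
import junit.framework.Test;

// Hypothetical decorator that runs the wrapped tests against the in-memory
// database; Environment is an assumed configuration holder.
public class InMemoryEnvironment extends TestSetup {
    public InMemoryEnvironment(Test test) {
        super(test);
    }

    protected void setUp() throws Exception {
        Environment.useInMemoryDatabase();   // assumed switch point
    }

    protected void tearDown() throws Exception {
        Environment.reset();
    }
}

A test class's suite() method could then return new InMemoryEnvironment(new TestSuite(AccountTest.class)) to run that suite in memory for fast feedback, or wrap it in a corresponding database decorator before integration.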
5 OPTIMIZING TEST DEVELOPMENT
The JUnit test lifecycle specifies that one sets up a test fixture before a test and tears it down afterwards. But we found that many of our functional tests depended on a large number of other objects. A test can fail if it depends on a previous test's side effects, and those effects can vary depending upon a test's success or failure. We divided the objects used by a test into three groups:

1. Objects referenced but never modified. These shared objects can be set up once for all tests.
2. Objects created or modified specifically for a test. These temporary objects must be created as part of each test's setup.
3. Objects created, modified, or deleted during the test.

We made it a hard and fast rule that tests cannot modify any shared objects, because to do so makes tests interdependent, which in turn makes tests much harder to maintain.
Shared Test Objects
In database testing, these shared objects would be the initial contents of the database before any testing started. How do you ensure that the shared objects are available in both in-memory and database testing modes?
Solution: In-Memory Database Initializer. For in-memory testing, we define an object whose responsibility is to create the shared objects in the in-memory database before any tests are run.
