In today's environment of plummeting cycle times, test automation becomes an increasingly critical and strategic necessity. Assuming the level of testing in the past was sufficient (which is rarely the case), how do we possibly keep up with this new explosive pace of web-enabled deployment while retaining satisfactory test coverage and reducing risk? The answer is either more people for manual testing or a greater level of test automation. After all, a reduction in project cycle times generally correlates to a reduction in the time available for testing.
With the onset of rapidly developed and deployed web clients, test automation becomes even more crucial. Add to this the cold, hard reality that we are often
facing more than one active project at a time. For example, perhaps the team is
finishing up Version 1.0, adding the needed new features to Version 1.1, and
prototyping some new technologies for Version 2.0!
Better still, maybe our test team is actually a pool of testers supporting many diverse applications completely unrelated to each other. If each project implements a unique test strategy, then testers moving among different projects can be more a hindrance than a help. The time needed for the tester to become productive in the new environment just may not be there, and the ramp-up surely detracts from the productivity of those bringing the new tester up to speed.
To handle this chaos we have to think past the project. We cannot afford to engineer or reengineer automation frameworks for each and every new application that comes along. We must strive to develop a single framework that will grow and continuously improve with each application and every diverse project that challenges us. We will see the advantages and disadvantages of these different approaches in Section 1.2.
Historically, test automation has not met with the level of success that it could have. Time
and again test automation efforts are born, stumble, and die. Most often this is the
result of misconceived perceptions of the effort and resources necessary to
implement a successful, long-lasting automation framework. Why is this, we might
ask? Well, there are several reasons.
We have all seen the vendor's sample applications. We have seen the tools play nice with those applications. And we try to get the tools to play nice with our applications just as fluently. Inherently, project after project, we do not achieve the same level of success.
This usually boils down to the fact that our applications most often contain elements that are not compatible with the tools we use to test them. Consequently, we must often mastermind technically creative solutions to make these automation tools work with our applications. Yet, this is rarely ever mentioned in the literature or the sales pitch.
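As a minimal sketch of such a workaround (plain Python, with a ToolDriver class standing in for a commercial tool's scripting API; every name here is hypothetical, not drawn from any real product), one common creative solution is to fall back from object-level recognition to coordinate-based interaction when the tool cannot map a custom control:

    # Hypothetical sketch: working around a control the test tool cannot recognize.
    # ToolDriver stands in for a commercial automation tool's scripting API.

    class ControlNotFound(Exception):
        """Raised when the tool cannot map a GUI element to a known control type."""

    class ToolDriver:
        def find_control(self, name):
            # A real tool would look the control up in its recognition map.
            # Custom (non-standard) controls often fail here, as simulated below.
            raise ControlNotFound(name)

        def click_control(self, control):
            print("clicking recognized control %s" % control)

        def click_at(self, x, y):
            print("clicking raw screen coordinates (%d, %d)" % (x, y))

    def click(driver, name, fallback_coords=None):
        """Click a control by name, falling back to raw coordinates for
        custom controls the tool does not understand."""
        try:
            control = driver.find_control(name)
        except ControlNotFound:
            if fallback_coords is None:
                raise
            # Brittle, but often the only option for incompatible controls.
            driver.click_at(*fallback_coords)
        else:
            driver.click_control(control)

    click(ToolDriver(), "SubmitButton", fallback_coords=(412, 305))

The coordinate fallback is fragile by nature, which is exactly why such creative solutions add to the maintenance burden rather than eliminating it.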
The commercial automation tools have been chiefly marketed as solutions for testing an application. They should instead be treated as the cornerstone of an enterprise-wide test automation framework. And while virtually all of the automation tools contain some scripting language that lets us get past each tool's failings, testers have typically neither had the development experience nor received the training necessary to exploit these programming environments.
"For the most part, testers have been testers, not programmers.
Consequently, the \u2018simple\u2019 commercial solutions have been far too complex
to implement and maintain; and they become shelfware."
Most unfortunate of all, otherwise fully capable testers are seldom given the time required to gain the appropriate software development skills. For the most part, testers have been testers, not programmers. Consequently, the "simple" commercial solutions have been far too complex to implement and maintain, and they become shelfware.
In 1996, one large corporation set out to evaluate the various commercial automation tools available at that time. They brought in eager technical sales staff
from the various vendors, watched demonstrations, and performed some fairly
thorough internal evaluations of each tool.
By 1998, they had chosen one particular vendor and placed an initial order for over
$250,000 worth of product licensing, maintenance contracts, and onsite training. The
tools and training were distributed throughout the company into various test
departments--each working on their own projects.
None of these test projects had anything in common. The applications were vastly
different. The projects each had individual schedules and deadlines to meet. Yet,
every one of these departments began separately coding functionally identical
common libraries. They made routines for setting up the Windows test environment.
They each made routines for accessing the Windows programming interface. They
made file-handling routines, string utilities, database access routines--the list of code
duplication was disheartening!
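The waste is easier to see with a concrete example. The following sketch (hypothetical Python, standing in for whatever scripting language the tools provided; all names are invented for illustration) shows the kind of application-agnostic helper routines each department rebuilt from scratch. Nothing in them is project-specific, so a single shared library could have served every team:

    # Hypothetical sketch of application-agnostic helpers each team duplicated.

    import os
    import tempfile

    def set_test_environment(**settings):
        """Publish test settings through environment variables -- one of the
        'test environment setup' routines every department rewrote."""
        for key, value in settings.items():
            os.environ["TEST_" + key.upper()] = str(value)

    def make_results_file(prefix="results"):
        """File-handling routine: create a uniquely named results file
        and return its path."""
        fd, path = tempfile.mkstemp(prefix=prefix + "_", suffix=".log")
        os.close(fd)
        return path

    set_test_environment(timeout=30, browser="default")
    print(make_results_file())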
For their test designs, they each captured application-specific interactive tests using the capture/replay tools. Some groups went the next step and modularized key
reusable sections, creating reusable libraries of application-specific test functions or
scenarios. This was to reduce the amount of code duplication and maintenance that
so profusely occurs in pure captured test scripts. For some of the projects, this might
have been appropriate if done with sufficient planning and an appropriate automation
framework. But this was seldom the case.
With all these modularized libraries, testers could create functional automated tests in the automation tool's proprietary scripting language via a combination of interactive test capture, manual editing, and manual scripting.
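To illustrate the difference (a hypothetical Python sketch; the field names, the login function, and the type_text/click primitives are all invented here, with Python standing in for a tool's proprietary scripting language), compare a raw captured script with the same steps refactored into a reusable application function:

    # Hypothetical sketch contrasting raw capture/replay with a modularized,
    # reusable application function. All names are illustrative.

    def type_text(field, text):
        """Low-level primitive a tool would provide."""
        print("type '%s' into %s" % (text, field))

    def click(button):
        """Low-level primitive a tool would provide."""
        print("click %s" % button)

    # Raw capture/replay style: every test repeats these steps verbatim.
    type_text("UserNameField", "tester1")
    type_text("PasswordField", "secret")
    click("LoginButton")

    # Modularized style: one reusable, parameterized application function.
    def login(user, password):
        """Log into the application; maintained in ONE place if the UI changes."""
        type_text("UserNameField", user)
        type_text("PasswordField", password)
        click("LoginButton")

    # Individual tests now compose application functions instead of raw steps.
    login("tester1", "secret")

When the login screen changes, the modularized version is updated in one place, while pure captured scripts must each be re-recorded or hand-edited.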
One problem was, as separate test teams, they did not think past their own individual projects. And although they were each setting up something of a reusable framework, each was completely unique--even where the common library functions were the same! This meant duplicate development, duplicate debugging, and duplicate maintenance. Understandably, each separate project still had looming deadlines, and each was forced to limit its automation efforts in order to get real testing done.
As changes to the various applications began breaking automated tests, script maintenance and debugging became a significant challenge. Additionally, upgrades in the automation tools themselves caused significant and unexpected script failures; in some cases, teams had to revert (downgrade) to older versions of the automation tools. Resource allocation for continued test development and script maintenance became increasingly difficult.
Eventually, most of these automation projects were put on hold. By the end of
1999--less than two years from the inception of this large-scale automation effort--
over 75% of the test automation tools were back on the shelves waiting for a new
chance to try again at some later date.
Past failings like these have been lessons for the entire testing community. The need for reusable test strategies is no different from the reusability concerns of any good application development project. As we set out on our task of automating tests, we must keep these past lessons at the forefront.
In order to make the most of our test strategy, we need to make it reusable and manageable. To that end, there are some essential guiding principles we should follow when developing our overall test strategy: