This paper focuses on the area of regression testing. An industry-wide definition of regression testing is "retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made."
We have adapted this slightly for the type of testing we perform to give the following definition: "a test designed to confirm that an existing feature still performs correctly after any software modifications."
Both these definitions provide the reader with a sense of the importance of regression testing. It is of no benefit providing your valued customers with lots of new and exciting features if you manage to break features they have been using successfully for years. In fact, feedback received from a customer several years back indicated that they expect to have problems with new features when they first get a new release of software, but they do not expect problems with the features they have relied on in previous releases.
So, regression testing is recognised as being very important, yet it often remains the test phase given the least attention in the planning of test activities. Too often, organizations take the easy way out, and regression test phases simply consist of reruns of existing test suites in the hope that these will catch all the problems. This approach has a number of potential flaws:
• If you are dealing with a large, complex system, you can end up with thousands upon thousands of regression tests as you continue to add feature test suites from previous releases to the regression program for your current release. This is all well and good if you have a fully automated and stable test environment which allows you to run all these tests quickly without using a lot of resources.
• You may not have the required coverage with your existing test suites to find problems being introduced as a result of the changes being made.
• Although you may build up a library of regression tests over time, and this library can be quite extensive, it can also prove troublesome, as it contains both redundant test suites and test cases which themselves contain errors. There can be a huge overhead in maintaining this library of regression suites and test cases.
So, an unplanned approach to regression testing can lead to inefficient use of resources. If you take the time to plan your regression test program, there are a number of questions you need to ask:
\u2022 What do I need to test?
\u2022 How long is it going to take?
\u2022 What resources are required?
\u2022 What tools are required?
• Are there any other special requirements?
A Strategy for Designing and Executing an

The first question on this list is the most important one to answer, as it will help answer the remaining questions. Apart from being the most important question, it is also the most difficult to answer.
We are involved in sub-system and system testing of software releases for a Mobile Switching Centre (MSC). We test in both an MSC-only configuration and also in a full cellular system configuration with other network elements. Over the past 3 years the complexity of the features being developed for the MSC has increased tremendously as we move away from MSC-only features to features that span several network elements. Our regression testing is a mixture of automated (30%), semi-automated (30%) and manual (40%).
On our current release of MSC software, we decided to spend some time analysing the way we approached our regression testing. In general we had been taking an ad hoc approach to planning our regression test program for previous releases, and our analysis uncovered the following facts:
\u2022 We are dealing with a legacy system; there has been a lack of consistency in the way features developed over the years have been documented and controlled. This has led to a lack of traceability from existing test cases/suites to old feature/functional requirements.
• Consistently on all our releases, our Problem Report (PR) arrival rate showed a straight-line increase with no levelling off towards the end of our cycle, as shown in Figure 1. A 'normal' PR arrival rate is one where you are finding the majority of your problems early in your cycle, thus allowing them to be fixed early. This reduces the risk of impacting quality with fixes having to be made to the release late in the cycle, prior to customer deployment.
• We had an increasing number of regression tests per release, as we simply added new tests to cover new features developed on the previous release we had tested. This increasing number of tests had turned our regression cycle into a 12-week interval on average, and this was increasing with every release. In fact, over a 6-year period our regression cycle had increased from 6 weeks to the current average of 12 weeks.
• There was a lot of frustration among test engineers, as they were simply running regression tests for no apparent reason other than that we did it on the last release, so we must do it again! More often than not, these tests would not even find any problems, so the frustration increased.
• Over the years we have built up an extensive library of regression tests through the addition of tests to cover new functionality. Even with this library of tests, we can never be 100% sure that we have full coverage, as the method of selecting the regression tests was purely to try and cover the basic functionality of the MSC.
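The difference between a 'normal' arrival profile and the straight-line profile we observed can be sketched with a simple check. The weekly PR counts and the 50% threshold below are hypothetical, chosen only to illustrate the two shapes, not taken from our release data.

```python
# Sketch: does a PR arrival curve level off towards the end of the cycle?
# A 'normal' profile finds most problems early; a straight line does not.

def levels_off(weekly_counts, tail=4):
    """True if the PR arrival rate in the final weeks drops well below
    the rate in the early weeks (the 'normal' profile)."""
    head_rate = sum(weekly_counts[:tail]) / tail
    tail_rate = sum(weekly_counts[-tail:]) / tail
    return tail_rate < 0.5 * head_rate

healthy  = [12, 10, 9, 8, 5, 4, 2, 1]   # most problems found early
straight = [6, 6, 6, 6, 6, 6, 6, 6]     # the straight line we observed

print(levels_off(healthy))   # → True
print(levels_off(straight))  # → False
```

A rising cumulative curve that keeps the same slope to the end of the cycle is the warning sign: fixes are still being made late, just before customer deployment.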
We have been performing Escaped Defect Analysis and Root Cause Analysis on problems that escape to our customers, and this showed us a number of things in relation to our regression testing:
• We were good at testing new features and enhancements, but problems escaped in other areas that should have been covered by our regression testing.
\u2022 By simply re-running existing regression suites, we could not hope to catch most of the problems that were escaping. Regression suites often covered the basic functionality of a particular feature, so if a code change resulted in an impact outside of this basic functionality, we did not have the required coverage to catch the problem.
• Quite often, the solution to the Escaped Defect Analysis was just to add a test to provide coverage for the problem that escaped, thus increasing the number of regression tests even further. Clearly it would be better to try and detect the problem in the first place rather than adding a test just in case it re-occurred.
Once we had done the analysis, it was time for some change. Quite simply we could not continue the way we were going.
Due to the effort involved for older features and functionality, we could not go back and resolve the problem of poorly documented and controlled requirements and the lack of traceability, so we had to work around this in some way.
Why did our typical problem arrival rate show a straight line for all our releases? Some of it was due to churn on the release, i.e., features being added late in our cycle, causing new problems to be introduced and found late in our cycle, or sometimes not at all. However, the major finding was that we simply had no strategy in terms of the way we executed our regression tests: engineers simply took their suites, went into our labs and started executing. We decided that we needed a way of prioritising our tests and then executing based on that priority.
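The idea of executing by priority can be sketched as a simple ordering: run the highest-priority suites first so that serious problems surface early in the cycle. The suite names and the three priority labels below are illustrative assumptions, not our actual suite inventory.

```python
# Sketch of priority-driven execution order: highest priority runs first,
# and suites at the same level keep their original order (stable sort).

PRIORITY_ORDER = {"primary": 0, "secondary": 1, "optional": 2}

def execution_order(suites):
    """Return suite names sorted by priority level."""
    return [name for name, prio in
            sorted(suites, key=lambda s: PRIORITY_ORDER[s[1]])]

suites = [
    ("billing_basic",    "secondary"),
    ("call_forwarding",  "primary"),
    ("stats_reports",    "optional"),
    ("three_party_conf", "primary"),
]
print(execution_order(suites))
# → ['call_forwarding', 'three_party_conf', 'billing_basic', 'stats_reports']
```

Running in this order front-loads problem discovery, which is exactly what pushes the PR arrival curve towards the 'normal' shape.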
While looking at the issue of the increasing number of tests per release, it was clear that we had no regression selection criteria, except that we were trying to show that existing features and functionality still worked. We decided we needed some way of reducing the number of tests we were running by identifying the tests that were actually necessary to run. When testing new features, you have requirements from which to work in developing tests, and a methodology that can be followed to create these tests. For regression we had nothing, so there was a need to create some form of requirements, which we could then use to identify the required regression testing. To do this we needed specific information from the development organization on the changes they were implementing in the software, both for new features/enhancements and any other changes they were making (problem fixes, re-engineering work etc.).
We created a simple Regression Impacts Matrix that lists all existing features and areas of functionality. Each development team working on a new feature or enhancement was then asked to complete the matrix indicating the features and functionality both directly and indirectly impacted by their code changes. Table 1 shows a sample Regression Impacts Matrix.
It is quite simple in format and lists all existing features and functionality along the top row, with all new features on the release being listed in the first column. Where a new feature has an explicit impact on an existing feature, an 'X' is used to indicate the impact. Where the impact is not explicit, an 'R' is used to indicate a related impact.
Related impacts can be considered to be of lower priority, and tests for these features can be included in the Secondary phase (refer to Section 4). Where there are no impacts, the cell can be left blank.
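The matrix described above can be sketched as a mapping from each new feature to the existing features it touches, with 'X' and 'R' markers driving test selection. The feature names here are invented for illustration; Table 1 in the article shows the real format.

```python
# Sketch of a Regression Impacts Matrix: new feature -> {existing feature:
# marker}, where 'X' marks an explicit impact and 'R' a related impact.

matrix = {
    "multi_party_calling": {"call_forwarding": "X", "billing": "R"},
    "new_billing_format":  {"billing": "X", "stats": "R"},
}

def impacted(matrix, marker):
    """Existing features carrying the given marker across all new features."""
    hits = set()
    for impacts in matrix.values():
        hits.update(f for f, m in impacts.items() if m == marker)
    return hits

primary = impacted(matrix, "X")              # run these regression tests first
secondary = impacted(matrix, "R") - primary  # lower priority: Secondary phase

print(sorted(primary))    # → ['billing', 'call_forwarding']
print(sorted(secondary))  # → ['stats']
```

An existing feature marked both 'X' and 'R' (here, billing) lands in the primary set, which matches treating related impacts as the lower of the two priorities.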
We used an existing Test Impacts Checklist. The development engineers are required to complete this checklist for every submission of code they make to the software release. The checklist asks specific questions that the developer has to answer, and the test engineers can then use this information to identify regression impacts due to code changes not associated with a new feature or enhancement (i.e. fixes, re-engineering). In our case, the checklist was broken down into several areas as follows:
We identify 8 main functional areas (including Call Processing, Billing, Fault Management and Statistics), and the developer first has to indicate which area is being impacted by their code change. This gives the testers an idea of the type of regression test they might require.
Developers are also asked to describe the impact in their own words. This information helps further define the regression test that might be required.
Where testing involves call processing, developers are asked to identify the specific call scenarios that are impacted. Example scenarios include three-party conferencing, call forwarding, and hand-offs. Knowing the call scenarios impacted allows the tester to select these scenarios from our existing suites of regression tests.
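A checklist entry of this kind can be sketched as a small record that testers query when selecting regression scenarios. The field names, the example submission, and the scenario names are assumptions based on the description above, not the actual checklist format.

```python
# Sketch of a Test Impacts Checklist entry, filled in by a developer for
# each code submission (fixes and re-engineering included, not just features).

from dataclasses import dataclass, field

@dataclass
class TestImpactsChecklist:
    submission_id: str                # identifier of the code submission
    areas: set                        # main functional areas impacted
    description: str                  # impact in the developer's own words
    call_scenarios: list = field(default_factory=list)  # e.g. hand-offs

entry = TestImpactsChecklist(
    submission_id="fix-1234",
    areas={"call_processing"},
    description="Changed timer handling during call setup.",
    call_scenarios=["call_forwarding", "three_party_conferencing"],
)

def scenarios_to_regress(checklists):
    """Union of all impacted call scenarios, for test selection."""
    return sorted({s for c in checklists for s in c.call_scenarios})

print(scenarios_to_regress([entry]))
# → ['call_forwarding', 'three_party_conferencing']
```

Collecting entries like this per submission gives the testers a change-driven selection list instead of rerunning every suite from previous releases.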