
Software Best Practice 1

ESSI Practitioners' Reports


Springer-Verlag Berlin Heidelberg GmbH
Michael Haug Eric W. Olsen
Luisa Consolini (Eds.)

Software Quality
Approaches:
Testing, Verification,
and Validation
Software Best Practice 1

With 63 Figures and 30 Tables

Springer
Editors:

Michael Haug
Eric W. Olsen
HIGHWARE GmbH
Winzererstraße 46
80797 München, Germany
Michael@Haug.com
ewo@home.com

Luisa Consolini
GEMINI soc. cons. a r.l.
Via S. Serlio 24/2
40128 Bologna, Italy
luisa@gemini.it

ISBN 978-3-540-41784-2

Library of Congress Cataloging-in-Publication Data


Software best practice.
p. cm.
Includes bibliographical references and index.
1. Software quality approaches: testing, verification, and validation / M. Haug, E. W. Olsen, L. Consolini, eds. -- 2. Managing the change: software configuration and change management / M. Haug ... [et al.], eds. -- 3. Software management approaches: project management, estimation, and life cycle support / M. Haug ... [et al.], eds. -- 4. Software process improvement: metrics, measurement, and process modelling / M. Haug, E. W. Olsen, L. Bergman, eds.
ISBN 978-3-540-41784-2 ISBN 978-3-642-56612-7 (eBook)
DOI 10.1007/978-3-642-56612-7
1. Software engineering. I. Haug, Michael, 1951-
QA76.758 .S6445 2001
005.1--dc21 2001041181
This work is subject to copyright. All rights are reserved, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data
banks. Duplication of this publication or parts thereof is permitted only under the provisions
of the German copyright law of September 9, 1965, in its current version, and permission
for use must always be obtained from Springer-Verlag. Violations are liable for prosecution
under the German Copyright Law.

http://www.springer.de
© Springer-Verlag Berlin Heidelberg 2001
Originally published by Springer-Verlag Berlin Heidelberg New York 2001

The use of general descriptive names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
Cover design: design & production GmbH, Heidelberg
Typesetting: Camera-ready by editors
Printed on acid-free paper SPIN: 10832653 45/3142 ud - 543210
Foreword

C. Amting
Directorate General Information Society, European Commission, Brussels

Under the 4th Framework of European Research, the European Systems and Soft-
ware Initiative (ESSI) was part of the ESPRIT Programme. This initiative funded
more than 470 projects in the area of software and system process improvements.
The majority of these projects were process improvement experiments carrying
out and taking up new development processes, methods and technology within the
software development process of a company. In addition, nodes (centres of exper-
tise), European networks (organisations managing local activities), training and
dissemination actions complemented the process improvement experiments.
ESSI aimed at improving the software development capabilities of European
enterprises. It focused on best practice and helped European companies to develop
world class skills and associated technologies to build the increasingly complex
and varied systems needed to compete in the marketplace.
The dissemination activities were designed to build a forum, at European level,
to exchange information and knowledge gained within process improvement ex-
periments. Their major objective was to spread the message and the results of
experiments to a wider audience, through a variety of different channels.
The European Experience Exchange (EUREX) project has been one of these dissemination
activities within the European Systems and Software Initiative. EUREX has
collected the results of practitioner reports from numerous workshops in
Europe and presents, in this series of books, the results of Best Practice achievements
in European companies over the last few years.
EUREX assessed, classified and categorised the outcome of process improvement
experiments. The theme-based books will present the results of the particular
problem areas. These reports are designed to help other companies facing software
process improvement problems.
The results of the various projects collected in these books should encourage
many companies facing similar problems to start some improvements on their
own. Within the Information Society Technology (IST) programme under the 5th
Framework of European Research, new take up and best practices activities will
be launched in various Key Actions to encourage the companies in improving
their business areas.
Preface

M. Haug
HIGHWARE, Munich

In 1993, I was invited by Rainer Zimmermann and David Talbot to participate in


the industrial consultation group for the then-new ESSI initiative. Coming from a
Software Engineering background and having been responsible for industrial
software production for more than 20 years, I was fascinated by the idea of tackling
the ubiquitous software quality problem in a fresh new way: helping not
only a particular organisation to improve its software process, but creating the
framework for an exchange of the experience gained among those organisations
and beyond, to spread this experience throughout the European Software Industry.
While serving as an evaluator and reviewer to the Commission within the ESSI
initiative, I had the opportunity to have a more or less superficial look at more
than 100 Process Improvement Experiments (PIEs) at workshops, conferences and
reviews. Consequently, the desire to collect and consolidate information about and
experience from all of the more than 300 PIEs in a more organised way became
imminent. In short, the idea for EUREX was born.
EUREX is an ESSI dissemination project. The budget limitations applicable to
such projects did not allow us to conduct reviews or interviews of all of the more
than 300 projects. Therefore, a distributed and staged approach was taken: a set of
regional workshops became the platform to collect the information. The results of
these 18 workshops held in Europe over a period of two years, together with con-
tributions from representative PIEs and with expert articles rounding out the ex-
perience reports, is now in your hands: a series of books focussing on the central
problem domains of Software Process Improvement.
Each of the books concentrates on a technical problem domain within the soft-
ware engineering process, e.g. software testing, verification and quality manage-
ment in Vol. 1. All of the books have a common structure:
Part I, SPI, ESSI, EUREX, describes the context of the European Systems and
Software Initiative and the EUREX project. While Part I is similar in all books, the
problem domains are differentiated for the reader. It consists of the chapters:
1 Software Process Improvement
2 The EUREX Project
3 The EUREX Taxonomy.

In Part II we present the collected findings and experiences of the process im-
provement experiments that dealt with issues related to the problem domain ad-
dressed by the book. Part II consists of the chapters:
4 Perspectives
5 Resources for Practitioners
6 Experience Reports
7 Lessons from the EUREX Workshops
8 Significant Results
Part III offers summary information for all the experiments that fall into the
problem domain. These summaries, collected from publicly available sources,
provide the reader with a wealth of information about each of the large number of
projects undertaken. Part III includes the chapters:
9 Table of PIEs
10 Summaries of Process Improvement Experiment Reports
A book editor managed each of the books, compiling the contributions and
writing the connecting chapters and paragraphs. Much of the material originates in
papers written by the PIE organisations for presentation at EUREX workshops or
for public documentation like the Final Reports. Whenever an author could be
identified, we attribute the contributions to him or her. If it was not possible to
identify a specific author, the source of the information is provided. If a chapter is
without explicit reference to an author or a source, the book editor wrote it.
Many people contributed to EUREX, more than I can express my appreciation
to in such a short notice. Representative for all of them, my special thanks go to
the following teams: David Talbot and Rainer Zimmermann (CEC) who made the
ESSI initiative happen; Mechthild Rohen, Brian Holmes, Corinna Amting and
Knud Lonsted, our Project Officers within the CEC, who accompanied the project
patiently and gave valuable advice; Luisa Consolini and Elisabetta Papini, the
Italian EUREX team, Manu de Uriarte, Jon Gomez and Iñaki Gomez, the Spanish
EUREX team, Gilles Vallet and Olivier Becart, the French EUREX team, Lars
Bergman and Terttu Orci, the Nordic EUREX team and Wilhelm Braunschober,
Bernhard Kölmel and Jörn Eisenbiegler, the German EUREX team; Eric W. Olsen
has patiently reviewed numerous versions of all contributions; Carola, Sebastian
and Julian have spent several hundred hours on shaping the various contributions
into a consistent presentation. Last but certainly not least, Ingeborg Mayer and
Hans Wössner continuously supported our efforts with their professional publishing
know-how; Gabriele Fischer and Ulrike Drechsler patiently reviewed the
many versions of the typescripts.
The biggest reward for all of us will be, if you - the reader - find something in
these pages useful to you and your organisation, or, even better, if we motivate
you to implement Software Process Improvement within your organisation.

Opinions in these books are expressed solely on the behalf of the authors. The European
Commission accepts no responsibility or liability whatsoever for the content.
Table of Contents

Part I SPI, ESSI, ~UR~X 1


1 Software Process Improvement 3
1.1 Introduction 3
1.2 Objectives - Scope of the Initiative 3
1.3 Strategy 4
1.4 Target Audience 5
1.5 Dimensions of Software Best Practice 5
1.6 European Dimension 7
1.7 Types of Projects 8
1.7.1 Stand Alone Assessments 8
1.7.2 Process Improvement Experiments (PIEs) 9
1.7.3 Application Experiments 11
1.7.4 Dissemination Actions 11
1.7.5 Experience/User Networks 12
1.7.6 Training Actions 13
1.7.7 ESSI PIE Nodes (ESPINODEs) 13
1.7.8 Software Best Practice Networks (ESBNETs) 14
2 The EUREX Project 17
2.1 Target Audience and Motivation 17
2.2 Objectives and Approach 19
2.3 Partners 20
2.4 Related Dissemination and Training Actions 20
2.4.1 Software Improvement Case Studies Initiative (SISSI) 20
2.4.2 ESPITI 22
3 The EUREX Taxonomy 25
3.1 Analysis and Assessment of PIEs 25
3.2 Classification into Problem Domains 25
3.2.1 First Regional Classification 26
3.2.2 Result of First Regional Classification 26
3.2.3 Consolidation and Iteration 26
3.2.4 Update of Regional Classification 26
3.2.5 Mapping of Attributes 27
3.2.6 Review of Classification and Mapping into Subject
Domains 27

3.2.7 Subject Domains Chosen 27


3.2.8 Unclassified PIEs 29
3.3 Testing, Verification, and Quality Management 30
Part II Testing, Verification, and Quality Management 31
4 Perspectives 33
4.1 Introduction to the Subject Domain 33
4.2 Software Verification & Validation Introduced 36
4.2.1 Verification & Validation with Respect to the Product
Development Process 36
4.2.2 The Main Weaknesses of the Testing Process 38
4.2.3 An Improved Process Model 40
4.2.4 How to Improve: the Road to Process Improvement 43
4.2.5 Cost/Benefit Analysis 45
4.3 Testware 46
4.3.1 A Testing Definition 46
4.3.2 Customer Needs 46
4.3.3 Types of Testing 47
4.3.4 Debugging 48
4.3.5 Other Techniques 49
4.3.6 Tools 49
4.3.7 Testware 54
4.3.8 Benefits and Limits 55
4.3.9 References 56
4.4 Classic Testing Mistakes 57
4.4.1 Theme One: The Role of Testing 58
4.4.2 Theme Two: Planning the Testing Effort 62
4.4.3 Theme Three: Personnel Issues 65
4.4.4 Theme Four: The Tester at Work 68
4.4.5 Theme Five: Technology Run Rampant 74
4.4.6 Acknowledgements 79
4.4.7 References 80
5 Resources for Practitioners 83
5.1 Methods and Tools 83
5.2 Books 84
5.2.1 Introductory Reference Books on Software Quality 84
5.2.2 Classics on Testing 84
5.2.3 Key Books on Testing 84
5.2.4 Key Books on Inspections 85
5.3 Organisations 85
5.4 Important Conferences 85
5.5 Web Sites 86
6 Experience Reports 87

6.1 PI3 Project Summary 90


6.1.1 Participants 91
6.1.2 Business Motivation and Objectives 91
6.1.3 The Experiment 92
6.1.4 Impact and Experience 93
6.1.5 References 95
6.2 PROVE Project Summary 97
6.2.1 Participants 97
6.2.2 Business Motivation and Objectives 98
6.2.3 The Experiment 99
6.2.4 Impact and Experience 102
6.3 TRUST Project Summary 107
6.3.1 Participants 107
6.3.2 Business Motivation and Objectives 107
6.3.3 The Experiment 109
6.3.4 Impact and Experience 116
6.4 FCI-STDE Project Summary 119
6.4.1 Participants 119
6.4.2 Business Motivation and Objectives 120
6.4.3 The Experiment 121
6.4.4 Impact and Experience 122
6.5 TESTLIB Project Summary 126
6.5.1 Participants 126
6.5.2 Business Motivation and Objectives 127
6.5.3 The Experiment 128
6.5.4 Impact and Experience 129
6.6 ATECON Project Summary 132
6.6.1 Participants 132
6.6.2 Business Motivation and Objectives 133
6.6.3 The Experiment 133
6.6.4 Impact and Experience 134
6.7 GUI-Test Project Summary 136
6.7.1 Participants 136
6.7.2 Business Motivation and Objectives 137
6.7.3 The Experiment 137
6.7.4 Impact and Experience 138
7 Lessons from the EUREX Workshops 141
7.1 Second Italian Workshop 141
7.1.1 Introduction 141
7.1.2 The Workshop Experts 142
7.1.3 Testing Web-based Applications 143
7.1.4 Workshop Conclusions 160
7.1.5 Workshop Discussions 163

7.2 Third Spanish Workshop 169


7.2.1 Introduction 169
7.2.2 Expert Presentation 170
7.2.3 Workshop Discussion and Conclusions 181
7.3 Pilot German Workshop 190
7.3.1 Introduction 190
7.3.2 Expert Presentation 191
7.3.3 Workshop Discussion and Conclusions 193
7.4 Lessons Learned from the Workshops 194
7.4.1 People Issues 195
7.4.2 Business Issues 195
7.4.3 Technical Issues 197
7.4.4 Final Conclusions 198
8 Significant Results 199
8.1 Barriers Preventing Change of Practices 200
8.1.1 Ignorance of the Software Product Quality Methods 200
8.1.2 Uncertainty about the Return on Investment and Fear of
Raising Development Costs to an Unacceptable Level 201
8.1.3 Still Not Enough Pressure on Software Producers to
Increase Quality Standards 201
8.2 Best Practices Recommended by Experts 202
8.2.1 Investing in the Acquisition of New Skills 202
8.2.2 Formalising the Verification Process and Integrating it
with the Development Process 202
8.2.3 Investing Carefully but Inevitably in Automation 203
8.2.4 Measuring Results and Return on Investment 204
8.3 Revisiting the Classic Testing Mistakes 204
8.3.1 Mistakes in the Role of Testing 204
8.3.2 Mistakes in Planning the Complete Testing Effort 205
8.3.3 Mistakes in Personnel Issues 205
8.3.4 Mistakes in the Tester-at-Work 205
8.3.5 Mistakes in Test Automation 206
8.3.6 Mistakes in Code Coverage 206
8.4 The EUREX Process 206
Part III Process Improvement Experiments 209
9 Table of PIEs 211
10 Summaries of PIE Reports 215
10.1 ACIQIM 21757 215
10.2 AERIDS 10965 216
10.3 ALCAST 10146 217
10.4 AMIGO 21222 219
10.5 ARETES 24148 220

10.6 ASTEP 23860 221


10.7 ASTERIX 23951 222
10.8 ATECON 10464 223
10.9 ATM 21823 224
10.10 ATOS 21530 225
10.11 AUTOMA 10564 227
10.12 AUTOQUAL 24143 228
10.13 AVAL 21362 230
10.14 AVE 21682 232
10.15 BEPTAM 21284 233
10.16 CALM 21826 234
10.17 CITRATE 23671 235
10.18 CLEANAP 21465 236
10.19 CLISERT 24206 238
10.20 CONFITEST 24362 239
10.21 DATM-RV 21265 240
10.22 DOCTES 21306 242
10.23 EMINTP 21302 243
10.24 ENG-MEAS 21162 245
10.25 EXOTEST 24266 246
10.26 FCI-STDE 24157 247
10.27 FI-TOOLS 21367 248
10.28 GRIPS 23887 249
10.29 GUI-TEST 24306 250
10.30 IDEA 23833 251
10.31 IMPACTS2 24078 252
10.32 INCOME 21733 253
10.33 MAGICIAN 23690 254
10.34 METEOR 21224 256
10.35 MIST 10228 256
10.36 ODP 10788 257
10.37 OMP/CAST 24053 258
10.38 PCFM 23743 259
10.39 PET 10438 260
10.40 PI3 21199 262
10.41 PIE-TEST 24344 263
10.42 PREV-DEV 23705 264
10.43 PROVE 21417 265
10.44 QUALITAS 23834 266
10.45 RESTATE 23978 268
10.46 SDI-WAN 10494 269
10.47 SIMTEST 10824 270
10.48 SMUIT 21612 271
10.49 SPIDER 21394 273

10.50 SPI 23750 274


10.51 SPIRIT 21799 275
10.52 STOMP 24193 276
10.53 STUT-IU 21160 277
10.54 SWAT 23855 278
10.55 TEPRIM 21385 280
10.56 TESTART 23683 281
10.57 TESTING 21170 283
10.58 TESTLIB 21216 284
10.59 TRUST 23754 285
10.60 USST 23843 286
10.61 VERA 23732 287
10.62 VERDEST 21712 288
10.63 VISTA 24153 289
10.64 STAR 27378 291
Index 295
List of Contributors

Gualtiero Bazzana
Onion
gb@onion.it

Luisa Consolini
Gemini
luisa@gemini.it

S. Daiqui
Deutsche Forschungsanstalt für Luft- und Raumfahrt e.V.
sylvia.daiqui@dlr.de

Michael Haug
HIGHWARE
Michael_Haug@compuserve.com

P. Hodgson
Procedimientos-Uno SL
phodgson@procuno.pta.es

M. del Coso Lampreabe
ALCATEL ESPAÑA
delcoso@alcatel.es

T. Linz
imbus GmbH
info@imbus.de

F. Lopez
Procedimientos-Uno SL
flopez@procuno.pta.es

Brian Marick
Testing Foundations
marick@testing.com

Fabio Milanese
Compuware
Fabio_Milanese@compuware.com

Eric W. Olsen
HIGHWARE
ewo@home.com

Michele Paradiso
IBM Semea Sud
MParadiso@it.ibm.com

B. Quaquarelli
Think3 (formerly CAD.LAB)
baqu@think3.it

J.C. Sanchez
Integracion y Sistemas de Medida, SA
integrasys@compuserve.com

A. Silva
Agusta
asilva.mardib@iol.it
Part I

SPI, ESSI, EUREX


1 Software Process Improvement
A European View

1.1 Introduction

Enterprises in all developed sectors of the economy - not just the IT sector - are
increasingly dependent on quality software-based IT systems. Such systems sup-
port management, production, and service functions in diverse organisations.
Furthermore, the products and services now offered by the non-IT sectors, e.g., the
automotive industry or the consumer electronics industry, increasingly contain a
component of sophisticated software. For example, televisions require in excess of
half a Megabyte of software code to provide the wide variety of functions we have
come to expect from a domestic appliance. Similarly, the planning and execution
of a cutting pattern in the garment industry is accomplished under software con-
trol, as are many safety-critical functions in the control of, e.g., aeroplanes, eleva-
tors, trains, and electricity generating plants. Today, approximately 70% of all
software developed in Europe is developed in the non-IT sectors of the economy.
This makes software a technological topic of considerable significance. As the
information age develops, software will become even more pervasive and trans-
parent. Consequently, the ability to produce software efficiently, effectively, and
with consistently high quality will become increasingly important for all industries
across Europe if they are to maintain and enhance their competitiveness.

1.2 Objectives - Scope of the Initiative

The goal of the European Systems and Software Initiative (ESSI) was to promote
improvements in the software development process in industry, through the take-
up of well-founded and established - but insufficiently deployed - methods and
technologies, so as to achieve greater efficiency, higher quality, and greater econ-
omy. In short, the adoption of Software Best Practice.

All material presented in Chapter 1 was taken from publicly available information issued
by the European Commission in the course of the European Systems and Software
Initiative (ESSI). It was compiled by the main editor to provide an overview of this
programme.

M. Haug et al. (eds.), Software Quality Approaches: Testing, Verification, and Validation
© Springer-Verlag Berlin Heidelberg 2001

The aim of the initiative was to ensure that European software developers in
both user and vendor organisations continue to have the world class skills, the
associated technology, and the improved practices necessary to build the increas-
ingly complex and varied systems demanded by the market place. The full impact
of the initiative for Europe will be achieved through a multiplier effect, with the
dissemination of results across national borders and across industrial sectors.

1.3 Strategy

To achieve the above objectives, actions have been supported to:


• Raise awareness of the importance of the software development process to the
competitiveness of all European industry.
• Demonstrate what can be done to improve software development practices
through experimentation.
• Create communities of interest in Europe working to a common goal of
improving software development practices.
• Raise the skill levels of software development professionals in Europe.

(Figure: three complementary actions: create communities of common interest; demonstrate benefits via experimentation; raise the skills of professionals.)

Fig. 1.1 A focused strategy for achieving Best Practice



1.4 Target Audience


(Who can participate, Who will benefit)

Any organisation in any sector of the economy, which regards generation of soft-
ware to be part of its operation, may benefit from the adoption of Software Best
Practice. Such a user organisation is often not necessarily classified as being in the
software industry, but may well be an engineering or commercial organisation in
which the generation of software has emerged as a significant component of its
operation. Indeed as the majority of software is produced by organisations in the
non-IT sector and by small and medium sized enterprises (SMEs), it is these two
groups who are likely to benefit the most from this initiative.

(Figure: improvements to the software development process yield better quality, greater efficiency and better value for money, leading to greater customer satisfaction and competitive advantage.)

Fig. 1.2 The benefits of Software Best Practice

In addition to the user organisations participating directly in the initiative, soft-


ware vendors and service providers also stand to benefit, as demand for their
methodologies, tools and services is stimulated and valuable feedback is given on
the strengths and weaknesses of their offerings.

1.5 Dimensions of Software Best Practice

Software Best Practice activities focus on the continuous and stepwise improve-
ment of software development processes and practices. Software process im-
provement should not be seen as a goal in itself but must be clearly linked to the
business goals of an organisation. Software process improvement starts with
addressing the organisational issues. Experiences in the past have shown that before
any investments are made in true technology upgrades (through products like tools
and infrastructure computer support) some critical process issues need to be ad-
dressed and solved. They concern how software is actually developed: the meth-
odology and methods, and, especially, the organisation of the process of develop-
ment and maintenance of software.
Organisational issues are more important than methods, and improving methods
is, in turn, more important than introducing the techniques and tools to support
them.
Finding the right organisational framework, the right process model, the right
methodology, the right supporting methods and techniques and the right mix of
skills for a development team is a difficult matter and a long-term goal of any
process improvement activity. Nevertheless, it is a fundamental requirement for
the establishment of a well-defined and controlled software development process.

(Figure: the layers of a successful SPI programme, driven by business and people: 1. Business issues (market, customers, competition, ...) and people issues (skills, culture, teamwork, ...); 2. Process; 3. Technical approach (methods, procedures, ...); 4. Technical support (tools, computers, ...).)

Fig. 1.3 Anatomy of a successful SPI programme

Software development is a people process and due consideration should be


given to all the players involved. Process improvement and implementation con-
cerns people and needs to take into account all people related aspects (human
factors). These are orthogonal to the technology and methodology driven ap-
proaches and are crucial to the success of adopting best practice.
Successful management of change includes staff motivation, skilling and
promotion of the positive contributions that staff can make.
The people aspects cover all the different groups which have an input to the
software development process including Management, and Software Engineers.
In order to ensure an appropriate environment for the successful adherence to a
total quality approach it is imperative that Senior Management are fully aware of
all the issues. Their commitment and involvement are crucial to the successful

implementation of the improvement process and it might be necessary to raise


their awareness regarding this issue.
It is important to identify clear milestones that will enable the software devel-
oper to measure progress along the road of software improvement. Certification
through schemes such as ISO 9000, while not an end in itself, can play a valuable
role in marking and recognising this progress.

1.6 European Dimension

The objectives of Software Best Practice can be accomplished by understanding


and applying the state-of-the-art in software engineering, in a wide range of indus-
tries and other sectors of the economy, taking into account moving targets and
changing cultures in this rapidly evolving area. The full impact for Europe will
then be achieved by a multiplier effect, with the dissemination of results across
national borders and across industrial sectors.
The definition of best practice at the European level has three main advantages.
Firstly, there is the matter of scale. Operating on a European-wide basis offers the
possibility to harness the full range of software development experience that has
been built up across the full spectrum of industry sectors in addition to offering
exposure to the widest selection of specialist method and tool vendors. In the
second place, it maximises the possibility to reduce duplication of effort. Finally,
it offers the best possibility to reduce the present fragmentation of approaches and,
at the same time, to provide a more coherent and homogeneous market for well-
founded methods and tools.
Moreover, as we move towards the Information Society, we need to develop
and build the technologies necessary to create the Information Infrastructure (such
as is envisaged in the Commission White Paper on "Growth, Competitiveness and
Employment"); a dynamic infrastructure of underlying technologies and services
to facilitate fast and efficient access to information, according to continually
changing requirements. Within this context, software is seen as a major enabling
technology and the way in which we develop software is becoming a key factor
for industrial competitiveness and prosperity.
All of the above factors can be enhanced through the creation and use of stan-
dards, including de-facto standards for "best practice" and, indeed, standards are
vital in the long term. However, the proposed actions should not, at this stage of
evolving standards, be restricted to one particular standard. Furthermore, the ac-
tions cannot wait for a full and accepted set to be established before being able to
implement improvement. Nevertheless, a close look at the ISO-SPICE initiative
and active contribution to it is suggested.

1.7 Types of Projects

The European Commission issued three Calls for Proposals for Software Best
Practice in the Fourth Framework Programme in the years 1993, 1995 and 1996.
The first call was referred to as the "ESSI Pilot Phase". The aim was to test the
perceived relevance of the programme to its intended audience and the effective-
ness of the implementation mechanisms. Before the second call in 1995 a major
review and redirection took place. Following the revision of the ESPRIT Work
programme in 1997, a further call was issued, the results of which are not
reviewed in this book. The four calls varied slightly in their focus. In the
following, all types of projects supported by the ESSI initiative will be presented.

(Figure: lines of complementary action: Assessment; Process Improvement Experiment; Dissemination; Training; Experience Networks.)

Fig. 1.4 Lines of complementary action

1.7.1 Stand Alone Assessments

The main objective of the Stand Alone Assessments action was to raise the
awareness of user organisations to the possibilities for improvement in their software
development process, as well as to give the momentum for initiating the improvement
process. Assessments were targeted particularly at organisations at the lower
levels of software development maturity.

Stand Alone Assessments have been called only in the year 1995.
It was expected that assessments would stimulate the pursuit of quality through
continuous improvement of the software development process.
An underlying methodology was needed to carry out an assessment. This meth-
odology had to recognise that software development is governed by the processes
which an organisation uses to capitalise upon the potential talent of its employees
(people) in order to produce competitive, top quality, software systems and ser-
vices (products).
Most assessment methods are based on questionnaires and personnel inter-
views. An assessment resulted in the identification of an organisation's strengths
and weaknesses, provided a comparison with the rest of the industry, and was
accompanied by a series of recommendations on how to address the weak aspects
of the software development process, from a technological and organisational
point of view.
No single standard methodology was advocated; however, the adopted ap-
proach had to be a recognised assessment methodology, such as BOOTSTRAP,
TickIT, etc.
The following types of assessment have been envisaged:
Self-assessments, which were conducted if the organisation had the required re-
source capacity to allow it to absorb the load of conducting the assessment. In this
case, it was expected that an internal assessment team was set up, trained in the
selected methodology, and that it carried out the assessment according to an
agreed schedule. This type of assessment may have been conducted with the support
of the methodology provider or under the guidance of external assessors.
Assessments carried out by external assessors. The organisation was expected
to select an external assessor who conducted the assessment. Again, an internal
assessment team was expected to be set up to collaborate with the assessors.
Both types of assessment had to cater for measuring the organisation's existing
situation, positioning the organisation relatively to the rest of the industry in terms
of software development process and allowing the organisation to plan and priori-
tise for future improvements.

1.7.2 Process Improvement Experiments (PIEs)

PIEs are aimed at demonstrating software process improvement. These followed a
generic model and demonstrated the effectiveness of software process improvement
experiments on an underlying baseline project that is tackling a real development
need for the proposing organisation.

Process Improvement Experiments have been called in the years 1995, 1996 and 1997.
As the project type "Application Experiment" can be considered the predecessor of PIEs,
it is legitimate to say that PIEs have been subject to all ESSI calls and have formed not
only the bulk of projects but also the "heart" of the initiative.
Process Improvement Experiments (PIEs) formed the bulk of the Software Best
Practice initiative. Their aim was to demonstrate the benefits of software process
improvement through user experimentation. The results had to be disseminated
both internally within the user organisations to improve software production and
externally to the wider community to stimulate adoption of process improvement
at a European level.
The emphasis was on continuous improvement through small, stepped actions.
During a PIE, a user organisation undertook a controlled, limited experiment in process improvement, based on an underlying baseline project. The baseline project was a typical software project undertaken by the user organisation as part of its normal business, and the results of the experiment should therefore be replicable.

[Figure: Dissemination and the PIE stages - Analysis of current situation, Experimentation, Analysis of final situation, Next Stage - drawing on the underlying Baseline Project]

Fig. 1.5 A PIE in relation to an underlying baseline project

The introduction of a Configuration Management System, improvements to the design documentation system, the use of a Computer Aided Design (CAD) tool, the application of Object Oriented Programming techniques, the development of a library for software re-use and the introduction of metrics are some examples of possible improvement steps for Software Best Practice and the focus of a PIE.
It was expected that a PIE was carried out as part of a larger movement within
the user organisation towards process improvement. Participants were expected to
have considered their strengths and weaknesses, and to have at least an idea of the
general actions required. They also needed to demonstrate that they were aware of
quality issues and were considering the people aspects of their actions.

Dissemination of the results of the experiment, from a software engineering and business point of view, to the wider community was an essential aspect of a PIE and was undertaken with the support of the Software Best Practice Dissemination Actions.

1.7.3 Application Experiments4

These experiments were targeted at building up a comprehensive set of examples to show that the adoption of improved software development practices was both possible and had clear industrial benefits. The experiments involved the introduction of state-of-the-art software engineering (e.g. management practices, methodologies, tools) into real production environments that address specific applications, and then evaluating the resulting impact.
Within the context of this book (and the project EUREX) these Application Experiments have been treated like PIEs, i.e. their specific results have been included.

1.7.4 Dissemination Actions5, 6

Dissemination Actions aimed at raising awareness and promoting the adoption of software best practice by industry at large. They provided software producing organisations with information concerning the practical introduction of software best practice, how it can contribute to meeting business needs, and how those organisations can benefit: particularly, by showing the real-life business benefits - and costs - in a way which could be of interest to companies intending to address related problems.
The Dissemination Actions widely disseminated Software Best Practice informa-
tion by making it available and packaging it in a form suitable for "focused target
audiences":
• The experience gained by the participants in PIEs (Process Improvement Ex-
periments): experiences and lessons learned which could be of interest to indus-
try at large.
• Software Best Practice material and experiences available world-wide. For
example, valuable and generally useful software engineering material which is
representative of a class of processes, methodologies, assessment methods,
tools, etc. Relevant world-wide experiences.

4 Application Experiments have only been called in 1993. See also the footnote to Process Improvement Experiments.
5 The ESSI project EUREX which resulted in this book was such a Dissemination Action.
6 Dissemination Actions have been called in 1993, 1995 and 1996.

[Figure: sources of information (Worldwide Info, the Software Best Practice Library) are made visible - as actual information or pointers - to focused target audiences]
Fig. 1.6 ESSI Dissemination Framework

1.7.5 Experience/User Networks7

There was opportunity for networks of users with a common interest to pursue a specific problem affecting the development or use of software. Experience/User Networks mobilised groups of users at a European level and provided them with the critical mass necessary to influence their suppliers and the future of the software industry through the formulation of clear requirements. A network had to be trans-national, with users from more than one Member or Associated State.
By participating in an Experience/User Network, a user organisation helped to ensure that a particular problem - with which it was closely involved - was addressed and that it was able to influence the choice of proposed solution.
Software suppliers (methodologies, tools, services, etc.) and the software industry as a whole benefited from Experience/User Networks by receiving valuable feedback on the strengths and weaknesses of their current offerings, together with information on what is additionally required in the marketplace.
7 Experience/User Networks have only been called in 1995.

1.7.6 Training Actions8

Training actions have been broad in scope and covered training, skilling and
education for all groups of people involved - directly or indirectly - in the devel-
opment of software. In particular, training actions aimed at:
• increasing the awareness of senior managers as to the benefits of software pro-
cess improvement and software quality
• providing software development professionals with the necessary skills to de-
velop software using best practice
Emphasis had been placed on actions which served as a catalyst for further
training and education through, for example, the training of trainers. In addition,
the application of current material - where available and appropriate - in a new or
wider context was preferred to the recreation of existing material.

1.7.7 ESSI PIE Nodes (ESPINODEs)9

The primary objective of an ESPINODE was to provide support and assistance, on a regional basis, to a set of PIEs in order to stimulate, support, and co-ordinate activities. ESPINODEs worked closely with local industry and aimed particularly at facilitating the exchange of practical information and experience between PIEs, providing project assistance and technical and administrative support, and exploiting synergies.
On a regional level, an ESPINODE provided a useful interface between the
PIEs themselves, and between the PIEs and local industry. This included improv-
ing and facilitating access to information on ESSI/PIE results, and raising interest
and awareness of local companies (notably SMEs) to the technical and business
benefits resulting from software process improvement conducted in the PIEs.
At the European level, an ESPINODE exchanged information and experience
with other ESPINODEs, in order to benefit from the transfer of technology, skills
and know-how; from economies of scale and from synergies in general - thus
creating a European network of PIE communities.

8 Training Actions have been called in 1993 and 1996. Whereas the projects resulting from the call in 1996 were organised as separate projects building the ESSI Training Cluster ESSItrain, the result of the call in 1993 was one major project, ESPITI, which is described in chapter 2.3.2.
9 ESSI PIE Nodes have only been called in 1997.

Fig. 1.7 ESPINODE collaboration model

1.7.8 Software Best Practice Networks (ESBNETs)10

The objective of an ESBNET was to implement small-scale software best practice related activities on a regional basis, but within the context of a European network. A network in this context was simply a group of organisations, based in different countries, operating together to implement an ESBNET project, according to an established plan of action, using appropriate methods, technologies and other appropriate support. By operating on a regional level, it was expected that the specific needs of a targeted audience would be better addressed. The regional
level was complemented by actions at European level, to exploit synergies and
bring cross-fertilisation between participants and their target audiences. A network
had a well defined focus, rather than being just a framework for conducting a set
of unrelated, regional software best practice activities.
The two ESSI tasks newly introduced in the Call for Proposals in 1997 - ESPINODEs and ESBNETs - aimed to continue and build upon the achievements of the initiative so far, but on a more regional basis. ESPINODEs aimed with first priority to provide additional support to PIEs, whilst ESBNETs aimed to integrate small-scale software best practice actions of different types implemented on a regional basis, with an emphasis on the non-PIE community.
By operating on a regional level, it was expected that ESPINODEs and ESBNETs would be able to tailor their actions to the local culture, delivering the message and operating in the most appropriate way for the region. Further, it was expected that such regional actions would be able to penetrate much further into the very corners of Europe, reaching a target audience which is much broader and

10 Software Best Practice Networks have only been called in 1997.

probably less experienced in dealing with European initiatives. Such an approach should be of particular interest to SMEs and other organisations outside the traditional IT sector, for which it is perhaps difficult to deal directly with an organisation based in a different country, due to - for example - a lack of resources, or cultural and language reasons.

[Figure: Regional Support within European Networks - disseminate the results beyond those directly involved in ESSI; ensure that projects act as a 'catalyst' for further action; increase participation in ESSI; reach organisations never involved before]

Fig. 1.8 ESPINODEs and ESBNETs


2 The EUREX Project

M. Haug, E.W. Olsen


HIGHWARE, Munich

The European Experience Exchange project (EUREX) was conceived, proposed, and carried out as an ESSI Dissemination Action (see Chapter 1). The overall objective of EUREX was to evaluate the experiences of several hundred ESSI Process Improvement Experiments (PIEs) and to make this experience accessible to a broad European audience in a convenient form. In particular, the goal was to collect and make available to interested practitioners information about Software Best Practice and its introduction in specific problem domains.
In the following sections, we briefly review the history of the EUREX project.

2.1 Target Audience and Motivation

Over 70% of the organisations that participated in events organised during the course of the ESPITI project (see section 1.3.2 below) were Small or Medium Enterprises (SMEs), and many of them had substantially fewer than 250 employees. This response rate demonstrated a significant interest on the part of SMEs in finding out more about Software Process Improvement (SPI). Therefore, the primary target audience for EUREX was those European SMEs, and small teams in non-IT organisations, engaged in the activity of developing software. Within these organisations, the focus was on management and technical personnel in a position to make decisions to undertake process improvement activities.
The ESPITI User Survey presents a clear picture of the needs and requirements
of SMEs concerning software process improvement. For example, 25% of those
who responded requested participation in working groups for experience ex-
change. However, SMEs are faced with many difficulties when it comes to trying
to implement improvement programmes.
For example, SMEs are generally less aware than larger companies of the bene-
fits of business-driven software process improvement. It is perceived as being an
expensive task and the standard examples that are quoted in an attempt to con-
vince them otherwise are invariably drawn from larger U.S. organisations and
therefore bear little relevance for European SMEs. ESSIgram No 11 also reported that "peer review of experiment work in progress and results would be helpful."

M. Haug et al. (eds.), Software Quality Approaches: Testing, Verification, and Validation
© Springer-Verlag Berlin Heidelberg 2001

Thus, SMEs need to see success among their peers, using moderate resources,
before they are prepared to change their views and consider embarking upon SPI
actions.
For those SMEs that are aware of the benefits of SPI, there are frequently other
inhibitors that prevent anything useful being accomplished. Many SMEs realise
that they should implement software process improvement actions but do not
know how to do this. They do not have the necessary skills and knowledge to do it
themselves and in many cases they do not have the financial resources to engage
external experts to help them. Consequently, SPI actions get deferred or cancelled
because other business priorities assume greater importance. Even those SMEs
that do successfully initiate SPI programmes can find that these activities are not
seen through to their natural completion stage because of operational or financial
constraints.
Many of the concerns about the relevance of SPI for SMEs were addressed by EUREX in a series of workshops in which speakers from similarly characterised companies spoke about their experiences with SPI. The workshops were an integral part of the EUREX process and provided much of the data presented in this volume.
The Commission funded EUREX in large measure because the evaluation of approximately 300 PIEs was too costly for an independent endeavour. Even if some resource-rich organisation had undertaken this task, it is likely that the results would not have been disseminated, but would rather have been used to further competitive advantage. Commission support has ensured that the results are widely and publicly distributed.
Many ESSI dissemination actions have been organised as conferences or work-
shops. PIE Users register in order to discharge their obligations to the Commis-
sion; however, the selection and qualification of contributions is often less than
rigorous. In addition, many public conferences have added PIE presentation tracks
with little organisation of their content. Small audiences are a consequence of the
competition of that track with others in the conference. The common thread in
these experiences is that organisation of the actions had been lacking or passive.
EUREX turned this model on its end. PIE Users were approached proactively to involve them in the process. In addition, the information exchange process was actively managed. The EUREX workshops were organised around several distinct problem domains, and workshop attendees were supported with expert assistance to evaluate their situations and provide commentary on solutions from a broadly experienced perspective. (See chapter 3 for a detailed discussion of the domain selection process.) Participants were invited through press publications, the local chambers of commerce, the Regional Organisations of EUREX and through co-operation with other dissemination actions.
This approach provided a richer experience for attendees. Since the workshops
were domain-oriented, the participants heard different approaches to the same
issues and were presented with alternative experiences and solutions. This was a
more informative experience than simply hearing a talk about experiences in a

vacuum, with no background and no possibility for comparison or evaluation. The opportunity to exchange views with one's peers and to hear advice from respected experts provides substantial benefit not found using a passive approach to dissemination.
Our approach also offered a better experience for European Industry as a
whole. Since we have categorised and evaluated approximately 300 different im-
provement experiments, we present a broad practical view of the selected problem
domains. This is distinctly different from purely academic approaches that offer
little practical experience. EUREX is an opportunity to derive additional benefit from the PIEs, beyond that of obligatory presentations. We hope to lend an authoritative voice to the overall discussion of Software Process Improvement.

2.2 Objectives and Approach

As mentioned above, the objective of EUREX was to assess, classify, categorise, and exploit the experience of the ESSI PIE Prime Users and Associated Partners (collectively referred to here simply as Users or PIE Users) and then to make this experience accessible. In particular, we sought to provide a broad European audience with data about Software Best Practice and its introduction in selected problem domains.
The approach is broken down into two phases. The first phase required the
classification and collection of data and the second phase involves the analysis,
distribution and dissemination of the resulting information. The phases were im-
plemented in three steps:
1. Classify and categorise the base of PIE Users and the Problem Domains addressed by them. All of the available material from over 300 PIEs was assessed; the categorisation was designed such that over 90% of the material under consideration fell into one of the selected Problem Domains (see chapter 3).
2. Plan and conduct a series of Regional Workshops in order to collect information from PIE projects as well as to disseminate the PIEs' experiences at a regional level. 18 workshops in 8 European countries were undertaken. (Refer to chapter 7 for the best of the workshop material.)
3. Publish the first four of the Software Best Practice Reports and Executive Reports to detail the experiences. In addition, a Web site provides access to the background material used by EUREX.
Steps 1 and 2 fall within phase one and steps 2 and 3 within phase two. Notice that, because multiple benefits are derived from the same activity, the two phases overlapped somewhat. This approach is intended to convey to the largest possible audience the experiences of the Commission's Process Improvement Experiment program.

The EUREX Software Best Practice Reports (of which this volume is one) and Executive Reports are directed at two distinct audiences. The first is the technically oriented IT manager or developer interested in the full reports and technology background. The second is senior management, for whom the Executive Reports (a summary of the benefits and risks of real cases) are appropriate.

2.3 Partners

The EUREX project was carried out by the following partners:


• HIGHWARE GmbH, Germany (Coordinator)
• Editions HIGHWARE sari, France
• GEMINI Soc. Cons. A, Italy
• SOCINTEC, Spain
• SISU, Sweden
• MARI Northern Ireland Ltd., United Kingdom.
The fact that MARI left the consortium (as they did with other projects as well) caused some disruption and delay for the project. The partners were largely able to compensate, e.g. in the number of workshops held and the countries covered. Even the book about the domain assigned to MARI, Object Orientation, was prepared with the help of FZI Forschungszentrum Informatik, Karlsruhe, Germany.

2.4 Related Dissemination and Training Actions

Other ESSI Dissemination Actions have also generated significant results that may be of interest to the reader. These actions include SISSI and ESPITI, both described briefly below.

2.4.1 Software Improvement Case Studies Initiative (SISSI)

European companies must face the challenge of translating software engineering into a competitive advantage in the market place, by taking full advantage of existing experiences and results. The process of overcoming existing barriers is not an easy one, particularly if individual companies must face them on their own. A major step is therefore to put at companies' disposal a set of written case studies providing a practical view of the impact of software process improvement (SPI) and of best practices. Successful experiences can demonstrate that existing barriers can be dismantled. This learning process, which takes time and requires continuity in the long term, was fostered by the SISSI project.

2.4.1.1 Overview
The target audience for the SISSI case studies is senior executives, i.e. decision-makers, in software producing organisations throughout Europe. This includes both
software vendors and companies developing software for in-house use. The mate-
rial has been selected in such a way that it is relevant for both small and large
organisations.
SISSI produced a set of 33 case studies, of about 4 pages each, and distributed
50 case studies overall, together with cases from previous projects. Cases are not
exclusively technical; rather, they have a clear business orientation and are fo-
cused on action. Cases are a selected compendium of finished Process Improvement Experiments (PIEs) funded by the ESSI program of the EC. They are classified according to parameters and keywords so that tailored and selective extractions can be made by potential users or readers. The main selection criteria are the busi-
ness sector, the software process affected by the improvement project and its busi-
ness goals.
The dissemination mechanisms of SISSI were the following: a selective tele-
phone-led campaign addressed to 500 appropriate organisations together with
follow up actions; an extensive mailing campaign targeting 5000 additional or-
ganisations which have selected the relevant cases from an introductory document;
joint action with the European Network of SPI Nodes - ESPINODEs - to distrib-
ute the SISSI material and provide continuity to the SISSI project; WWW pages
with the full contents of the case studies; synergistic actions with other Dissemination Actions of the ESSI initiative, like EUREX, SPIRE, RAPID; co-operation with other agents like European publications, SPI institutions, or graduate studies acting as secondary distribution channels.
SISSI developed an SPI Marketing Plan to systematically identify and access
this target market in any European country and distributed its contents through the
European Network of SPI Nodes both for a secondary distribution of SISSI Case
Studies, and for a suitable rendering of the ESPINODEs services. The plan was
implemented for the dissemination of the SISSI Case Studies in several European
countries, proving its validity.

2.4.1.2 Objectives
The main goals of the approach taken in the SISSI project have been as follows:
• The material produced has been formed by a wide variety of practical real cases
selected by the consultants of the consortium, and documented in a friendly and
didactic way to capture interest between companies.
• The cases have clearly emphasised the key aspects of the improvement projects
in terms of competitive advantage and tangible benefits (cost, time to market,
quality).

• Most of the cases have been successful ones, but unsuccessful ones have also been sought in order to analyse causes of failure, e.g. inadequate analysis of the plan before starting the project.
• The project has not been specially focused on particular techniques or application areas; rather, it has been a selected compendium of the current and finished Process Improvement Experiments (PIEs). They have been classified according to different parameters and keywords so that tailored and selective extractions can be made by potential users or readers. The main selection criteria have been: business sector (finance, electronics, manufacturing, software houses, engineering, etc.), the software process, the business goals and some technological aspects of the experiment.
• The Dissemination Action should open new markets, promoting the SPI benefits in companies not already contacted by traditional ESSI actions.
• The SISSI Marketing Plan should provide the methodology and the information not only to disseminate the SISSI material, but also be generic enough to direct the marketing of other ESSI services and SPI activities in general.
The SISSI material should be used in the future by organisations and other dissemination actions and best practice networks as reference material to guide lines of software improvement and practical approaches to facing them. In particular, SISSI had to provide continuity of the action beyond the project itself, supporting the marketing of SPI in any other ESSI action.

2.4.2 ESPITI

The European Software Process Improvement Training Initiative (ESPITI) was officially launched on 22 November 1994 in Belfast, Northern Ireland. The final event was held in Berlin, Germany in Spring 1996. The Initiative aimed to maximise the benefits gained from European activities in the improvement and subsequent ISO 9000 certification of the software development process through training. A sum of 8.5 million ECU was allocated to the Initiative for a period of 18 months, to support actions intended to:
• Identify the true needs of European industry for training in software process
improvement (SPI).
• Increase the level of awareness of the benefits of software process improvement
and ISO 9001.
• Provide training for trainers, managers and software engineers.
• Support the development of networks between organisations at regional and
European levels to share knowledge and experience and form links of mutual
benefIt.
• Liaise with similar initiatives world-wide and transfer their experiences to
Europe.

2.4.2.1 Organisational Structure


The Initiative was implemented through a network of 14 Regional Organisations addressing the local needs of 17 EU and EFTA countries. Regional Organisations (ROs) were existing commercial organisations that were contracted to carry out a specific range of activities in support of the ESPITI goals. The ROs were divided into 2 sets, each set supported by a Partner. The two Partner organisations, Forschungszentrum Karlsruhe GmbH from Germany and MARI (Northern Ireland) Ltd from the United Kingdom, co-ordinated and supported co-operation at European level through the provision of services to the ROs. These services included:
• Preparation of a user survey in all countries involved to determine the local SPI
needs.
• An electronic communication network for exchanging SPI information of
mutual interest.
• Guidelines on event organisation, e.g. seminars, training courses and working
groups.
• Awareness material for project launches, software process improvement and
ISO 9001.
• Assistance in evaluating performance at project and event levels.
• Guidance in programme planning and control.
• Assistance in PR activities.
• Assistance in experience exchange and co-operation between the ROs.
The European Software Institute (ESI) was also involved in ESPITI, providing the Partners with valuable assistance, including the merging of the European user survey results, liaison with other initiatives and contributions to RO meetings.

2.4.2.2 The ESPITI Approach


The ESPITI project adopted a multi-pronged strategy for improving the competi-
tiveness of the European software industry.
• A survey of European needs, to ascertain the needs and the best approach to
adopt to satisfy them within each region.
• Seminars for raising awareness of the benefits and approaches to quality
management and process improvement.
• Training courses for improving know-how in initiating, assessing, planning and
implementing quality management and process improvement programmes.
• Workshops, which aim to teach participants about a subject and direct them in
implementing the subject in their organisations.
• Working groups for enabling dissemination of experience in a subject, and to
allow participants to discuss and learn about those experiences.
• Case studies for demonstrating the successes and difficulties in software proc-
ess improvement.

• Liaisons with similar, related initiatives world-wide to understand their approaches and successes and to transfer the lessons learned there to Europe.
• Public relations activities to promote the aims and objectives of ESPITI and to
ensure participation in ESPITI events.
• Evaluation of the ESPITI project to assess the effectiveness of the initiative,
and to determine how the initiative could progress from there.

2.4.2.3 The Partners and Regional Organisations


The Partners
• MARI (Northern Ireland) Ltd, United Kingdom
• Forschungszentrum Karlsruhe GmbH, Germany

The Regional Organisations


• Austrian Research Centre, Austria
• Flemish Quality Management Centre, Belgium
• Delta Software Engineering, Denmark
• CCC Software Professionals Oy, Finland
• AFNOR, France
• Forschungszentrum Karlsruhe GmbH, Germany
• INTRASOFT SA, Greece
• University of Iceland, Iceland
• Centre for Software Engineering, Ireland
• ETNOTEAM, Italy
• Centre de Recherche Public Henri Tudor, Luxembourg
• SERC, The Netherlands
• Norsk Regnesentral, Norway
• Instituto Portugues da Qualidade, Portugal
• Sip Consultoría y Formación, Spain
• SISU, Sweden
• MARI (Northern Ireland) Ltd., United Kingdom
Part II

Testing, Verification, and Quality Management


3 The EUREX Taxonomy

M. Haug, E.W. Olsen


HIGHWARE, Munich

One of the most significant tasks performed during the EUREX project was the creation of the taxonomy needed to drive the Regional Workshops and, ultimately, the content of these Software Best Practice Reports. In this chapter, we examine in detail the process that led to the EUREX taxonomy and discuss how the taxonomy led to the selection of PIEs for the specific subject domain.

3.1 Analysis and Assessment of PIEs

Over 300 Process Improvement Experiments (PIEs) funded by the Commission in the calls of 1993, 1995 and 1996 were analysed using an iterative approach as described below. The technical domain of each of the PIEs was assessed by EUREX and each PIE was attributed to certain technological areas.
Early discussions proved what others (including the Commission) had already experienced in the attempt to classify PIEs: there is no canonical, "right" classification. The type, scope and detail of a classification depend almost entirely on the intended use for the classification. The EUREX taxonomy was required to serve the EUREX project. In particular, it was used to drive the selection of suitable subject areas for the books and, consequently, the selection of regional workshop topics, to ensure that good coverage would be achieved both by the number of PIEs and by the partners in their respective regions.

3.2 Classification into Problem Domains

A set of more than 150 attributes was refined in several iterations to arrive at a coarse-grained classification into technological problem domains. These domains were defined such that the vast majority of PIEs fall into at least one of them. There were seven steps used in the process of discovering the domains, as described in the following paragraphs.


In part because of the distributed nature of the work and in part because of the
necessity for several iterations, the classification required 6 calendar months to
complete.

3.2.1 First Regional Classification

Each partner examined the PIEs conducted within its region and assigned attrib-
utes from the list given above that described the work done within the PIE (more
than one attribute per PIE was allowed). The regions were assigned as shown in
Table 3.1.

Table 3.1 Regional responsibilities of consortium partners

Partner             Region

SISU                Denmark, Finland, Norway, Sweden
MARI                United Kingdom, Ireland
GEMINI              Italy
SOCINTEC            Spain, Portugal, Greece
HIGHWARE Germany    Germany, Austria, The Netherlands, Israel,
                    and all other regions not explicitly assigned
HIGHWARE France     Benelux, France

3.2.2 Result of First Regional Classification

HIGHWARE Germany (the consortium co-ordinator) began with a classification


of the German PIEs according to the above procedure. This first attempt was dis-
tributed among the partners as a working example.
Using the example, each partner constructed a spreadsheet with a first local
classification and returned this to HIGHWARE Germany.

3.2.3 Consolidation and Iteration

HIGHWARE Germany prepared a consolidated spreadsheet using the partners'


input, and developed from that a first classification and clustering proposal. This
was sent to the other partners for review and cross-checking.

3.2.4 Update of Regional Classification

All partners reviewed their classification, in particular the assignment of attributes


to PIEs. Corrections were made as necessary.

3.2.5 Mapping of Attributes

HIGHWARE Germany mapped all key words used by the partners into a new set
of attributes, normalising the names of attributes. No attribute was deleted, but the
overall number of different attributes decreased from 164 to 127. These attributes
were further mapped into classes and subclasses that differentiate members of
classes. This second mapping lead to a set of 24 classes each containing 0 to 13
subclasses. The resulting classes are shown in table 3.2.

Table 3.2 Attributes of the Classification

Assessment; Case Tools; Change Management; Configuration Management; Decision
Support; Documentation; Estimation; Formal Methods; Life Cycle: Analysis &
Design; Life Cycle: Dynamic System Modelling; Life Cycle: Installation &
Maintenance; Life Cycle: Requirements & Specification; Life Cycle: Product
Management; Metrics; Modelling & Simulation; Object Orientation; Process Model:
Definition; Process Model: Distributed; Process Model: Iterative; Process
Model: Support; Project Management; Prototyping; Quality Management;
Reengineering; Reuse & Components; Reverse Engineering; Target Environment;
Testing, Verification & Validation; User Interface

3.2.6 Review of Classification and Mapping into Subject Domains

The classification achieved by the process described above was reviewed by the
partners and accepted with minor adjustments. It is important to note that up to
this point, the classification was independent of the structure of the planned publi-
cations. It simply described the technical work done by PIEs in the consolidated
view of the project partners.
In the next step this view was discussed and grouped into subject domains suit-
able for publications planned by the consortium.

3.2.7 Subject Domains Chosen

Out of the original 24 classes, 7 were discarded from further consideration,
either because the number of PIEs in the class was not significant or because the
domain was already addressed by other ESSI Dissemination Actions (e.g. formal
methods, reengineering, and so on). The 17 final classes were grouped into the

subject domains shown in Table 3.3, such that each of the resulting 5 domains
forms a suitable working title for one of the EUREX books.

Table 3.3 Final Allocation of Domains

Partner             Domain

SISU                Metrics, Measurement and Process Modelling
MARI                Object Orientation, Reuse and Components
GEMINI              Testing, Verification, Validation, Quality Management
SOCINTEC            Configuration & Change Management, Requirements Engineering
HIGHWARE France     Project Management, Estimation, Life Cycle Support

[Figure: bar chart of the number of PIEs per country (A, B, D, DK, E, F, GR, I,
IRL, ISR, N, NL, P, S, SF, UK); the largest bars are 88, 70, 51, 48, 40 and 34
PIEs.]

Fig. 3.1 All PIEs by Country

The breakdown of all (unclassified) PIEs on a per-country basis is shown in


Fig. 3.1. The distribution of PIEs is somewhat related to population, but there are
notable exceptions (e.g. Italy and France).
The classification breakdown of PIEs Europe-wide is worth examining. Referring
to Fig. 3.2, notice first that the classification has resulted in a relatively even
distribution of projects; only the Project Management classification dips noticeably
below the average. The number of PIEs without any classification was held
below 10% of the total. (Further discussion of the "No Classification" category
appears below.)

3.2.8 Unclassified PIEs

There were 33 PIEs that were not classified by EUREX. There were generally two
reasons for the lack of classification.

[Figure: pie chart of the classification of PIEs Europe-wide: Metrics & Process
Modelling 24%; Object Orientation, Reuse, Components 22%; Testing, Verification
& Quality Mgt 19%; Config & Change Management, Requirements Engineering 18%;
Project Management, Estimation 10%; No Classification 7%.]

Fig. 3.2 Classification of PIEs Europe-wide

1. Neither the EUREX reviewer alone nor the consortium as a whole was able to
   reach a conclusion for classification based on the PIE description as published.
2. The PIE addressed a very specific subject that did not correspond to a class
   defined by EUREX and/or the PIE was dealt with by other known ESSI projects,
   e.g. formal methods. The consortium tried to avoid too much overlap
with other projects.

[Figure: bar chart of unclassified PIEs per country; the largest bar is 12 PIEs.]

Fig. 3.3 Unclassified PIEs by Country

When one of these rules was applied, the corresponding PIE was given no classification
and was not considered further by the EUREX analysis. Fig. 3.3 shows
the breakdown of unclassified PIEs by country.
As can be seen in Fig. 3.3, there were 33 PIEs that remained unclassified once
the EUREX analysis was complete.

3.3 Testing, Verification, and Quality Management

Part II presents the results of the EUREX project with respect to the Testing, Verification,
and Quality Management classification. The attributes associated with this
classification were Testing, Validation and Verification, and Quality Management.
Within this classification there were a total of 93 PIEs that were assigned one or
more of these attributes. The distribution of these PIEs throughout Europe is
shown in Fig. 3.4.

[Figure: bar chart of these PIEs per country; the largest bars are 17 and 14 PIEs.]

Fig. 3.4 Testing, Verification, Validation, and Quality Management PIEs by Country
4 Perspectives

L. Consolini
GEMINI, Bologna

Virtually all of the PIEs examined by EUREX fall into five subject domains.
GEMINI, the consortium partner for Italy, was responsible for the domain classi-
fied as Testing, Verification, Validation, and Quality Management, consisting of
88 PIEs performed throughout Europe between 1994 and 1997.
This volume discusses the results obtained by EUREX concerning this domain
and focuses primarily on the improvement of the product verification process
through better testing practices.
This chapter provides an introduction to the central theme of the domain and
presents the contributions of three authors who analyse the state-of-the art and the
state-of-the practice from different perspectives.

4.1 Introduction to the Subject Domain

The number of PIEs allocated by EUREX to Testing, Verification, Validation and
Quality Management (almost one hundred) was sufficient to demonstrate that
product quality is still an unresolved concern for many software organisations,
ranking as high as time-to-market reduction and development cost containment.
It is clear that enhancing the effectiveness of the product verification process
is a central issue, one best tackled by acquiring a better understanding of
the nature of this process.
The quality of a software product is directly related to many characteristics that
can be measured either through dynamic verification, i.e. testing, or through static
verification, i.e. reviews and inspections. Among the full set of quality-related
issues, the following can be highlighted:
• correct identification and expression of user requirements;
• correct implementation of the specified requirements;
• absence of problems with the code and the data;
• usability, completeness and level of updating of the documentation given to
customers;
• maintainability of the product.


The activities performed during the development and maintenance process that
ensure that these aspects of quality are adequately represented in the software
product form the core of Software Quality Assurance. Some of these activities are
intended to ensure the implementation of a defined quality standard, others are
targeted at assessing and controlling the products of the software process to check
for defects and remove them before delivery.
The latter group of activities are more precisely named Quality Control activi-
ties, or, using a terminology more common in the software industry, Verification
and Validation (V&V) activities. They consist mostly of document reviewing,
code inspection and testing; testing being by far the most widespread.
It is evident that the level and amount of V&V cannot be equal for all types of
software: what is suited to the production of safety critical software could be ex-
cessive and unaffordable in the production of low-risk commercial software. The
selection of the appropriate V&V activities in the context of a specific software
project or product revolves around the following issues:
• the nature of the specific quality targets to be achieved;
• the nature of the product;
• the specific customer's demands;
• the available resources;
• the available skills;
• the level of risk that can be accepted;
• schedule related issues.
The application of an appropriate product verification process consists in as-
sessing these issues, establishing the right quality targets and adopting the most
suitable strategy to achieve them. Such a strategy involves the selection of meth-
ods, techniques and tools that can be applied to perform V&V at different levels of
depth, thoroughness, productivity and skills demand.
Adopting the right product verification process according to the quality objec-
tives is also known as V&V planning (which also includes test planning) and is a
core Quality Management component.
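The planning logic described above can be sketched in a few lines. The risk levels, activity names and the schedule threshold below are invented for illustration; they are not a EUREX recommendation:

```python
# Hypothetical mapping from accepted risk level to V&V activities.
ACTIVITIES_BY_RISK = {
    "low":    ["unit testing", "functional testing"],
    "medium": ["unit testing", "functional testing", "code inspection"],
    "high":   ["unit testing", "functional testing", "code inspection",
               "document reviews", "independent validation testing"],
}

def plan_vv(risk_level, schedule_weeks, min_weeks_for_inspections=4):
    """Pick V&V activities from the risk level, then trim inspections
    when the schedule cannot accommodate them."""
    activities = list(ACTIVITIES_BY_RISK[risk_level])
    if schedule_weeks < min_weeks_for_inspections:
        activities = [a for a in activities if "inspection" not in a]
    return activities

print(plan_vv("high", 8))
print(plan_vv("medium", 2))  # inspections dropped under schedule pressure
```

A real plan would weigh all seven issues listed above, not just risk and schedule; the sketch only shows that planning is a selection problem.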
Depending on the nature of the software production model used by an organisa-
tion, V&V planning can be performed anew for each project (custom-built soft-
ware, second party regulated software) or can be simplified by the tailored appli-
cation of standard practices described in internal procedures (commercial soft-
ware, off-the-shelf software). In the latter case, planning will concentrate on the
identification of the specific controls and tests to be carried out and on the set up
of an adequate environment to perform them in compliance with an organisation's
internal standards. Most of the PIEs represented in this Part II followed this ap-
proach.
The current culture and experience in the application of product verification
methods, techniques and tools is unfortunately quite unsatisfactory in the software
industry, principally in the commercial software area. More know-how is found
among the producers of highly regulated or safety critical software.

There is no lack of a consistent market offer, though. The groundwork on testing
techniques that is still largely used today dates back to the seventies. Many
tools supporting software product quality verification are available at affordable
prices. Rather, software verification methodologies lack a reality check; in other
words, it is still uncertain if they live up to expectations and deliver on what they
promise.
To add one last element to the picture, it seems evident that product verification
activities should be integrated in the software life cycle to achieve the benefits of
discovering and removing defects as early as possible. This means that introducing
new V&V practices has an impact on development and maintenance in addition to
the organisational implications. For this reason change in this area has to be dealt
with carefully and cautiously to avoid disruption of the existing software process
and also to show early and lasting results, capable of justifying the (usually high)
level of investment requested.
Gathering more hands-on experience with this wide range of issues and per-
forming a reality check on the commonly available methodologies is at the heart
of process improvement based on a best practice approach. Accordingly, the PIEs
selected by EUREX as exemplary cases have been dealing with multiple dimensions
at the same time: process, methods and technology, organisation and people
skills. The lessons derived from them and exposed by the EUREX workshops
should be read as the outcome of a field trial performed in real conditions but also
in a controlled environment where the observation and the measurement of the
results could not be neglected.
These results should be particularly useful to those attempting the change on
their own; however, they come from the tailored application of ideas and
approaches that should become familiar to the readers before tackling the core of
Part II. Chapter 4 serves this aim; it provides an overview including the state-of-
the-art and the state-of-the-practice. Three articles have been included to explore
the core theme as thoroughly as possible: the authors, from different angles, vari-
ously emphasised the key dimensions of process, methods, technology, organisa-
tion and people skills:
• The first article by Michele Paradiso of IBM SEMEA illustrates the basic ter-
minology and the core concepts used in the subject domain. This paper takes
the process view and advocates the integration of the verification process with
the development process.
• The second article by Fabio Milanese of COMPUWARE focuses on methods
and technology, taking the automation view and analysing its integration into a
software verification process.
• The third article by Brian Marick, Technical Editor of Software Testing &
Quality Engineering Magazine, points at the state of the practice and singles out
what he calls "classic testing mistakes", particularly organisational and strategy
blunders.

The three authors represented are first of all practitioners and their articles are
based on their direct working experience; references to the application of general
concepts into a real environment are therefore frequent and substantiate the au-
thors' perspective on the subject domain.
More introductory information is also found in the expert presentation that
opened the third EUREX workshop in Spain (see Chapter 7.2.2).

4.2 Software Verification & Validation Introduced

M. Paradiso
IBM Semea Sud, Bari

Michele Paradiso is currently an Advisory I/T Specialist at IBM Semea Sud in the
Application Products Software Development Center in Bari (Italy).
He has worked on software quality assurance applied to software development,
ISO 9000 auditing, software measurement, application of software reliability
growth modelling and test process improvement.
Since 1996 he has been a member of the IBM internal ISO auditor team and, for
his activities on test process improvement, he received an IBM Outstanding
Technical Achievement Award.
He received a bachelor's degree in Computer and Information Science from the
University of Bari (Italy).

4.2.1 Verification & Validation with Respect to the Product


Development Process

Software Verification & Validation should be analysed in the context of the


Product Development Process. The Product Development Process, as detailed in
this overview, provides a consistent definition of the steps that are carried out in a
typical product development effort from initial concept to the end of the product
lifecycle.
The Product Development Process is structured into manageable units and includes
V&V activities such as reviews, inspections, and testing as well as executive
reviews at critical decision checkpoints. The process phases are defined according
to industry standards, which allow competitive benchmarking of time and
spending per phase against projects of comparable size and complexity.
Table 4.1 shows the relationship between phases and V&V activities as well
as the major milestones that mark the beginning and the end of each phase.

Table 4.1 Phases and related V&V activities

Phase                                      Focus                     Inspections and/or reviews

Develop Product Requirements & Concept     Requirements Collection   product documents review
Develop Product Definition & Project Plan  Product Specification     product and project documents
                                           and Design¹               review
Develop & Verify Product                   Coding and Testing        code inspection, testing
Qualify & Certify Product                  Validation and Packaging  product testing and project
                                                                     documents review
Launch Product                             Shipment                  product documents review
Manage life-cycle of product               Service                   code inspection, testing,
                                                                     product testing and product
                                                                     documents review

Inspections and reviews of the documents and code are executed to "assure" the
full adherence to the customer needs and to development standards. The execution
of these V&V activities marks a Checkpoint (CP) in the development process. A
specific approval is required to proceed to the next CP.
The "Develop & Verify Product" phase includes all aspects of product design,
coding and testing, together with the development of plans for marketing, distribu-
tion, servicing and supporting the product.
Current industry practices rely extensively on testing to ensure software quality.
The following levels of testing are executed to assess all the product quality char-
acteristics, which are part of the quality targets of the product, such as reliability,
functionality, performance, usability, maintainability and portability (as defmed
by the ISO 9126 standard):

Table 4.2 Levels of testing

Testing Level         Done by                        Purpose

Unit Testing          Developers                     to test each module separately in order to
                                                     verify that it executes as specified
                                                     without any programming error
Functional Testing    Application Domain Specialist  to test each function separately in order
                                                     to verify that functional requirements are
                                                     implemented as stated in the Functional
                                                     Specification document. Formal test cases
                                                     are defined and executed; errors are
                                                     recorded and test results analysed
Product Testing       Application Domain Specialist  to verify proper execution of the whole
                                                     product and to evaluate the external
                                                     product interface; testing procedures are
                                                     the same as in Functional Testing
System Testing        Application Domain Specialist  to verify proper execution of the whole
                                                     product in the target environment
                                                     (hardware and software); volume and stress
                                                     tests are executed and performance is
                                                     evaluated
Installation Testing  Developers                     to verify the installability of the
                                                     product and to verify the product
                                                     packaging and documentation (readable,
                                                     correct and complete)
Validation Testing    Customer                       to obtain an independent assessment of the
                                                     quality level from someone who will act as
                                                     a first user (customer view)

¹ The phase ends when the product meets the established specifications as demonstrated by
successful completion of V&V activities.

Software testing, as part of the software development cycle, requires a significant
share of the whole effort compared to other software development activities. It is
considered one of the most challenging and costly aspects of software development.
Experience reports that costs associated with testing range from 30% to 40% of
the entire product development lifecycle expenses (in both capital and time) and
the activity is highly dependent on the knowledge of a few application domain
experts.
Because of such high costs associated with testing during the entire develop-
ment cycle, the pressure to increase test efficiency is especially high.

4.2.2 The Main Weaknesses of the Testing Process

The competitive pressure for high quality software, stringent budget constraints
and aggressive schedules require increased productivity while sustaining quality
in all phases of the software development life cycle. The business imperatives for
organisations in the 2000s are to gain competitive advantage while reducing time
to market and at the same time minimising business risk; all this means getting
new and sound applications and/or solutions out of the door as quickly as possible.
As a consequence the time for product verification is more constrained than
ever: comprehensiveness and thoroughness cannot be pursued as "zero defects"
quality would require.
The systematic search for defects in all project deliverables is part of the response
to the fact that exhaustive testing is not feasible, and even if it were, it
would not be cost effective. The extent of testing that can be afforded is based
on economics: if the cost of failure exceeds the cost of testing, then testing is cost
justified.
In this framework, the pressure to deliver "high quality software" continues to
be a constant challenge to testing organisations that need to decide how much

testing is enough not only on the basis of technical requirements but also on the
basis of risk and business considerations.
Unfortunately factors such as the increased complexity of the business, tech-
nology and development environment, as well as the lack of adequately trained
people, have increased the probability and the cost of failure.
The main weaknesses of the testing process currently used to test software products
can be identified in the areas of:
• test execution
• test documentation management
• measurement framework
• testing organisation and the cultural environment.
All these areas and their current shortfalls (as can be found in most software
development organisations) are analysed hereafter. In the remainder of this article
the same areas will be seen from the point of view of how an improved process
could work.

4.2.2.1 Test Execution


The major testing activities are executed manually due to a lack of testing tools
with adequate industrial strength to help improve problem detection and to ensure
higher software reliability. Every time a test case is executed (during testing or
regression testing) the specific environment (data records, parameters, etc.) and
test paths have to be reproduced. During testing, the defects detected are reported
to the developers, who normally lose time trying to reproduce them.
Regarding the available tools, most of them fail to meet the performance and
capability requirements that the industry needs to handle complex environments
and large-scale software.
Some of the typical "testing troubles" concerning automated testing are related to
the following aspects:
• interactive testing environment: it is important to have a mechanism to repro-
duce defects, allowing a programmer to analyse and verify the correctness of
program fixes. On one hand, when a problem is reported, the developer would
like to re-execute the test case to reproduce the defect, on the other hand once
the defect has been fixed, the developer should be able to re-execute the "right
test" to check that the bug has really been solved
• usability: "easy to use" tools keep start up costs low
• increased testing quality: developers always need to increase the quality of
  testing by eliminating the risk of human error; they have to re-execute the test
  cases and be certain that there are no variations in the test cases over subsequent
  runs.
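The capture-and-replay idea behind these points can be sketched as follows; the `Recorder` class is invented for illustration (commercial tools record at the GUI or protocol level rather than around Python functions):

```python
class Recorder:
    """Records inputs and observed outputs of a function under test,
    so that exactly the same test can be replayed after a fix."""
    def __init__(self):
        self.cases = []

    def record(self, func, *args):
        result = func(*args)
        self.cases.append({"args": list(args), "expected": result})
        return result

    def replay(self, func):
        """Re-execute every recorded case; return the list of mismatches."""
        failures = []
        for case in self.cases:
            actual = func(*case["args"])
            if actual != case["expected"]:
                failures.append((case["args"], case["expected"], actual))
        return failures

# Baseline behaviour of the function under test:
def price_with_vat(net):
    return round(net * 1.20, 2)

rec = Recorder()
rec.record(price_with_vat, 10.0)
rec.record(price_with_vat, 99.99)

# A later "fix" that accidentally changes behaviour is caught on replay:
def price_with_vat_v2(net):
    return round(net * 1.22, 2)

print(len(rec.replay(price_with_vat)))     # 0 - baseline still passes
print(len(rec.replay(price_with_vat_v2)))  # 2 - regression detected
```

Replay gives the developer the reproduction mechanism the bullet list asks for: the same inputs, re-executed identically, first to reproduce the defect and then to confirm the fix.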

4.2.2.2 Documentation Management


Test cases, results, defect descriptions and testing progress status are often available
only on paper. This documentation is not easily re-usable for future testing
activities.
Usually only a few people have the right application domain knowledge to design
effective test cases. Often the testing choices remain "coded" inside their
minds, and at every new testing cycle the wheel has to be reinvented.
A clear and traceable link between product specifications, software modules,
test cases, test data and quality records remains infeasible because a specific
and centralised data repository for the development project is not available.

4.2.2.3 Measurement Framework


The evaluation of product quality is often based on a few indicators (related to
software quality characteristics such as Reliability, Usability, Installability,
Performance, Serviceability, etc.) and the metrics associated with them are defined
according to the "developer view". A re-alignment of these metrics to specific
standards (for example ISO 9126, SPICE, CMM) has to be performed to adapt
these indicators to the customer's view (for example to address business goals).

4.2.2.4 Testing Resources Organisation and Cultural Environment


In addition to the above testing process weaknesses, some natural factors affect
testing activities, such as the boredom and repetitiveness of the task. There is a
"natural" (meaning that it is a psychological fact inside software organisations)
dislike of testing work. Moreover, the increasing product complexity and the
exponential growth of functional requirements to be implemented in a software
product cause an equivalent growth in test planning activities and in the number
of test cases.
In this scenario it is difficult to staff the testing team with the right people as
well as to establish the right testing effort size and, consequently, the correct time
to spend testing applications.

4.2.3 An Improved Process Model

Any software process improvement should not be seen as a goal in itself; it
pays off if it is clearly linked to the business goals of an organisation. Software
process improvement starts with addressing the organisational issues.
To increase business profitability and market share, several software development
companies declare their strong commitment to achieving greater customer
satisfaction by improving the quality of products and services as well as reducing
development costs and delivery time. The improvement of the testing process can
be directly related to the achievement of these commitments, as shown in
Table 4.3:

Table 4.3 Improvement of the testing process

Driver                       Testing Process Contribution

Cost                         productivity improvement, especially in regression
                             testing
Time                         time reduction in test execution and test data
                             management activities
Product and service quality  reduction in the number of defects delivered with the
                             product; reduction of the average time necessary for
                             problem fixing

To achieve these benefits an improved testing process should address the main
weaknesses identified in Chapter 4.2.2. The improved model will be described
hereafter according to the same decomposition of the testing process into the areas
of:
• test execution
• test documentation management
• measurement framework
• testing organisation and the cultural environment.

4.2.3.1 Test Execution


An interactive testing environment should be implemented with the automatic
recording and re-execution of test cases. Specifically the following aspects could
be covered:
• automatic recording of test cases during test execution
• storage of the test cases into a library and re-use of them in further testing
phases like regression testing, installation testing, packaging testing and in the
maintenance phase.
• interactive as well as unattended re-execution equally possible
• browsing and query facilities to select the test cases to be executed from the test
cases library.
Much of the above can be pursued by introducing testing automation tools,
namely capture and playback tools. However it should not be forgotten that the
use of recording tools requires an investment in test case design and maintenance
to keep them re-usable. This investment is reasonable and productive for test cases
covering interactive functions and in advanced test phases (Functional, System,
Packaging Testing), so this approach is not advisable for all levels and types
of testing.
This investment can be largely reused when the applied software process
model is evolutionary and development proceeds by new releases of the same
product.
Another aspect where automated data management can considerably help the
testing effort is maintaining a cross-reference between test cases and requirements
specifications to make the task of identifying the test cases related with changes
easier and more secure.

4.2.3.2 Documentation Management


The management of all test data and related information should be electronic,
centralised and shared on line by all the people involved in testing and test data
analysis.
In a state-of-the-art scenario these technical objectives should be pursued:
• The test plan and the test cases are defined in a structured way, easy to store,
update and re-use.
• All the information related to test execution (test cases, results, time,
  tester name, defect data, etc.) is recorded in a central repository and available
  for analysis according to different perspectives (technical and managerial).
• Cross references are maintained tracing each specified requirement to the set of
  test cases validating it, the software object implementing it and any
  defects detected in testing.
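A minimal sketch of such a shared repository, using an in-memory database; the schema and all records are invented for illustration:

```python
import sqlite3

# A single shared store for test execution records, queryable from both
# a technical and a managerial perspective.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE test_runs (
    test_case TEXT, requirement TEXT, tester TEXT,
    run_date TEXT, passed INTEGER, defect_id TEXT)""")

runs = [
    ("TC-001", "REQ-12", "anna",  "2001-03-01", 1, None),
    ("TC-002", "REQ-12", "anna",  "2001-03-01", 0, "D-7"),
    ("TC-003", "REQ-15", "marco", "2001-03-02", 1, None),
    ("TC-002", "REQ-12", "anna",  "2001-03-05", 1, None),  # re-run after fix
]
db.executemany("INSERT INTO test_runs VALUES (?,?,?,?,?,?)", runs)

# Managerial view: pass rate over all executions.
(passed, total), = db.execute("SELECT SUM(passed), COUNT(*) FROM test_runs")
print(f"pass rate: {passed}/{total}")   # pass rate: 3/4

# Technical view: which requirements had defects recorded against them.
rows = db.execute(
    "SELECT DISTINCT requirement FROM test_runs WHERE defect_id IS NOT NULL")
print([r[0] for r in rows])             # ['REQ-12']
```

The point of the central store is that both views above run against the same records, so the technical and managerial pictures cannot drift apart.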

4.2.3.3 Measurement Framework


A product quality profile should be defined according to software specific interna-
tional standards (or industrial standards), based on software quality characteristics
(i.e. Functionality, Reliability, Usability, Efficiency, etc.), and for each of these
characteristics a target value or range should be established taking into account
mostly the customer point of view.
Process metrics have to be identified too in order to track project progress,
status, and the specific costs related to the testing phases.
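One possible shape for such a quality profile, with a target value per ISO 9126-style characteristic; every figure below is invented:

```python
# Target values per quality characteristic, compared with the values
# measured on the current release.
profile = {
    # characteristic: (target, measured, higher_is_better)
    "Reliability (MTBF, hours)":     (200.0, 250.0, True),
    "Usability (task success, %)":   (90.0,  85.0,  True),
    "Efficiency (response time, s)": (2.0,   1.5,   False),
}

def failing_characteristics(profile):
    """Return the characteristics whose measured value misses the target."""
    missed = []
    for name, (target, measured, higher_is_better) in profile.items():
        ok = measured >= target if higher_is_better else measured <= target
        if not ok:
            missed.append(name)
    return missed

print(failing_characteristics(profile))  # ['Usability (task success, %)']
```

Making the targets explicit in this way is what turns the "customer point of view" into something that can actually be checked release by release.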

4.2.3.4 Testing Resources Organisation and Cultural Environment


Changing the testing process to realise the state-of-the-art scenario described
in this article always involves an initial resistance to the adoption of more
rigorous methods and to the introduction of changes in the current way of
working.
The strong commitment of the management and the active involvement of all
team members in the definition of the new testing environment are key factors to

get over this problem. Wide visibility of progress could be a facilitating factor,
too: the results achieved and the benefits perceived in daily work would be
appreciated by the people involved, becoming a motivating factor.
It is also important to promote an independent unit responsible for monitoring and
supporting the development teams during the transition. This unit could be involved
in the evaluation of the improvement action results and in the dissemination of the
experience gained.

4.2.4 How to Improve: the Road to Process Improvement

To be successful in process improvement, software organisations should raise
awareness of the importance of software quality to the competitiveness of the
business at all levels (developers and management). After this sensitisation they
should focus on the strengths and weaknesses of their current testing activities in
order to identify the right ways to improve as well as the right skills in specific
methods and technology to be developed. This preliminary step is actually a
focused assessment and can be part of a wider software process assessment also
covering other process areas.
Anyone involved in the process of changing their current software engineering
processes is likely to consider the use of a pilot project to evaluate new promising
practices.
Pilot projects can minimise the risk of adopting inappropriate methods and
technologies, and can reduce the problem of carefully selected technologies being
rejected nonetheless by developers and engineering staff.
All improvement projects have to manage certain factors that facilitate suc-
cess. Disregarding these factors could delay progress or make it difficult to
achieve good results. Some of the key facilitating factors are:
• Management commitment and support
• Staff involvement
• Providing enhanced understanding
• Tailoring improvement initiatives
• Encouraging communication and collaboration
• Stabilising changed processes
• Setting relevant and realistic objectives
• Unfreezing the organisation
Finally, it is very important to remember that process improvement and imple-
mentation concern people and need to take into account all people-related as-
pects (human factors). These are orthogonal to the technology- and methodology-
driven approaches and are crucial to the success of adopting the new practices.
All the recommendations mentioned above have been taken into account in a
"process improvement roadmap" summarised in Table 4.4.

Table 4.4 Process improvement roadmap

Step: Assessment
Activities: Make a testing process assessment guided by an external organisation and involving the developers directly performing the testing. Set a baseline against which future improvements can be measured.
Results: Awareness of the weak points with respect to business needs; clarification of the key product quality characteristics with respect to customer needs; identification of improvement actions.

Step: Consensus Building
Activities: Share the results of the assessment with both senior management and developers and get their commitment on the implementation of the improvement actions. The commitment should be formalised.
Results: Diffused and formalised commitment.

Step: Organisation
Activities: Assign roles and responsibilities for the improvement project, particularly: project management, technical and methodological direction, internal process support.
Results: An adequate "Process Improvement Organisation".

Step: Planning
Activities: Organise and implement the improvement actions on a pilot project as self-contained work packages, each of them associated with well identified improvement objectives.
Results: Project plan.

Step: Best Practices Definition
Activities: Define the new, improved practices and identify the skills to be acquired.
Results: Defined methodology; training programme.

Step: Field Trial
Activities: Apply the defined practices to a pilot project and continuously monitor the results.
Results: Input to the evaluation step; refined practices definition.

Step: Evaluation
Activities: Make a final assessment with the same approach as the initial one. The comparison with the initial baseline will measure the improvement.
Results: An evaluation of the new practices in technical and business terms.

Step: Diffusion
Activities: Illustrate the results achieved to a wider audience in the organisation, and turn them into the new process standards.
Results: Institutionalised practices.

Any organisation, in any sector of the economy, that regards the production of
software as part of its operations may benefit from the adoption of the roadmap
described. Such a user organisation is often not necessarily classified as being in
the software industry, but may well be an engineering or commercial organisation
in which software has emerged as a significant component of its products, services
or processes.

4.2.5 Cost/Benefit Analysis

Finally, a cost/benefit analysis (Table 4.5) has been included to help set out the
parameters according to which the suggested improvement actions can be meas-
ured within or after the timeframe of an improvement pilot project.

Table 4.5 Cost/Benefit Analysis

Action: Assessment
Benefits: Ability to make a self-assessment. The competence could be used to assess other processes.
When benefits are gained: Within the project timeframe.

Action: Measurement framework
Benefits: Identification of a product Quality Profile defined according to international standards (for example ISO/IEC 9126); Metric Plan definition; identification of product and process metrics to be used within the organisation.
When benefits are gained: After the project; the project sets up the infrastructure.

Action: Automatic recording and re-execution of test cases
Benefits: A "reuse culture" in test documentation and test execution, with several activities seen as an investment for the future; reduction of the test execution cost.
When benefits are gained: One year from the availability of the repository.12

Action: Education, skill and cultural environment
Benefits: Ability to operate with advanced testing tools; deep knowledge of testing techniques and of product and process metrics; people motivation.
When benefits are gained: Within the project timeframe.

12 The use of the recording tools requires an investment in terms of test case structure and
maintenance of the output recording. Several of these test cases could be automatically
re-executed during the system test of the new release of the "same product".

4.3 Testware

F. Milanese
Compuware, Milano

Fabio Milanese received a BSc degree in Electronic Engineering from the
Politecnico di Milano; throughout his academic and working career his main
interest has always been software quality. He currently works at Compuware,
where he is in charge of the company's automated testing product line for the
Client/Server environment.

4.3.1 A Testing Definition

What is software testing? There are many definitions of software testing but the
classical definition is:
"Testing is the process of executing a program with the intent of finding errors"
But testing is much more...

4.3.2 Customer Needs

First of all we will try to focus on customer needs: it is essential to understand
what a customer needs from an automated testing tool and what the goals are.
This is the first and most important step of the testing process.
Generally a customer needs better quality in the software produced and, at the
same time, needs to save time; so an automated testing tool should help to im-
prove the quality of software but should also reduce testing times. A testing tool
should be easy to learn and to use, and not very expensive; it should interface with
the most common planning and development tools and it should automate everything
that is tedious and boring in the testing process.
How can these goals be achieved?
In order to obtain better software quality it is necessary to identify ex-
actly the quality factors that are essential for the application: reliability, integrity,
security, safety, correctness, ease of use, maintainability, portability, performance
and so on. For every quality factor, try to identify the best type of test to be per-
formed and the best testing tool.
In order to save time the customer should plan every testing action accurately
and should identify the most repetitive phases of testing: these are the steps to
automate!

4.3.3 Types of Testing

There are many types of testing and therefore there are many methods of testing
an application system. Let's try to identify the main types of testing.

4.3.3.1 Static Analysis


Static Analysis is defined as all the techniques based on code inspection.
The basic code inspection is the one performed by compilers (type checking
and syntax checking).
Data flow analysis should be considered as another particular kind of code
inspection: the analysis of the variables' values during a "static" execution of
the code.
We should also mention all the techniques of non-computer-based testing or
"human testing": the best known are Program Inspections, Walkthroughs and
Desk Checking. All these techniques involve the reading or visual inspection of a
program by a team of people or by a single person, and are usually carried out in
meetings with the objective of finding errors, not of finding solutions to the errors.
They are the opposite of automated testing, totally manual and time con-
suming, and for these reasons they should only be performed for critical parts of
the code.

4.3.3.2 Dynamic Analysis


These techniques are mainly based on program execution and they are divided into
two principal categories: white box testing and black box testing.
White box testing is a strategy that examines the internal structure of the pro-
gram and derives test data from an examination of the program's logic.
Statement coverage, decision coverage, condition coverage, decision/condition
coverage and multiple condition coverage are all types of coverage metrics taken
to measure white box testing.
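The difference between these coverage levels can be made concrete with a small invented example: one test case achieves statement coverage of the function below, a second adds decision coverage, and four cases cover every combination of the two conditions:

```python
def discount(amount, is_member):
    """Invented function under test: 10% discount for members over 100."""
    price = amount
    if is_member and amount > 100:
        price = amount * 0.9
    return price

# Statement coverage: one case taking the True branch executes every line.
statement_tests = [(150, True)]

# Decision coverage additionally needs the False outcome of the if.
decision_tests = [(150, True), (150, False)]

# Multiple-condition coverage exercises every combination of the two
# conditions is_member and amount > 100.
multi_condition_tests = [(150, True), (150, False), (50, True), (50, False)]

for amount, member in multi_condition_tests:
    expected = amount * 0.9 if (member and amount > 100) else amount
    assert discount(amount, member) == expected
```

Coverage tools automate exactly this bookkeeping: they record which statements, decisions and condition combinations the executed test set has actually reached.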
Black box testing is a strategy that views the program as a black box, that is,
the tester is completely unconcerned about the internal behaviour and structure of
the program. The tester is only interested in finding circumstances in which the
program does not behave according to its specifications.
Test cases designed from formal or informal specifications by using the following
techniques are all types of black box testing:
• equivalence partitioning (test of a small specific subset of all possible inputs)
• boundary-value analysis (test cases that explore boundary conditions)
• cause-effect graphing (test cases that explore combinations of input circum-
stances)
• error guessing (technique of assuming the existence of certain probable types of
errors and writing test cases to expose these errors).
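For a hypothetical input field that accepts quantities from 1 to 100, the first two techniques suggest test cases like these (the validator itself is an invented sketch):

```python
def valid_quantity(n):
    """Invented function under test: accept integer quantities from 1 to 100."""
    return isinstance(n, int) and 1 <= n <= 100

# Equivalence partitioning: one representative input per class
# (below range, inside range, above range).
partitions = {-5: False, 50: True, 200: False}

# Boundary-value analysis: cases that explore the edges of the valid range.
boundaries = {0: False, 1: True, 100: True, 101: False}

for value, expected in {**partitions, **boundaries}.items():
    assert valid_quantity(value) == expected
```

The payoff of both techniques is the same: a small, deliberately chosen subset of inputs stands in for the full input space.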

Dynamic analysis also involves the following levels of testing:


• unit testing (test of subprograms, subroutines and procedures inside a wider
program),
• integration testing (next step of the unit test that defines the integration between
the various modules of the program),
• system testing (test of a program in all its global aspects),
• regression testing (test of a new version of a program that verifies the compati-
bility and the differences between the old and the new version),
• acceptance testing (test usually performed by the final user in order to validate
and accept the final product)
• installation testing (an unusual kind of test whose goal is not to find software
errors but to find installation errors).
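As a sketch of how two of these levels look in practice (the function under test and the test cases are invented), unit tests exercise a single subprogram in isolation, while regression tests re-run checks for past faults against each new version:

```python
import unittest

def parse_amount(text):
    """Invented unit under test: convert '1,234.50' style strings to a float."""
    return float(text.replace(",", ""))

class UnitTests(unittest.TestCase):
    """Unit testing: exercise one subprogram in isolation."""
    def test_plain_number(self):
        self.assertEqual(parse_amount("42"), 42.0)

    def test_thousands_separator(self):
        self.assertEqual(parse_amount("1,234.50"), 1234.5)

class RegressionTests(unittest.TestCase):
    """Regression testing: the check for a past fault is kept and re-run
    against every new version of the program."""
    def test_known_past_defect(self):
        self.assertEqual(parse_amount("1,000"), 1000.0)

loader = unittest.defaultTestLoader
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(UnitTests))
suite.addTests(loader.loadTestsFromTestCase(RegressionTests))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # -> True
```

Integration and system testing use the same mechanics at a larger scope: the suite simply grows to exercise module interactions and, eventually, the whole program.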
Many aspects of the system test are fundamental for automation: think of
load, stress and performance testing and the need to perform these kinds of
test without involving extra hardware and final users.

4.3.4 Debugging

Debugging is the process of finding the location of an error and correcting it.
Even if debugging is not properly part of the testing process, it is the logical
consequence of testing and should always be followed by another testing
phase.
It is extremely important to collect as much fault information as possible in or-
der to optimise the debugging phases. The process of notifying an error from the
testing environment to the debugging environment and of tracking errors is also
known as defect tracking. Automated defect tracking tools document faults, notify
and assign faults to programmers, define the category and priority of faults, keep
track of the evolution and fixing times of problems, and document re-testing phases
until the closure of the problem.
There are many methods to debug a program:
• debugging by brute force (method consisting in storage dump, in inserting print
statements throughout the program and in using automated debugging tools),
• debugging by induction (the process of proceeding from the particulars to the
whole that is by starting with the clues, the symptoms, to find the error) or de-
duction (the process of proceeding from some general theories or premises, us-
ing the processes of elimination and refinement, to arrive at the location of the
error),
• debugging by backtracking (an effective error-locating method for small pro-
grams that starts at the point in the program where the incorrect result was pro-
duced and deduces, from the observed output, what the values of the program's
variables must have been),

• debugging by testing (where the purpose of test cases is to provide information
useful in locating a suspected error).

4.3.5 Other Techniques

There are many other techniques for testing software used sometimes in particular
cases and critical applications.
We will mention:
• mutation analysis (a testing method that generates programs very similar to the
one under test and generates test cases for all these mutated programs)
• quality metrics (identification of metrics or benchmarks in order to evaluate the
quality of software)
• symbolic execution (where the program is executed symbolically, that is a
variable can take on symbolic, as well as numeric, values and where a symbolic
value is an expression containing symbolic and numeric values)
• test case generators (automated tools that, starting from specifications, generate
test data randomly for a particular program)
• simulators (tools that simulate the environment surrounding the system to test,
typically used when the real environment is too expensive or impossible to use)
• predictive models (models that estimate the number of errors remaining in a
program and determine when to stop testing and how much it will cost).

4.3.6 Tools

For almost every category of testing there are testing tools that help the automa-
tion of the testing process and make the work easier. We will mention the main
categories of these tools referring to the corresponding testing techniques and
identifying their key characteristics.

4.3.6.1 Code Coverage Tools


These tools help in the Source Code Coverage phase of testing. Historically de-
veloped for the C language, they are currently available for the most common
standard and object-oriented languages, such as C/C++, Java, Visual Basic,
Delphi and so on.
They provide an efficient method for analysing the source code of the program,
so they can only be used if the source code is available.
They should be used during the programming step but they also should be used
after every code re-engineering process. Their goal is to produce more readable
and cleaner code, and to avoid potential problems during the execution, but they

should not be confused with compilers even if some functions (e.g. warnings gen-
eration) are very similar.
They give information about the correctness of code, the number of lines of
comment compared to the number of lines of code, the variables declared but not
initialised nor used, unreachable code, loops, functions declared but never used
and so on.
They should have graphing and reporting functions because they could be used
to analyse the quality of the code and they should be integrated with the compiler
of the specific language used by developers in order to be easily used during the
programming.
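One of these measures can be illustrated with a toy check; a real code analysis tool does far more (unused variables, unreachable code, and so on), but the comment-to-code ratio conveys the kind of figure such tools report:

```python
def comment_ratio(source):
    """Toy static check: ratio of comment lines to code lines in Python source.
    A real analysis tool would also detect unused variables, unreachable code,
    functions never called, etc.; this only illustrates the style of measure."""
    comments = code = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                    # blank lines count as neither
        if stripped.startswith("#"):
            comments += 1
        else:
            code += 1
    return comments / code if code else 0.0

sample = "# setup\nx = 1\n# compute\ny = x + 1\nprint(y)\n"
print(comment_ratio(sample))   # 2 comment lines over 3 code lines
```

Integrated into the build, a check like this can flag files whose ratio falls outside an agreed range, which is exactly how such tools support quality analysis during programming.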

4.3.6.2 Capture and Playback Tools


Capture and playback tools are considered the best tools for automating
testing in all those circumstances in which the source code is not avail-
able.
In fact, they are the only tools that can be used to test applications written
in object-oriented languages, where polymorphism and dynamic binding mean
that the values assumed by functions and variables depend on the execution and
cannot be analysed by code coverage tools. They are also the best tools for func-
tional black box testing and they are fundamental in regression testing.
Capture and playback (C&P) tools record user actions and the responses of the
application into a command language script. These tools should not record only
keystrokes, mouse moves and clicks, but should work at object level, identifying
objects, such as buttons for example, by name. If the tools have this capability,
and are not position dependent, the scripts will be less sensitive to changes in the
underlying application.
C&P tools should be able to insert checks and validations of every type in the
script and the playback should reproduce the recorded user actions exactly with all
the validations inserted.
Scripts are stored in a database that is sometimes proprietary, sometimes a third
party database.
Usually these tools run on mainframes, Windows, UNIX operating systems.
With a single tool you can generally test applications developed using different
technologies such as visual languages with a graphical user interface, character
based applications, Internet/Intranet and web based applications, but there are also
tools specific for specific languages or environments.
C&P tools simplify testing by automating the repetitive, time consuming steps
involved in building test scripts and tests can be easily maintained for repeated
use. They can reduce test execution times but they can also help in the analysis of
test results creating log files or run-time messages on every action performed and
on passed and failed checks. A complete capture and playback testing tool should
also manage synchronous and asynchronous events that could affect program

execution, and it should be able to read data from an external database and insert
data automatically into the program.

4.3.6.3 Test Data Management Tools


Test data management tools are specifically designed to manage every kind of test
data. They help the tester in all those activities involving test data handling such
as:
• extraction of subsets of data from the production database in order to create a
test database
• conversions of data from a source database to a target database
• handling of data and structures from the source database to the target database
• random generation of data, restoring of test database
• comparisons between a pre-test and a post-test database in order to identify
differences between data
• handling of particular conversion rules.
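The first of these activities, extracting a subset of production data into a test database, can be sketched with Python's standard sqlite3 module; the table, columns and selection rule below are invented for the example:

```python
import sqlite3

# Hypothetical "production" database (in memory for the example).
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE customers (id INTEGER, country TEXT)")
prod.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "IT"), (2, "DE"), (3, "IT"), (4, "FR")])

# Extract a subset (here: only Italian customers) into a test database.
test_db = sqlite3.connect(":memory:")
test_db.execute("CREATE TABLE customers (id INTEGER, country TEXT)")
subset = prod.execute(
    "SELECT id, country FROM customers WHERE country = ?", ("IT",)).fetchall()
test_db.executemany("INSERT INTO customers VALUES (?, ?)", subset)

count = test_db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(count)   # -> 2
```

A dedicated test data management tool wraps this kind of extraction, conversion and restore behind a graphical interface so that non-specialists can perform it too.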
These tools should be able to read data from almost every kind of database,
from text data files or flat files to classical relational databases such as Oracle,
Informix, SQL Server, DB2 and so on. They should have a simple, user-friendly
graphical interface so that everyone, not necessarily a skilled database administrator,
is able to perform all the operations. They should also have particular
editing capabilities such as duplication of records and key field handling.

4.3.6.4 Test Plan Management Tools


Test plan management tools co-ordinate the entire testing process enabling the
tester to carry out the following tasks:
• plan and organise test requirements
• execute tests from a variety of development and automated testing tools
• incorporate rules and condition logic into test plans
• view and analyse test results
• report results
• load information easily into defect tracking systems
• take control of the entire testing process.
These tools group test cases in a visual structure similar to a tree, enabling the
tester to easily build test plans that include the testing scripts required to verify the
entire application.
By automating test execution, the tools overcome the time restrictions that of-
ten make thorough testing impossible, they run multiple test cycles and store re-
sults in a database repository. Tests can be executed interactively or in scheduled,
unattended batch sessions on a daily, weekly or monthly basis.

These tools should have the ability to manage the development of descriptive
manual tests and they should also have the ability to define pre-execution and
post-execution rules, environmental set-up and cleanup tasks.
They should have an open architecture that makes possible the integration
with as many automated testing tools as are required to test applications.
In many cases test plan management tools provide the ability to execute tests on
remote machines, so you can test an application across its distributed components
as if it were fully operational. Distributed test execution should also enable
parallel testing, allowing a large testing load to be distributed across a network
to make effective use of system resources.
These tools are usually tightly integrated with other development and manage-
ment tools. For example they can give the possibility to import test plans from
word processors and spreadsheets, to handle version control systems, to report and
graph results in reporting tools, to integrate with event debuggers, to send faults
records automatically to defect tracking tools.
Test analysis is another fundamental aspect of test plan management tools: they
store the results of each run in a database, consolidate test results from multiple
tools in a single view, and allow the tester to see the passed or failed status of test
cases. Data can also be extracted in standard report formats.
As regression testing is performed, results trend graphs make it possible to compare
results from multiple test cycles to determine the progress of application
quality.

4.3.6.5 Defect Tracking Tools


Defect tracking tools for automated testing help to establish a systematic method
for tracking software defects, from detection through resolution and verification.
By handling the time-consuming tasks of documenting and reporting defects, they
free up staff time to focus on resolving problems. They link the testing environ-
ment and the debugging and development environment by allowing the automated
submission of the defects identified by the testing tools.
They can track defects but also releases, features, testing assets and similar in-
formation.
A defect tracking tool provides a structure for recording the priority, category,
solver and status of a problem.
When a tester reports a defect, the defect can be prioritised and assigned to the
development staff for resolution. Once the developer reports the problem as re-
solved, it will automatically appear as "fixed but not yet tested" to the tester. Eve-
ryone in the test and development team can view real-time status of defects.
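The lifecycle just described can be sketched as a small state machine; the state names and transitions below are a plausible example, not the model of any particular tool:

```python
# Hypothetical defect lifecycle: allowed transitions between states.
TRANSITIONS = {
    "new":                  {"assigned"},
    "assigned":             {"fixed-not-yet-tested"},
    "fixed-not-yet-tested": {"closed", "assigned"},  # re-test passes or fails
    "closed":               set(),
}

class Defect:
    def __init__(self, summary, priority):
        self.summary, self.priority, self.state = summary, priority, "new"

    def move_to(self, new_state):
        """Enforce the lifecycle: reject transitions the workflow forbids."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

d = Defect("crash on empty input", priority="high")
d.move_to("assigned")               # the defect is assigned to a developer
d.move_to("fixed-not-yet-tested")   # the developer reports it resolved
d.move_to("closed")                 # the tester verifies the fix
print(d.state)   # -> closed
```

Encoding the workflow this way is what lets a tool show every defect's real-time status and prevent, say, a fix being closed before re-testing.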
Usually these tools have a central customisable repository database.
There is a wide range of defect tracking tools on the market, but only a few of
them are specifically designed for testing purposes. They should be able to
provide electronic mail notification when a defect arises or under specific
circumstances.

Some of them have Internet/Intranet remote access, enabling developers and
software testers to obtain up-to-date information on defects remotely with a
common Internet browser. Of course it should be possible to report and graph
the defect situation at any moment in order to manage the debugging process
easily and to better re-plan the bug-fixing phase.
The integration with source code management tools is also very important: in
fact defect tracking tools can track the association between low-level source file
modifications and high level modifications, such as bug fixes, or test versions.

4.3.6.6 Version Control Tools


Version control tools are closely connected to automated testing tools, debugging
tools and defect tracking tools, even if they are not properly considered part of the
automated testing process. They manage the development of new versions of code
and so they are used after the testing and debugging steps.
Automated testing tools identify problems and send them automatically, via de-
fect tracking tools, to the debugging team. After this stage debugging is necessary
to remove the problem, fix the error and recompile the source code, generating a
new version of the code, before the cycle returns to the testing phase. Different
versions of code are maintained by version control systems, managed in parallel
with different versions of test case execution cycles.
With a version control system you can check files in and out for reading or for
editing, keep a history of files and of the version numbers associated with each
code release, and manage concurrent access to the source code by the different
members of the development team.

Debugging Tools
The process of program debugging can be described as the activity performed
after executing a test case that revealed a fault. Debugging tools help the user to
locate errors with a precise static analysis of source code: inserting breakpoints,
executing the program step by step, watching the variables and, with the best
tools, performing event debugging.
All the debugging activity is based on the information provided by the testing
phase, so we can say that the quality and the performance of the debugging phase
heavily depend on the quality of the information sent by the automated testing
tools.

"Ad Hoc" Instruments


Even the most sophisticated tool does not always fit the customer's needs
exactly, and the more peculiar an application is, the more difficult it is to find
on the market an automated testing tool that meets its needs.
Even without considering particular software applications involving safety as-
pects, there are some critical systems - for example banking systems or stock

exchange systems - that need in-depth testing. In all these cases it could be neces-
sary to develop application specific testing tools.
There is no limit to the complexity of these "ad hoc" tools: we need only mention
that they can be very expensive in terms of money, human resources and
know-how, and it is very important to evaluate all these aspects before deciding to
write specific testing programs.

Load Testing Tools


Once we have checked an application for functionality defects, there is often the
need to verify that the application will also work well with the foreseeable number
of concurrent users.
As an application expands, effective system testing can help to determine if the
servers have the capacity to handle expected user loads and can provide acceptable
response times for user transactions.
Load testing tools provide the ability to simulate the use of an application without
involving end users or end-user equipment. They perform load testing by capturing
the transactions that an application can process, together with the underlying access
methods invoking the databases and the servers. By introducing the concept of
virtual users that replay the captured transactions, these tools can simulate physical
users.
There is a double goal in this activity: measuring end-to-end response times and
generating real traffic on the servers and in the network.
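The virtual-user idea can be sketched with one thread per simulated user, each replaying a captured transaction (stubbed here, since a real script would drive the actual servers) while end-to-end response times are collected:

```python
import threading
import time

def transaction():
    """Stub for a captured transaction; a real script would hit the server."""
    time.sleep(0.01)

def virtual_user(n_transactions, timings):
    """One virtual user replaying the transaction and timing each replay."""
    for _ in range(n_transactions):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)

timings = []
users = [threading.Thread(target=virtual_user, args=(5, timings))
         for _ in range(10)]            # 10 virtual users, 5 transactions each
for u in users:
    u.start()
for u in users:
    u.join()

print(len(timings))   # -> 50 measured end-to-end response times
```

Scaling the number of threads and the transaction mix is exactly the knob a load testing tool exposes when it varies the load and the number of virtual users.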
These tools can generate graphs and reports that show the performance of the
servers under test and highlight performance bottlenecks in the system.
Usually the captured transactions are converted into software scripts in a pro-
prietary or standard language like C. These scripts represent the actions performed
by end users in the system, and they can be modified to reproduce real end-user
activity better. Once generated, the scripts may be reused and played back as often
as necessary with different set-ups to achieve the optimum configuration.
The tools work usually with the most common databases and middleware,
mainly in a Windows or UNIX environment. During the test, the hardware and
software configurations can be varied as well as the testing pace, the load and the
number of virtual users to create a truly scalable testing environment.
Load testing tools may also be able to compute statistics on results, insert timers
and checkpoints to measure response times, and get data from an external
source.

4.3.7 Testware

What is testware? It is neither a tool nor a methodology, but it is an essential part
of the testing process: testware is "software written in order to test software".
It is useless to acquire an automated testing tool without considering the large
amount of work needed to use the tool properly.

Take capture and playback tools as an example: there is no wizard with a
magic wand that creates and tailors the scripts for us! The tester should be aware
that a great effort must be made to tailor scripts.
An example will illustrate this point.
Suppose that we need to test a large data input into a form from an external
database, and that we also have to check that, when a certain button in the form is
pressed, the result calculated on the data is consistent with a baseline stored in a
text file. The capture capability of a tool can be of great help: we could record a
single data insertion and the button press, obtaining a script with constants
inserted in the form; then we could replace the constants in the script with
variables pointing to the fields of the external database. To do this we have to
isolate the piece of code containing the insertion, turn the constants into variables,
implement an access to a database, insert this code into a "Do ... While" cycle
that stops when the data are exhausted, and then implement the comparison
between the actual result and the expected result stored in the text file. Finally we
have to decide how the script will alert us in case of failure: by putting
information in a log file, sending an e-mail message, or popping up an alert
window during the execution of the script.
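A sketch of the finished data-driven script in Python (the form-driving calls stand in for a real C&P tool's recorded actions and are stubbed so the example runs; the table, field names and failure data are invented):

```python
import sqlite3

def insert_into_form(a, b):
    """Stand-in for the recorded form-filling steps of a C&P tool."""
    pass

def press_calculate(a, b):
    """Stand-in for pressing the button; the stub computes what the
    hypothetical application would display (here, a simple sum)."""
    return a + b

# Test data held in an external database (in memory for the example).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inputs (a INTEGER, b INTEGER, expected INTEGER)")
db.executemany("INSERT INTO inputs VALUES (?, ?, ?)",
               [(1, 2, 3), (10, 5, 15), (7, 7, 99)])  # last baseline is wrong

log = []  # failures are reported to a log, one of the alerting options above
for a, b, expected in db.execute("SELECT a, b, expected FROM inputs"):
    insert_into_form(a, b)      # was a recorded constant, now database-driven
    actual = press_calculate(a, b)
    if actual != expected:      # comparison against the stored baseline
        log.append(f"FAIL: inputs ({a}, {b}) gave {actual}, expected {expected}")

print(log)   # -> ['FAIL: inputs (7, 7) gave 14, expected 99']
```

The recorded part of such a script is small; the loop, the database access and the baseline comparison are exactly the hand-written testware the text describes.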
Clearly, code has to be written, and experience teaches us that the percentage of
code inserted compared to the code recorded varies from 20% to 50%. However,
in the end we will obtain an automated script that can handle hundreds or even
thousands of data records, and we can run the script as many times as needed
without any further work.
In certain cases it could be necessary to write the test script completely from
scratch to perform particular testing actions that cannot be recorded in any case.
All the testing tools mentioned above require an amount of manual program-
ming that must be considered and planned. A good testing manager should be
aware of this and should plan the testware phase accurately in the test plan.

4.3.8 Benefits and Limits

What are the benefits of these testing tools and what are their limits?
It is hard to say without considering the specific testing environment. We can
say that the more repetitive the testing, the more convenient the use of auto-
mated testing tools; the more creative the testing, the less convenient their
use.
The best results will be obtained on repetitive testing actions in which the initial
testing effort will be paid off by re-use of the testing scripts. There are many tools
that can help testers to automate actions but no program exists that accepts a speci-
fication as input and produces an automated test script as output: this step is still
human dependent.
The best results will also be obtained only if a certain amount of time is dedi-
cated to test plans: planning a testing process is as important as development
planning. Tools will be useless without a strategic methodology setting out what
to do and how to do it.
Testing will not be completely automated by testing tools, but once the test
team is skilled the quality of software produced will be greater and a great amount
of time will be saved.

4.3.9 References

[Myers78]
Glenford J. Myers, The Art of Software Testing, Wiley-Interscience, 1978
[Ghezzi91]
Ghezzi, Fuggetta, Morasca, Morzenti, Pezzè, Ingegneria del Software, Monda-
dori Informatica, 1991
[Perry95]
William Perry, Effective Methods for Software Testing, Wiley, 1995

4.4 Classic Testing Mistakes

B. Marick
Testing Foundations

Brian Marick has 11 years' experience as a programmer, tester, and line manager,
and has been the owner of Testing Foundations since 1992.
A trainer and consultant, he also spends a good deal of time on independent prod-
uct ("black box") testing. In recent years, a considerable amount of his work has
been with mass-market software.
Brian Marick is the author of a groundbreaking book for practitioners: The
Craft of Software Testing (see Chapter 5).

It's easy to make mistakes when testing software or planning a testing effort.
Some mistakes are made so often, so repeatedly, by so many different people, that
they deserve the label Classic Mistake.
Classic mistakes cluster usefully into five groups, which I've called "themes":
• The Role of Testing: who does the testing team serve, and how does it do that?
• Planning the Testing Effort: how should the whole team's work be organised?
• Personnel Issues: who should test?
• The Tester at Work: designing, writing, and maintaining individual tests.
• Technology Rampant: quick technological fixes for hard problems.
I have two goals for this paper. First, it should identify the mistakes, put them
in context, describe why they're mistakes, and suggest alternatives. Because the
context of one mistake is usually prior mistakes, the paper is written in a narrative
style rather than as a list that can be read in any order. Second, the paper should be
a handy checklist of mistakes. For that reason, the classic mistakes are printed in a
larger bold font when they appear in the text, and they're also summarised at the
end.
Although many of these mistakes apply to all types of software projects, my
specific focus is the testing of commercial software products, not custom software
or software that is safety critical or mission critical.
This paper is essentially a series of bug reports for the testing process. You may
think some of them are features, not bugs. You may disagree with the severities I
assign. You may want more information to help in debugging, or want to volun-
teer information of your own. Any decent bug reporting system will treat the
original bug report as the first part of a conversation. So should it be with this
58 4 Perspectives

paper. Therefore, see http://www.stlabs.com/marick/c1assic.htm for an ongoing


discussion of this topic.

4.4.1 Theme One: The Role of Testing

A first major mistake people make is thinking that the testing team is responsible
for assuring quality. This role, often assigned to the first testing team in an organi-
sation, makes it the last defence, the barrier between the development team (ac-
cused of producing bad quality) and the customer (who must be protected from
them). It's characterised by a testing team (often called the "Quality Assurance
Group") that has formal authority to prevent shipment of the product. That in itself
is a disheartening task: the testing team can't improve quality, only enforce a
minimal level. Worse, that authority is usually more apparent than real. Discover-
ing that, together with the perverse incentives of telling developers that quality is
someone else's job, leads to testing teams and testers who are disillusioned, cyni-
cal, and view themselves as victims. We've learned from Deming and others that
products are better and cheaper to produce when everyone, at every stage in de-
velopment, is responsible for the quality of their work ([Deming86], [Ishikawa85]).
In practice, whatever the formal role, most organisations believe that the pur-
pose of testing is to find bugs. This is a less pernicious definition than the previous
one, but it's missing a key word. When I talk to programmers and development
managers about testers, one key sentence keeps coming up: "Testers aren't finding
the important bugs." Sometimes that's just griping, sometimes it's because the
programmers have a skewed sense of what's important, but I regret to say that all
too often it's valid criticism. Too many bug reports from testers are minor or ir-
relevant, and too many important bugs are missed.
What's an important bug? Important to whom? To a first approximation, the
answer must be "to customers". Almost everyone will nod their head upon hearing
this definition, but do they mean it? Here's a test of your organisation's maturity.
Suppose your product is a system that accepts email requests for service. As soon
as a request is received, it sends a reply that says "your request of 5/12/97 was
accepted and its reference ID is NIC-05l297-3". A tester who sends in many re-
quests per day finds she has difficulty keeping track of which request goes with
which ID. She wishes that the original request were appended to the acknowl-
edgement. Furthermore, she realises that some customers will also generate many
requests per day, so would also appreciate this feature. Would she:
1. file a bug report documenting a usability problem, with the expectation that it
   will be assigned a reasonably high priority (because the fix is clearly useful to
   everyone, important to some users, and easy to do)?
2. file a bug report with the expectation that it will be assigned "enhancement
   request" priority and disappear forever into the bug database?
3. file a bug report that yields a "works as designed" resolution code, perhaps with
   an email "nastygram" from a programmer or the development manager?
4. not bother with a bug report because it would end up in cases (2) or (3)?
If usability problems are not considered valid bugs, your project defines the
testing task too narrowly. Testers are restricted to checking whether the product
does what was intended, not whether what was intended is useful. Customers do
not care about the distinction, and testers shouldn't either.
Testers are often the only people in the organisation who use the system as
heavily as an expert. They notice usability problems that experts will see. (Formal
usability testing almost invariably concentrates on novice users.) Expert customers
often don't report usability problems, because they've been trained to know it's
not worth their time. Instead, they wait (in vain, perhaps) for a more usable prod-
uct and switch to it. Testers can prevent that lost revenue.
While defining the purpose of testing as "finding bugs important to customers"
is a step forward, it's more restrictive than I like. It means that there is no focus on
an estimate of quality (and on the quality of that estimate). Consider these two
situations for a product with five subsystems.
• 100 bugs are found in subsystem 1 before release. (For simplicity, assume that
all bugs are of the highest priority.) No bugs are found in the other subsystems.
After release, no bugs are reported in subsystem 1, but 12 bugs are found in
each of the other subsystems.
• Before release, 50 bugs are found in subsystem 1. 6 bugs are found in each of
the other subsystems. After release, 50 bugs are found in subsystem 1 and 6
bugs in each of the other subsystems.
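The totals behind the comparison can be checked with a line of arithmetic per scenario:

```python
# Pre-release bug totals for a product with five subsystems
scenario_1 = 100 + 4 * 0  # all bugs found in subsystem 1, the other four untested
scenario_2 = 50 + 4 * 6   # testing effort spread across all five subsystems
print(scenario_1, scenario_2)  # 100 74
```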
From the "find important bugs" standpoint, the first testing effort was superior.
It found 100 bugs before release, whereas the second found only 74. But I think
you can make a strong case that the second effort is more useful in practical terms.
Let me restate the two situations in terms of what a test manager might say before
release:
• "We have tested subsystem 1 very thoroughly, and we believe we've found
  almost all of the priority 1 bugs. Unfortunately, we don't know anything about
  the bugginess of the remaining four subsystems."
• "We've tested all subsystems moderately thoroughly. Subsystem 1 is still very
  buggy. The other subsystems are about 1/10th as buggy, though we're sure
  bugs remain."
This is, admittedly, an extreme example, but it demonstrates an important point.
The project manager has a tough decision: would it be better to hold on to the
product for more work, or should it be shipped now? Many factors - all rough
estimates of possible futures - have to be weighed: Will a competitor beat us to
release and tie up the market? Will dropping an unfinished feature to make it into
a particular magazine's special "Java Development Environments" issue cause us
to suffer in the review? Will critical customer X be more annoyed by a schedule
slip or by a shaky product? Will the product be buggy enough that profits will be
eaten up by support costs or, worse, a recall?13
The testing team will serve the project manager better if it concentrates first on
providing estimates of product bugginess (reducing uncertainty), then on finding
more of the bugs that are estimated to be there. That affects test planning, the topic
of the next theme.
It also affects status reporting. Test managers often err by reporting bug data
without putting it into context. Without context, project management tends to
focus on a graph like the one in Fig. 4.1.
The flattening in the curve of bugs found will be interpreted in the most opti-
mistic possible way unless you as test manager explain the limitations of the data:
• "Only half the planned testing tasks have been finished, so little is known about
half the areas in the project. There could soon be a big spike in the number of
bugs found."
• "That's especially likely because the last two weekly builds have been lightly
tested. I told the testers to take their vacations now, before the project hits
crunch mode."
• "Furthermore, based on previous projects with similar amounts and kinds of
  testing effort, it's reasonable to expect at least 45 priority-1 bugs remain undis-
  covered. Historically, that's pretty high for a successful product."
For discussions of using bug data, see [Cusumano95], [Rothman96], and
[Marick97].
Earlier I asserted that testers can't directly improve quality; they can only
measure it. That's true only if you find yourself starting testing too late. Tests
designed before coding begins can improve quality. They inform the developer of
the kinds of tests that will be run, including the special cases that will be checked.
The developer can use that information while thinking about the design, during
design inspections, and in his own developer testing.14
13 Notice how none of the decisions depend solely on the product's bugginess. That's an-
   other reason why giving the testing manager "stop ship" authority is a bad idea. He or
   she simply doesn't have enough information to use that authority wisely. The project
   manager might not have enough either, but won't have less.
14 One person who worked in a pathologically broken organisation told me that they were
   given the acceptance test in advance. They coded the program to recognise the test cases
   and return the correct answer, bypassing completely the logic that was supposed to calcu-
   late the answer. Few companies are that bad, but you could argue that programmers will
   tend to produce code "trained" for the tests. If the tests are good, that's not a problem -
   the code is also trained for the real customers. The biggest danger is that the program-
   mers will interpret the tests as narrow special cases, rather than handling the more gen-
   eral situation. That can be forestalled by writing the early test designs in terms of general
   situations rather than specific inputs: "more than two columns per page" rather than
   "three two-inch columns on an A4 page". Also, the tests given to the programmers will
   likely be supplemented by others designed later.

[Fig. 4.1 Sample Bug Trend Chart: counts of bugs found and bugs fixed (0 to 120), plotted per build (1 to 10)]

Early test design can do more than prevent coding bugs. As will be discussed in
the next theme, many tests will represent user tasks. The process of designing
them can find user interface and usability problems before expensive rework is
required. I've found problems like no user-visible place for error messages to go,
pluggable modules that didn't fit together, two screens that had to be used together
but could not be displayed simultaneously, and "obvious" functions that couldn't
be performed. Test design fits nicely into any usability engineering effort
([Nielsen93]) as a way of finding specification bugs.
I should note that involving testing early feels unnatural to many programmers
and development managers. There may be feelings that you are intruding on their
turf or not giving them the chance to make the mistakes that are an essential part
of design. Take care, especially at first, not to increase their workload or slow
them down. It may take one or two entire projects to establish your credibility and
usefulness.

4.4.2 Theme Two: Planning the Testing Effort

I'll first discuss specific planning mistakes, then relate test planning to the role of
testing.
It's not unusual to see test plans biased toward functional testing. In functional
testing, particular features are tested in isolation. In a word processor, all the op-
tions for printing would be applied, one after the other. Editing options would later
get their own set of tests.
But there are often interactions between features, and functional testing tends to
miss them. For example, you might never notice that the sequence of operations
open a document, edit the document, print the whole document, edit one page,
print that page doesn't work. But customers surely will, because they don't use
products functionally. They have a task orientation. To find the bugs that custom-
ers see - that are important to customers - you need to write tests that cross func-
tional areas by mimicking typical user tasks. This type of testing is called scenario
testing, task-based testing, or use-case testing.
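As a sketch of the idea, the following Python fragment mimics such a task-oriented test; the Document class is purely hypothetical, standing in for the product under test rather than any real word-processor API:

```python
# A sketch of a scenario test that crosses functional areas; Document is a
# hypothetical stand-in for the product, not a real API
class Document:
    def __init__(self):
        self.pages = ["page 1 text"]

    def edit(self, page, text):
        self.pages[page] = text

    def print_pages(self, pages):
        return [self.pages[p] for p in pages]

# Mimic one user task: open, edit, print the whole document, edit one page,
# print that page - the kind of sequence isolated functional tests miss
doc = Document()
doc.edit(0, "edited once")
assert doc.print_pages(range(len(doc.pages))) == ["edited once"]
doc.edit(0, "edited twice")
assert doc.print_pages([0]) == ["edited twice"]
print("scenario completed")
```

Each assertion checks a step of the task in sequence, so an interaction bug between editing and printing surfaces even when each feature passes its own functional tests.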
A bias toward functional testing also under-emphasises configuration testing.
Configuration testing checks how the product works on different hardware and
when combined with different third party software. There are typically many
combinations that need to be tried, requiring expensive labs stocked with hardware
and much time spent setting up tests, so configuration testing isn't cheap. But, it's
worth it when you discover that your standard in-house platform which "entirely
conforms to industry standards" actually behaves differently from most of the
machines on the market.
Both configuration testing and scenario testing test global, cross-functional as-
pects of the product. Another type of testing that spans the product checks how it
behaves under stress (a large number of transactions, very large transactions, a
large number of simultaneous transactions). Putting stress and load testing off to
the last minute is common, but it leaves you little time to do anything substantive
when you discover your product doesn't scale up to more than 12 users.15
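A minimal sketch of what an early load probe can look like, with a hypothetical process_transaction standing in for the system under test:

```python
import concurrent.futures

def process_transaction(n):
    # Hypothetical stand-in for one transaction against the system under test
    return n * 2

# Fire 50 simultaneous transactions and verify every one completes correctly;
# a product that doesn't scale typically fails or hangs under this pattern
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(process_transaction, range(50)))

assert results == [n * 2 for n in range(50)]
print("handled", len(results), "concurrent transactions")
```

Even a crude probe like this, run early, gives time to react; the same probe run in the last week only confirms the bad news.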
Two related mistakes are not testing the documentation and not testing installa-
tion procedures. Testing the documentation means checking that all the proce-
dures and examples in the documentation work. Testing installation procedures is
a good way to avoid making a bad first impression.

15 Failure to apply particular types of testing is another reason why developers complain
that testers aren't finding the important bugs. Developers of an operating system could be
spending all their time debugging crashes of their private machines, crashes due to net-
working bugs under normal load. The testers are doing straight "functional tests" on iso-
lated machines, so they don't find bugs. The bugs they do find are not more serious than
crashes (usually defined as highest severity for operating systems), and they're probably
less.

4.4.2.1 How about Avoiding Testing Altogether?


At a conference last year, I met (separately) two depressed testers who told me
their management was of the opinion that the World Wide Web could reduce
testing costs. "Look at [wildly successful internet company]. They distribute betas
over the network and get their customers to do the testing for free!" The Windows
95 beta program is also cited in similar ways.
Beware of an over-reliance on beta testing. Beta testing seems to give you test
cases representative of customer use - because the test cases are customer use.
Also, bugs reported by customers are by definition those important to customers.
However, there are several problems:
• The customers probably aren't that representative. In the common high-tech
  marketing model,16 beta users, especially those of the "put it on your web site
  and they will download" sort, are the early adopters, those who like to tinker
  with new technologies. They are not the pragmatists, those who want to wait
  until the technology is proven and safe to adopt. The usage patterns of these
  two groups are different, as are the kinds of bugs they consider important. In
  particular, early adopters have a high tolerance for bugs with workarounds and
  for bugs that "just go away" when they reload the program. Pragmatists, who
  are much less tolerant, make up the large majority of the market.
• Even of those beta users who actually use the product, most will not use it seri-
ously. They will give it the equivalent of a quick test drive, rather than taking
the whole family for a two week vacation. As any car buyer knows, the test
drive often leaves unpleasant features undiscovered.
• Beta users - just like customers in general - don't report usability problems
  unless prompted. They simply silently decide they won't buy the final version.
• Beta users - just like customers in general - often won't report a bug, espe-
cially if they're not sure what they did to cause it, or if they think it is obvious
enough that someone else must have already reported it.
• When beta users report a bug, the bug report is often unusable. It costs much
more time and effort to handle a user bug report than one generated internally.
Beta programs can be useful, but they require careful planning and monitoring
if they are to do more than give a warm fuzzy feeling that at least some customers
have used the product before it's inflicted on all of them. See [Kaner93] for a brief
description.
The one situation in which beta programs are unequivocally useful is in con-
figuration testing. For any possible screwy configuration, you can find a beta user
who has it. You can do much more configuration testing than would be possible in
an in-house lab (or even perhaps an outsourced testing agency). Beta users won't
do as thorough a job as a trained tester, but they'll catch gross errors of the "Back-
upBuster doesn't work on this brand of 'compatible' floppy tape drive" sort.
16 See [Moore91] or [Moore95]. I briefly describe this model in a review of Moore's books,
   available through Pure Atria's book review pages (http://www.pureatria.com).

Beta programs are also useful for building word of mouth advertising, getting
"first glance" reviews in magazines, supporting third-party vendors who will build
their product on top of yours, and so on. Those are properly marketing activities,
not testing.

4.4.2.2 Planning and Re-planning in Support of the Role of Testing


Each of the types of testing described above, including functional testing, reduces
uncertainty about a particular aspect of the product. When done, you have confi-
dence that some functional areas are less buggy, others more. The product either
usually works on new configurations, or it doesn't.17
There's a natural tendency toward finishing one testing task before moving on
to the next, but that may lead you to discover bad news too late. It's better to know
something about all areas than everything about a few. When you've discovered
where the problem areas lie, you can test them to greater depth as a way of helping
the developers raise the quality by finding the important bugs.18
Strictly, I have been over-simplistic in describing testing's role as reducing un-
certainty. It would be better to say "risk-weighted uncertainty". Some areas in the
product are riskier than others, perhaps because they're used by more customers or
because failures in that area would be particularly severe. Riskier areas require
more certainty. Failing to correctly identify risky areas is a common mistake, and
it leads to misallocated testing effort. There are two sound approaches for identify-
ing risky areas:
• Ask everyone you can for their opinion. Gather data from developers, market-
ers, technical writers, customer support people, and whatever customer repre-
sentatives you can find. See [Kaner96a] for a good description of this kind of
collaborative test planning.
• Use historical data. Analysing bug reports from past products (especially those
from customers, but also internal bug reports) helps tell you what areas to ex-
plore in this project.
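The second approach can start very simply. This sketch assumes, hypothetically, that past bug reports have been tagged with the product area they hit:

```python
from collections import Counter

# Hypothetical past-release bug reports, each tagged with a product area
past_bugs = ["printing", "printing", "editing", "install",
             "printing", "editing", "printing"]

# Areas with the most historical bugs are candidates for deeper testing
by_area = Counter(past_bugs)
print(by_area.most_common(2))  # [('printing', 4), ('editing', 2)]
```

Even this crude tally steers effort toward historically risky areas; a real analysis would weight customer-reported bugs more heavily than internal ones.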

17 I use "confidence" in its colloquial rather than its statistical sense. Conventional testing
   that searches specifically for bugs does not allow you to make statements like "this prod-
   uct will run on 95±5% of Wintel machines". In that sense, it's weaker than statistical or
   reliability testing, which uses statistical profiles of the customer environment to both find
   bugs and make failure estimates. (See [Dyer92], [Lyu96], and [Musa87].) Statistical test-
   ing can be difficult to apply, so I concentrate on a search for bugs as the way to get a us-
   able estimate. A lack of statistical validity doesn't mean that bug numbers give you noth-
   ing but "warm and fuzzy (or cold and clammy) feelings". Given a modestly stable testing
   process, development process, and product line, bug numbers lead to distinctly better de-
   cisions, even if they don't come with p-values or statistical confidence intervals.
18 It's expensive to test quality into the product, but it may be the only alternative. Code
   redesigns and rewrites may not be an option.

4.4.2.3 "So, Winter's early this Year. We're still going to Invade
Russia."
Good testers are systematic and organised, yet they are exposed to all the chaos
and twists and turns and changes of plan typical of a software development pro-
ject. In fact, the chaos is magnified by the time it gets to testers, because of their
position at the end of the food chain and typically low status.19 One unfortunate
reaction is sticking stubbornly to the test plan. Emotionally, this can be very
satisfying: "They can flail around however they like, but I'm going to hunker
down and do my job." The problem is that your job is not to write tests. It's to find
the bugs that matter in the areas of greatest uncertainty and risk, and ignoring
changes in the reality of the product and project can mean that your testing
becomes irrelevant.20
That's not to say that testers should jump to readjust all their plans whenever
there's a shift in the wind, but my experience is that more testers let their plans
fossilise than overreact to project change.

4.4.3 Theme Three: Personnel Issues

Fresh out of college, I got my first job as a tester. I had been hired as a developer,
and knew nothing about testing, but, as they said, "we don't know enough about
you yet, so we'll put you somewhere where you can't do too much damage". In
due course, I "graduated" to development.
Using testing as a transitional job for new programmers is one of the two clas-
sic mistaken ways to staff a testing organisation. It has some virtues. One is that
you really can keep bad hires away from the code. A bozo in testing is often less
dangerous than a bozo in development. Another is that the developer may learn
something about testing that will be useful later. (In my case, it founded a career.)
And it's a way for the new hire to learn the product while still doing some useful
work.
The advantages are outweighed by the disadvantage: the new hire can't wait to
get out of testing. That's hardly conducive to good work. You could argue that the
testers have to do good work to get "paroled". Unfortunately, because people tend
to be as impressed by effort as by results, vigorous activity - especially activity
that establishes credentials as a programmer - becomes the way out. As a result,
the fledgling tester does things like becoming the expert in the local programma-
ble editor or complicated freeware tool. That, at least, is a potentially useful role,
though it has nothing to do with testing. More dangerous is vigorous but misdi-
rected testing activity; namely, test automation. (See the last theme.)

19 How many proposed changes to a product are rejected because of their effect on the
   testing schedule? How often does the effect on the testing team even cross a developer's
   or marketer's mind?
20 This is yet another reason why developers complain that testers aren't finding the impor-
   tant bugs. Because of market pressure, the project has shifted to an Internet focus, but the
   testers are still using and testing the old "legacy" interface instead of the now critically
   important web browser interface.
Even if novice testers were well guided, having so much of the testing staff be
transients could only work if testing is a shallow algorithmic discipline. In fact,
good testers require deep knowledge and experience.
The second classic mistake is recruiting testers from the ranks of failed pro-
grammers. There are plenty of good testers who are not good programmers, but a
bad programmer likely has some work habits that will make him a bad tester, too.
For example, someone who makes lots of bugs because he's inattentive to detail
will miss lots of bugs for the same reason.
So how should the testing team be staffed? If you're willing to be part of the
training department, go ahead and accept new programmer hires.21 Accept as ap-
plicants programmers who you suspect are rejects (some fraction of them really
have gotten tired of programming and want a change) but interview them as you
would an outside hire. When interviewing, concentrate less on formal qualifica-
tions than on intelligence and the character of the candidate's thought.
A good tester has these qualities:22
• methodical and systematic.
• tactful and diplomatic (but firm when necessary).
• sceptical, especially about assumptions, and wants to see concrete evidence.
• able to notice and pursue odd details.
• good written and verbal skills (for explaining bugs clearly and concisely).
• a knack for anticipating what others are likely to misunderstand. (This is useful
  both in finding bugs and writing bug reports.)
• a willingness to get one's hands dirty, to experiment, to try something to see
what happens.
Be especially careful to avoid the trap of testers who are not domain experts.
Too often, the tester of an accounting package knows little about accounting.
Consequently, she finds bugs that are unimportant to accountants and misses ones
that are. Further, she writes bug reports that make serious bugs seem irrelevant. A
programmer may not see past the unrepresentative test to the underlying important
problem. (See the discussion of reporting bugs in the next theme.)
Domain experts may be hard to find. Try to find a few. And hire testers who are
quick studies and are good at understanding other people's work patterns.
Two groups of people are readily at hand and often have those skills. But test-
ing teams often do not seek out applicants from the customer service staff or the
technical writing staff. The people who field email or phone problem reports de-
velop, if they're good, a sense of what matters to the customer (at least to the
vocal customer) and the best are very quick on their mental feet.

21 Some organisations rotate all developers through testing. Well, all developers except
   those with enough clout to refuse. And sometimes people not in great demand don't seem
   ever to rotate out. I've seen this approach work, but it's fragile.
22 See also the list in [Kaner93], chapter 15.
Like testers, technical writers often also lack detailed domain knowledge.
However, they're in the business of translating a product's behaviour into terms
that make sense to a user. Good technical writers develop a sense of what's impor-
tant, what's confusing, and so on. Those areas that are hard to explain are often
fruitful sources of bugs. (What confuses the user often also confuses the pro-
grammer.)
One reason these two groups are not tapped is an insistence that testers be able
to program. Programming skill brings with it certain advantages in bug hunting. A
programmer is more likely to find the number 2,147,483,648 interesting than an
accountant will. (It overflows a signed integer on most machines.) But such tricks
of the trade are easily learned by competent non-programmers, so not having them
is a weak reason for turning someone down.
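That particular number is 2^31, and its effect on a fixed-width field can be shown in a few lines of Python using the standard struct module:

```python
import struct

# 2,147,483,648 is 2**31, one past the largest 32-bit signed value (2**31 - 1)
value = 2**31

# Reinterpret the same 32 bits as a signed integer: the value wraps negative,
# exactly the kind of boundary a tester wants the product to survive
wrapped, = struct.unpack("<i", struct.pack("<I", value))
print(wrapped)  # -2147483648
```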
If you hire according to these guidelines, you will avoid a testing team that
lacks diversity. All of the members will lack some skills, but the team as a whole
will have them all. Over time, in a team with mutual respect, the non-programmers
will pick up essential titbits of programming knowledge, the programmers will
pick up domain knowledge, and the people with a writing background will teach
the others how to deconstruct documents.
All testers - but non-programmers especially - will be hampered by a physical
separation between developers and testers. A smooth working relationship be-
tween developers and testers is essential to efficient testing. Too much valuable
information is unwritten; the tester finds it by talking to developers. Developers
and testers must often work together in debugging; that's much harder to do re-
motely. Developers often dismiss bug reports too readily, but it's harder to do that
to a tester you eat lunch with.
Remote testing can be made to work - I've done it - but you have to be careful.
Budget money for frequent working visits, and pay attention to interpersonal is-
sues.
Some believe that programmers can't test their own code. On the face of it, this
is false: programmers test their code all the time, and they do find bugs. Just not
enough of them, which is why we need independent testers.
But if independent testers are testing, and programmers are testing (and inspect-
ing), isn't there a potential duplication of effort? And isn't that wasteful? I think
the answer is yes. Ideally, programmers would concentrate on the types of bugs
they can find adequately well, and independent testers would concentrate on the
rest.
The bugs programmers can find well are those where their code does not do
what they intended. For example, a reasonably trained, reasonably motivated pro-
grammer can do a perfectly fine job finding boundary conditions and checking
whether each known equivalence class is handled. What programmers do poorly is
discovering overlooked special cases (especially error cases), bugs due to the
interaction of their code with other people's code (including system-wide proper-
ties like deadlocks and performance problems), and usability problems.
Crudely put,23 good programmers do functional testing, and testers should do
everything else. Recall that I earlier claimed an over-concentration on functional
testing is a classic mistake. Decent programmer testing magnifies the damage it
does.
Of course, decent programmer testing is relatively rare, because programmers
are neither trained nor motivated to test. This is changing, gradually, as companies
realise it's cheaper to have bugs found and fixed quickly by one person, instead of
more slowly by two. Until then, testers must do both the testing that programmers
can do and the testing only testers can do, but must take care not to let functional
testing squeeze out the rest.

4.4.4 Theme Four: The Tester at Work

When testing, you must decide how to exercise the program, then do it. The doing
is ever so much more interesting than the deciding. A tester's itch to start breaking
the program is as strong as a programmer's itch to start writing code - and it has
the same effect: design work is skimped, and quality suffers. Paying more atten-
tion to running tests than to designing them is a classic mistake. A tester who is
not systematic, who does not spend time laying out the possibilities in advance,
will overlook special cases. They may be the same subtle ones that the program-
mers overlooked.
Concentration on execution also results in unreviewed test designs. Just like
programmers, testers can benefit from a second pair of eyes. Reviews of test de-
signs needn't be as elaborate as product design reviews, but a short check of the
testing approach and the resulting tests can find significant omissions at low cost.

4.4.4.1 What is a Test Design?


A test design should contain a description of the set-up (including machine con-
figuration for a configuration test), inputs given to the product, and a description
of expected results. One common mistake is being too specific about test inputs
and procedures.
Let's assume manual test implementation for the moment. A related argument
for automated tests will be discussed in the next section. Suppose you're testing a
banking application. Here are two possible test designs:

23 Independent testers will also provide a "safety net" for programmer testing. A certain
amount of functional testing might be planned, or it might be a side effect of the other
types of testing being done.
4.4 Classic Testing Mistakes 69

Design 1
Setup: initialise the balance in account 12 with $100.
Procedure:
Start the program.
Type 12 in the Account window.
Press OK.
Click on the 'Withdraw' toolbar button.
In the withdraw popup dialog,
click on the 'all' button.
Press OK.
Expect to see a confirmation popup that
says "You are about to withdraw all
the money from this account. Continue?"
Press OK.
Expect to see a 0 balance in the account window.
Separately query the database to check
that the zero balance has been posted.
Exit the program with File->Exit.

Design 2
Setup: initialise the balance with a positive value.
Procedure:
Start the program on that account.
Withdraw all the money from the account
using the 'all' button.
It's an error if the transaction happens
without a confirmation popup.
Immediately thereafter:
- Expect a $0 balance to be displayed.
- Independently query the database to check
that the zero balance has been posted.

The first design style has these advantages:


• The test will always be run the same way. You are more likely to be able to
reproduce the bug. So will the programmer.
• It details all the important expected results to check. Imprecise expected results
make failures harder to notice. For example, a tester using the second style
would find it easier to overlook a spelling error in the confirmation popup, or
even that it was the wrong popup.
• Unlike the second style, you always know exactly what you've tested. In the
second style, you couldn't be sure that you'd ever gotten to the Withdraw dialog
via the toolbar. Maybe the menu was always used. Maybe the toolbar button
doesn't work at all!
• By spelling out all inputs, the first style prevents testers from carelessly overus-
ing simple values. For example, a tester might always test accounts with $100,
rather than using a variety of small and large balances. (Either style should in-
clude explicit tests for boundary and special values.)
However, there are also some disadvantages:
• The first style is more expensive to create.
• The inevitable minor changes to the user interface will break it, so it's more
expensive to maintain.
• Because each run of the test is exactly the same, there's no chance that a varia-
tion in procedure will stumble across a bug.
• It's hard for testers to follow a procedure exactly. When one makes a mistake
- pushes the wrong button, for example - will she really start over?
On balance, I believe the negatives often outweigh the positives, provided there
is a separate testing task to check that all the menu items and toolbar buttons are
hooked up. (Not only is a separate task more efficient, it's less error-prone. You're
less likely to accidentally omit some buttons.)
I do not mean to suggest that test cases should not be rigorous, only that they
should be no more rigorous than is justified, and that we testers sometimes err
on the side of uneconomical detail.
Detail in the expected results is less problematic than in the test procedure, but
too much detail can focus the tester's attention too much on checking against the
script he's following. That might encourage another classic mistake: not noticing
and exploring "irrelevant" oddities. Good testers are masters at noticing "some-
thing funny" and acting on it. Perhaps there is a brief flicker in some toolbar but-
ton which, when investigated, reveals a crash. Perhaps an operation takes an oddly
long time, which suggests to the attentive tester that increasing the size of an "ir-
relevant" dataset might cause the program to slow to a crawl. Good testing is a
combination of following a script and using it as a jumping-off point for an explo-
ration of the product.
An important special case of overlooking bugs is checking that the product
does what it's supposed to do, but not that it doesn't do what it isn't supposed to
do. As an example, suppose you have a program that updates a health care ser-
vice's database of family records. A test adds a second child to Dawn Marick's
record. Almost all testers would check that, after the update, Dawn now has two
children. Some testers - those who are clever, experienced, or subject matter ex-
perts - would check that Dawn Marick's spouse, Brian Marick, also now has two
children. Relatively few testers would check that no one else in the database has
had a child added. They would miss a bug where the programmer over-generalised
and assumed that all "family information" updates should be applied both to a
patient and to all members of her family, giving Paul Marick (aged 2) a child.

Ideally, every test should check that all data that should be modified has been
modified and that all other data has been unchanged. With forethought, that can be
built into automated tests. Complete checking may be impractical for manual tests,
but occasional quick scans for data that might be corrupted can be valuable.
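To make the idea concrete, here is a minimal Python sketch of building "nothing else changed" checking into an automated test, following the Marick family example above. The schema, names, and the deliberately buggy update are invented for illustration; the point is the whole-table snapshot taken before and after:

```python
import sqlite3

def snapshot(conn):
    """Capture every row so 'everything else unchanged' can be checked later."""
    return {row[0]: row for row in conn.execute("SELECT * FROM patients")}

def add_child(conn, family):
    # Deliberately over-generalised, like the bug in the text: it updates
    # every member of the family, giving two-year-old Paul a child.
    conn.execute("UPDATE patients SET children = children + 1 WHERE family = ?",
                 (family,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT,"
             " family TEXT, children INTEGER)")
conn.executemany("INSERT INTO patients VALUES (?, ?, ?, ?)",
                 [(1, "Dawn", "Marick", 1),
                  (2, "Brian", "Marick", 1),
                  (3, "Paul", "Marick", 0)])

before = snapshot(conn)
add_child(conn, "Marick")      # intended: add a child to Dawn and Brian only
after = snapshot(conn)

# The shallow check passes: both parents now have two children.
assert after[1][3] == 2 and after[2][3] == 2
# Only the full sweep over all other rows exposes the bug.
unexpected = [pid for pid in before
              if pid not in (1, 2) and before[pid] != after[pid]]
print("rows changed that should not have been:", unexpected)
```

Run against the over-generalised update, the shallow check passes while the full sweep reports Paul's row as an unexpected change - the bug almost every tester would have missed.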

4.4.4.2 Testing should not be Isolated Work


Here's another version of the test we've been discussing:

Design 3
Withdraw all with confirmation and normal check for O.

That means the same thing as Design 2 - but only to the original author. Test
suites that are understandable only by their owners are ubiquitous. They cause
many problems when their owners leave the company; sometimes many months'
worth of work has to be thrown out.
I should note that designs as detailed as Designs 1 or 2 often suffer a similar
problem. Although they can be run by anyone, not everyone can update them
when the product's interface changes. Because the tests do not list their purposes
explicitly, updates can easily make them test a little less than they used to. (Consider,
for example, a suite of tests in the Design 1 style: how hard will it be to
make sure that all the user interface controls are touched in the revised tests? Will
the tester even know that's a goal of the suite?) Over time, this leads to what I call
"test suite decar," in which a suite full of tests runs but no longer tests much of
2
anything at all.
Another classic mistake involves the boundary between the tester and pro-
grammer. Some products are mostly user interface; everything they do is visible
on the screen. Other products are mostly internals; the user interface is a "thin
pipe" that shows little of what happens inside. The problem is that testing has to
use that thin pipe to discover failures. What if complicated internal processing
produces only a "yes or no" answer? Any given test case could trigger many internal
faults that, through sheer bad luck, don't produce the wrong answer.25
In such situations, testers sometimes rely solely on programmer ("unit") testing.
In cases where that's not enough, testing only through the user-visible interface is
a mistake. It is far better to get the programmers to add "testability hooks" or
"testpoints" that reveal selected internal state. In essence, they convert a product
like that shown in Fig. 4.2 into one like shown in Fig. 4.3.

24 The purpose doesn't need to be listed with the test. It may be better to have a central
document describing the purposes of a group of tests, perhaps in tabular form. Of course,
then you have to keep that document up to date.
25 This is an example of the formal notion of "testability". See [Friedman95] or [Voas91]
for an academic treatment.

[Figure: a "User Interface" box sitting atop the "Guts of the Product"]
Fig. 4.2 Program without testability hooks

[Figure: a "User Interface" box and a "Testing Interface" box, both atop the "Guts of the Product"]
Fig. 4.3 Program with testing interface

It is often difficult to convince programmers to add test support code to the prod-
uct. (Actual quote: "I don't want to clutter up my code with testing crud.") Perse-
vere, start modestly, and take advantage of these facts:
• The test support code is often a simple extension of the debugging support code
programmers write anyway.26
• A small amount of test support code often goes a long way.
A common objection to this approach is that the test support code must be
compiled out of the final product (to avoid slowing it down). If so, tests that use
the testing interface "aren't testing what we ship". It is true that some of the tests
won't run on the final version, so you may miss bugs. But, without testability
code, you'll miss bugs that don't reveal themselves through the user interface. It's
a risk trade-off, and I believe that adding test support code usually wins. See
[Marick95], chapter 13, for more details.
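As an illustration of the idea - not code from any real product - the following Python sketch shows a "thin pipe" component whose public answer is only yes or no, plus a testpoint that reveals the internal decision trail. The class name, the flag, and the scoring rule are all invented for this example:

```python
# A sketch of a "testability hook": the public interface answers only yes/no,
# but a hook exposes selected internal state so tests can catch faults that
# happen not to flip the final answer.

TESTPOINTS_ENABLED = True  # would be off (or compiled out) in the shipped build

class LoanApprover:
    def __init__(self):
        self._trace = []                 # internal state, revealed on request

    def _record(self, step, value):
        if TESTPOINTS_ENABLED:
            self._trace.append((step, value))

    def approve(self, income, debt):
        ratio = debt / income
        self._record("debt_ratio", ratio)
        score = 700 if ratio < 0.4 else 550
        self._record("credit_score", score)
        return score >= 600              # the "thin pipe": all the caller sees

    def testpoint_trace(self):
        """Test-support interface; not part of the product's normal API."""
        return list(self._trace)

approver = LoanApprover()
assert approver.approve(income=50000, debt=10000) is True
# The yes/no answer is right, but the hook lets a test check *why*:
print(approver.testpoint_trace())
```

In a shipped build the flag would be off, which is exactly the risk trade-off discussed above: the hook costs little to add, and without it any fault in the intermediate computation is invisible unless it changes the final answer.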
In one case, there's an alternative to having the programmer add code to the
product: have a tool do it. Commercial tools like Purify, Boundschecker, and Sen-
tinel automatically add code that checks for certain classes of failures (such as

26 For example, the Java language encourages programmers to use the toString method to
make internal objects printable. A programmer doesn't have to use it, since the debugger
lets her see all the values in any object, but it simplifies debugging for objects she'll look
at often. All testers need (roughly) is a way to call toString from some external interface.

memory leaks).27 They provide a narrow, specialised testing interface. For market-
ing reasons, these tools are sold as programmer debugging tools, but they're
equally test support tools, and I'm amazed that testing groups don't use them as a
matter of course.
Testability problems are exacerbated in distributed systems like conventional
client/server systems, multi-tiered client/server systems, Java applets that provide
smart front-ends to web sites, and so forth. Too often, tests of such systems
amount to shallow tests of the user interface component because that's the only
component that the tester can easily control.

4.4.4.3 Finding Failures is only the Start


It's not enough to find a failure; you must also report it. Unfortunately, poor bug
reporting is a classic mistake. Tester bug reports suffer from five major problems:
• They do not describe how to reproduce the bug. Either no procedure is given,
or the given procedure doesn't work. Either case will likely get the bug report
shelved.
• They don't explain what went wrong. At what point in the procedure does the
bug occur? What should happen there? What actually happened?
• They are not persuasive about the priority of the bug. Your job is to have the
seriousness of the bug accurately assessed. There's a natural tendency for
programmers and managers to rate bugs as less serious than they are. If you
believe a bug is serious, explain why a customer would view it the way you do.28
If you found the bug with an odd case, take the time to reproduce it with a more
obviously common or compelling case.
• They do not help the programmer in debugging. This is a simple cost/benefit
trade-off. A small amount of time spent simplifying the procedure for reproduc-
ing the bug or exploring the various ways it could occur may save a great deal
of programmer time.
• They are insulting, so they poison the relationship between developers and
testers.
[Kaner93] has an excellent chapter (5) on how to write bug reports. Read it.
Not all bug reports come from testers. Some come from customers. When that
happens, it's common for a tester to write a regression test that reproduces the bug
in the broken version of the product. When the bug is fixed, that test is used to
check that it was fixed correctly.
However, adding only regression tests is not enough. A customer bug report
suggests two things:
27 For a list of such commercial tools, see http://www.stlabs.com/marick/faqs/tools.htm.
Follow the link to "Other Test Implementation Tools".
28 Cem Kaner suggests something even better: have the person whose budget will be
directly affected explain why the bug is important. The customer service manager will
speak more authoritatively about those installation bugs than you could.

• That area of the product is buggy. It's well known that bugs tend to cluster.29
• That area of the product was inadequately tested. Otherwise, why did the bug
originally escape testing?
An appropriate response to several customer bug reports in an area is to sched-
ule more thorough testing for that area. Begin by examining the current tests (if
they're understandable) to determine their systematic weaknesses.
Finally, every bug report is a gift from a customer that tells you how to test bet-
ter in the future. A common mistake is failing to take notes for the next testing
effort. The next product will be somewhat like this one, the bugs will be somewhat
like these, and the tests useful in finding those bugs will also be somewhat like the
ones you just ran. Mental notes are easy to forget, and they're hard to hand to a
new tester. Writing is a wonderful human invention: use it. Both [Kaner93] and
[Marick95] describe formats for archiving test information, and both contain gen-
eral-purpose examples.

4.4.5 Theme Five: Technology Run Rampant

Test automation is based on a simple economic proposition:


• If a manual test costs $X to run the first time, it will cost just about $X to run
each time thereafter, whereas:
• If an automated test costs $Y to create, it will cost almost nothing to run from
then on.
$Y is bigger than $X. I've heard estimates ranging from 3 to 30 times as big,
with the most commonly cited number seeming to be 10. Suppose 10 is correct for
your application and your automation tools. Then you should automate any test
that will be run more than 10 times.
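The break-even arithmetic can be written down directly. This small Python sketch merely restates the proposition; the function name and the simplifying assumption that automated runs are free are mine, not part of any tool:

```python
def worth_automating(manual_cost, automation_multiplier, expected_runs):
    # Total manual cost over the test's life vs. the one-time automation cost.
    # Running the automated test thereafter is treated as free, as in the
    # simplified economic model above.
    return expected_runs * manual_cost > automation_multiplier * manual_cost

# With the commonly cited 10x multiplier, a test run 25 times pays off,
# but one run only 3 times does not.
print(worth_automating(manual_cost=1, automation_multiplier=10, expected_runs=25))
print(worth_automating(manual_cost=1, automation_multiplier=10, expected_runs=3))
```

Note that a test run exactly 10 times breaks even in this model; the text's rule of "more than 10 times" corresponds to the strict comparison.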
A classic mistake is to ignore these economics, attempting to automate all tests,
even those that won't be run often enough to justify it. What tests clearly justify
automation?
Stress or load tests may be impossible to implement manually. Would you have
a tester execute and check a function 1000 times? Are you going to sit 100 people
down at 100 terminals?
Nightly builds are becoming increasingly common. (See [McConnell96] or
[Cusumano95] for descriptions of the procedure.) If you build the product nightly,
you must have an automated "smoke test suite". Smoke tests are those that are run
after every build to check for grievous errors.
Configuration tests may be run on dozens of configurations.
The other kinds of tests are less clear-cut. Think hard about whether you'd
rather have automated tests that are run often or ten times as many manual tests,

29 That's true even if the bug report is due to a customer misunderstanding. Perhaps this
area of the product is just too hard to understand.

each run once. Beware of irrational, emotional reasons for automating, such as
testers who find programming automated tests more fun, a perception that auto-
mated tests will lead to higher status (everything else is "monkey testing"), or a
fear of not rerunning a test that would have found a bug (thus leading you to
automate it, leaving you without enough time to write a test that would have found
a different bug).
You will likely end up in a compromise position, where you have:
• a set of automated tests that are run often.
• a well-documented set of manual tests. Subsets of these can be rerun as neces-
sary. For example, when a critical area of the system has been extensively
changed, you might rerun its manual tests. You might run different samples of
this suite after each major build.30
• a set of undocumented tests that were run once (including exploratory "bug
bash" tests).
Beware of expecting to rerun all manual tests. You will become bogged down
rerunning tests with low bug-finding value, leaving yourself no time to create new
tests. You will waste time documenting tests that don't need to be documented.
You could automate more tests if you could lower the cost of creating them.
That's the promise of using GUI capture/replay tools to reduce test creation cost.
The notion is that you simply execute a manual test, and the tool records what you
do. When you manually check the correctness of a value, the tool remembers that
correct value. You can then later play back the recording, and the tool will check
whether all checked values are the same as the remembered values.
There are two variants of such tools. What I call the first generation tools capture
raw mouse movements or keystrokes and take snapshots of the pixels on the
screen. The second generation tools (often called "object oriented") reach into the
program and manipulate underlying data structures (widgets or controls).31
First generation tools produce un-maintainable tests. Whenever the screen lay-
out changes in the slightest way, the tests break. Mouse clicks are delivered to the
wrong place, and snapshots fail in irrelevant ways that nevertheless have to be
checked. Because screen layout changes are common, the constant manual updat-
ing of tests becomes insupportable.
Second generation tools are applicable only to tests where the underlying data
structures are useful. For example, they rarely apply to a photograph editing tool,
where you need to look at an actual image - at the actual bitmap. They also tend

30 An additional benefit of automated tests is that they can be run faster than manual tests.
That allows you to reduce the time between completion of a build and completion of its
testing. That can be especially important in the final builds, if only to avoid pressure
from executives itching to ship the product. You're trading fewer tests for faster time to
market. That can be a reasonable trade-off, but it doesn't affect the core of my argument,
which is that not all tests should be automated.
31 These are, in effect, another example of tools that add test support code to the program.

not to work with custom controls. Heavy users of capture/replay tools seem to
spend an inordinate amount of time trying to get the tool to deal with the special
features of their program - which raises the cost of test automation.
Second generation tools do not guarantee maintainability either. Suppose a ra-
dio button is changed to a pull-down list. All of the tests that use the old controls
will now be broken.
GUI interface changes are of course common, especially between releases.
Consider carefully whether an automated test that must be recaptured after GUI
changes is worth having. Keep in mind that it can be hard to figure out what a
captured test is attempting to accomplish unless it is separately documented.
As a rule of thumb, it's dangerous to assume that an automated test will pay for
itself this release, so your test must be able to survive a reasonable level of GUI
change. I believe that capture/replay tests, of either generation, are rarely robust
enough.
An alternative approach to capture/replay is scripting tests. (Most GUI cap-
ture/replay tools also allow scripting.) Some member of the testing team writes a
"test API" (application programmer interface) that lets other members of the team
express their tests in less GUI-dependent terms. Whereas a captured test might
look like this:

Captured Test
text $main.accountField "12"

click $main.OK
menu $operations
menu $withdraw
click $withdrawDialog.all

a script might look like this:

Script
select-account 12
withdraw all
The script commands are subroutines that perform the appropriate mouse clicks
and key presses. If the API is well-designed, most GUI changes will require
changes only to the implementation of functions like withdraw, not to all the tests
that use them.32 Please note that well-designed test APIs are as hard to write as any
other good API. That is, they're hard, and you shouldn't expect to get it right the
first time.
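The layering can be sketched as follows. Everything here is invented for illustration: the "GUI driver" is a fake that just logs actions, standing in for a real capture/replay tool's scripting interface, and the widget names echo the captured test above:

```python
# A sketch of the "test API" idea: tests speak in domain terms; only the thin
# adapter layer knows about widgets.

gui_log = []  # stands in for a real GUI driver

def click(widget):
    gui_log.append(("click", widget))

def type_text(widget, text):
    gui_log.append(("type", widget, text))

# --- the test API: the only layer that changes when the GUI changes ---
def select_account(number):
    type_text("main.accountField", str(number))
    click("main.OK")

def withdraw(amount):
    click("toolbar.withdraw")        # if this becomes a menu item, fix it here
    click("withdrawDialog." + amount)
    click("withdrawDialog.OK")

# --- a test, expressed in GUI-independent terms ---
select_account(12)
withdraw("all")
print(len(gui_log), "GUI actions performed")
```

When the Withdraw toolbar button becomes a menu item, only the body of withdraw changes; the tests that call it are untouched.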
32 The "Joe Gittano" stories and essays on my web page,
http://www.stlabs.com/~marick/root.htm, go into this approach in more detail.

In a variant of this approach, the tests are data-driven. The tester provides a table
describing key values. Some tool reads the table and converts it to the appropriate
mouse clicks. The table is even less vulnerable to GUI changes because the
sequence of operations has been abstracted away. It's also likely to be more
understandable, especially to domain experts who are not programmers. See
[Pettichord96] for an example of data-driven automated testing.
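A data-driven suite might be sketched like this. The column names, the toy banking operations, and the little interpreter are all illustrative inventions, not taken from [Pettichord96]:

```python
# A sketch of data-driven testing: the tester writes a table of key values,
# and a small interpreter turns each row into the appropriate actions and
# checks. Here the "product" is a plain dictionary of balances.

table = [
    {"account": "12", "action": "withdraw", "amount": "all", "expect_balance": "0"},
    {"account": "7",  "action": "deposit",  "amount": "50",  "expect_balance": "150"},
]

def run_row(row, balances):
    acct = row["account"]
    if row["action"] == "withdraw" and row["amount"] == "all":
        balances[acct] = 0
    elif row["action"] == "deposit":
        balances[acct] += int(row["amount"])
    return balances[acct] == int(row["expect_balance"])

balances = {"12": 100, "7": 100}
results = [run_row(row, balances) for row in table]
print(results)
```

Because the rows name only accounts, actions, and expected balances - never widgets - a GUI change touches only the interpreter, and a domain expert can read or extend the table directly.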
Note that these more abstract tests (whether scripted or data-driven) do not nec-
essarily test the user interface thoroughly. If the Withdraw dialog can be reached
via several routes (toolbar, menu item, and hotkey), you don't know whether each
route has been tried. You need a separate (most likely manual) effort to ensure that
all the GUI components are connected correctly.
Whatever approach you take, don't fall into the trap of expecting regression
tests to find a high proportion of new bugs. Regression tests discover that new or
changed code breaks what used to work. While that happens more often than any
of us would like, most bugs are in the product's new or intentionally changed
behaviour. Those bugs have to be caught by new tests.

4.4.5.1 The Importance of Code Coverage


GUI capture/replay testing is appealing because it's a quick fix for a difficult
problem. Another class of tool has the same kind of attraction.
The difficult problem is that it's so hard to know if you're doing a good job
testing. You only really find out once the product has shipped. Understandably,
this makes managers uncomfortable. Sometimes you find them embracing code
coverage with the devotion that only simple numbers can inspire. Testers some-
times also become enamoured of coverage, though their romance tends to be less
fervent and ends sooner.
What is code coverage? It is any of a number of measures of how thoroughly
code is exercised. One common measure counts how many statements have been
executed by any test. The appeal of such coverage is twofold:
• If you've never exercised a line of code, you surely can't have found any of its
bugs. So you should design tests to exercise every line of code.
• Test suites are often too big, so you should throw out any test that doesn't add
value. A test that adds no new coverage adds no value.
Only the first sentence of each claim is true. I'll illustrate this in figure 4.4.
If you write only the tests needed to satisfy coverage, you would find bugs.
You're guaranteed to find the code that always fails, no matter how it's executed.
But most bugs depend on how a line of code is executed. For example, code with
an off-by-one error fails only when you exercise a boundary. Code with a divide-
by-zero error fails only if you divide by zero. Coverage-adequate tests will find
some of these bugs, by sheer dumb luck, but not enough of them. To find enough
bugs, you have to write additional tests that "redundantly" execute the code.

Fig. 4.4 What is code coverage?
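A small Python sketch (the function and its bug are invented) shows why coverage-adequate tests are not enough: a single test can execute every statement of a buggy function and still pass, while only a "redundant" boundary test reveals the fault:

```python
def last_index(items, target):
    """Return the last index at which target occurs, or -1."""
    result = -1
    for i in range(len(items) - 1):   # bug: should be range(len(items))
        if items[i] == target:
            result = i
    return result

# This one test executes 100% of the statements above and passes,
# so a coverage tool reports nothing left to do:
assert last_index([5, 3, 5, 9], 5) == 2

# Only a test that puts the target on the boundary - the last element -
# reveals the off-by-one error (a correct implementation would return 2):
print(last_index([5, 3, 9], 9))
```

The boundary test adds no new statement coverage whatsoever, yet it is the only one of the two that finds the bug - exactly the kind of test a coverage-driven suite would discard.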

For the same reason, removing tests from a regression test suite just because
they don't add coverage is dangerous. The point is not to cover the code; it's to
have tests that can discover enough of the bugs that are likely to be caused when
the code is changed. Unless the tests are ineptly designed, removing tests will just
remove power. If they are ineptly designed, using coverage converts a big and
lousy test suite to a small and lousy test suite. That's progress, I suppose, but it's
addressing the wrong problem.33
A grave danger of code coverage is that it is concrete, objective, and easy to
measure. Many managers today are using coverage as a performance goal for
testers. Unfortunately, a cardinal rule of management applies here: "Tell me how a
person is evaluated, and I'll tell you how he behaves." If a person is evaluated by
how much coverage is achieved in a given time (or in how little time it takes to
reach a particular coverage goal), that person will tend to write tests to achieve
high coverage in the fastest way possible. Unfortunately, that means short-changing
careful test design that targets bugs, and it certainly means avoiding in-depth,
repetitive testing of "already covered" code.34
Using coverage as a test design technique works only when the testers are both
designing poor tests and testing redundantly. They'd be better off at least targeting

33 Not all regression test suites have the same goals. Smoke tests are intended to run fast
and find grievous, obvious errors. A coverage-minimised test suite is entirely appropriate.
34 In pathological cases, you'd never bother with user scenario testing, load testing, or
configuration testing, none of which add much, if any, coverage to functional testing.

their poor tests at new areas of code. In more normal situations, coverage as a
guide to design only decreases the value of the tests or puts testers under unpro-
ductive pressure to meet unhelpful goals.
Coverage does play a role in testing, not as a guide to test design, but as a rough
evaluation of it. After you've run your tests, ask what their coverage is. If certain
areas of the code have no or low coverage, you're sure to have tested them shal-
lowly. If that wasn't intentional, you should improve the tests by rethinking their
design. Coverage has told you where your tests are weak, but it's up to you to
understand how.
You might not entirely ignore coverage. You might glance at the uncovered
lines of code (possibly assisted by the programmer) to discover the kinds of tests
you omitted. For example, you might scan the code to determine that you under-
tested a dialog box's error handling. Having done that, you step back and think of
all the user errors the dialog box should handle, not how to provoke the error
checks on line 343, 354, and 399. By rethinking design, you'll not only execute
those lines, you might also discover that several other error checks are entirely
missing. (Coverage can't tell you how well you would have exercised needed code
that was left out of the program.)
There are types of coverage that point more directly to design mistakes than
statement coverage does (branch coverage, for example).35 However, none - and
not all of them put together - are so accurate that they can be used as test design
techniques.
One final note: Romances with coverage don't seem to end with the former
devotee wanting to be "just good friends". When, at the end of a year's use of
coverage, it has not solved the testing problem, I find testing groups abandoning
coverage entirely. That's a shame. When I test, I spend somewhat less than 5% of
my time looking at coverage results, rethinking my test design, and writing some
new tests to correct my mistakes. It's time well spent.

4.4.6 Acknowledgements

My discussions about testing with Cem Kaner have always been illuminating. The
LAWST (Los Altos Workshop on Software Testing) participants said many interesting
things about automated GUI testing. The LAWST participants were Chris
Agruss, Tom Arnold, James Bach, Jim Brooks, Doug Hoffman, Cem Kaner, Brian
Lawrence, Tom Lindemuth, Noel Nyman, Brett Pettichord, Drew Pritsker, and
Melora Svoboda. Paul Czyzewski, Peggy Fouts, Cem Kaner, Eric Petersen, Joe
Strazzere, Melora Svoboda, and Stephanie Young read an earlier draft.

35 See [Marick95], chapter 7, for a description of additional code coverage measures. See
also [Kaner96b] for a list of more than one hundred types of coverage.

4.4.7 References

[Cusumano95]
M. Cusumano and R. Selby, Microsoft Secrets, Free Press, 1995.
[Dyer92]
Michael Dyer, The Cleanroom Approach to Quality Software Development,
Wiley, 1992.
[Friedman95]
M. Friedman and J. Voas, Software Assessment: Reliability, Safety, Testability,
Wiley, 1995.
[Kaner93]
C. Kaner, J. Falk, and H.Q. Nguyen, Testing Computer Software (2/e), Van
Nostrand Reinhold, 1993.
[Kaner96a]
Cem Kaner, "Negotiating Testing Resources: A Collaborative Approach," a
position paper for the panel session on "How to Save Time and Money in Testing",
in Proceedings of the Ninth International Quality Week (Software Research,
San Francisco, CA), 1996. (http://www.kaner.com/negotiate.htm)
[Kaner96b]
Cem Kaner, "Software Negligence & Testing Coverage," in Proceedings of
STAR 96, (Software Quality Engineering, Jacksonville, FL), 1996.
(http://www.kaner.com/coverage.htm)
[Lyu96]
Michael R. Lyu (ed.), Handbook of Software Reliability Engineering, McGraw-
Hill, 1996.
[Marick95]
Brian Marick, The Craft of Software Testing, Prentice Hall, 1995.
[Marick97]
Brian Marick, "The Test Manager at the Project Status Meeting," in Proceedings
of the Tenth International Quality Week (Software Research, San Francisco,
CA), 1997. (http://www.stlabs.com/~marick/root.htm)
[McConnell96]
Steve McConnell, Rapid Development, Microsoft Press, 1996.
[Moore91]
Geoffrey A. Moore, Crossing the Chasm, Harper Collins, 1991.
[Moore95]
Geoffrey A. Moore, Inside the Tornado, Harper Collins, 1995.
[Musa87]
J. Musa, A. Iannino, and K. Okumoto, Software Reliability: Measurement,
Prediction, Application, McGraw-Hill, 1987.
[Nielsen93]
Jakob Nielsen, Usability Engineering, Academic Press, 1993.
[Pettichord96]
Brett Pettichord, "Success with Test Automation," in Proceedings of the Ninth
International Quality Week (Software Research, San Francisco, CA), 1996.
(http://www.io.com/~wazmo/succpap.htm)
[Rothman96]
Johanna Rothman, "Measurements to Reduce Risk in Product Ship Decisions,"
in Proceedings of the Ninth International Quality Week (Software Research,
San Francisco, CA), 1996. (http://world.std.com/~jr/Papers/QW96.html)
[Voas91]
J. Voas, J. Morell, and K. Miller, "Predicting Where Faults Can Hide from
Testing," IEEE Software, March, 1991.
5 Resources for Practitioners

L. Consolini
Gemini, Bologna

5.1 Methods and Tools

A full listing of tools and training courses can be found at
http://www.stlabs.com/~marick/root.htm.
The site is maintained by a testing consultant as a free service to the software
community. The list is thorough and up-to-date. The editor disclaims any respon-
sibility for the quality of the tools included in the list.
Concerning methods for software product quality the following methods and
techniques are worth mentioning because of their effectiveness and significant
diffusion in the software industry:
• The Systematic Test and Evaluation Process (STEP) methodology by Bill Het-
zel was used by the ALCAST PIE (see Chapter 9 for reference data). An illus-
tration can be found in Bill Hetzel - "The Complete Guide To Software Test-
ing" - Second edition, John Wiley, 1988
• Brian Marick in his book "The Craft Of Software Testing" (see 5.2) illustrates
an interesting and valuable sub-system testing technique called "catalogue
based testing". It also contains useful hints for inspections. The PIE PROVE
was inspired by this technique (see Chapter 6).
• Tom Gilb in his book "Software Inspection" (see 5.2) explains and exemplifies
a widely applied inspection technique.
See also http://ourworld.compuserve.com/homepages/KaiGilb/.
• Daniel P. Freedman and Gerald M. Weinberg in their book "Handbook Of
Walkthroughs, Inspections, and Technical Reviews" provide enough guidelines
and sample checklists to start up an inspection programme fairly easily.
• McCabe & Associates provides a comprehensive set of techniques and tools on
testing support and static analysis. Reference material can be found at
http://www.mccabe.com.

M. Haug et al. (eds.), Software Quality Approaches: Testing, Verification, and Validation
© Springer-Verlag Berlin Heidelberg 2001

5.2 Books

5.2.1 Introductory Reference Books on Software Quality

R. Dunn, R. Ullman - "Quality Assurance For Computer Software" - McGraw-
Hill, NY, 1982
The book provides a clear and readable overview of how to organise and run a
quality assurance program to improve software quality. It gives practical and com-
plete guidance.
Pankaj Jalote - "An Integrated Approach To Software Engineering" - Springer
Verlag, NY, 1991
Conceived for undergraduate students, this book offers an integrated approach
to software engineering: topics are not covered in isolation; instead a running case
study used throughout the book integrates the application of different activities of
software development. Also contains a good section on testing and software Veri-
fication & Validation.

5.2.2 Classics on Testing

Glenford J. Myers - The Art Of Software Testing - John Wiley, NY, 1979


A true classic; a must read for every software tester. The focus of the book is on
techniques for designing effective test cases.
Boris Beizer - Software Testing Techniques - Van Nostrand Reinhold, NY,
1990
The book has become a standard in the field. Beizer has been considered a pio-
neer in software testing. The book is dense and certainly not an easy read.

5.2.3 Key Books on Testing

Brian Marick - The Craft Of Software Testing - Prentice Hall, NJ, 1995
This book is the logical sequel to Myers' fundamental work. It explores new
techniques and demonstrates the potential of sub-system testing.
Cem Kaner, Jack Falk, Hung Quoc Nguyen - Testing Computer Software -
International Thomson Computer Press, 1993
This book is about testing under real-world conditions. It is full of insight and
useful advice.
Boris Beizer - Black-box Testing - John Wiley, 1995
Another savvy book by Beizer. This one is really focused on functional testing
of software and systems.
Daniel J. Mosley - The Handbook Of MIS Application Software Testing -
Yourdon Press, 1993

A practical presentation of software testing concepts applied in a Management
Information Systems department.

5.2.4 Key Books on Inspections

Daniel P. Freedman, Gerald M. Weinberg - Handbook Of Walkthroughs, Inspec-


tions, and Technical Reviews - Dorset House, NY, 1990
A book full of insight and humour. It provides many concrete examples.
Tom Gilb, Dorothy Graham - Software Inspection - Addison-Wesley, 1993
The book provides guidelines for the introduction of inspection techniques. It
reports examples of successful implementation in large companies.

5.3 Organisations
Table 5.1 Organisations

Name URL

Association for Computing Machinery http://www.acm.org


SIGAPP (ACM)
SIGMETRICS (ACM)
SIGMIS (ACM)
IEEE Computer Society http://www.ieee.org
Council of European Professional Informatics Societies
International Institute for Software Testing http://www.testinginstitute.com

5.4 Important Conferences


Table 5.2 Conferences

Name

International Conference on Software Engineering (SIGSOFT)


Foundations of Software Engineering (SIGSOFT)
Symposium on Applied Computing (SIGAPP)
European Software Engineering Conference (CEPIS)
Design, Automation, and Test in Europe (IEEE)
STAR 'XX, Software Testing, Analysis & Review
Software Quality Week (Software Research)
European Software Engineering Process Group Conference
EuroSPI - Conference on European Software Process Improvement
ICSTEST - The International Conference on Software Testing
European Software Measurement Conference - FESMA

Although most of these conferences are organised regularly, the organisational
contacts or WEB site URLs often change. Postings referring to the most current
data can be received by subscribing to the SRE mailing list: Software Require-
ments Engineering.
The SRE mailing list aims to act as a forum for exchange of ideas among the
requirements engineering researchers and practitioners. This moderated list is a
free service offered by the CSIRO-Macquarie University Joint Research Centre
for Advanced Systems Engineering (JRCASE) at Macquarie University, Sydney.
To subscribe to the SRE mailing list, send e-mail to listproc@jrcase.mq.edu.au with
the following line as the first and only line in the body of the message: subscribe
SRE your-first-name your-second-name.

5.5 Web Sites

Table 5.3 Web Sites

Name URL

Hotlist with links to everything related to http://www.rstcorp.com/hotlist/
software testing: Reliable Software Tech-
nologies
Hotlist with links to everything related to http://www.mtsu.edu/~storm/
software testing: STORM
Hotlist with links to everything related to http://www.soft.com/Institute/HotList/index.html
software testing: Software Research
Hotlist with links to everything related to http://www.io.com/~wazmo/qa.html
software testing: Brett Pettichord's
For those interested in essays, rather than http://www.kaner.com/
hotlists, Cem Kaner's web site
The archives of Software Testing Labs ST http://www.stlabs.com/testnet.htm
Labs, Tester's Network
6 Experience Reports

L. Consolini
Gemini, Bologna

Among the PIEs examined by EUREX and involved in the workshops, several par-
ticularly significant PIEs were selected for a more in-depth analysis (see Table
6.1). Their experience is both interesting and relevant to many of the key issues
involved in the application of Validation and Verification to real life software.
At the same time these PIEs have been chosen to represent a wide range of or-
ganisations (SMEs, large companies, not-for-profit organisations) and domains
(technical software, aerospace software, Internet software, commercial MIS soft-
ware).

Table 6.1 The selected PIEs

Project Acronym Title Company/ e-mail


nr.

21199 PI3 Process improvement in internet ONION, (I)
service providing gb@onion.it
21417 PROVE Quality improvement through verifi- Think3 (formerly CAD.LAB),
cation process (I) baqu@think3.it
23754 TRUST Improvement of the testing process Agusta Un'azienda Fin-
exploiting requirements traceability meccanica S.p.A., (I)
asilva.mardib@iol.it
21216 TESTLIB Use of object oriented test libraries INTEGRA SYS, (S)
for generic re-use of test code in type integrasys@compuserve.com
approval computer controlled test
systems
24157 FCI-STDE Formal code inspection in small Procedimientos-UNO, (S)
technical development environments phodgson@procuno.pta.es
10464 ATECON Application of an inte- DLR, (G)
grated, modular, metric based system Mrs. Sylvia Daiqui
and software test concept fax: 49 8153 281846
24306 GUI-Test Automated testing of graphical user IMBUS, (G)
interfaces info@imbus.de


In Chapter 4 the experts highlighted a few elements that characterise an improved
testing process, including:
testing process, including:
• V&V planning: selection of the appropriate V&V activities to achieve specific
quality objectives of a software project or product;
• Culture and skills: improvement of the current culture and experience in the
application of product verification methods, techniques and tools, mostly in the
commercial software area;
• Availability of a consistent market offer covering both methods and tools;
• Integration: integrating product verification activities in the software life cycle
to achieve the benefits of discovering and removing defects as early as possible.
ESSI PIEs, as early adopters of process improvement, did not have the advan-
tage of such analysis in advance. They pursued their own individual objectives
and their findings should be examined in the light of the experts' views to validate
theory with experience and vice versa. EUREX made an attempt at such reciprocal
validation and some conclusions can now be drawn.
Concerning V&V planning, most of the PIEs adopted a pragmatic approach.
Rather than developing formal V&V plans, they concentrated on the identification
of the necessary controls and tests and on the establishment of a suitable environ-
ment to perform them in the most efficient way. Consequently, although each PIE
adopted a specific approach, most of them tackled the central issue of testing
automation and tests re-use: undoubtedly one of the key strategies to increase
efficiency and safeguard product quality during maintenance.
Certainly the culture and skills improvement were largely confirmed as a top
priority. Similarly, integration of product verification and development activities
was recognised as a necessity (see PROVE, ATECON, TRUST in particular).
Related to the issue of automated testing, the "Which tool is best?" question
always comes up. On the theme of the availability of a consistent market offer,
claimed in Chapter 4 by the experts, the PIEs performed a robust reality check,
selecting those software verification methods and tools that passed their field trials
and delivered on what they promised. As a result, many options have been ex-
plored and the conclusions go beyond the mere indication of a shortlist of "best"
tools, pointing out that effective automation requires putting the process straight
first, with most of the integration and in-house development work coming after-
wards (PROVE, TESTLIB, ATECON).
Testing is by far the most widely adopted V&V method, but some of the PIEs
experienced the increased potential of inspections applied either as a defect detec-
tion technique (FCI-STDE) or as a product qualification with respect to focused
criteria (PI3, PROVE).
TRUST offers an innovative look at archiving and re-use of test cases based on
traceability concepts. Cross-referencing requirements and test cases (and
maintaining the reference up-to-date!) is a means to measure requirements
coverage during testing, to assess the completeness of the tests run against a
system build, and to re-use tests over multiple system variants.
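The cross-referencing idea reduces to a small computation. The sketch below is illustrative only: the requirement IDs, test-case names and the `requirements_coverage` helper are invented for this example, not taken from the TRUST project.

```python
# Sketch of requirements/test-case cross-referencing: given a traceability
# matrix (test case -> requirements it exercises), compute requirements
# coverage and list the requirements no test touches.

def requirements_coverage(trace, requirements):
    """Return (coverage_ratio, uncovered) from a test->requirements map."""
    covered = set()
    for test, reqs in trace.items():
        covered.update(reqs)
    uncovered = sorted(set(requirements) - covered)
    return len(set(requirements) & covered) / len(requirements), uncovered

# Hypothetical traceability matrix for a small system build.
trace = {
    "TC-01": ["REQ-1", "REQ-2"],
    "TC-02": ["REQ-2"],
    "TC-03": ["REQ-4"],
}
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

ratio, missing = requirements_coverage(trace, requirements)
print(f"requirements coverage: {ratio:.0%}, uncovered: {missing}")
# -> requirements coverage: 75%, uncovered: ['REQ-3']
```

Keeping such a matrix up-to-date is the hard part in practice; the computation itself, as shown, is trivial.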

PI3 provides useful insights into the issues raised by the (lack of) testing of
Internet-based applications and into the applicability of traditional testing tech-
niques to this new application software paradigm.
Finally, GUI-Test explores the transition from manual GUI testing - time-
consuming and cumbersome - to automated GUI testing based on commercial
tools. GUI-Test compares manual, semi-automated and automated methods to
establish a cost-effective strategy that respects the needs and resources of a small
company providing customer-specific software.

6.1 PI3 Project Summary

G. Bazzana
Onion

The explosive growth in the WWW and the increasing complexity of Internet
applications, the interaction with legacy systems and large DBMS, the use of web-
based interfaces for business applications, require the adoption of systematic test-
ing activities also in the Internet realm.
In the world of WWW technologies, the PI3 project helped an innovative and
dynamic small company enhance product quality, timeliness and productivity at
the same time.
The PIE shows an interesting approach combining mature testing methods and
inspection techniques to ensure the overall quality of Internet based applications;
in fact, even static web pages can contain bugs and should be checked for legal
syntax and for additional problems (portability across browsers, for example, is
an issue).
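As a concrete illustration of this kind of static check, the sketch below verifies that HTML tags in a page are properly nested, using Python's standard `html.parser`. It is a minimal example, not the validation tool PI3 actually used; the set of void elements is deliberately abbreviated.

```python
# Minimal static HTML check: report unclosed or unexpected tags.
from html.parser import HTMLParser

VOID = {"br", "hr", "img", "meta", "link", "input"}  # abbreviated list

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.problems = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in VOID:
            return
        if tag in self.stack:
            # anything still open above the matching start tag is unclosed
            while self.stack[-1] != tag:
                self.problems.append(f"unclosed <{self.stack.pop()}>")
            self.stack.pop()
        else:
            self.problems.append(f"unexpected </{tag}>")

def check(html):
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.problems.extend(f"unclosed <{t}>" for t in checker.stack)
    return checker.problems

print(check("<html><body><p>ok</p></body></html>"))  # -> []
print(check("<html><body><p>bad</body></html>"))     # -> ['unclosed <p>']
```

A real validator of the period (Weblint, Doctor HTML) checked far more, including attribute legality and browser portability.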
PI3 is also relevant for its practical and business oriented approach to the meas-
urement of the results based on a Goal Question Metrics (GQM) approach. The
company claims a "THREE-DIMENSIONAL" improvement in product quality
(+17%), time-to-market (-10%) and cost (-9%). An analysis of the ROI for intro-
ducing HTML validation tools is reported.
The following are the key lessons learnt from this experiment:
• the introduction of more systematic testing methods and tools is of paramount
importance for Level 1 SMEs and can be done with success in a short time,
whereas the introduction of Configuration Management requires specific care
both from a methodology and a cultural point of view
• pursuing two improvement actions (Configuration Management and testing) at
a time has been perceived as difficult and demanding
• during the PIE the company felt the need for an overall framework for its im-
provement actions; although they had not planned for it, the company defined a
first draft Quality Manual adherent to ISO 9000 before getting to the definition
of detailed guidelines.

6.1.1 Participants

Onion is a privately owned Italian company specialised in the fields of Communi-


cations, Technologies and Consulting; ONION's software activities can be classi-
fied as follows:
• development of turn-key IT solutions
• service providing on Internet/Intranet
• development of multimedia applications
Onion's strategy focuses on Internet/Intranet applications and services.
Onion is a young small company (25 employees) but strong in technology, as a
consulting services provider, and as a systems integrator.

6.1.2 Business Motivation and Objectives

After a software process self-assessment ONION decided to focus on improve-
ment actions characterised as follows: highest pay-off; relevance for all the busi-
ness lines of the company; direct applicability and pragmatic feasibility in the
medium term.
The key process areas exhibiting such characteristics were configuration man-
agement (CM) and testing. Thus the goals of the project addressed the introduc-
tion of mature methods and tools in those areas.
The activities performed during the experiment provided tangible benefits at
several levels:
• increased reliability of the solutions
• company capabilities visibility to a wider audience
• staff motivation
• process standardisation (testing of all WWW applications)
• growth of sales revenues and profit.
The quantitative improvements are summarised in Table 6.2.

Table 6.2 Quantitative Improvements

Business goal Impact

Product quality Increase by 17%
Time-to-market Reduction by 10%
Cost Reduction by 9%

6.1.3 The Experiment

The project was developed in co-operation between the Communications / Tech-
nologies department and the Consulting department, which has specific skills and
experiences in software process improvement, both from a theoretical (several
international publications on the subject) and practical point of view (specific
activities are in progress with several important customers).
The experiment went through the following phases:

6.1.3.1 Tool Procurement


CM and testing tools needed for improvement were evaluated and the technical
environment was changed as follows:
• adoption of a WWW Workbench
• adoption of a WWW Test Environment
• adoption of a CM environment particularly suited for document and asset
management, oriented also towards ISO 9000 document and data control rules

6.1.3.2 Testing Management


In this phase the following activities were done:
• definition of rules for applying method and tools within the baseline projects
• application of defined methods and tools to:
• regression testing
• white-box and black-box testing
• syntax testing of a Web Site
• derivation of a company-wide testing guideline for future inclusion within a
QMS standard operating procedure

6.1.3.3 Measurement of Results


Metrics were defined using the GQM approach, as summarised in Table 6.3.

Table 6.3 Definition of metrics using the GQM approach

Goal Indicator Measurement unit Target


value

Productivity SW Productivity LOC / person-month >250
Asset Productivity HTML lines / person-month >1500
SW Re-use Re-used lines / LOC weighted w.r.t. >20%
language re-usability
Asset Re-use ½ Structure re-use + ½ Object re-use >50%

Testing effectiveness Faults in testing / Total faults >80%
Product Quality SW Fault density SW Faults / KLOC during first year of <5
operation
Asset Fault density HTML Faults / HTML kilo-lines during <3
first year of use
Time-to-market Timeliness Planned service development time / >80%
actual time

6.1.4 Impact and Experience


6.1.4.1 Business
As an overall evaluation of the results of the experiment, it must be pointed out
that the improvement achieved in product quality, time-to-market and cost reduc-
tion has led to an important increase in customer satisfaction.
Table 6.4 reports the values for the two baseline projects.

Table 6.4 Values of indicators for the two PI3 baseline projects

Indicator Goal Project 1 Project 2

SW Productivity > 250 1138 3346
Asset Productivity > 1500 300 2702
Testing effectiveness >80% 78% 89%
SW Fault density <5 2.1 0.15
Asset Fault density <3 not applicable 0.19
Timeliness >80% 75% 80%
SW Re-use >20% 10% 21%
Asset Re-use >50% 50% 50%

With respect to the main business goals of the software-producing unit, the
quantitative improvements summarised in Table 6.5 have been observed.

Table 6.5 Results on Business goals

Business goal Impact

Product quality Increase by 17%


Time-to-market Reduction by 10%
Cost Reduction by 9%

In addition the PI3 experiment has also originated some indirect benefits; among
them, the following must be mentioned:
• a common company approach has been established with respect to the ISO
9000 certification
• an important echo has been generated at international level.

6.1.4.2 Technical
According to ONION's Technology Director the global evaluation of the technical
results can be summed up as follows:
• The definition of Onion's Intranet services architecture and the definition of
ONION's Software Development Factory have been achieved as a conse-
quence of the PI3 project.
• A testing checklist and a tool have been defined for the automation of various
tests for ONION's products (mainly for Web applications).

6.1.4.3 Aspects for Improvement


According to ONION's management, if the experiment had to be repeated, they
would make specific changes to overcome the two identified weaknesses, mainly:
• more accurate timing of deployment of improvements in the daily routine work
• definition of company rules adopting a top-down approach.

In addition to the deployment activities already agreed and under way, the follow-
ing actions are foreseen (some of them already done) after the end of the PIE:
• installation of testing tools on a server accessible to the whole development
community
• regular exhaustive regression testing
• deployment of a Web-based tracking system to all designers, and integration
with a defect report data base

Table 6.6 Historical data and results from new test practice

Before After

Productivity (LOC/p-months) 1500,00 2000,00
Fault density pre-release (Fault/Kloc) 8,00 8,00
Overhead caused by new practices (%) 1,00 1,15
Testing effectiveness (%) 75,00 92,00
Fixing Effort pre-release (p-days) 0,12 0,12
Fixing Effort post-release (p-days) 0,50 0,50
Average resource cost (ECU / p-day) 200,00 200,00

Table 6.7 Impacts on a typical project

Before After

Size (Asset LOC) 12000,00 12000,00
Faults post-release (number) 24,00 7,68
Faults pre-release (number) 72,00 88,32
Number of designers 3,00 3,00

Table 6.8 Analysis of quantitative impacts

Before After

Development Effort (p-m) 8,00 6,90
Set-up/training effort (p-m) 0,00 0,50
Correction effort pre-release (p-m) 0,43 0,53
Correction effort post-release (p-m) 0,60 0,19
Total effort (p-m) 9,03 8,12
Development cost (ECU) 33728,00 31719,68
Maintenance costs (ECU) 2400,00 768,00
Tooling costs (ECU) 0,00 400,00
Total cost (ECU) 36128,00 32887,68
Development time (months) 3,01 2,71
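Tables 6.6 to 6.8 are linked by straightforward arithmetic. The sketch below reproduces the "after" column of Table 6.8 from the Table 6.6 and 6.7 inputs, assuming 20 working days per person-month (a conversion factor implied by the published figures but not stated in the text).

```python
# Reproducing the "after" column of Table 6.8 from the Table 6.6/6.7 inputs.
# Assumption: 20 working days per person-month (implied, not stated).
DAYS_PER_PM = 20

def impact(size_kloc, productivity, overhead, fault_density, effectiveness,
           fix_pre_days, fix_post_days, cost_per_day,
           setup_pm, tooling_cost, designers):
    total_faults = fault_density * size_kloc
    faults_pre = total_faults * effectiveness          # found before release
    faults_post = total_faults - faults_pre            # escape to the field
    dev_effort = size_kloc * 1000 / productivity * overhead      # p-m
    corr_pre = faults_pre * fix_pre_days / DAYS_PER_PM           # p-m
    corr_post = faults_post * fix_post_days / DAYS_PER_PM        # p-m
    total_effort = dev_effort + setup_pm + corr_pre + corr_post
    dev_cost = (dev_effort + setup_pm + corr_pre) * DAYS_PER_PM * cost_per_day
    maint_cost = corr_post * DAYS_PER_PM * cost_per_day
    total_cost = dev_cost + maint_cost + tooling_cost
    return total_effort, total_cost, total_effort / designers

effort, cost, months = impact(12, 2000, 1.15, 8, 0.92, 0.12, 0.50, 200,
                              setup_pm=0.5, tooling_cost=400, designers=3)
print(f"total effort: {effort:.2f} p-m")     # Table 6.8: 8.12
print(f"total cost:   {cost:.2f} ECU")       # Table 6.8: 32887.68
print(f"dev time:     {months:.2f} months")  # Table 6.8: 2.71
```

Running the same function with the "before" parameters (productivity 1500, overhead 1.00, effectiveness 0.75, no set-up or tooling costs) reproduces the "before" column as well.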

6.1.5 References

[Onion]
ONION, "Process Improvement in Internet Service Providing", available at
net.onion.it/pi3/
[Bazzana96]
G. Bazzana, E. Fagnoni, M. Piotti, G. Rumi, F. Visentin "Testing in the Inter-
net", Proceedings of EuroStar 1996
[Visentin96]
F. Visentin, E. Fagnoni, G. Rumi "Onion Technology Survey on Testing and
Configuration Management", Onion, Id: PI3-D02, April 1996, Excerpts availa-
ble at: http://net.onion.it/pi3/
[Bowers96]
N. Bowers, "Weblint: Quality Assurance for the World Wide Web", Proceed-
ings of the 5th International WWW Conference, Paris, May 1996, pp. 1283-1290
[ImagiWare]
ImagiWare, "Doctor HTML", http://www2.imagiware.com/RxHTML/
[Bach]
J. Bach, "Testing Internet Software", available at http://www.stlabs.com/inet.htm

[McGraw]
G. McGraw, D. Hovemeyer, "Untangling the Woven Web: Testing Web-based
software", available at http://www.rstcorp.com/~anup/Ibconf/Ibconf.html
[Bergel]
H. Berghel, "Using the WWW Test Pattern to check HTML client compliance",
IEEE Computer, Vol. 28, No. 9, pages 63-65, http://www.uark.edu/~wrg/
[Driscoll]
S. Driscoll, "Systematic Testing of WWW Applications", available on
http://www.oclc.org/webart/paper2
[Mercury]
Mercury Interactive, "Automated Testing for Internet and Intranet Applica-
tions", available at http://www-heva.mercuryinteractive.com/resources/library/
whitepapers/
[Yourdon96]
E. Yourdon, "Testing Internet Software", Corporate Internet, Vol. II, No. 10,
October 1996
[ST Labs]
ST Labs "Internet Testing: Keeping Bugs Off of the Web", available at
http://www.stlabs.com/Bugs_Off.htm
[AM&PM]
AM&PM Consulting "Testing Your Internet Security"
[Software QA]
Software QA "Web Site Test Tools and Site Management Tools", available at
http://www.softwareqatest.com/qatwebl.html

6.2 PROVE Project Summary

B. Quaquarelli
Think3 (formerly CAD.LAB)

The relevance of the PIE PROVE stems from a fundamental consideration: the
goal of high software quality is obvious, namely to produce software that works
flawlessly, but quality has to be reached without hindering development. Thus the
verification process had to be compatible with other priorities like time-to-market
and adding leading-edge features to the product.
The need to achieve considerable improvement in software verification under
strong competitive pressure and tight schedules pushed this PIE to implement a
comprehensive testing environment to support the design, implementation, execu-
tion and the reuse of test cases. The environment was complemented by an errors
database that proved to be a pivotal tool to actually measure the effectiveness of
the verification process.
The company has kept enhancing this environment after PROVE, and it is now
seen as an essential component of the developers' workbench.
PROVE also revealed the critical importance of improving the test design skills
of developers and testers to really get that dramatic improvement in the effective-
ness of testing that cannot be achieved by technology alone.

6.2.1 Participants

CAD.LAB, a CAD/CAM systems producer based in Italy, carried out the Process
Improvement Experiment (PIE) PROVE to improve the quality of its software
products by implementing a measurable verification process in parallel with the
software development cycle. Two approaches were experimented with: dynamic
verification (testing) and static verification (inspection).
About Cad-Lab:
• Established in 1979
• Software Factory
• CAD (Computer Aided Design) and PDM (Product Data Management) systems
for the manufacturing industry
• Hardware independent software (WS and PC)

6.2.2 Business Motivation and Objectives

As product complexity increases and customers' demand for high quality software
grows, the verification process is becoming a crucial one for software producers.
Unfortunately, even if verification techniques have been available for a few years,
little experience in their application can be found among commercial software
producers. For this reason we believe that our experience will be of significant
relevance for a wider community, not least because it could demonstrate the feasi-
bility of a structured and quantitative approach to verification in a commercial
software producer whose products sell on the national and international market.
• Our business goal: to produce software that works flawlessly.
• The objective of the experiment: defining a measured verification process inte-
grated with our software development cycle.
• A fundamental requirement: doing the best job possible in the time available.

By setting up a verification method and supporting it with an automated
infrastructure we were able to demonstrate the following results on a baseline
project, based on our flagship product for three-dimensional design, Eureka:
• Fewer errors escape our ordinary quality control.
• More reliability is assured in the subsequent releases through which our product
evolves.
• Verification activities are more productive because they can rely on a replica-
ble set of procedures which now form the core of our non-regression quality
control.
• Quantitative data on the correctness of the product are gathered and analysed
continuously.

Some key sentences summarise the lessons that we consider most valuable for
whoever will repeat a similar experiment:
• "A cultural growth on testing is paramount"
• "Developers won't accept change which does not provide a clear return on their
effort investment"
• "Provide senior managers the results which are useful to pursuing their business
strategy"

Testing: Will this particular input cause the program to fail?
Inspection: Is there any input that causes the program to fail?
Fig. 6.1 A global verification strategy

6.2.3 The Experiment

The baseline project in PROVE was identified with a significant selection of the
subsystems of a 3D (three-dimensional) design environment Eureka. The subsys-
tems selected are different for design and technology and PROVE took into ac-
count their importance within the product; the baseline covers about 25% of the
whole product.
PROVE consisted of these steps:
• To define a global verification strategy - tailored to the distinct characteristics
of Eureka's subsystems - in which testing and inspection are balanced.
• To build up and experiment with an automated testing environment, compatible
with the different technological environment of the baseline project subsys-
tems.
• To define inspection procedures focused on those aspects which cannot be
dynamically tested and to facilitate re-execution by means of partial automa-
tion.
• To identify an initial set of software metrics - to be applied by means of static
analysis tools - to assess the design quality of the code (namely the 00 code).
• To set up an ongoing measurement mechanism based on testing results logging
and error tracking to obtain quantitative data about the correctness of the code
and the defect removal capability.
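The measurement mechanism described in the last step can be sketched as a pair of tables and a query. The schema and the data below are illustrative, invented for this example; the actual PROVE databases are not documented at this level of detail.

```python
# Sketch of a PROVE-style measurement mechanism: a test log plus an error
# database, from which defect removal capability can be derived.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE test_log (suite TEXT, build TEXT, passed INTEGER);
    CREATE TABLE errors (id INTEGER PRIMARY KEY, build TEXT,
                         found_phase TEXT, status TEXT);
""")
db.executemany("INSERT INTO test_log VALUES (?, ?, ?)",
               [("SuiteA", "8.0C", 1), ("SuiteB", "8.0C", 0)])
db.executemany(
    "INSERT INTO errors (build, found_phase, status) VALUES (?, ?, ?)",
    [("8.0C", "testing", "fixed"), ("8.0C", "testing", "open"),
     ("8.0C", "field", "fixed")])

# Defect removal capability: share of errors caught before release.
pre, total = db.execute(
    "SELECT SUM(found_phase = 'testing'), COUNT(*) FROM errors").fetchone()
print(f"defect removal capability: {pre / total:.0%}")  # -> 67%
```

The same two tables also support the pre-release quality monitoring mentioned above: the test log gives pass rates per build, the errors table gives the open/fixed trend per release.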

[Diagram: the system's functional specification drives manual inspection; findings are logged in the ERRORS DB]
Fig. 6.2 After PROVE: the inspection process

[Diagram: test suites execution by developers, filtering out errors, Alpha test (CQS, independent testing), Beta test and field reports feed the TEST-LOG DB and the ERRORS DB]
Fig. 6.3 Measurement



Particular care was put into selecting and deploying supportive technology, in-
tegrated into an overall verification environment, and into acquiring the necessary
training and external assistance. An illustration of these aspects of the work per-
formed is shown in Fig. 6.4.
PROVE's work plan was conceived around these fundamental assumptions:
• Each of the baseline project subsystems has peculiar quality aspects to be veri-
fied. For this reason the most suitable verification approach for each of these
components had to be planned for. Both testing and inspection techniques had
to be included because not all the relevant quality aspects could be easily veri-
fied through just one of those techniques.
• The software process model of CAD.LAB (repeated evolutionary cycles) could
benefit from a set of reusable test cases and inspection procedures to be re-
executed on every new release in an automated way. Automation had to be
compatible with the development environment which, at the time PROVE
started, presented major differences from one subsystem to another.
• Results of applying the new verification process had to be measured. Measur-
ing meant setting up a test log database to monitor the quality level before re-
lease and an error database to track and analyse quality related data after re-
lease.
• Testers had first to be trained on the fundamentals of testing and inspecting
and then take more in-depth, hands-on training on a testing method.
To make the new verification practices ready for being adopted by all R&D
staff, and to achieve integration with the development process on all the com-
pany's product lines, CAD.LAB made the methods, tools and measures defined by
PROVE available to software engineers as immediately accessible practices on the
internal WEB site.
PROVE has certainly moved the verification work from a very raw state to a
much more mature one: the problems and initiatives are understood and accepted,
and there is a greater awareness of its importance at all levels.
The results of PROVE are becoming embedded in CAD.LAB's process, chang-
ing some of its phases.
A significant impact of PROVE was that for the first time clear roles and re-
sponsibilities for the testing process could be identified. We chose to make the
programmers responsible for subsystem testing; in fact this kind of testing is based
on knowledge of the subsystem's structural details. As regards system testing and
inspection we preferred a mixed solution: the programmer and the tester (where
tester means an independent tester that doesn't know how the system was built,
but on the other hand, he/she knows how the system will be used by the users) will
design the test plan together. The programmers will carry out a more "technical"
testing, focused on performance, accuracy and geometrical consistency of results
and portability, whilst independent testers will exercise the product by emulating
what a user could do with it.

As a consequence of the new verification practices, defects are found before re-
lease, saving later work and costs; moreover, the same method can be applied
alongside program development to prevent mistakes, enhancing error prevention
capability.

[Diagram: problem reports and the ERRORS DB feed a repository of test suites, test cases and scenarios for the product and its subsystems; manual and automated execution is logged in the TEST-LOG DB]
Fig. 6.4 After PROVE: the testing process

6.2.4 Impact and Experience

At the time PROVE started, verifying the product meant spending time on setting
up an environment and developing test cases over and over again. Errors were
registered (unevenly) in a database through a cumbersome text based interface that
made this activity slow and frustrating.
Within PROVE such dispersion of resources has been removed by providing an
infrastructure that assures repeatability, traceability and availability of the infor-
mation.

A Test Suite (TS) is a set of test cases mapped to a Test Unit (TU) and composed
of Scenarios.
[Diagram: incremental test suites, e.g. CCInters3d (31 scenarios) and SSinters (16 scenarios)]
Fig. 6.5 Test cases repository: Incremental Test Suites

[Chart: cumulative number of errors (0-500) per release, by resolution status (Open, Fixed after, Fixed before, Not an error, Frozen, Duplicated, Can't reproduce), over releases 8.0B3 to 8.0D]
Fig. 6.6 Cumulative Error trend and resolution status over EUREKA releases

[Figure: number of errors fixed and opened between consecutive EUREKA releases
("Delta fixed since last rev" versus "Delta open since last rev"), over the same
releases as Fig. 6.6.]

Fig. 6.7 Open and fix capability

[Figure: number of test failures on the same build of the same release across
different platforms.]

Fig. 6.8 Test failed on the same build, same release over different platforms

[Figure: a single WEB front-end drives both external (black-box) and internal
(white-box) test execution.]

Fig. 6.9 A unified WEB interface to test execution

[Figure: the error report cycle. Step 2: an e-mail addressed to the corrector is
fired off. Step 4: the submitter receives an e-mail notice of the fix.]

Fig. 6.10 The error report cycle
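The notification steps of the error report cycle can be sketched as a tiny workflow object (the state names and the notify hook are illustrative assumptions, not the PROVE implementation):

```python
# Minimal sketch of the error-report cycle: submitting a report notifies
# the corrector by e-mail, fixing it notifies the submitter. The class,
# state names and notify hook are our assumptions for illustration.
class ErrorReport:
    def __init__(self, number, submitter, corrector, notify):
        self.number = number
        self.submitter = submitter
        self.corrector = corrector
        self.status = "Open"
        self._notify = notify      # e.g. a wrapper around an SMTP client

    def submit(self):
        # step 2: an e-mail addressed to the corrector is fired off
        self._notify(self.corrector, f"Error {self.number} assigned to you")

    def fix(self):
        self.status = "Fixed"
        # step 4: the submitter receives an e-mail notice of the fix
        self._notify(self.submitter, f"Error {self.number} has been fixed")

sent = []
report = ErrorReport(690, "submitter@example.org", "corrector@example.org",
                     lambda to, msg: sent.append((to, msg)))
report.submit()
report.fix()
```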



[Figure: screen shot of the WEB interface to the Error Database, showing an
error form with product, version, platform, severity, description, attached file
and resolution status fields.]

Fig. 6.11 Error Database: the WEB interface



6.3 TRUST Project Summary

A. Silva
Agusta

TRUST explored two key issues in software verification:


• How to maintain traceability of test cases to requirements to show that re-
quirements coverage has reached a target level.
• How to re-use test cases across releases and variants of a software product to
enhance re-use and productivity.
A requirements and test cases management environment has been developed
and field-tested by TRUST to find a solution to those issues. Careful management
of traceability across releases and variants makes it possible to identify from a
database the tests to be re-used and run for each new release/variant of a product
baseline.
The contribution of technology was decisive to the actual success of the new
process. In the PIE's words: "Every new procedure in software development can
succeed only if supported by tools; paper-based approaches imply inefficiency and
mistakes incompatible with a competitive market."

6.3.1 Participants

The participants in this experiment are Agusta SpA, a helicopter manufacturer
responsible for the development of both mechanics and avionics, and TXT
Ingegneria Informatica, a software house with large experience in software
engineering practice and product development, involved as external assistance
provider.

6.3.2 Business Motivation and Objectives

In the Aerospace Industry, product costs and time-to-market are today two key
competitive levers. A helicopter is a very software-intensive product, in which
the avionics software contributes to the product costs by more than 30% and to
the overall time-to-market by 40%. The software Testing and Validation activities
significantly contribute to the above mentioned high avionics software costs
(50%) and lead-time (40%).

The specific Project goals can be summarised as:


• to define new/improved procedures to trace requirements along the whole SW
development process
• to reduce the time of preparation of the Traceability Matrices by 50%
• to provide objective data as the basis for a new Product Variant cost estimate
• to provide a safe instrument to identify reusable Software and Test Data
components inherited from original Products
The overall objective of the TRUST PIE is to improve the Testing and Valida-
tion Process, in order to reduce the related costs by at least 15% and to cut the
avionics software development time by at least 10%. The emphasis is on im-
proving the process, rather than on just automating some of the testing/validation
activities.
In particular, requirement traceability has been improved by treating each single
requirement as applicable across different Releases and Variants of the products.
Such enhanced traceability allows highlighting the following points:
• Impact of requirement changes for new Releases/Variants to be developed on
the basis of existing products.
• Test Sequences that can be inherited from previous Releases/Variants, for both
regression and test purposes. This avoids writing a brand new Test Sequence
when one is already available in a different Release or Variant.
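The inheritance rule just described - re-run only the tests linked to changed requirements, inherit the rest unchanged - can be sketched as a set operation over the requirement-to-test links (the link table below is invented for illustration):

```python
# Sketch of test selection driven by requirement traceability: tests
# linked to changed requirements must be re-run for the new
# Release/Variant; tests linked only to unchanged requirements can be
# inherited as-is. The requirement and test names are illustrative.
req_to_tests = {
    "PP13.1": {"TS0301", "TS0304"},
    "VSA1.3": {"TS0505"},
}
changed_reqs = {"PP13.1"}        # modified for the new Release/Variant

# union of the test sets of every changed requirement
rerun = set().union(*(req_to_tests[r] for r in changed_reqs))
# everything else can be inherited without re-execution
inherited = set().union(*req_to_tests.values()) - rerun
```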

[Figure: Release and Variant structure of the EH101 product. Variant RAF has
Versions/Releases RAF1 and RAF2; Variant FMF has FMF3, FMF4 and FMF4.2P;
Variant AMC/SMC has SMC4 and SMC4.1.]

Fig. 6.12 Release and Variant definitions



6.3.3 The Experiment

The Experiment took place in the following steps:


• Assessment of current practices
• Definition of a new approach to requirement traceability
• Re-issue of the Code of Practice
• Installation of a Toolset to support the new approaches
• Training of the SW development Teams
• Adoption of the new practices
• single Variant scenario
• across Variant scenario
• Result assessment and benefit evaluation
Due to the nature of the avionics software - real-time and often safety-critical -
the Software Testing and Validation Process is a key step in the whole develop-
ment effort, which today accounts for about 50 % of the total software develop-
ment costs and effort (for a single Variant this might well sum up to 30 man-years
during the Variant total development time). The testing takes place at various
levels: individual modules, subsystems, functional units and the whole system. It
is not only a cost-intensive task, but also a very time consuming task, which is
today on the critical path of the helicopter development process and one of the key
factors affecting the total product time-to-market.
One of the reasons for such very high costs is that today the avionics software
testing is repeated in its entirety for each Version of the software (within the same
Release of the product), for each Release (within the same Variant) and for each
different Variant. This leads to high testing and validation costs and also affects
the overall lead-time of the avionics software development process.
In order to achieve its objective, requirements traceability has been exploited
as the means to relate directly and unambiguously subsets of testing and
validation sequences to specific subsets of requirements. The goal is to keep
track of what should be tested, and how, when requirements change or when
amendments to faults in a product Release/Variant have to be propagated to all
the other relevant active Releases/Variants.
Specific topics of this new approach are:
• Definition of new software engineering rules to identify common sets of
requirement text across different software Versions / Variants.
• Definition of new/improved procedures to trace requirements and specifically
to cross-trace requirements and testing/validation sequences.
• Implementation of a toolset to give support to rules and procedures application.
• Implementation of such new procedures in the Agusta environment, with spe-
cific reference to a pilot baseline project.
• Measurement and demonstration of the actual benefits deriving from the im-
plemented improvement steps.

The high number of software requirements, code (software procedures) and
Test Sequences applicable to each software baseline (each in the magnitude of
thousands) leads to the need of automating the whole requirement definition and
traceability process. A toolset was defined, basically composed of:
• a Requirement Database
• a set of functions, built into our word processor, to automatically align the
Software Requirement Document (SRD) contents with the Requirement Data-
base.

6.3.3.1 Existing Practices at Project Start

• Traceability provided within a specific AMC Variant-Release


• No traceability is achieved across:
• Releases of the same Variant
• Releases of different Variants
• Requirement changes across Releases/Variants are not always traced:
• COP does not provide rules and guidelines
• personal initiatives exist to trace requirement changes
• Requirements can be renumbered across Releases/Variants, thus making it
difficult to manually perform an impact analysis and a SW/Test reuse
assessment from previous Releases/Variants

6.3.3.2 New Approach


Requirements
• are identified by:
• a label (as currently defined within COP)
• a version (a number starting from 1)
• can be associated to several Releases/Variants. E.g.: «PP13.1-1: Title» - this
requirement applies to SMC4.0.1 and RAF1.
• cannot be renumbered.
• modifications in the textual description can be given a "weight" factor related
to the nature of the change.
• allocation to design / TS is addressed by COP.
• labels shall address only requirement levels effectively containing functional
specifications; higher levels of hierarchical nesting not providing actual
information are ruled out.
The Software Requirements Document (SRD) continues to be the master
environment in which the Requirements are defined and maintained.

A Requirements Database is linked by Word-Processor Plug-ins to the SRD
and is transparently maintained, through the activation of the Plug-ins, as the
SRD is updated.
The Plug-ins also insert information in the SRD aimed at the identification of
Requirement inheritance and modification state.
An SRD Requirement such as

VSA1.3-1 : SIU SF Maintenance Alarm Reset - Datum Control

contains hidden fields (shown in italic):

(#s:U) (#w:0) (#p:RAF1) (#d:RAF2) (#l:VSA1.3) (#v:-1) (#t: SIU SF
Maintenance Alarm Reset - Datum Control)

where

s: status (U=unchanged, M=modified, N=new, D=deleted)
w: weight of change (0..4)
d: destination product Variant
v: requirement version
p: product Variant origin
l: requirement label id
t: requirement title
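A parser for this hidden-field format can be sketched as follows (the field format is taken from the example above; the parsing code itself is our illustration, not the TRUST toolset):

```python
import re

# Illustrative parser for the hidden fields embedded in an SRD
# requirement, e.g. "(#s:U) (#w:0) (#p:RAF1) ...". The field syntax is
# from the text; this parser is a sketch, not the actual Plug-in code.
FIELD = re.compile(r"\(#(\w):([^)]*)\)")

def parse_hidden_fields(text):
    # returns a dict such as {"s": "U", "p": "RAF1", ...}
    return {key: value.strip() for key, value in FIELD.findall(text)}

fields = parse_hidden_fields(
    "(#s:U) (#w:0) (#p:RAF1) (#d:RAF2) (#l:VSA1.3) (#v:-1) "
    "(#t: SIU SF Maintenance Alarm Reset - Datum Control)")
```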

[Figure: the Requirement DB records, for each requirement version, the
Release/Variant of origin, the destination Release/Variant and the inheritance
status, and links the SRDs of FMF4, RAF1, RAF2 and SMC4.1:]

Req DB
RelVar   ReqVer     RelVar+1   Status
n/a      PP13.1-1   FMF4       N
FMF4     PP13.1-1   RAF1       U
RAF1     PP13.1-2   RAF2       M
RAF2     PP13.1-2   SMC4.1     U

Fig. 6.13 Requirement DB representation



[Figure: each product baseline (FMF4.2, SMC4.0.2, RAF1) has its own
Implementation Database, linked to the common Requirement DB.]

Fig. 6.14 Implementation Databases and relationship to Requirement DB

[Figure: the RAF1 Implementation DB relates Acceptance Procedures (e.g. P1, P7,
P3) to Test Sequence Modules (e.g. TSM0101, TSM0203, TSM0204) and Test
Sequences (e.g. TS0301, TS0304, TS0505), cross-linked to the requirement
versions held in the Requirement DB.]

Fig. 6.15 Implementation DBs data and relationship to Requirement DB



[Figure: dynamic operations on the SRDs of FMF4.2, RAF1, RAF2 and SMC4.1: the
NewRelVar, NewPaste Link and Revisioned/Marked Link operations maintain the
Link DB as requirements are constructed.]

Fig. 6.16 Dynamic operations of Requirement construction using the TestTracker toolset

Requirement Database Summary Features

• History by requirement label
• Last requirement version
• Requirement list by Release/Variant
• Inherited requirement list by Release/Variant
• Statistics by Release/Variant.
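These queries can be sketched over an in-memory version of the requirement table, with rows shaped like those of Fig. 6.13 (the real toolset used a database and a word-processor front end; this sketch and its function names are ours):

```python
# Illustrative in-memory version of the Requirement DB queries, over
# rows of the form (origin RelVar, requirement version, destination
# RelVar, inheritance status). Row data mirror Fig. 6.13.
rows = [
    ("n/a",  "PP13.1-1", "FMF4",   "N"),
    ("FMF4", "PP13.1-1", "RAF1",   "U"),
    ("RAF1", "PP13.1-2", "RAF2",   "M"),
    ("RAF2", "PP13.1-2", "SMC4.1", "U"),
]

def history(label):
    # History by requirement label: every version of "label"
    return [r for r in rows if r[1].startswith(label + "-")]

def requirements_of(relvar):
    # Requirement list by Release/Variant
    return sorted({r[1] for r in rows if r[2] == relvar})

def inherited_into(relvar):
    # Inherited requirement list: versions brought in unchanged (status U)
    return sorted(r[1] for r in rows if r[2] == relvar and r[3] == "U")
```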

[Figure: screen shot of the History by Requirement Label query form, with
requirement label, title, document and section fields.]

Fig. 6.17 Requirement DB Queries: History by Requirement Label

[Figure: screen shot of the Inherited Requirement List query, listing inherited
requirement labels (VM4.5, VM5, VM5.1, ...) for a selected Release/Variant.]

Fig. 6.18 Requirement DB Queries: Inherited Requirement List by RelVar



[Figure: screen shot of the Last Requirement Version query, listing each
requirement label with the Release/Variant (RAF1 or RAF2) holding its last
version.]

Fig. 6.19 Requirement DB Queries: Last requirement Version

[Figure: screen shot of the Requirement List by Release/Variant query, listing
the requirements (ADF1.1, ADF1.2, ADF1.3, ADF2.1, ...) originating in RAF1.]

Fig. 6.20 Requirement DB Queries: Requirement List by Release/Variant



[Figure: screen shot of the Requirement Statistics by Release/Variant query,
giving the number of requirements per Release/Variant.]

Fig. 6.21 Requirement DB Queries: Requirement Statistics by Release/Variant

6.3.4 Impact and Experience

In the context above the impacts are:
• a remarkable cost saving in the testing activities (18%), due to a reduction of
new test preparation and execution supported by the across-Release
requirement traceability.
• an easier definition of traceability matrices involving both the implementation
and the tests.
• the use of new tools has increased the skill of the involved personnel.
It is foreseen that, having performed the culture creation, the training and the
initial population of the databases, a second application of the methodology
would yield a more precise identification of specific and regression tests and
even better results.
The adoption of the new procedures initially caused a mildly sceptical reaction,
due to the long-term and constrained approaches to the software life cycle
imposed by military and civil standards and ingrained in the team's working
style. Nevertheless, this mood quickly turned into co-operative behaviour when
the benefits of the supporting tools emerged.
Benefits obtained so far:
• Automatic production of Traceability Matrices along the entire Development
Process
• Substantial reuse of all Software and Test Items inherited without change

• Assisted reuse of Items inherited with varying degrees of change
• Precise identification of the new tests needed and of the degree of regression
test achieved
• Assisted quantitative assessment of change impact
• Automated identification of Items that are candidates for modification as a
consequence of Requirements changes.

M1 = (Number of tests re-executed) / (Number of added / modified requirements)
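Both the automatic Traceability Matrix production listed above and the metric M1 fall out of the same requirement-to-test links; a sketch with invented link data:

```python
# Sketch of automatic Traceability Matrix production and of the metric
# M1 = (tests re-executed) / (added or modified requirements).
# The links and the changed-requirement set are invented examples.
links = {                       # requirement -> tests that verify it
    "PP13.1": ["TS0301", "TS0304"],
    "VSA1.3": ["TS0505"],
    "ADF1.1": ["TS0301"],
}
changed = {"PP13.1", "ADF1.1"}  # added/modified in the new Release

# the Traceability Matrix is just the flattened link relation
matrix_rows = [(req, test) for req, tests in sorted(links.items())
               for test in tests]
# M1: distinct tests that must be re-executed, per changed requirement
re_executed = {t for r in changed for t in links[r]}
m1 = len(re_executed) / len(changed)
```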

[Figure: bar chart of the number of tests re-executed per added/modified
requirement (0-1600) for the measured test categories.]

Fig. 6.22 Metrics and results (1)

M2 = (Number of tests re-executed) / (effort in test phase)

[Figure: bar chart of the number of tests re-executed per unit of test-phase
effort (0-1100) for the measured test categories.]

Fig. 6.23 Metrics and results (2)

[Figure: apportionment of tests between RAF1 and RAF2 (0-1500 tests),
identifying how many tests each Release/Variant requires.]

Fig. 6.24 Test Apportionment identification



6.4 FCI-STDE Project Summary

F. López, P. Hodgson
Procedimientos-Uno SL

The interest of FCI-STDE lies in the subject matter of this experiment: code
inspections. It is widely claimed that code inspections can have a higher defect
discovery potential than testing, at lower cost. This PIE field-tested this
statement and came to some relevant conclusions:
• A relationship exists between the cost of inspecting and testing and the
complexity of the object subject to these procedures. A linear cost/complexity
tendency fits well with the experimental data, taking into account the overall
limitations of a small experiment.
• Code inspections were only marginally cost-effective in the PIE context,
whatever the complexity of the inspected objects.
• Young and relatively flat organisations, such as our company, offer little or no
resistance to the introduction of code inspections. Current literature tends to
maintain the opposite.
• The introduction of formal code inspections has led to changes that are more
organisational in nature (affecting workflow and deliverables) than technical.
The successful implementation of the FCI-STDE process improvement experi-
ment at Procedimientos-Uno, S.L. has led to the improvement of code maintain-
ability as well as to better knowledge of the code and product. As a consequence,
the risks related to engineers leaving the organisation have been positively re-
duced for the company.

6.4.1 Participants

Procedimientos-Uno, SL is a small software consultancy and supply company. It
has 20 employees and seven years' experience producing and marketing technical
software in the fields of architecture, engineering and, more recently, finance
for SMEs. With more than 3000 clients in Spain, 50 software applications and
50000 packages installed, its turnover for 1997 was 1.3 MECU.
The company's organisation is extremely flat with only two levels: a board of
directors (all partners of the company) and the rest of the workforce, though func-
tionally divided in three departments and task oriented subdivisions. The com-
pany's economic health is highly dependent on the productivity and quality of the
software produced.

6.4.2 Business Motivation and Objectives

Code inspections have not been widely adopted by developers of technical
applications, due mainly to the relatively low testing costs and the fact that
technical knowledge of the domain was considered more important than good coding
practice. In small software developing units this situation becomes worse, as
classical code inspection plans require more human resources than are usually
available. The widespread introduction of graphical user interfaces and the
demand for distributed computing have seriously raised testing costs for all
types of software, and this has led Procedimientos-Uno to consider the
introduction of code inspections.

[Figure: the classical scope of code inspections versus the PIE's scope.
Classical (IEEE 1028-1988, M.E. Fagan): large companies (IBM, AT&T), IT
departments, very large projects (100,000 NCSL, person-years), large working
teams, limited range of conditions to be tested. PIE: small businesses (SMEs),
technical software, small projects (10-20,000 NCSL, person-months), small
working teams (also one-person teams), large (almost unlimited) range of
conditions to be tested.]

Fig. 6.25 Introducing code inspections

The motivation for the PIE was to contain the ever-increasing testing costs and
to improve software quality, especially the quality as perceived by the end-user.
Indirect benefits were expected as well: Procedimientos-Uno has always acted
as a well-integrated team, but as the size of the business grows, methods are re-
quired to enhance communication within the development team.
Improvement in code maintainability was also expected, as many coding stan-
dards are oriented to enhance code readability.
An additional improvement in software reliability was also expected, based on
the fact that code inspections can discover many faulty aspects that software
testing cannot capture.
Concretely, the following objectives were presented:
• higher level of quality of components at less cost
• eliminate errors that software testing cannot capture
• enhance code maintainability
• enhance code readability

• enhance reliability
• group motivation

6.4.3 The Experiment

The experiment was carried out in the context of a baseline project that is strategi-
cally vital to Procedimientos-Uno and was subject to external auditing by a public
institution (CDTI of the Spanish Ministry of Industry). The project, known as
NovaMedia, falls in the category of development platforms and can be described
in short as a true visual language based on COM technology.
The experiment was designed in two phases, each one covering a complete de-
velopment, inspection and testing cycle of different components of the baseline
project.
Code inspection took place when the coding effort was at its peak on the base-
line project NovaMedia.
The following steps were carried out:
• Review of current C++ coding standards to bring them up-to-date and to make
them appropriate to the product and development environment characteristics.
• Review Test Metrics, with respect to the PIE's main objective. This revision
suggested collecting detailed effort data in order to enable later evaluation of
FCI cost effectiveness.
• Software tool acquisition and set-up. The decision to update to "Microsoft
Visual C++ Development System Version 5.0" was made. This product offered
additional benefits beyond improved static code analysis.
• Selection of code to be reviewed (Test plan 1). Special attention was paid to
establishing similar amounts of code (in effort, complexity, etc.) to be subject
or not to Formal Code Inspection (FCI), based on existing specifications and
test plan.
• Formal Code Inspections - Phase 1. Three engineers were involved in the roles
of: Inspector/Moderator, Inspector/Reporter and Author in the following proc-
esses: Inspection overview, Inspection session, Meeting, Rework and Follow
up.
• Process Improvement Plan. Required situating the FCI practice in a general
plan aiming to a wide adoption of reviews and inspections based on clearly de-
fined business objectives.
• Statistics.
• Review of standards and metrics, with the objective of improving the standards
and criteria established in WP1 and WP2.
• Selection of code to be reviewed. The main practical difference of this new
round was that the objects subject to the FCI were now classes of a complex
container rather than simple components.
• Formal Code Inspections phase 2.

• Statistics and final assessment of the PIE results.


The experiment aimed to compare the relative cost of inspected and non-
inspected code. For this comparison to be believable it was considered important
that inspected and non-inspected code had similar complexity.
A compound relative complexity index was designed for each phase based on
simple counts and metrics such as lines of code and number of attributes.
In the first phase the set of inspected components had a similar compound rela-
tive complexity (total 6.57) to that of the control (non-inspected) set (total 8.42).
In the second phase the total compound relative complexity of inspected code was
13.06 against 16.94 for non-inspected.
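One way such a compound relative complexity index can be built is by normalising each raw count over the component set and summing; the metrics chosen and the equal weighting below are our assumptions, not the PIE's actual formula:

```python
# Illustrative compound relative complexity index: each component's raw
# counts (lines of code, number of attributes) are divided by the
# set-wide totals, so the indices of all components sum to the number
# of metrics used. Metrics and equal weighting are assumptions.
def compound_complexity(components):
    metrics = ["loc", "attributes"]
    totals = {m: sum(c[m] for c in components.values()) for m in metrics}
    return {name: sum(c[m] / totals[m] for m in metrics)
            for name, c in components.items()}

index = compound_complexity({
    "CompA": {"loc": 900, "attributes": 12},
    "CompB": {"loc": 300, "attributes": 4},
})
```

With this scheme, comparing the totals of the inspected and control sets (as done above with 6.57 vs 8.42) checks that the two sets carry comparable complexity.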

6.4.4 Impact and Experience

In the FCI-STDE PIE two full cycles of coding, inspection and testing have been
completed. The following tables and figures show the results:

Table 6.9 Inspection productivity

Component   Lines Inspected   Preparation   Inspection   Fixing   Total Hours
Average     2055              26.47         82.75        37.30    14.65

Table 6.10 Inspection Speed

Component   Lines/Preparation   Lines/Inspection   Lines/Fixing   Lines/Total hours   % Inspect.
Average     1658.79             324.28             508.23         155.38              100.00

Table 6.11 Inspection Efficiency

Component   Design defects/KLOC   Logic defects/KLOC   Critical defects/KLOC
Average     1.17                  8.51                 2.03

Though the improved inspection process and the inspectors' growing experience
somewhat undermine the hypothesis, we believe there is a fairly straight
correlation between inspection effort and complexity. This is noticeably
different from coding effort, which tends to grow exponentially with increasing
complexity.

[Figure: scatter plot of effort in minutes (1000-2500) against relative
complexity (0.00-4.00) for the inspected and the non-inspected (control)
component sets, each with a linear fit.]

Fig. 6.26 Effort related to complexity (Inspected & Control sets)

Analysis of the figures and charts shows FCI to be, at best, only marginally
beneficial in the terms of the experiment in our environment. At our current
level of competence in the inspection process, and taking only the experimental
measures into account, Procedimientos-Uno can save an overall average of 0.25
person-days per KLOC delivered to our clients.
If only inspection, testing and rework are compared, careful attention has to be
paid to the size of the inspection team versus the size of the testing team. The
amount of testing required in a single cycle (not including re-testing) is independ-
ent of the inspection effort. Obviously inspections have to be very efficient (and
testing thorough) if the inspection team is the larger.
As for the metrics for code inspections themselves, in the first phase inspectors
covered an average of 155 lines per hour and discovered an average of 2.02 criti-
cal defects per KLOC. In the second phase inspectors covered an average of 233
lines per hour and discovered an average of 0.89 critical defects per KLOC.
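The speed and efficiency figures quoted are simple ratios; their computation from raw session data can be sketched as follows (the session numbers below are invented for illustration):

```python
# Sketch of the inspection metrics used above: lines covered per
# inspection hour, and critical defects found per KLOC inspected.
# The session data passed in below are invented, not the PIE's.
def inspection_metrics(lines_inspected, inspection_hours, critical_defects):
    return {
        "lines_per_hour": lines_inspected / inspection_hours,
        "critical_per_kloc": critical_defects / (lines_inspected / 1000.0),
    }

m = inspection_metrics(lines_inspected=2055,
                       inspection_hours=13.2,
                       critical_defects=4)
```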
The experiment has proved that FCI improves knowledge dissemination within
the organisation. This directly reduces the business risks related to the loss
of part of the organisation's engineering workforce.
In spite of the marginal 0.25 person-days per KLOC overall gain implied by
the experiment's results, Procedimientos-Uno has decided to adopt FCI (and
reviews in general) at an organisation-wide level. There are several reasons for this:

• improved code maintainability


• better knowledge of the code and the product.
In consequence FCI reduces the risks related to engineers leaving the organisa-
tion.
As formal code inspection has been adopted for all projects, and additional de-
sign criteria have been established to enable design review, our customers will
receive better and more uniform products. This should lead to increased customer
satisfaction.
The experience of the experiment itself (which included developers that had
doubts about the usefulness of FCI) was essential. Adoption of FCI and reviews at
organisational level would have encountered much more resistance if the FCI-
STDE PIE had not existed.
Although code inspections proved useful and cost-effective, the benefits in
environments such as this PIE's are nowhere near the twenty-fold benefits
described in some literature.
Having defined the process, standards, deliverables and work documents for an
experiment such as FCI-STDE, little effort is required to extend experimental
practices in an organisation such as ours.
Comparing our experience to current literature we believe that smaller (more
specialised) organisations, as ours, tend to be more permeable to innovation than
large enterprises.
The experiment has not only been the basis to changes in our software devel-
opment processes; it has also led to greater acceptance of process improvement
efforts in general.

6.4.4.1 Summary of the Results


There are no silver bullets: no tool can turn an uncontrolled process into an
optimised process.
The study of our processes allows us to improve them, but there are some points
to be remembered:
• Models like SPICE are no recipe.
• The context affects processes.
• Processes affect other processes.
There are mathematical models that relate product quality to external metrics,
but there are no universally accepted models to relate it to internal metrics. We
analyse results and compute statistics, but the development process is human-
centred and the measurements that we take have wide dispersion.

6.4.4.2 Further Information


For more information about FCI-STDE:

• http://www.procuno.pta.es/fci-stde/
• http://epic.onion.it/workshops/w07/slides03/index.htm
• http://www.esi.es/VASIE/
• mailto:phodgson@procuno.pta.es

Some resources on inspection on the Web:


• http://www.bell-labs.com/user/hpsiy/research/inspectpubs.html
• http://www.result-planning.com/
• http://www.elsop.com/wrc/humor/coderev.htm

And about testing:


• http://www.stlabs.com/testnet.htm
• http://www.soft.com/

6.5 TESTLIB Project Summary

J.C. Sanchez
Integración y Sistemas de Medida, SA

TESTLIB deals with the key issues in software verification: the automation of
test software generation by using a high-level command language, the integration
of the generated code in a testing environment usable by independent testers,
and the management of the generated code in a database.
TESTLIB took an innovative approach to these issues basing its testing soft-
ware generation on re-use. To this aim the results created by TESTLIB are:
• A set of libraries containing different types of modules to be used in the gen-
eration of tests and test flow sequences.
• An automated process to integrate all the required modules into a test engine in
order to generate, without user intervention, the final run-time test application
software.
Finally, the set of tools created by TESTLIB has been encapsulated within the
commercial development environment HP SoftBench, so that the testing
environment is smoothly integrated with the development environment.
The work carried out in this experiment demonstrated that the traditional
drawbacks of testing automation - namely low efficiency, long development
cycles, and high-risk, costly projects - can be addressed and overcome.
Data gathered in this experiment show that with TESTLIB's innovative approach
it is feasible to generate test software based on re-usability. This approach
does not come without pain: a very high software development effort was
necessary before the results could feasibly be applied.
By applying the techniques involved in this experiment, software engineering
and test engineering specialists interact within the test software development
process in a highly efficient way, through task specialisation.

6.5.1 Participants

The experiment has been carried out by Integración y Sistemas de Medida, S.A.,
an SME highly specialised in the development of application software for
turn-key testbeds - ATS (Automated Test Systems) - used in automated systems
testing, particularly of telecommunication devices.

6.5.2 Business Motivation and Objectives

The objectives of this experiment were focused on exploring potential ways of


improving the organisation and the process of testing software generation for ATS
which consist of a set of electronic measuring instruments operating under
computer control.
Objectives:
• To explore new ways of improving the software generation process for Type
Approval Test Systems.
• To reduce the software skills needed for a test engineer to develop efficient
systems.

The motivation behind TESTLIB comes from the current drawbacks affecting the
development of ATS software, namely:
• long development times and high costs
• low re-use
• difficult maintenance of test cases and test procedures, particularly when
maintenance is performed by people other than those who developed them in the
first place
• extensions and changes impossible for the final user
TESTLIB explored the use of C++ object libraries for the automated generation
of reusable test software code to be used in an automated testing environment
for telecommunication devices.
As a direct consequence of the experiment, test engineers, rather than software
engineers, will be able to develop OO testing code with a relatively small
training effort, based on generic code re-use.
Software re-use will allow development effort to be reduced by 30 to 40
percent.

[Figure: benefits from the Company's point of view: lower development costs,
lower time to market, lower risks, high quality.]

Fig. 6.27 Benefits in the Company's point of view

6.5.3 The Experiment

Currently, every programmable instrument used in an ATS must have its own
proprietary driver within the test code, and this driver is designed according to
the tests to be performed each time a new ATS is planned. This implies the
involvement of both test engineers and software engineers whenever a new test is
coded for any instrument included in the ATS. It is a tedious and repetitive
process that leaves much room for improvement in productivity and reliability.
The experiment was organised around three main lines of action:
• Development of object-oriented test and instrumentation libraries.
• Integration and application of object-oriented libraries to the baseline project.
• Integration and encapsulation within a standard CASE tool (HP SoftBench).
A generic instrumentation driver library was developed.
Test and measurement definitions were made by means of a visual programming
environment, to simplify the way test and measurement procedures are
programmed.
Test software code was generated from these definitions by means of a
high-level code compiler.
The generation of a relational database for allocating and handling all test code
components and definitions was automated.

6.5.3.1 Technical Approach

• Object orientation and software component reuse.
• Distributed computing using a client-server model.
• Graphical interfaces and programming.
• A view of the final system as a generic test engine that executes specific test programs.
• Based on industry standards.
Following this new approach, the ETS 300 086 type approval test standard for
mobile radio terminals was implemented under the baseline project.

Table 6.12 Mapping of Technical Approach into Business improvements

Technical Approach Business Improvements

Object Orientation and software reuse Lower development costs and time
Graphical programming and interfaces Lower software skills
Based on Industry Standards Market strength and continuity

Problem Domain Analysis


• Drivers:
• Generic OO driver concept and operations.
• VXI plug&play standard.
• Tests and Repertories:
• Hierarchical model.
• Multiple results tests.
• Limits:
• Comparison logic and uncertainty.
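The "comparison logic and uncertainty" item lends itself to a small illustration: a limit check that only returns a firm verdict when the whole uncertainty band of a measurement lies inside or outside the limits. This is a generic sketch, not TESTLIB's actual logic.

```python
def check_limit(value, low, high, uncertainty):
    """Classify a measurement against limits, honouring its uncertainty."""
    if value - uncertainty >= low and value + uncertainty <= high:
        return "PASS"              # whole uncertainty band inside the limits
    if value + uncertainty < low or value - uncertainty > high:
        return "FAIL"              # whole uncertainty band outside the limits
    return "INDETERMINATE"         # a limit falls inside the band

print(check_limit(5.0, 0.0, 10.0, 0.5))    # PASS
print(check_limit(10.4, 0.0, 10.0, 0.1))   # FAIL
print(check_limit(9.9, 0.0, 10.0, 0.5))    # INDETERMINATE
```
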

Development Environment Analysis


• Project mode:
• Oriented to the development of a specific system.
• Graphical reuse of components.
• Library mode:
• Oriented to the development of generic, reusable components.
• Powerful and flexible.

Basic Design
• Basic Object Management
• Unique Identification network service
• Object Persistence using ANSI-SQL
• Remote Connectivity using ONC RPCs
• Object typing and browsing
• Code Generation
• Automatic C++ classes
• Binding process of generic objects to specific instrument drivers
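The generic-driver and binding items above can be sketched as follows; the class, registry and method names are hypothetical, since the summary does not document TESTLIB's actual interfaces.

```python
# Sketch of the "generic OO driver" concept: test code talks to an abstract
# driver interface, and a binding step attaches the concrete,
# instrument-specific implementation at run time.
from abc import ABC, abstractmethod

class GenericDriver(ABC):
    """Operations every instrument driver must offer."""
    @abstractmethod
    def reset(self): ...
    @abstractmethod
    def measure(self, quantity): ...

class SimulatedPowerMeter(GenericDriver):
    """A concrete driver; a real one would talk to the instrument bus."""
    def reset(self):
        self.ready = True
    def measure(self, quantity):
        return 30.0 if quantity == "power_dbm" else 0.0

DRIVER_REGISTRY = {"power_meter": SimulatedPowerMeter}

def bind(instrument_name):
    """Binding process: map a generic object name to a specific driver."""
    driver = DRIVER_REGISTRY[instrument_name]()
    driver.reset()
    return driver

meter = bind("power_meter")
print(meter.measure("power_dbm"))
```

Test code written against `GenericDriver` stays unchanged when an instrument is replaced; only the registry entry (the binding) changes.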

6.5.4 Impact and Experience

The experience gained can be summarised as follows:


• The Object Orientation Paradigm can be applied to activities other than pure
software modelling: the generic instrument driver is an example.
• The key to a successful OO development is to follow a methodology.
• The implementation of the design depends on the quality of the compiler.
• An OO system can be modelled as a real-life system, using objects that are
born, live interacting with other objects, and finally die.
• It is not easy to manage a real-life system.

[Pie chart: Basic Object Management 30%, Graphical Environment 25%, remainder spent on Code Generation Automation]

Fig. 6.28 Time used in Design Phase

The improvement in the test systems software development process has been
noticeable in terms of cost and delivery compared with traditional techniques.
To measure this improvement, two limiting factors should be considered: there
was a single baseline project to which the new technology was applied, and the
instrumentation software components available for reuse were restricted to those
particular to the baseline project. With these factors in mind, we can settle on a
figure of between 20 and 30 percent improvement in speed and costs.
There are also two indirect benefits derived from this innovative approach
which are extremely valuable in our opinion:
• The test software development environment is "tuned" to test developers with
lower software engineering skills, providing test designers with a friendly,
easy-to-use environment.
• The quality of the automatically generated code has improved. This is due to
the better correspondence between skills and activities, but also to the reuse of
highly debugged software components which have been tested through their
reuse in several different modules.
This process improvement experiment is based on the idea of making more
rational use of the different engineering specialities involved in test software
development, by applying software engineering knowledge in the service of the
test engineering side of the process. If software engineers provide a suitable
framework to test experts, the latter can concentrate their efforts on using a
high-level, friendly-syntax command language to describe how measurements
should be made and how computer-controllable instruments behave and interact.

[Pie chart: Problem Domain 50%, Environment 30%, Standards 15%, Development Alternatives 5%]

Fig. 6.29 Distribution of time used over domains

In the mid term, it will be possible to design and generate a commercial CASE
tool, based on this same principle, which will allow other industries outside the
software business to benefit directly from the possibility of developing their own
code for instrumentation control and testing, without needing to involve their
software experts in the software generation process, reducing the investment
currently required and raising the feasibility of ATS projects.

6.6 ATECON Project Summary

S. Daiqui
Deutsche Forschungsanstalt für Luft- und Raumfahrt e.V.

The ATECON PIE is particularly significant for the completeness of its approach:
It covers all phases of testing, a wide range of tools and different development
environments.
As a consequence ATECON shows how to make a verification approach scal-
able, adaptable and tailored to projects with different reliability, availability, main-
tainability and safety requirements.
The approach has been applied and validated in seven real-world projects from
different application areas using heterogeneous hardware/software environments
and traditional programming languages like FORTRAN or C.
To all these domains ATECON has applied a consistent and very systematic
testing approach which proved to pay off in terms of higher quality and reduced
effort, particularly when applied early in the project.
The partners in the ATECON project are planning to extend the test approach
to cover object oriented system development with new challenges like dynamic
linking and overloading.

6.6.1 Participants

ATECON was performed by the Deutsche Forschungsanstalt für Luft- und Raum-
fahrt e.V. (DLR). DLR, the German aerospace research establishment, is a non-
profit institution that develops systems for aerospace applications and the corre-
sponding ground support systems. DLR's quality and safety-division was respon-
sible for the overall experiment management as well as for the development of the
testing concept.
The ideas of the application experiment were tested in seven baseline projects.
Five of them were other divisions of DLR, covering a wide range of different
application areas like robotics, space operation centre, and the German remote
sensing data centre. The project partners CAM and ZETTLER performed the other
two baseline projects. CAM develops software systems for the monitoring and
control of technical facilities like ground support stations or systems for aerospace
applications. ZETTLER develops electrical and electronic systems with monitor-
ing and control systems that contain embedded real-time software with hard reli-
ability, availability and safety requirements.

6.6.2 Business Motivation and Objectives

The objective of the ATECON project was to define and apply a cost-effective,
efficient, state-of-the-art system and software test concept.
This concept should include test methods, detailed practical procedures and
suggestions for state-of-the-art test tools. The concept had to be modular in
order to be scalable to projects with different requirements regarding reliability,
availability, maintainability and safety (RAMS).
The test concept had to cover all aspects of testing, i.e. the unit, integration,
system and acceptance testing. For each phase off-the-shelf test tools had to be
selected and integrated in a state-of-the-art software and system testing environ-
ment. Special attention had to be given to the integration of the test concept into
the overall system and software development life cycle, considering the necessary
interfaces regarding the methods, procedures, and tools utilised during the differ-
ent life cycle phases.
The concept had to be tested by seven baseline projects from a wide range of
application areas and different hardware/software environments to make sure that
the know-how can be transferred to industry.
ATECON set out to prove that the result of systematic testing is higher quality
software systems and measurable improvements regarding reliability, availability,
maintainability and safety.

6.6.3 The Experiment

The test approach, consisting of a test concept and a supporting test environment,
covered all testing phases, from the coding and debugging phase, through the
module, integration and system testing phases, up to the acceptance testing phase.
For all testing phases, state-of-the-art testing methods, procedures, and tools
were defined and applied. In particular, projects were provided with tools support-
ing the test specification, preparation, performance, and evaluation of regression
tests on module, component, subsystem, and system level.
Tools also supported the measurement and analysis of the achieved test
coverage.
These methods and tools, utilised by the different baseline projects, were
supplemented by static and dynamic software quality analysis activities and by
RAMS (reliability, availability, maintainability, safety) analysis activities,
performed centrally for all baseline projects by appropriate specialists.
The broad range of projects and users selected for this application experiment
ensured the definition of a modular, scalable test concept and test environment,
ensuring transferability to future projects and to other organisations.
The overall test concept consisted of four sub-concepts for the unit, integration,
system and acceptance testing phases. Each sub-concept has been defined as a set
of "Software Engineering Modules" (SEMs). A SEM describes a specific test
activity (such as boundary analysis or testability and complexity measurement). It
contains information on the methods to be used for the activity, the tools
supporting it, and procedures describing how to carry it out. It also includes the
templates needed to gather the required test information.
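A SEM can be thought of as a small record bundling these items, and a sub-concept as a set of such records. The structure and the procedure texts below are our illustration; the activity, method and tool names are taken from the ATECON description.

```python
from dataclasses import dataclass, field

@dataclass
class SEM:
    """One Software Engineering Module: a test activity and its support."""
    activity: str
    methods: list
    tools: list
    procedure: str
    templates: list = field(default_factory=list)

# Sketch of a unit-testing sub-concept expressed as a set of SEMs.
unit_testing_subconcept = [
    SEM(activity="boundary analysis",
        methods=["boundary value analysis"],
        tools=["Cantata"],
        procedure="derive test cases from the boundaries of each unit's inputs"),
    SEM(activity="testability and complexity measurement",
        methods=["complexity metrics"],
        tools=["Logiscope"],
        procedure="measure every unit and flag those above the agreed threshold"),
]

for sem in unit_testing_subconcept:
    print(f"{sem.activity}: {', '.join(sem.tools)}")
```

Encoding SEMs as data makes the concept scalable: a project with stricter RAMS requirements simply includes more SEMs in each sub-concept.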
Table 6.13 shows the tools that were used to support the ATECON test concepts,
in particular the testing tools.

Table 6.13 Testing tools per area

Area Tools

User Interface Services SoftBench Desktop


Process Management Services SynerVision for SoftBench
Object Management Services SoftBench Framework
Tools and Applications SoftBench Tools, UIM/X, ISA
Coding & Debugging Dialog Manager
Unit & Integration Testing Cantata
System & Acceptance Testing Software Testworks
Quality Analysis & Metrics Logiscope, Purify, LDRA
RAMS Analysis RiskSystem, IQ-FMEA2
Configuration Management SCCS, RCS
Communication Services SoftBench Framework

6.6.4 Impact and Experience

The project has shown that many practitioners underestimate the power of a sys-
tematic test approach. Not only does the test effort become more predictable and
the software more stable; knowing more about testing techniques and starting the
testing activities earlier in the project can also lead to higher quality systems and
reduced effort.
Among the most important technical lessons learned were the following:
• Even highly critical systems have only a few components that are critical.
Knowing which components are critical can reduce the overall testing effort
dramatically, since only those components have to undergo more elaborate test
procedures.
• Testing starts with the requirements phase and never ends. This is well known
by the experts but seldom applied in real projects.
• The programming languages influence the methods and tools used for testing
to a much higher degree than expected. Not only minor differences in applying
a method were observed, but the use of totally different methods and proce-
dures.
• Most of the testing methods and procedures described in the literature have two
shortcomings: it is not well defined under which conditions and for which kinds
of systems they can be applied, and they are often purely academic and cannot
be used in real projects without tool support, which is frequently unavailable.
ATECON overcame these problems by providing detailed descriptions of step-
by-step procedures, methods and tools applicable in real-world projects.
• A training program tailored to the individual needs of the project was extremely
important to transfer the theoretical concepts into practice.
• In tool selection, test installations and trial periods are an absolute must, espe-
cially when the tools are used in different hardware/software environments.
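The first lesson above, that only a few components are critical, translates naturally into a test concept that scales the procedure per component. A minimal sketch, with invented criticality levels and component names:

```python
# Map each criticality level to the test activities it requires; only the
# few critical components undergo the most elaborate procedures.
TEST_DEPTH = {
    "low":    ["unit tests"],
    "medium": ["unit tests", "integration tests", "coverage analysis"],
    "high":   ["unit tests", "integration tests", "coverage analysis",
               "RAMS analysis", "independent review"],
}

components = {
    "logger": "low",
    "telemetry decoder": "medium",
    "attitude control": "high",   # one of the few truly critical components
}

for name in sorted(components):
    print(f"{name}: {', '.join(TEST_DEPTH[components[name]])}")
```
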
Testing is not an art. One can define and apply strict procedures based on ob-
jective criteria, similar to well-known requirements and design methods. And these
test procedures even provide feedback to improve the usage of requirements and
design methods!

6.7 GUI-Test Project Summary

T. Linz
imbus GmbH

In the age of highly interactive graphical user interfaces, testers have to cope
with the increasing complexity of thoroughly testing the large number of option
combinations available to the user.
The manual approach is scarcely effective, costly and, worst of all, does not
produce any re-usable asset.
GUI testing tools are certainly the largest family of test automation tools
available on the market these days. They carry big promises of increased produc-
tivity and lower costs. However, it is well known that automated GUI tests can
be difficult to re-use and maintain as the interface changes (and you can bet it
will!).
GUI-Test looks into this issue to establish the most appropriate GUI testing
strategy for the company's business needs and its process, which is already de-
fined as part of the company Quality Management System.
GUI-Test is therefore a good example of an experiment focused on assessing ob-
jectively - and with an eye to the bottom line - the benefits of switching from
manual to automated testing in a critical area of the software process.

6.7.1 Participants

GUI-Test was run by imbus GmbH, a company established by six partners in


1992. Today, more than twenty professional software engineers develop customer-
specific software and offer training and consulting in the field of Software Quality
Assurance - including software testing services. imbus' customers include soft-
ware companies, software vendors, and the engineering departments of large
manufacturers, particularly in the fields of automation and telecommunications.
imbus' development and testing processes are defined as part of a global ISO
9001 registered Quality Management System.
Testing at imbus is a step-by-step process consisting of: Test specification →
Test execution → Test analysis → Error elimination. Testing staff are independent
of software development whenever possible. Before the PIE, semi-automated test-
ing had been possible in some specific projects, such as system software develop-
ment; however, the company had not been able to automate GUI testing.

6.7.2 Business Motivation and Objectives

Today's software systems usually have Graphical User Interfaces (GUIs) offering a
large number of control elements to the system's users. As there are thousands of
possible interactions, testing such systems is extremely difficult and labour inten-
sive.
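A quick calculation shows why exhaustive GUI testing is infeasible: the number of input combinations grows multiplicatively with the number of controls. The dialog below is invented and deliberately tiny.

```python
from math import prod

# Controls of a small, hypothetical configuration dialog.
controls = {
    "protocol": ["GSM900", "GSM1800", "GSM1900"],   # 3 choices
    "channel":  list(range(1, 25)),                 # 24 channels
    "power":    ["low", "mid", "high"],             # 3 levels
    "logging":  [True, False],                      # 2 states
}

# Every combination is a distinct input state: 3 * 24 * 3 * 2 = 432,
# and that is before considering the order of user interactions.
combinations = prod(len(options) for options in controls.values())
print(combinations)
```

Even this four-control dialog yields 432 states; a real application with dozens of dialogs quickly reaches the "thousands of possible interactions" mentioned above.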
GUI-Test aimed to standardise and optimise GUI testing methods by introducing
commercial tools to automate the testing.
The effectiveness of GUI testing is very relevant to imbus since their customers
expect a robust graphical user interface for practically every software project.
From a business point of view imbus' motivation was to gain the following bene-
fits:
• a cost reduction and a more accurate estimate of testing costs, the reason for
this being the standardisation and repeatability of testing procedures
• lower residual error rates, because testing is more thorough and effective, lead-
ing to savings on bug repair costs
• lower total development costs and shorter time-to-market.
Given that imbus is also a service provider in the field of testing, one of the mo-
tivations behind the PIE was to acquire know-how to improve its competence as a
"Third Party Tester".

6.7.3 The Experiment

At the start of the PIE, GUI systems at imbus were tested only manually, a labour-
intensive and costly approach. The completeness and effectiveness of manual tests
were strongly dependent on the ability of the tester, and the results were difficult
to quantify and qualify.
In addition, such testing effort did not produce any re-usable asset, and the test-
ing work had to be done again and again over subsequent versions of the software.
To address this issue imbus opted for the application of commercially available
GUI testing tools to automate the tests.
imbus ran a formal tool selection phase before committing to a final selection
consisting of:
• GUI test automation tools "WinRunner" and "TestDirector" from Mercury
Interactive.
• The runtime analysis and error detection tool for MS-Windows,
"BoundsChecker", from NuMega Technologies.
The tools were evaluated in a real software development project: the baseline
project chosen was the development of the latest version of an integrated PC soft-
ware tool for GSM radio base stations. The application had been developed as a
distributed system with several tasks and DLLs, using Microsoft Visual
C++/Visual Studio (about 100,000 lines of code), and running on MS-Windows
95 and MS-Windows NT.

[Diagram: the PIE GUI-Test. Goals: optimise and automate GUI testing using a GUI testing tool; automate the defined tests and repeat them. Baseline project: PC software tool for GSM radio base station maintenance (97,890 lines of C++ code), with tests initially defined and run manually. Results: use of the GUI testing tool integrated into the imbus testing procedure, leading to the next stage.]

Fig. 6.30 The Process Improvement Experiment

Tests were run on the baseline project using both the traditional manual methods
and the new, semi-automated or automated methods. The amount of testing that
could be automated was assessed, and the old and new methods were compared.

6.7.4 Impact and Experience

The PIE results showed that it is possible to automate up to 90% of the GUI
test cases which previously had to be performed manually.

6.7.4.1 Making GUI Testing Painless


Preconditions:
• Structured testing process already established.
• Error tracking and change management is in use.
• Testing is done by testers not by developers.

Improvements:
• Buy a capture & replay tool.
• Let the testers learn to program and use it.
• Build up and maintain a test case library.
• Integrate tool usage into your testing process.
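The test case library idea can be sketched as data-driven test cases replayed through a thin tool adapter. The step vocabulary and the adapter below are hypothetical; in imbus' setting the adapter would drive the capture & replay tool (WinRunner).

```python
# Reusable test cases stored as data (keyword + arguments) rather than as
# tool-specific scripts, so they survive GUI and tool changes better.
TEST_CASE_LIBRARY = {
    "open_site_dialog": [
        ("click", "menu:File"),
        ("click", "menu_item:Open Site..."),
        ("wait_for", "dialog:Open Site"),
    ],
    "enter_site_id": [
        ("type", "field:Site ID", "BTS-0042"),
        ("click", "button:OK"),
    ],
}

class ReplayLog:
    """Stand-in adapter; a real one would forward steps to the GUI tool."""
    def __init__(self):
        self.log = []
    def execute(self, step):
        self.log.append(" ".join(str(part) for part in step))

def run(case_names, adapter):
    """Replay a sequence of library test cases through the adapter."""
    for name in case_names:
        for step in TEST_CASE_LIBRARY[name]:
            adapter.execute(step)

adapter = ReplayLog()
run(["open_site_dialog", "enter_site_id"], adapter)
print(len(adapter.log), "steps replayed")
```

Because the test cases are data, the same library can be replayed overnight and across product versions; only the adapter depends on the tool.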

6.7.4.2 Lessons Learnt


Automation:
• Degree of GUI test automation up to 90% is possible.
• Complete overnight test runs are possible.
Test Design:
• Define the GUI-style-guide by test cases.
• Design Tests with automation in mind.
Business Factors:
• The ability to perform 100% regression testing.
• Shorter "Test-Update" cycles.
• Reduction of overall test costs.

6.7.4.3 Conclusions
The tools selected (but also competitors' tools that were not selected) worked
fine, but easy usage requires a considerable amount of tool-specific programming
know-how and tool set-up. Therefore imbus tried to isolate as many reusable test
cases as possible and put them into a test case library to be used within the GUI
testing tool.
However, GUI-Test showed that once the tool is well known, its regular usage
takes less effort than manual testing.
As an intermediate result, GUI-Test found that the break-even point for GUI
test automation within the baseline project was reached after the 2nd to 8th
repetition of an automated test run, proving that test automation pays off in the
medium to long run if tests are re-run frequently.
This result was better than imbus expected, but it must still be confirmed
during ongoing evaluation.
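The break-even finding follows from simple amortisation arithmetic: automation pays off once its one-off scripting cost has been recovered through cheaper repeated runs. The effort figures below are invented purely to illustrate the calculation.

```python
import math

def break_even_runs(script_cost, manual_run_cost, automated_run_cost):
    """Smallest number of repetitions at which automation becomes cheaper."""
    saving_per_run = manual_run_cost - automated_run_cost
    return math.ceil(script_cost / saving_per_run)

# e.g. 6 hours to script a test that takes 2 h manually and 0.5 h automated:
print(break_even_runs(6.0, 2.0, 0.5))   # 4 repetitions

# A harder-to-script test breaks even later:
print(break_even_runs(12.0, 2.0, 0.5))  # 8 repetitions
```

Varying the scripting cost across test cases is one plausible reading of why the observed break-even point spanned a range (2nd to 8th repetition) rather than a single figure.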
7 Lessons from the EUREX Workshops

L. Consolini
Gemini, Bologna

Three software-quality-related workshops organised by EUREX incorporated
themes on software verification and testing in particular. The best of the
workshop information is presented here.
Although the workshops followed different formats and were adapted to the
audience and the specific national culture, the editors have tried to present the
information in a common form, giving each section a common structure.
Following the workshop reviews, data from the workshops are synthesised into
a summary that presents the problems and general conclusions from the entire
workshop series.

7.1 Second Italian Workshop


7.1.1 Introduction

The workshop was held on 28 May 1998 in Milan. GEMINI, the Italian partner of
EUREX, organised the workshop. The title of the second workshop was "The
Testing of Software". 19 participants attended the workshop, mostly representing
software SMEs.
The subject domain was selected on the basis of the results of the classification
of the PIEs, which showed that 8 out of 52 Italian PIEs dealt or are dealing with
software testing. The workshop focused on the following aspects of testing:
• Testing automation: does it pay off?
• The economics of testing: how can you tell how much testing is needed and
economically viable?
• Is it possible to re-use tests?
• Who is responsible for testing?
• How do the traditional testing methods apply to Internet-based applications?
These were identified as the major questions arising from the PIEs' experience.
The PIEs most involved in these issues, and which achieved significant results
suitable for dissemination to a wider audience, were invited to be part of the
workshop panel: PD, PROVE, and TRUST. The PIEs and the experts formed the
panel of the workshop.

M. Haug et al. (eds.), Software Quality Approaches: Testing, Verification, and Validation
© Springer-Verlag Berlin Heidelberg 2001
The workshop aimed at determining which approaches to testing, and particu-
larly to testing automation, are most beneficial to the software industry, what im-
pact they have on the company, and how to measure their results. This aim was
pursued by elaborating a "workshop hypothesis" to be discussed and proved or
disproved during the workshop.
In the first part of the workshop, the experts introduced the topic and illustrated
the workshop hypothesis. Three PIEs presented their experience and commented
on the hypothesis.
In the second half of the workshop, one of the experts stimulated the discussion
among all the participants by defending a set of provocative theses, fairly critical
of the effectiveness of testing in ensuring product quality.
A lively discussion involved the PIEs, the experts and the audience; the panel
drew the final conclusions.

7.1.2 The Workshop Experts

Two domain experts participated in the event. The first was Gualtiero Bazzana,
Partner and Chief Executive of ONION Communications - Technologies - Con-
sulting. He has more than 10 years of experience in conducting IT projects in
several application sectors and has more than 40 international publications, of
which about 10 are dedicated to the domain of testing. Lately he has specialised
in the automation of testing and in the verification and validation of Internet/
Intranet applications.
The other expert was Luisa Consolini, president of GEMINI S. cons. a r.l., a
consortium for Software Engineering and Software Quality. She has been involved
in Software Engineering and Software Quality since 1991. She assisted the PIE
PROVE, which focused on the verification and testing process.
They were selected for their hands-on experience in the field and for being well
known in Italy and knowledgeable about testing in the software sector.
Gualtiero Bazzana introduced the topic with a presentation focused on Web
Testing Methods, that is, on the new issues raised by the application of testing
methods to Web-based applications. Mr. Bazzana identified the quality character-
istics specific to Internet software and the technical and process-related peculiari-
ties of Web projects.
These peculiarities have an impact on the testing methods, and Bazzana high-
lighted two major classes of tests:
• tests targeted at static aspects, relevant mostly for static HTML pages
• tests targeted at dynamic aspects, relevant for real interactive applications
which apply the client/server paradigm to the Web.

Both classes were described in terms of techniques and tools available on the
market. Tools and techniques were related to the specific quality characteristics
they could verify most effectively. A paper on Web Testing Methods written by
the expert is included hereafter.
The paper covers the following aspects:
• testing challenges in Internet applications
• a survey on existing tools in the specific application domain
• a proposed approach for testing Internet applications
• testing aspects to be taken into account
• the relationships with the overall improvement program, results achieved and
lessons learnt.

7.1.3 Testing Web-based Applications

G. Bazzana, E. Fagnoni
Onion, Milan

7.1.3.1 Background
Recent years have seen explosive growth in the WWW. Currently the Web is
the most popular and fastest growing information system deployed on the Internet,
representing more than 80% of its traffic.
The picture shown in Fig. 7.1 represents the growth of WWW servers world-
wide (data derived from the Netcraft Web Survey, http://www.netcraft.co.uk/Survey),
which now (May 1999) number well over 5 million!
It is a matter of fact that the success of the Internet also resides in the ease of
building HTML documents, which allows everyone to have their own home page.
A wide variety of web browsers, each implementing its own interpretation of
HTML, have also favoured this massive phenomenon.
Unfortunately, humans performing manual tasks tend to make mistakes;
moreover, a large number of non-programmers are also creating web pages, so we
cannot assume that everyone perfectly knows the Document Type Definition
(DTD) specifications.
Browsers are supposed to be liberal in what they accept, and do their best to
render pages, no matter how badly formed they are. But a page that looks great
under Netscape may be non-viewable under other browsers.
Search engines are becoming the first line of attack when surfing, so it is im-
portant that web pages are suitable for automatic processing, particularly for ex-
tracting titles and paragraph headings.

In HTML development there are two kinds of tools, generally used by different
categories of authors. Experienced programmers often prefer standard HTML
editors, which allow more control over HTML tags, while non-programmers tend
to prefer WYSIWYG editors (also known as Web Authoring tools), which promise
an easier approach to creating web pages, freeing them from having to know
HTML syntax.

[Chart: number of WWW servers world-wide, rising from near zero to over 5,000,000]

Fig. 7.1 Growth of WWW servers world-wide

All these things mean that it is increasingly important that web pages are
checked for legal syntax and for additional problems, such as portability across
browsers.
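A minimal static check along these lines can be written with the Python standard library alone: the sketch below flags `<img>` tags that lack an `alt` attribute and reports tags that are opened but never closed. A real project would use a full HTML validator; this only illustrates the principle.

```python
from html.parser import HTMLParser

VOID_TAGS = {"img", "br", "hr", "meta", "link", "input"}  # no closing tag

class PageChecker(HTMLParser):
    """Collects simple static problems while parsing an HTML page."""
    def __init__(self):
        super().__init__()
        self.open_tags = []
        self.problems = []
    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.problems.append("img without alt text")
        if tag not in VOID_TAGS:
            self.open_tags.append(tag)
    def handle_endtag(self, tag):
        if tag in self.open_tags:
            self.open_tags.remove(tag)

checker = PageChecker()
checker.feed("<html><body><img src='logo.gif'><p>Hello</body></html>")
checker.problems += [f"unclosed <{t}>" for t in checker.open_tags]
print(checker.problems)
```

Checks like these address both concerns raised above: syntactic legality and the portability problems that arise when lenient browsers silently repair a malformed page.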
In addition, we have seen the emergence of new "active" languages in develop-
ing WWW applications (e.g.: Perl, Java) as well as the increasing development of
dynamic applications.
Additional trends are:
• interaction of Web-based solutions with large DBMS
• web-portals
• usage of Web-based interfaces for Intranet/ Extranet applications that directly
interface the company legacy system
• usage of Web-based approaches for critical applications (e.g.: on-line trading)
• access to the Web by different media (e.g.: mobile phones, TV)
• need to allow equal opportunities to Web access also for impaired or disabled
people, in order not to exclude them from the new "Information Society".

This has increased the complexity and criticality of applications, requiring the
adoption of systematic testing activities in the Web-based realm too, which is far
too often wrongly considered an application domain populated mostly by hackers.
As of date, we can say that Web-based applications deserve a high level of all
software quality characteristics defined in the ISO 9126 standard, namely:
• Functionality: Verified content of the Web must be ensured, as well as fitness
for the intended purpose.
• Reliability: Security and availability are of the utmost importance, especially for
applications that require trusted transactions or that must exclude the possibil-
ity of information being tampered with.
• Efficiency: Response times are one of the success criteria for on-line services.
• Usability: High user satisfaction is the basis for success.
• Portability: Platform independence must be ensured at client level.
• Maintainability: High evolution speed of services (a "Web Year" normally lasts
a couple of months) requires that applications can be evolved very quickly.

7.1.3.2 Project Management Peculiarities


In the authors' experience, Web-based applications are characterised by the
following project management peculiarities:
• Development is managed in accordance with Rapid Application Development
(RAD) approach.
• This implies that analysis and design are scarce when compared to "standard"
applications: the goal is to sketch-out an innovative idea into a service, rather
than to build a product starting from very precise specifications and architec-
tural design.
• Proof of concept presentations are the normal way to set-up "live" specifica-
tions.
• Limited formalisation of analysis and design automatically implies that the
usage of defect prevention techniques can only be marginal: most of the things
to be checked are thus left to dynamic testing.
• Round-trip engineering is followed, by which we do not have a waterfall model
but rather "design a little, implement a little, test a little" several times on
incremental versions.
Such characteristics have marked the success of the Web; hence we do not
think that Web development has to be adjusted in order to fulfil traditional soft-
ware engineering practices; rather, testing techniques and tools have to be ca-
pable of operating within such an innovative approach.
At the same time, there is no reason to believe that Internet-based applications
are substantially better than traditional ones; hence we need testing for
Web-based applications as well.

The picture is further complicated by the following aspects:


• Designers are often not professional software developers or at least are not
aligned with conventional software engineering practices.
• Compressed deadlines for services are normal considering the pace of innova-
tion in the field.
• Evolutionary maintenance is a must; we can say that Web-based products sel-
dom reach a mature status, since they are replaced by a newer version while
they are still being beta-tested.
• Underpinning technologies are changing very quickly.
• There is a close contact with users in field which can give immediate feedback,
but at the same time
• users may be unknown, especially for Intranet! Extranet applications, where
there is no possibility of doing training, distributing user manuals and following
the standard practices for deployment of conventional applications.

7.1.3.3 Technical Peculiarities


In addition to project management challenges, we also have to take into account
technical peculiarities, notably:
• Web-based applications consist to a large degree of components written by
somebody else and "integrated" together with glue and application software.
• The user interface is often more complex than that of many GUI-based
client-server applications.
• Performance behaviour is largely unpredictable and depends on many factors
which are not under the control of the developers.
• Security threats can come from anywhere.
• Despite the standardisation work by the IETF and W3C, there is low conformity
to specifications, especially as a consequence of the "battle of the browsers" of
the last years, which led Explorer and Mozilla to provide rich extensions to the
HTML 3.x and 4.0 standards.
• We do not have only HTML, but also Perl, Java, VRML, Visual Basic, C++,
ASP, etc.
• Compatibility is mandatory but is made difficult by layers and multiple platforms.
• Reference platforms are brand new and are being changed constantly.
• Interoperability issues are magnified, and thorough testing requires substantial
investments in software and hardware.
• Regression testing is a headache: if we have an application which references
external links, we shall perform regular regression testing even if we do not
change a single line of code, to ensure the validity of the hyper-links.
• Usage of separated environments is not widespread; whereas the adoption of
separated environments for development, test and production is a standard prac-
tice in "conventional" software development, this is not always the case in
Web-based applications, at least according to the experience of the authors.
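The hyper-link regression check mentioned above lends itself well to automation. The following Python sketch is our own illustration (not one of the tools cited in this chapter): it extracts the anchor targets of a page and probes each absolute URL with an HTTP HEAD request, so it can be re-run regularly even when no code has changed.

```python
# Illustrative link-regression sketch: extract anchors, probe each target.
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

class LinkExtractor(HTMLParser):
    """Collects the href targets of all anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html_text):
    parser = LinkExtractor()
    parser.feed(html_text)
    return parser.links

def check_link(url, timeout=10):
    """Returns (url, status): an HTTP status code or an error description."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as err:
        return url, err.code
    except URLError as err:
        return url, str(err.reason)

if __name__ == "__main__":
    page = '<a href="http://example.com/">home</a> <a href="docs.html">docs</a>'
    print(extract_links(page))
```

A scheduled run of `check_link` over all extracted absolute links would flag dead external references between releases.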
7.1 Second Italian Workshop 147

7.1.3.4 The Proposed Methodology


Reference Architecture
Concerning testing, a comprehensive methodology must include the following
aspects: test design methods (reference documents, methods for extracting test
cases, etc.), practices for unit test, practices for integration test, practices for
system/acceptance test, methods for problem notification and tracking, and test
reporting.
In the following we try to cover the distinguishing features of testing for Web-
based applications, taking into account the reference architecture depicted in
Fig. 7.2, which is important for understanding the terminology that will be
used in the remainder of the document.

[Figure: the client side (browser) connects to the Web server(s), which host the
Web applications: Java machine, DBMS, objects, HTML, legacy applications,
script interpreter, OS and middleware.]

Fig. 7.2 Reference architecture for Web-based applications

Examples of different typologies of server side are reported in Fig. 7.3 (tech-
nologies are as of April 1999; most names are trademarks).

Test Levels
As a consequence of the highlighted architecture, the following test levels have
been defined depending on whether the Web-based application is dynamic or only
static.

[Figure: Web servers (Netscape Server, MS-IIS, Apache) invoke server-side
programs (Perl CGI programs, ASP programs), which access databases (MS-SQL,
Oracle, MiniSQL, MS-Access) running on MS-NT or UNIX.]

Fig. 7.3 Examples of different typologies of server-side

Table 7.1 Test levels for various classes

Dynamic Web applications     Static Web applications
Module testing               Syntactic testing
Integration testing          Security testing
System testing               Service testing

The remainder of this document covers the two aspects in more detail.

Testing of Static Web-Based Applications


Test levels for static Web sites are briefly summarised in the following.
Syntactic tests have the goal of checking the basic correctness of Web sites from
a syntactic point of view, from a structural point of view (in particular referring to
"link resolution" aspects) and from a fast-loading point of view (in particular
looking at the number and size of pictures).
Security tests have the goal of validating the security mechanisms and security
enforcing functions, with particular emphasis on the reserved and restricted areas;
when looking at the security of critical systems, the ITSEC [CEC91] and
ITSEM [CEC93] guidelines can be extremely useful.

Service tests have the goal of validating the resulting service from a user's point
of view, thus adopting a black-box strategy without any assumption on the under-
lying architecture and implementation choices.

WWW Syntactic Testing


In order to have a pragmatic approach to Web syntactic testing, a standard check-
list, containing more than 100 checks, was devised by ONION to be applied both
for acceptance purposes and for regression testing activities.
Such a check-list covers the following aspects (for each class the number of tests
is given, together with some of the aspects checked):
• stylistic problems (9 tests, including: spelling errors, particular tags, use of
obsolete mark-up, particular content-free expressions, empty container elements,
etc.),
• lexical problems (5 tests, including: use of character sets, formatting-related
problems, use of white space around element tags, etc.),
• syntax problems (12 tests, including: illegal elements, illegal attributes, un-
closed container elements, malformed URLs and attribute values, etc.),
• fast-loading-related problems (26 tests, including: bandwidth consumption,
image syntax, etc.),
• document structure problems (4 tests, applicable both to tables and forms),
• portability problems (17 tests, including: accessibility by various browsers and
platforms, mark-up inside comments, use of single quotation marks for attribute
values, use of specific mark-up not supported by all browsers, liberal file
naming, etc.),
• structural integrity problems (4 tests, including: no index file for a directory,
dead links, limbo pages, etc.),
• security problems (7 tests, including: no confidential data passed through a form
without SSL, no user form field exposed to the shell in CGI programs, etc.),
• accessibility problems (40 tests, based on the W3C recommendation on the
subject, which focuses on the best way to grant access to the Web for people with
disabilities or who access the WWW from small/slow devices or in situations in
which sight or hearing are limited).
The full check-list is proprietary to ONION S.p.A.
The adoption of supporting tools allows setting up a test factory and thus
running a large proportion of the defined tests almost automatically.
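Two of the simpler checks in such a list, unclosed container elements (a syntax problem) and images without width/height attributes (a fast-loading concern), can be sketched with Python's standard HTML parser. This is purely our own illustration; the actual ONION check-list is proprietary and far more extensive.

```python
# Illustrative sketch of two syntactic checks on an HTML page.
from html.parser import HTMLParser

VOID_TAGS = {"br", "hr", "img", "input", "meta", "link"}

class SyntaxChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []      # currently open container elements
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            names = {name for name, _ in attrs}
            if not {"width", "height"} <= names:
                self.problems.append("img without width/height")
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # pop back to the matching open tag, reporting anything skipped
            while self.stack:
                top = self.stack.pop()
                if top == tag:
                    break
                self.problems.append("unclosed <%s>" % top)

def check_page(html_text):
    checker = SyntaxChecker()
    checker.feed(html_text)
    for tag in checker.stack:
        checker.problems.append("unclosed <%s>" % tag)
    return checker.problems
```

Run over every page of a site, such checks can feed the automated "test factory" mentioned above.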

Testing Web Security


Besides the basic security checking performed during the previous test level,
specific security testing has to be performed when Web applications make use
of sensitive data.
First of all it shall be clear that on the WWW, just as in real life, there is no
silver bullet for absolute security, and that security techniques and checks shall be
tailored to the value to be protected.
Moreover, it is important to underline that security enforcement involves both
organisational and technical issues; indeed organisational issues are often much
more important than technical ones, at least in Intranet and Extranet applications.
In such cases, the approach is to define a "Security Policy" at company level
and then to tailor it to the risk level associated with the various services. Security
testing will thus focus on checking that those policy rules that rely on technical
aspects have been correctly implemented.
For Web-based applications intended for Internet usage with security con-
straints, specific tests will have to be devised on a case-by-case basis, remem-
bering that at present the Internet is a so-called "open net" without any central
management. It follows that the availability of any service cannot be guaranteed,
and that the confidentiality and integrity of unencrypted communications cannot
be guaranteed either.
In general, aspects to be covered by security testing will include:
• Password security and authentication.
• Encryption of business transactions on the WWW (SSL, Secure Sockets Layer).
• Encryption of e-mails (PGP, Pretty Good Privacy).
• Firewalls, routers and proxy servers.
• Web site security.
• Virus detection.
• Transmission logging.
• Physical security and backups.
Testing can benefit from available programs (most are shareware), among
which: COPS, CRACK, TCP Wrapper and SATAN. It is interesting to note that
some of them have been developed by hackers and crackers.
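The password-security aspect listed above can be illustrated with a toy audit in the spirit of tools such as CRACK: flag accounts whose password is empty, equals the user name, or appears in a small dictionary of common choices. Real password auditors work on hashed password files with far larger word lists; the rules and thresholds below are our own assumptions.

```python
# Toy password audit sketch; real tools use large dictionaries and hashes.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def weak_password(user, password):
    """Returns a reason string if the password is obviously weak, else None."""
    if not password:
        return "empty password"
    if password.lower() == user.lower():
        return "password equals user name"
    if password.lower() in COMMON_PASSWORDS:
        return "password in common-word dictionary"
    if len(password) < 8:
        return "password shorter than 8 characters"
    return None

if __name__ == "__main__":
    for user, pwd in [("bob", "bob"), ("alice", "correcthorse")]:
        print(user, "->", weak_password(user, pwd))
```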

Testing Web Service


Service testing of static Web applications is very similar to black-box testing of
conventional software applications. What follows is equally applicable to
system testing of dynamic Web applications.
Special care has to be devoted to the following aspects:
• Focus on usability test.
• Focus on performance test.
• Focus on load-stress.
• Focus on installation test.
• Perform well-managed beta test.
Aspects to be tested with care for usability include: the coherence of look and
feel, the navigational aids, the user interactions and the help messages. This has to
be done with respect to normal behaviour, destructive behaviour and the
behaviour of inexperienced users.
Aspects to be tested with care for performance include: searches on databases,
turnaround time of custom applications embedded in the WWW, and verification
of server response. This has to be done with respect to platforms, browsers and
network connections.
Aspects to be tested with care for loading include support for many connections
and users, and management of message peaks and resource locking. This has to be
done with respect to server load and client load.
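The load/stress aspect above can be sketched as a simple driver that simulates concurrent virtual users and collects response times. The Python skeleton below is our own illustration (function and parameter names are assumptions); the request function is injectable, so the driver logic can be dry-run against a stub before pointing it at a real server.

```python
# Minimal load-driver sketch: N virtual users each issue M timed requests.
import threading
import time

def run_load(request_fn, users=5, requests_per_user=10):
    timings = []
    lock = threading.Lock()

    def virtual_user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()                      # e.g. an HTTP GET in a real run
            elapsed = time.perf_counter() - start
            with lock:
                timings.append(elapsed)

    threads = [threading.Thread(target=virtual_user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return timings

if __name__ == "__main__":
    results = run_load(lambda: time.sleep(0.001), users=3, requests_per_user=4)
    print(len(results), "requests, worst %.4fs" % max(results))
```

From the collected timings one can derive throughput, average and worst-case response times under the simulated client load.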

Tools for Testing Static Web Applications


In the following, the best-known tools for testing static Web applications are
quickly examined.
This does not imply any endorsement by the authors of any of the listed tools.
Information on the listed tools might not always be fully aligned with their latest
versions, owing to the fast pace of evolution.
In general we can say that no single tool is fully comprehensive in its coverage,
and each is best used in combination with additional tools. For most tools,
coverage improves with every release.

The W3C Validation Suite


The World Wide Web Consortium (W3C), hosted by MIT, INRIA and Keio
University, has been committed from its beginning, under the leadership of Tim
Berners-Lee, the inventor of the WWW, to developing a neutral, open forum for
the evolution of Web technology.
Like its partner standards body, the Internet Engineering Task Force (IETF),
W3C is committed to developing open, technically sound specifications, backed
by running sample code.
As a consequence W3C has developed various tools for Web Testing including:
• an HTML Validator, which allows HTML documents to be validated against
the DTDs for HTML, including HTML 4.0;
• a CSS Validator, which allows the user to validate the CSS style sheets used by
HTML and XML pages;
• HTML Tidy, a free utility for correcting HTML syntax automatically and pro-
ducing clean mark-up. Tidy can be used to convert existing HTML content into
compliant XML.
All W3C software is open source and can be retrieved from http://www.w3.org.

Doctor HTML
Doctor HTML [ImagiWare] is a Web site analysis product, whose main features
are:

• Check the document for spelling errors.
• Perform an analysis of the images: this section loads all the images in a docu-
ment and determines a few important properties of each image. Excessive load
times for individual images are highlighted.
• Test the document structure: the test looks for unclosed HTML tags which
may cause problems on some browsers.
• Look at image syntax: this test deals with one of the most common mistakes in
HTML coding, overlooked image command tags.
• Examine table structure: this feature tests the table structure on the page.
• Verify that all hyper-links are valid.
• Examine form structure: for those sites which employ forms, this tool can be
used for checking input types and variable names.
• Show command hierarchy: this task presents the HTML commands that are
found in the document.
Doctor HTML can be used as a service or as a tool, with two options:
• Manual system: enter the URLs one at a time and receive a report for each
page analysed.
• Automatic system: enter a top-level page for your site, and ask to check every
page on your site or in specified sub-directories. For this option, Doctor HTML
will generate a full site report, including a report for each individual page, and
an overall site index that highlights trouble spots.

Web Lint
Weblint [Bowers96] is a syntax and style checker for HTML: a Perl script which
picks fluff off HTML pages, much in the same way traditional lint picks fluff off
C programs.
There are several dozen different checks and warnings that can be enabled or
disabled individually, as per your preference.
Weblint is free and regularly updated; it is written in the Perl scripting language,
which was designed for processing and generating text and has powerful regular
expression capabilities. Perl is also very portable, so weblint can be used under
Unix, VMS, Windows, Mac and other platforms.
Weblint does not perform a strict HTML validation test but gives you some
level of assurance that web pages provide the intended content to the reader. It is
easy to obtain, install, use and configure, and its warnings are easy to understand.
It also has many Web-based gateway interfaces (a weblint gateway is an HTML
form which lets you type in a URL and have it checked by weblint without having
to install weblint locally).

The WWW Test Pattern


The WWW Test Pattern [Bergel] is a general-purpose test bench developed by the
University of Arkansas, which can be used by both Web users and developers to
check for HTML compliance. Its goal is to check how consistently Web client
developers comply with the emerging standards.
As a matter of fact, the HTML protocol has evolved in stages. As a conse-
quence of the "browsers' battle" between Netscape and Microsoft, non-standard
extensions are emerging in parallel with orthodox versions; as a consequence,
typical Web client developers usually make no claim of HTML compatibility but
simply add as many features as they feel they can manage in the latest browser
release.
Hence, the WWW Test Pattern has been created for:
• monitoring the degree of HTML compliance of Web clients;
• checking Netscape Mozilla and Microsoft Explorer extensions;
• analysing MPEG, AVI and QuickTime animations.
Some of the tests are passive (that is, the user merely loads the test document
and views the results), whereas others require direct user involvement.
The WWW Test Pattern has to be kept aligned with the fast evolution of the
browsers, and thus its standard suite of tests for text, audio, graphics, meta-links,
animations, forms and tables is always under modification. This also urges the
need to reduce the multiplicity of tests and to provide a standardised test report.

Other Validation Tools


Other WWW validation tools include:
• W3C's HTML validator (http://validator.w3.org/): it provides validation against
a number of selectable DTDs, using an SGML parser, and is thus aligned
with the last word in HTML conformance;
• Web Site Garage, which has deep coverage of syntax, compatibility, load
time, spelling and link-check features;
• Site Inspector, with similar features.
A list of hyper-textual links to Web validators can be found in Yahoo
(http://dir.yahoo.com/Computers_and_Internet/Data_Formats/HTML/Validation_
and_Checkers/).

Testing Features Embedded in Web Editors


Several Web testing features are also embedded in Web editors, for instance:
• Web-Edit Professional includes a multi-lingual spell checker, which corrects
the spelling of your documents directly within WebEdit. The spelling checker
currently supports American English, British English, Dutch, French, German,
Italian and Spanish; Portuguese, Finnish and Swedish will be available soon.
Moreover, it also includes an HTML tag checker which validates your HTML
to ensure the correctness of your pages, supporting various HTML, Netscape
and Internet Explorer versions;
• Hot Dog includes a syntax and spelling checker as well as a Width Checker
which you can use to see how pages will appear to users whose monitors are
running with screen widths of 640, 800 or 1024;
• Web Suite includes a Load Manager which provides size and download
information for components created in the Component Editor. It determines
download times for the different connection speeds, and lets you keep the
download time in mind while you are creating your components.

Tools for Service Testing


In the following, a list is provided of tools intended for Web service and load/
stress testing: Net.Medic; AutoTester Web; WebART; SilkTest; Platinum Final
Exam; TestWorks/Web; MKS Web Integrity; Socrates; Pre-View Web.
Due to the nature of the application domain, the list can be neither exhaustive
nor up-to-date.

Tools for JAVA Static and Dynamic Coverage


When you use Java for development you can of course take advantage of static
and dynamic analysers, which are comparable to the tools long available on the
market for the C language.
Among the various development/test environments offering such features, it is
worth mentioning at least: SUN's Java Test Tools, TCAT for Java, White Box
Deep Cover for Java and RST test tools (inclusive of: Deep Cover for coverage
analysis; Assert Mate for pre-conditions, post-conditions and data assertion test-
ing; Total Metric for static analysis).

Testing of Dynamic Web-Based Applications


Testing Approaches for Dynamic WWW
Testing of dynamic Web-based applications shares many of the challenges of
client-server applications, with additional constraints posed by the underlying
architecture, which can be summarised as represented in Fig. 7.4.
Dynamic WWW development can be done, at the current level of technology,
with two main approaches: CGI programming (Perl, C, TCL, ...) or server exten-
sion programming (ASP, Apache Server API, Netscape Server API, ...).
It has to be noted that the risk level associated with the two techniques is utterly
different, as clearly highlighted by Figs. 7.5 and 7.6. In fact, when a CGI call
fails, just one program fails; when a server extension fails, the whole server
might crash!
For testing the server side, an approach similar to client-server testing has to be
taken, covering the black-box GUI and grey-box HTTP approaches depicted in
Figs. 7.7 and 7.8.

[Figure: the browser communicates with the Web server, which dispatches
requests to server extensions, CGI programs and a transaction manager.]

Fig. 7.4 Architecture of dynamic web-based applications

[Figure: the HTTP server invokes a CGI program, which accesses the DB and
legacy applications.]

Fig. 7.5 CGI programming
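As a concrete illustration of the CGI model of Fig. 7.5, the hypothetical program below (our own sketch, not from any cited source) reads the request from the QUERY_STRING environment variable and writes an HTTP response to standard output. Because the server runs it as a separate process, a crash kills only this program, not the server; this is the risk difference with server extensions noted above.

```python
#!/usr/bin/env python3
# Minimal CGI program sketch: request comes in via environment variables,
# the response (headers + body) goes to stdout.
import os
from urllib.parse import parse_qs

def respond(query_string):
    params = parse_qs(query_string)
    name = params.get("name", ["world"])[0]
    body = "<html><body>Hello, %s!</body></html>" % name
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(respond(os.environ.get("QUERY_STRING", "")), end="")
```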

Besides the well-known techniques for client-server testing, you should beware
of the complexity introduced by the included software layers; it has to be remem-
bered that often more than 90% of the software is out of the developers' control,
being re-used from other sources.

[Figure: the server extension runs inside the HTTP server, which accesses the DB
and legacy applications.]

Fig. 7.6 Server extension programming

[Figure: a capture & playback tool interacts with the WWW server through the
user interface, recording interactions to a script file.]

Fig. 7.7 Black-box GUI testing approach



[Figure: HTTP traffic to the WWW server is captured to a script file and played
back at the protocol level.]

Fig. 7.8 Grey-box HTTP testing approach
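The grey-box approach of Fig. 7.8 can be sketched as a capture phase that stores a digest of each recorded response in a "script file", and a playback phase that replays the requests and reports any divergence. The Python sketch below is our own illustration; the fetch function is injectable, so the replay logic can be exercised without a live server.

```python
# Capture/playback sketch: record response digests, replay and compare.
import hashlib

def digest(body):
    return hashlib.sha256(body.encode()).hexdigest()

def record(fetch, urls):
    """Capture phase: store a digest of each response as the script."""
    return [(url, digest(fetch(url))) for url in urls]

def playback(fetch, script):
    """Replay phase: return the URLs whose response no longer matches."""
    return [url for url, expected in script
            if digest(fetch(url)) != expected]
```

In practice, `record` would be run against a known-good build and `playback` against each subsequent build, giving the repeatable regression check discussed earlier.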

Testing Tools for Dynamic WWW


Some tools for dynamic Web application testing include: Pure Performix/Web;
Platinum's WebLoad; Radview's WebLoad; Load Runner; Astra Site Test.
The goal of such tools is to answer questions like: "Will my WWW site work?
How fast will it work? What is the server's maximum capacity?" and thus to make
sure that your application will behave correctly when thousands of people are
using it at the peak hour of the busiest day.
In the following, the features of some tools are briefly sketched.

Performix/Web
This tool captures actual user activity at the workstation and derives a script
from it. The script is then compiled and executed; scripts can also be combined to
simulate multiple users.
At the end, log files and reports are produced. The tool reveals bottlenecks at
the client, the server and the network, adopting a good recording technology that
allows testing without many PCs: one driver machine emulates unlimited users.
Peculiarities are: conversion facilities for scripts from all major vendors, as
well as simultaneous testing of the client application and the DB server.

Webstone
This is the most widely cited WWW benchmark, useful in evaluating the server's
capability to deliver static Web objects.
A rigorous testing methodology makes use of a Webmaster process that con-
trols several WebChildren which do the actual banging on the HTTP server.
It is downloadable for free.

TestWorks for WWW

This is a bundle of software test tools tailored to support complete regression
testing for Web sites, including those that exploit Java features.
It is composed of three major components:
• CAPBAK/X for WWW
• X virtual
• TCAT for Java
CAPBAK/X for WWW captures and plays back complex Web interactions, in-
cluding sequences that display and print WWW site contents; X virtual can run up
to 255 simultaneous processes executing recorded tests; TCAT for Java performs
test coverage analysis for Java applets.

Astra Site Test

This is a stress-testing tool for WWW-based systems that provides a consistent,
repeatable and measurable load to exercise a system.
Developed initially for the Windows environment, the tool offers the following
components:
• Virtual User Generator: captures outputs from Netscape or Explorer to create
virtual users; HTTP requests can be edited, changed or parameterised.
• Web Scenario Wizard: orchestrates virtual users into a multi-user scenario; up
to 4.3 million hits per day, like real users surfing.
• Visual Load Testing Controller: drives, monitors and synchronises interactions.
Data analysis statistics include the following data: number of virtual users,
transaction performance, completed transactions per second, connections per
second, throughput.
Web-specific supporting features include cookies, proxy servers, user authenti-
cation, session IDs, CGI scripts, API calls, HTML forms, etc.

Testing ERP and WWW Integration


Future Intranet/Extranet applications will require more significant work and a
more sophisticated skill set. In fact, Intranets will evolve into a component of the
IT infrastructure, making distributed computing more open, simpler and more
manageable.
This will make possible the delivery of more flexible and manageable distrib-
uted business processes.
From a technical point of view, Web-enabled business applications will be
based on transaction-oriented business processes; hence Intranet-based applica-
tions will merge with Extranet-based business-to-business transactions, EDI and
electronic commerce transactions.
Already today, to multiply the benefits, companies need to integrate Web technol-
ogy with transaction-oriented business applications, groupware and infrastructure
services, integrating Web-based applications with MIS and setting up simple,
cross-platform applications on top of a simple-to-manage and more centralised IT
infrastructure.
As far as testing is concerned, challenges are on:
• security,
• load testing,
• user authentication,
• server authentication,
• connection privacy,
• message integrity,
• payment security.
Tool providers are performing certified integration with ERP systems.

7.1.3.5 References
[Bazzana96]
G. Bazzana, E. Fagnoni, M. Piotti, G. Rumi, "Process Improvement in SMEs:
the ONION Experience in Internet Service Providing", SP 96 Conference Pro-
ceedings, Brighton, December 1996.
[Bazzana99]
G. Bazzana, E. Fagnoni, "Process Improvement in SMEs: an Experience in
Internet Service Providing", in Better Software Practice for Business Benefit,
IEEE Software, 1999.
[Visentin96]
F. Visentin, E. Fagnoni, G. Rumi, ONION Technology Survey on Testing and
Configuration Management, ONION, Id: PI3-D02, April 1996.
Excerpts available at: http://net.ONION.it/pi3/
[ImagiWare]
ImagiWare, Doctor HTML, http://www2.imagiware.com/RxHTML/
[Bowers96]
N. Bowers, "Weblint: Quality Assurance for the World Wide Web", Proceed-
ings of the 5th International WWW Conference, Paris, May 1996, pages 1283-1290.
[Bergel]
H. Bergel, "Using the WWW Test Pattern to check HTML client compliance",
IEEE Computer, Vol. 28, No. 9, pages 63-65, http://www.uark.edu/~wrg/
[CEC91]
Commission of the European Communities, Information Technology Security
Evaluation Criteria (ITSEC), Version 1.2, CEC, 28 June 1991.
[CEC93]
Commission of the European Communities, Information Technology Security
Evaluation Manual (ITSEM), Version 1.0, CEC, 10 September 1993.

7.1.4 Workshop Conclusions

The starting point for discussion in the workshop was a hypothesis derived from
the direct analysis of the PIEs' experience and drafted by GEMINI who, before the
workshop, circulated it to the invited PIEs to stimulate their reactions.
The thesis, illustrated in the following statements, summarises what was dis-
cussed with the PIEs before the workshop and an analysis of their project reports
available at the time.
As product complexity increases and customers' demand for high-quality soft-
ware grows, the verification process is becoming a crucial one for software pro-
ducers. Unfortunately, even if verification techniques have been available for a
few years, little experience in their application can be found among commercial
software producers. For this reason we believe that the PIEs' experience will be of
significant relevance for a wider community, not least because it can demon-
strate the feasibility of a structured and quantitative approach to verification.
The approach of the PIEs focusing on testing selected by EUREX consisted in
setting up a verification method and supporting it with an automated infrastruc-
ture; in general they anticipated the following benefits:
• fewer defects escaping from quality control,
• more reliability over subsequent evolutionary releases of the software product,
• more productive verification activities, relying on a replicable set of test cases
and test procedures,
• the availability of quantitative data on the correctness of the product.
Some key morals capture the lessons learned by the PIEs that we consider
the most valuable for a wider community; they are summarised in the following
paragraphs.

You Need Something You Can Use Tomorrow


Sometimes process modification promises big effects after big efforts. You have
to do a lot of changing before you see any improvement. A better approach is to
target smaller effects that can be obtained right away; moreover, the small
improvement will be highly motivating. This is why new practices should be
sought that can be used immediately and enhanced once you are comfortable with
your first steps.

Handling Schedule Pressure and Risk


The software process is usually driven by tight pressure and you cannot count on
always having the time to verify thoroughly all the code you write. Thus the first
question is: "does the improved process handle schedule pressure in the right
way?" Some PIEs found an answer in using risk evaluation, i.e. deciding which
test units are riskiest and should be probed most. Depending on the risk grade, the
amount of fault detection work can be carefully planned.
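One way to operationalise this idea is to score each test unit on a few risk factors and allocate a fixed testing budget proportionally to the scores. The factors, ratings and weights in the Python sketch below are our own illustration, not the scheme any particular PIE used.

```python
# Illustrative risk-based allocation of a fixed testing budget.
def risk_score(complexity, change_frequency, failure_impact):
    """Each factor is rated 1 (low) to 3 (high)."""
    return complexity * change_frequency * failure_impact

def allocate_effort(units, total_hours):
    """units: {name: (complexity, change_frequency, failure_impact)}"""
    scores = {name: risk_score(*factors) for name, factors in units.items()}
    total = sum(scores.values())
    return {name: round(total_hours * score / total, 1)
            for name, score in scores.items()}

if __name__ == "__main__":
    plan = allocate_effort({"billing": (3, 2, 3), "help_pages": (1, 1, 1)}, 40)
    print(plan)  # the riskiest unit receives most of the 40 hours
```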

Consistency of Regression Testing


The repeatability of automated tests assures that system builds and new releases
are more thoroughly regression tested. Consistency also allows a comparison
between subsequent builds, to spot immediately any decrease in product cor-
rectness.

Modified Verification Practices Have an Impact on the Software Development
Process

Defects are found before release, saving later work and costs.
Test case specification and design can be applied alongside program develop-
ment to prevent mistakes, enhancing the error prevention capability.
The verification process is generally built to a great extent around automation
tools targeted at time-consuming activities. When verification activities are largely
manual, programmers often do not have enough time to run a regression test suite
after changing an application, even though they know they should. Instead, if they
can automate the process they are more likely to test thoroughly and repeat their
tests. The dispersion of resources can be removed by providing an infrastructure
that assures repeatability, traceability and availability of the test cases; as a
consequence, the effort spent developing the automated infrastructure can be
matched by the savings gained through the resulting effort reduction.

Tracking Errors is a Best Practice


Setting up an error tracking database has proved to be one of the most effective
improvement practices; it allows tracking error reports from opening to closure
and it gathers relevant classification data to focus testing, assess product quality
and investigate the causes of failure.
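A minimal sketch of such a database is shown below using Python's built-in sqlite3 module; the table and field names are illustrative assumptions, chosen to carry the classification data mentioned above (severity, phase found, cause) alongside the open/closed life cycle of each report.

```python
# Illustrative error-tracking database sketch with sqlite3.
import sqlite3

def open_db(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS error_report (
        id INTEGER PRIMARY KEY,
        summary TEXT NOT NULL,
        severity TEXT,          -- e.g. minor / major / critical
        phase_found TEXT,       -- e.g. unit / integration / system test
        cause TEXT,             -- filled in during causal analysis
        status TEXT DEFAULT 'open')""")
    return db

def open_report(db, summary, severity, phase_found):
    cur = db.execute(
        "INSERT INTO error_report (summary, severity, phase_found) "
        "VALUES (?, ?, ?)", (summary, severity, phase_found))
    db.commit()
    return cur.lastrowid

def close_report(db, report_id, cause):
    db.execute("UPDATE error_report SET status='closed', cause=? WHERE id=?",
               (cause, report_id))
    db.commit()

def open_count(db):
    return db.execute(
        "SELECT COUNT(*) FROM error_report WHERE status='open'").fetchone()[0]
```

Simple queries over such a table (open reports per severity, causes per phase) already give the quantitative picture of product quality discussed above.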

Modified Verification Practices have an Impact on the Organisation


Once the verification process was defined, the roles were clearer, and an inde-
pendent testing team was set up by most PIEs; nonetheless, according to their
experience, it appears a mistake to think that this team alone is responsible for
assuring quality. Developers also have a big role to play: they should learn how to
test, acquire a method and assume responsibility for certain levels of testing.

A Cultural Growth on Testing is Paramount


To produce high-quality software, we must make programmers assume a higher
degree of responsibility for the code they write. The sharing of responsibility is
important because otherwise design and implementation will not reflect software
quality as a high priority. To achieve this cultural growth we must cope with the
absence of a proper training curriculum. This lack of training creates an atmos-
phere where software testing is not considered important. For this reason, building
up a testing culture is a high priority, to put testers on the same level as design
engineers.

The ROI of Automated Testing


Automated testing is an investment that makes big promises, but changes and
maintenance of testware can postpone the return on investment (ROI). So you
should put a value on automation considering the pros and cons:
• pros: repeatability, coverage;
• cons: effort in preparation, maintenance and verification of testware.
A lot of effort goes into developing and maintaining test automation, and re-
covering the investment is not always evident. The successes are those focused on
specific areas of the application where it makes sense to automate, rather than
complete automation efforts. Also, skilled people are involved in these efforts and
they need the time to do it right.
Finally, you should consider that tools almost always need a certain amount of
set-up and adaptation work; consequently you should be careful in identifying
where and when automation pays off.
The effort of test automation is an investment. The benefits come from running
the automated tests on every subsequent release. Therefore, ensuring that the test-
ware can be easily maintained becomes very important.
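The pros and cons above can be turned into a simple break-even calculation: automation pays back once the per-release saving (manual run cost minus automated run cost minus testware maintenance) outweighs the initial set-up cost. The numbers and function below are purely illustrative.

```python
# Toy break-even calculation for test automation ROI (all figures in hours).
def releases_to_break_even(setup_cost, maintenance_per_release,
                           manual_cost_per_release,
                           automated_cost_per_release):
    saving = (manual_cost_per_release - automated_cost_per_release
              - maintenance_per_release)
    if saving <= 0:
        return None  # automation never pays back under these assumptions
    releases = 1
    while releases * saving < setup_cost:
        releases += 1
    return releases

if __name__ == "__main__":
    # 200h to build testware, 20h upkeep per release, manual run 80h vs 10h
    print(releases_to_break_even(200, 20, 80, 10))  # -> 4 releases
```

This also makes the point about maintainability concrete: raising the maintenance cost per release pushes the break-even point further out, or removes it entirely.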

Developing a Testing Automation Strategy is Very Important


Defining the purpose of the test automation effort (what you want to automate,
and where in the process) is the first step in developing a test automation
strategy. A test automation strategy should cover:
• what is to be automated
• how it will be done and in particular which categories of testing tools fit your
own purpose
• how the testware will be maintained
• what the expected costs and anticipated benefits will be.
In setting out your plan, consider that the sooner tests can be executed after the
code is written, the less likely bugs will be carried forward. Start small, however,
and plan for sustainable growth; this will also ease the work estimation task.
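As a sketch, the four points above could be captured in a single reviewable strategy record; the field names and values here are invented, not prescribed by the workshop:

```python
from dataclasses import dataclass

@dataclass
class TestAutomationStrategy:
    # What is to be automated
    scope: list
    # Which categories of testing tools fit the purpose
    tool_categories: list
    # How the testware will be maintained
    maintenance_plan: str
    # Expected costs and anticipated benefits (person-days)
    expected_cost: float
    anticipated_benefit_per_release: float

strategy = TestAutomationStrategy(
    scope=["regression suite for core module"],           # start small
    tool_categories=["capture/replay", "unit test harness"],
    maintenance_plan="testware kept under configuration management",
    expected_cost=60.0,
    anticipated_benefit_per_release=15.0,
)
print(strategy.scope[0])
```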

Traceability
Some of the PIEs explored the aspects involved in linking test cases to require-
ments and tried to measure coverage as requirements coverage.
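A minimal sketch of how requirements coverage of this kind can be computed from a mapping of test cases to the requirements they exercise (all identifiers are invented):

```python
# Map each test case to the requirement identifiers it exercises.
test_to_reqs = {
    "TC-01": {"REQ-1", "REQ-2"},
    "TC-02": {"REQ-2"},
    "TC-03": {"REQ-4"},
}
all_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# A requirement is covered if at least one test case is linked to it.
covered = set().union(*test_to_reqs.values())
uncovered = all_requirements - covered
coverage = 100.0 * len(covered) / len(all_requirements)

print(f"requirements coverage: {coverage:.0f}%")
print(f"uncovered: {sorted(uncovered)}")
```

The uncovered set is as useful as the percentage: it names the requirements for which no test case exists at all.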

Tools
There are several types of testing tools which can be applied at various points of
code integration (unit testing, integration testing, system testing). Many of these
tools are very sophisticated and use existing or proprietary coding languages. The
effort of automating an existing manual test process is no different from that of a
programmer using a coding language to automate any other manual process.
Treat the entire process of automating testing as you would any other software
development effort. The usual components of software development, such as
configuration management, also apply.

7.1 Second Italian Workshop 163

Skills
Test automation combines identifying and writing the right test cases (i.e. good
testing skills) with writing code (i.e. good programming skills). Consequently, a
good tester does not necessarily make a good test automator: these two roles are
different and the skill sets are different. Related to this point is the idea that
testers and software developers need to work as a team to make test automation
effective; far from disrupting test automation, such teamwork brings advantages.
Working with developers also promotes building "testability" into the application
code.

Visibility Can Be Gained Through Testing


The testing team's primary goal is discovering useful information and making it
available as early as possible, in particular during product stabilisation before
release. The project manager needs to know if and where there are danger areas
before the last minute. Testers are the keepers of data that can help in
understanding trends and special occurrences in the project. Use this data as a
springboard to understanding.

A Final Caution
Test automation is not a substitute for walkthroughs, good project management,
coding standards, good configuration management, etc. Most of these efforts
produce a higher payback on investment than test automation does. Testing
should not be regarded as the primary activity in Software Quality Assurance.

7.1.5 Workshop Discussions

The final discussion was initiated by a provocative intervention by Luisa
Consolini, who asked the PIEs and the audience to react to some theses against
the relevance of testing for ensuring quality, and in favour of prevention and the
process aspects of quality assurance. The PIEs were asked to refute these with an
anti-thesis. The workshop attendees were invited to take sides with one of the
parties in the debate.

7.1.5.1 Luisa's Theses


The theses presented by Luisa Consolini are illustrated by the following slides:

[Slide: "Who would think of realising the quality of a car during the inspection?!"]

Fig. 7.9 You cannot test in quality!

Quality cannot be put into the product at the end just by debugging the faults
discovered by testing. If you want quality at the end, you have to design it in
earlier.
Who is responsible?
"If a constructor has built a house for someone and his work was not well done,
so that the house crumbles killing those that live in it, then the constructor must
be executed." Code of Hammurabi, 1750 B.C., para. 229
Giving responsibility for quality to independent testers means separating those
who do the job from those who secure its quality. This does not work; we should
come back to the ancient concept (see the Hammurabi law on craftsmen's
accountability) of being accountable for the quality of your own work.

Fig. 7.10 Testing is not everything?



Not all quality characteristics can be verified by testing; many relevant aspects
require a different approach.

[Figure: distribution of maintenance effort, from a study by IBM:
Comprehension 50%, Testing Changes 25%, Making Changes 10%, Planning
Changes 10%, Documenting Changes 5%]

Fig. 7.11 Maintenance Effort: we cannot afford it!

No matter how much we believe in the effectiveness of testing, the crude reality
is that testing is not done because we have no time. No commercial organisation
can really afford to spend 25% of the effort required by the implementation of a
change request on testing. Consequently, testing is either skimped or done
ineffectively.

[Figure: the evolution of quality approaches: Inspection (1900), Quality Control
(1950-'60), Quality Assurance (1970-'80), Quality System (1980-'90), Total
Quality (1990-...)]

Fig. 7.12 Where do the others go?



[Slide: "Achilles and the turtle": test automation chasing a system evolution that
is slow and implacable]

Fig. 7.13 ROI of testing?

In other sectors, Quality has come a long way; in the software sector we are still
stuck at the Quality Control stage.
Automation makes big promises to make testing affordable and repeatable. But
automated tests have to be maintained, and they have to keep pace with the
product's evolution. It is very much an "Achilles and the turtle" paradox: tests
inevitably lag behind.

[Slide: the Manager buys the Test Automation package; Development is its
Champion; the "test liberation dream"]

Fig. 7.14 The myth says



[Slide: the Manager buys the Test Automation package; Development is its
Victim; the "test automation nightmare"]

Fig. 7.15 The reality says

There is a new "silver bullet": test automation. The myth lets us think that
automation will relieve us of the burden of testing: we automate it so it will run
by itself! But the development and maintenance work caused by test automation
is a potentially disruptive "time bomb".

Final Warning
Test automation is not a substitute for:

• Good project management


• Good coding standards
• Good configuration management
Investments in improving these practices have a higher payback than invest-
ments in test automation.

7.1.5.2 Discussion of Participants


The PIEs responded to these theses, and the participants put questions to the
PIEs and to the external experts, participating very actively in the debate. What
follows paraphrases the main points, responses, and questions of the discussion.
PROVE: Of course you need good requirements and good design; you cannot
rely only on testing. But you must do the testing. I met Boris Beizer at a
conference and I share his opinion: "Let's not joke: we can play with CMM or
BOOTSTRAP, but the only effective quality measure is testing. Human beings
make mistakes anyway; all you can do against that is testing, otherwise the only
assurance you will have is that the products will be bugged!"
TRUST: At Augusta (avionics sector, editor's note) we cannot avoid testing.
However, while we recognise the value of testing, the warning that I raise is "be
careful not to choose a wrong model". In every organisation you have to identify
the right way to do testing, from a technical and organisational point of view. For
example, you need to focus your testing the best you can.
You must remember that there is no silver bullet.
Another aspect is that engineers do not see a career in testing as an interesting
perspective; it is very difficult to find brilliant people willing to do testing. The
tester is a rare kettle of fish!
PIE (IBM): As in many things, the truth takes something from both positions,
pros and cons. A frequent error is to think that testing is a panacea. The idea that
a tool or a specific technique solves the quality problem is wrong. The best
approach is starting from your immediate need: I have a problem now and I want
to start solving it immediately in the simplest and most specific way, very much
like the TRUST approach. As you get more experience and your process
evolves, you will be mature enough to apply more sophisticated solutions. So the
problem is not whether testing is useful or not, but the way you approach it.
Participant: It is true that you cannot test in quality, but we should not forget
that software is a flexible product, very different from a car, so we can profit
from the fact that software can be amended very easily. It is an opportunity that
we have and other sectors have not.
It is true that other sectors have come a long way on quality, but they are not
renouncing final quality control anyway.
As regards the responsibility for quality, nowadays software producers are at
least liable for any damage caused by their products.
It is true that many characteristics of software quality are not verifiable through
testing, but we can develop different measurement mechanisms; let us not forget
that software metrics is a young discipline.
As regards the affordability of testing, we must be clear that testing our software
is a duty, not an option, and our clients will be more and more demanding about
that.
Participant: I am a tester; my domain is integration testing. We need
developers' testing: if we tested the code without this first level of testing it
would be a disaster, even though we have a structured life cycle approach and
also use code inspection tools.
Panel: One of the arguments against testing is the cost of correcting errors
discovered too late, mostly errors in requirements. We must keep in mind the
economic implications of testing and debugging. Testing is affordable only if it
is highly effective, and the effectiveness of testing is related to finding the
relevant bugs, i.e. those highly visible or serious to the user.
Also related to the economy of testing is the impact on the bottom line of
correcting errors when the product is still in a guarantee period.
Panel: The affordability and usefulness of testing is determined also by your
market situation: if you are competing on a global scale, the reliability of your
product is a critical factor. So you need to allocate resources to improve the
product quality, no matter how you do it. When your market share is at stake,
your investment in testing is perceived in a totally different way.
Participant: We sell embedded systems; in our case software is a minor
component in our systems, and we have a hard time convincing our customers to
pay for the software, let alone the testing. However, we take on the costs of
testing because we must deliver a reliable product: there is no discussion about
that. Also, we have a guarantee of one year or more, during which, whatever
happens to our product, we have to support our customer, and usually we stretch
this period to please our customers.

7.1.5.3 Workshop Panel Conclusions


The workshop panel pronounced the final conclusion as follows:
We can say that, while in other life cycle phases there has been more
convergence on a fairly standard methodology, testing remains a very
organisation-specific phase, and the techniques or tools that apply universally are
very few. For this reason, and for all that was pointed out during the debate, each
and every organisation must define its own testing strategy, taking care not to
apply wrong models or ill-suited solutions. Many factors have to be taken into
account; among others, the following were mentioned in the workshop:
• the requirements of your market
• your development environment
• the maturity of your product
• the economy of testing
• the benefits of automation.
The PIEs are an example of how all this can be considered in a sensible testing
strategy.

7.2 Third Spanish Workshop


7.2.1 Introduction

The workshop took place in Barcelona on 22 October 1998. SOCINTEC, the
Spanish partner of EUREX, organised the workshop. The title of the workshop
was "Software Quality: when and how much to test?" There were 28
participants, and six PIEs took part in the workshop.
The large number of people who attended, as well as the participation in the
workgroups and debates that followed, shows the interest in this subject among
many companies.
All the cases presented by the participating PIEs in the first half of the work-
shop showed positive results demonstrating the use of testing in different fields.
The presentations were from four Spanish PIEs:

• STAR from B-KIN SOFTWARE


• FCI STDE from PROCEDIMENTOS UNO
• EIP from CAJA MADRID
• TESTLIB from INTEGRA SYS
In the second half of the workshop, four parallel sessions were held, with about
10 persons attending each. There was a moderator and a rapporteur in each ses-
sion. Moderators were given written guidelines about the goals of each session, as
well as an initial list of ideas to provoke discussion. Each session was given a
topic (a particular aspect of testing). Sessions were encouraged to achieve
consensus on a short list of conclusions, to share with the others afterwards.
The final activity was a plenary meeting where the rapporteurs presented the
conclusions of each session, with subsequent discussion from the audience. Con-
sensus on many conclusions from the sessions was achieved.
Finally, the expert closed the workshop by presenting a summary of the results
obtained during the day.
The topic was introduced by M. del Coso Lampreabe from Alcatel España.
Alcatel is one of the most important telecommunication companies in the world.
Alcatel España designs and produces, among other products, exchange stations
that are marketed in Spain and other countries, mainly in South America.
During the last few years, the increase in the competitiveness of software
producers and the continuous growth of customer requirements have driven the
need for greater software quality. At Alcatel, quality is the first priority, and
therefore a major re-engineering process has been carried out to make sure that
the released software will be, as far as possible, error free.
Since 1994, Alcatel España has adopted CMM as its model for software
development. Thanks to this model, a product of higher quality and with fewer
defects has progressively been obtained.

7.2.2 Expert Presentation

The presentation focused on the Defect Detection activities that CMM, ISO, and
other quality management models define for software quality improvement; this
experience was gained during the development of the Exchange products at
Alcatel.

7.2.2.1 Software Quality: When and How Much to Test?


M. del Coso Lampreabe
ALCATEL ESPAÑA

Introduction
The development of software projects nowadays has three big problems, which
the development models try to solve: lack of product quality, time to market, and
development costs.
The quality of the software produced is directly related to:
• the correct implementation of the specified requirements
• the absence of problems with the code and the data
• the ease of use of the documentation given to customers
• the ease of maintaining and updating the product
A sample project life cycle is shown in figure 7.16. In this figure the main
phases overlap one another, and some important mechanisms are indicated that
are carried out during all the phases, facilitating the achievement of the
objectives of the development process. These mechanisms are:

• Defect Prevention Activities


• Defect Detection Activities
• Project Quality Plan, which comprises both activities
During the development process of a project, the designer introduces errors in
the product, which must be detected and corrected before delivery to the customer:
this is known as the defect injection/detection processes. The injection process can
only be measured during the detection function. The detection process should be
measurable, with the aim of knowing when its objective has been accomplished
(to detect and correct the highest possible number of existing errors), and should
also be continuously improved.
The development process becomes more efficient the fewer errors are introduced
(Prevention) and the earlier errors are detected and corrected in the life cycle
(Detection). These processes are all included in the Quality Plan of the Project.

[Figure: the main life cycle phases (Requirements Management, System Design,
Software Design, Installation, Maintenance) overlapping one another, supported
across all phases by Defect Prevention, Defect Detection and the Project Quality
Plan]

Fig. 7.16 A sample of a project life cycle

Defect Prevention and Detection


These two activities are the basis of all the effort made towards obtaining Soft-
ware Quality.
But what is understood by Prevention and Detection activities? Detection covers
all those activities directed towards detecting and correcting the existing defects
in the product. Prevention covers all those activities that direct their effort
towards avoiding the production of defects in the first place (this means that the
designer should avoid introducing errors).
According to ISO 9000/3 and CMM, the activities that should be carried out to
achieve the Software Product Quality are:
• Prevention
  • Process/Product Monitoring
  • Analysis of Defect Causes
  • Process Audit
• Detection
  • Reviews/Inspections
  • Tests
This paper deals only with the Defect Detection activities. Through these ac-
tivities the system guarantees that the product will finish its development cycle
free (nearly!) of errors.

Table 7.2 Error Detection Efficiency

CMM  Description                                         Total defects  Error detection efficiency  Field defects
                                                         (F/KLOC)       before delivery             (F/KLOC)
5    Continuous Improvement institutionalised            10             95%                         0.5
4    Products and Processes are quantitatively managed   20             93%                         1.4
3    Methods and tools are institutionalised             30             91%                         2.7
2    Project Management is applied                       40             89%                         4.4
1    Processes are informal and ad hoc                   50             85%                         7.5

Table 7.2 shows the efficiency in the detection of errors before the product is
released, according to the CMM level in which the development process is found.
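The field-defect column of Table 7.2 follows arithmetically from the other two numeric columns: field defects are the injected defects that detection misses. A quick sketch of that relation (the level 2 efficiency is taken as 89%, which is what the 4.4 F/KLOC figure implies):

```python
# (CMM level, total defects injected per KLOC, detection efficiency before delivery)
rows = [(5, 10, 0.95), (4, 20, 0.93), (3, 30, 0.91), (2, 40, 0.89), (1, 50, 0.85)]

for level, total, efficiency in rows:
    field = total * (1 - efficiency)  # defects escaping to the field
    print(f"CMM {level}: {field:.1f} F/KLOC")
```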

Reviews and Inspections


The technical review is a visual verification process carried out by experts in the
matter, consisting of verifying the correct inclusion of the requirements and
eliminating the errors introduced during the creation of the item being reviewed.
This guarantees the verification of the design along all the phases of the life
cycle. There are code, data and document reviews/inspections.
The reviews and inspections should be carried out during:
• Contract review with the Customer, with the aim to agree on the technical defi-
nition of the product, and to define the acceptance tests by the Customer.
• Definition and analysis of the requirements, with the aim that the detailed de-
scription of requirements be understood and accepted by the people in charge
of the development.
• Generation of the Design and Coding documents, with the aim that all the er-
rors are detected as quickly as possible.
• Preparation of test lists, aimed at obtaining a maximum test coverage.
• Generation of the Users Documents, that will be used by the Customer to oper-
ate the system.
By performing formal reviews high efficiency is obtained in detecting errors.
A review can be considered formal if it meets the following conditions:
• All resources must be planned, the people taking part in the review should have
an assigned period of time in their work plans to carry out this activity with
sufficient priority.

• The review should only begin if the criteria specified in the Project Quality
Plan have been fulfilled. One of these criteria is the availability of the docu-
ment with sufficient time for its review.
• Reviewers should carry out the review on an individual basis.
• Comments should be fully documented so that the author can have answers to
them before the review meeting, thus increasing the productivity of the process.
• The review meeting should always take place when there is a need to put in
common the discussion/solution with the comments, in order to reach an im-
plementation agreement.
• All comments that are accepted by the author should be included in the revised
document before the end of the review meeting.
• Summary metrics should be available. These metrics are:
• Effectiveness: errors/size
• Efficiency: errors/hour
• Speed of review: size/hour
• The review is finished when all the output criteria are met according to the
Quality Plan of the Project.
Experts who will receive the document should take part, together with the
author, under the supervision of a co-ordinator.
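The three summary metrics listed above reduce to simple ratios over the review record; a minimal sketch, measuring size in pages (the figures are invented):

```python
def review_metrics(errors_found, pages_reviewed, hours_spent):
    """Summary metrics for a formal review: effectiveness (errors/size),
    efficiency (errors/hour), and speed of review (size/hour)."""
    return {
        "effectiveness (errors/page)": errors_found / pages_reviewed,
        "efficiency (errors/hour)": errors_found / hours_spent,
        "speed (pages/hour)": pages_reviewed / hours_spent,
    }

# Hypothetical review: 12 errors found in 30 pages over 4 hours.
for name, value in review_metrics(12, 30, 4).items():
    print(f"{name}: {value:.2f}")
```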
An inspection is basically the same as a formal review with the following differ-
ences:
• more formalism in the input/output and during the process, using appropriate
  checklists
• mandatory comparison with the original document
• a lower reading speed
• more specialised roles
• the size of the inspected material is determined dynamically in relation to the
  results obtained.

However, certain dangers must be avoided which could easily lead to this activity
not being fruitful. These dangers are:
• Inadequate planning, in which the reviewers have not been assigned a certain
period of time.
• The authors not accepting the process or the review results for their documents.
  Authors should know that all human activity produces mistakes and that this
  does not invalidate the quality of their work.
• The document or the code to be reviewed not being made available with
  sufficient time for review.
• The errors found being primarily publishing mistakes.
• Metrics being used in some way to evaluate the author(s)' work.
• The process not being continuously improved.

• People not feeling committed to the quality of the products for which they are
responsible.
• The essential mechanisms to be used in review/inspection activities not being
  known and accepted at the beginning of the project.
• Input and output criteria not being respected, i.e. the criteria that need to be
  fulfilled before a process can be started or before the intermediate/final product
  can be delivered. These criteria are applied either by phase, by design area or
  by function to be implemented, depending on the selected life cycle.
Checklists are a set of questions to verify, relating to the process carried out, the
effective use of the procedures and tools, or the completeness and consistency of
the product.
Different checklists are applied for each of the different intermediate products to
be reviewed or inspected.
Quality and process metrics are a combination of measurable conditions of each
product/process that objectively measure its fulfilment or the quality of the
delivered product. They measure the completeness of a phase, the number of
defects found, the effectiveness, the efficiency, the productivity of the work, etc.
Metrics should always be compared against the previously defined objectives,
which are related to the type of process used.

Tests
The aim of the tests is to validate that the designed software product fulfils all
specified requirements. For their correct implementation, the sub-processes in
which the test phase can be divided are:
• Definition of Test Types
• Test Preparation and Specification
• Execution and Reporting
• Regression Tests
• Problem Control
• Defect Cause Analysis.

Types of Tests
In figure 7.17, we can observe a breakdown of the testing system. Based on the
breakdown model, the following types can be defined.

Unit Tests of Modules (or Multi-Modules)


The designer carries out these tests, working in a tool-simulated environment, to
test the performance of the module (or combination of modules) according to a
test list that is aimed at checking its implementation.

Under this simulated environment the following aspects will be tested: the
module interface, the content of the updated databases, the hardware accesses, if
any, etc.
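A minimal sketch of a unit test run in such a simulated environment, using Python's unittest and a mocked hardware bus (the module, register address and conversion are invented for illustration):

```python
import unittest
from unittest import mock

def read_sensor_celsius(bus):
    """Module under test: converts a raw hardware reading to Celsius."""
    raw = bus.read_register(0x10)   # hardware access, simulated in the test
    return raw * 0.5 - 40.0

class ReadSensorTest(unittest.TestCase):
    def test_conversion_with_simulated_bus(self):
        bus = mock.Mock()
        bus.read_register.return_value = 120         # simulated raw value
        self.assertEqual(read_sensor_celsius(bus), 20.0)
        # Also verify the module interface: the right register was read once.
        bus.read_register.assert_called_once_with(0x10)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReadSensorTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The mock plays the role that the tool-simulated environment plays for the designer: it lets the module interface and behaviour be checked without real hardware.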

[Figure: Module Tests, Subsystem Tests, Service Tests, System Tests,
Qualification Tests and Client Tests in sequence, supported throughout by test
preparation, test documents and reporting, regression tests and problem control]

Fig. 7.17 Breakdown model of the testing system

Any defect fixed afterwards can be submitted to a continuous regression test,
re-running the whole set of simulated test cases in order to assure that the fix
does not affect the rest of the software.

Subsystem Tests
The designer (or a specialised tester) integrates the software module within the
whole set of modules that make up a subsystem, and with the hardware in which
it will be integrated.
Simulated environments can also be used, because it is possible that not all the
subsystems reach the same quality level at the same time. The largest part of the
software runs in the final configuration (operating system, complete hardware,
application code and data) and will be supported by it right to the end.
The aim is to check, by area or subsystem, the requirements defined in the
project, as well as checking by regression that the rest of the reused software
(code and data) has not been altered by its introduction.
Simulators are used to generate the external events required to create test
conditions.

Service Tests
The tester carries out the tests of each service or individual facility from the
customer's point of view, placing special emphasis not only on its performance
but also on its use. All the subsystems which make up the application are tested
for each service.
In this case, no simulations are used and the service runs in a totally real
environment. However, simulators are used to generate real external events.
In this phase a complete verification of the User's Documentation is made.

System Tests
The primary objective is to try the system in real working conditions. This
means that a set of services is tested working simultaneously, in conjunction
with load tests, processor overload tests, equipment strength tests, limit tests,
capacity tests, etc.
Therefore, simulators are necessary to generate external events for these tests, as
the real load generation can only be done with them.

Qualification Tests
These tests are carried out by a qualifying team, which is independent of the
project. The positive result of these tests is necessary to authorise the delivery of
the product to the customer and its commercialisation.
To guarantee the quality of the product, these tests are carried out by means of
random samples and statistical results.
These tests should be carried out in an environment similar to the customer's
actual environment, non-simulated, with the same commercialised software,
with no corrections, and with the User's Documentation that will then be
delivered.
The results of this qualification are analysed during the product Delivery
Review: Management guarantees that the product fulfils the output criteria and
the quantitative targets, and authorises the product's delivery and its
commercialisation.

Acceptance Tests
Some customers wish to carry out their own tests before authorising the
installation of the product; in this case the customer usually generates his/her
own set of tests or uses the supplier's Qualification Tests. At other times, the
customer trusts in the supplier's fulfilment of the specified quality level.
The tests are carried out in a real environment, and simulators are used to
generate automatic tests.
One of the customer's aims is to check once more that the problems existing in
previous versions of the product are not reproduced.

Tests Preparation
It consists of 3 main parts:
• Definition of a testing strategy
• Specification of Test Cases
• Detailed Planning: Test Plan

Definition of a Testing Strategy


The following types of concepts are defined: what is going to be tested in each
phase and for each subsystem, what risks exist, what the "test coverage" is, the
training and documentation required, etc., as well as the numeric objectives of
the metrics to be obtained.

Specification of Test Cases


The test cases are defined in relation to what is going to be tested: detailed
design, high-level design, services or product qualification. Therefore, different
combinations of test cases have to be defined: unit, subsystem, service, system
or qualification tests.
These test cases should be defined in relation to the original documents. They
should be reviewed or inspected by the testers, and should go through a selection
process, because in the regression phases it is not necessary to pass all the
specified tests.
Each test should be catalogued according to priorities, in such a way that it can
be chosen in a dynamic manner whether it will be carried out or not, according
to the system behaviour under the higher priority tests.
The test cases should be stored in support tools or in specific databases in order
to facilitate their execution and reporting.
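A minimal sketch of such a priority-catalogued store and the dynamic selection it enables (catalogue contents are invented):

```python
# Test catalogue: identifier -> (priority, subsystem); 1 = highest priority.
catalogue = {
    "TC-001": (1, "call handling"),
    "TC-002": (2, "call handling"),
    "TC-003": (1, "charging"),
    "TC-004": (3, "charging"),
}

def select_for_regression(catalogue, max_priority):
    """Dynamically choose the tests to execute: only those at or above
    the chosen priority threshold are run in this regression pass."""
    return sorted(tc for tc, (prio, _) in catalogue.items()
                  if prio <= max_priority)

print(select_for_regression(catalogue, 1))  # highest-priority tests only
print(select_for_regression(catalogue, 2))
```

Raising or lowering the threshold between passes is the dynamic choice the text describes: if the higher priority tests behave well, lower priority tests can be added or skipped.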

Detailed Planning: Test Plan


The detailed plan should at least be based on:
• Project Milestones
• Test cases by subsystem/phase to be tested
• Existing resources (human, computer facilities, simulators, etc.)
• Objectives/effectiveness ratios, output criteria ...
This testing plan can obviously be modified depending on the results, changes in
resources, or changes of objectives; therefore, it should be automated through a
test tool.

Execution and Reporting


The success of testing is directly related to its reporting. This reporting has to be
reliable and based on follow-up criteria.
• The reporting can be done using the same test database, adding OK or NOK,
  and comments corresponding to the failure.
• The tester identifies the failed test and isolates the error as far as possible. A
  "Failure Report" is written on a specific template and put under configuration
  control. It indicates the failure and a possible solution, and is addressed to the
  designer for correction.

• If it is not a blocking problem, the next test case is executed. When the
  corrected source is available, the tester checks that the error is correctly fixed,
  and applies regression tests to the global operation to make sure that the
  solution has not affected the rest of the software.
• Follow-up criteria:
  • Periodically, the evolution of the defined metrics is followed, with the aim
    of determining when the tests are finished.
  • Ending criteria can be a minimum number of test cases to be passed, the
    number of pending errors to detect, or even the behaviour of any of the
    ratios or metrics. These criteria should be predetermined in the test strategy.
  • Ending criteria should be guaranteed by a meeting at the end of the phase,
    in which the input/output criteria, the checklists and the metrics are
    analysed. The decision should be made objectively.

Existing Metrics
Table 7.3 shows some example metrics to be used in the test phase for its control
and follow-up:

Table 7.3 Metrics related to phase

Phase  Characteristic            Metric                                      Unit of expression
P1     Efficiency                Planned effort / Used effort                %
P2     Effectiveness             No. of errors found / No. of test cases     Errors / Test Cases
                                 executed
P3     Degree of error coverage  Errors found in tests / (Errors found in    %
                                 tests + Errors found by the customer)
P4     Test Quality              Errors found in tests / Total errors in     %
                                 the project
P5     Productivity              No. of cases executed / Effort used         Test Cases / person weeks
P6     Error detection cost      Effort used / Errors found                  Person weeks / error

Regression Tests
Regression tests are those tests that are carried out not because of newly
introduced functionality, but with the aim of checking that the newly introduced
software has not negatively affected existing capabilities of the previous
commercialised version.
180 7 Lessons from the EUREX Workshops

Two types of regression tests exist:

• Verification of corrections (checking that a correction does not endanger other parts of the module already checked by module tests).
• Continuous regression of the new functionality already tested in this package, or of the software reused from a previous package (functionality already commercialised).
This last type of regression test should be implemented via Automatic Tests (ATCs).
Automatic tests are an excellent mechanism for performing regression tests. Supported by a machine and a specific language, they repeat themselves automatically, checking that the software does not degenerate. They help reduce costs and make the product more reliable.
In this way, a group of test cases is available that can be passed at any moment without needing human resources, generally during the night and on weekends.
The test automation consists of four phases:
• Specification of the Automatic Tests (ATCs)
• Design of the ATCs (new and/or reused)
• Validation of the ATCs
• Application of the ATCs for error detection in the system under test
Automatic tests are code like any other, and therefore they should pass through all the phases: requirements, planning, configuration control, subcontracting management, etc., with their corresponding quality assurance and metrics activities.
To automate certain software tests, sophisticated test tools (e.g. from HP or Tekelec) are generally used. For the automation, specific programming languages are used (e.g. TTCN in the telecommunication environment) or the test tool's own language.
The language has to be capable of clearly specifying the type of error detected, as well as statistical measures such as the number of generated tests, the number of errors, etc.
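As an illustration of such an unattended regression run, here is a minimal sketch using Python's standard unittest module; the function under test and its expected outputs are invented for this example, not taken from the process described above:

```python
# Minimal unattended regression suite sketch using Python's unittest.
# "legacy_format" stands in for already-commercialised functionality;
# the function and its expected outputs are illustrative assumptions.
import unittest

def legacy_format(amount):
    """Previously released behaviour that must not regress."""
    return f"{amount:.2f} EUR"

class RegressionTests(unittest.TestCase):
    # Each test pins down the behaviour of an earlier release, so a new
    # delivery that changes it is flagged without human intervention.
    def test_two_decimals(self):
        self.assertEqual(legacy_format(3.14159), "3.14 EUR")

    def test_integer_amount(self):
        self.assertEqual(legacy_format(7), "7.00 EUR")

# Run the suite programmatically, as a nightly scheduler would.
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests))
print(result.wasSuccessful())  # True
```

A scheduler can launch such a suite overnight and collect the pass/fail report in the morning, which is exactly the "no human resources" property described above.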

Problem Control
Problem configuration control permits:
• clear specification of all detected errors through formalised failure reports
• knowledge of the error fixing status
• knowledge of which errors are fixed and in which version of the intermediate or final product
• knowledge of which versions of the development documents are updated in relation to the errors fixed
• knowledge of which solutions to errors have been delivered to the customer
• obtaining metrics and statistical results on problems
• knowledge of how problems apply across projects

The control of problems should be done by a tool that manages failure reports, that has a node structure and communication mechanisms, and that has a clearly defined problem status sequence (at least: CREATED, ACCEPTED, CORRECTED and VERIFIED).
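As an illustrative sketch only (the class and transition rules are our assumption, not a description of any particular tool), the minimal status sequence can be enforced in code rather than left to convention:

```python
# Illustrative sketch: enforce the minimal failure-report life cycle
# CREATED -> ACCEPTED -> CORRECTED -> VERIFIED described in the text.
ALLOWED = {
    "CREATED": {"ACCEPTED"},
    "ACCEPTED": {"CORRECTED"},
    "CORRECTED": {"VERIFIED"},
    "VERIFIED": set(),
}

class FailureReport:
    def __init__(self, report_id, description):
        self.report_id = report_id
        self.description = description
        self.status = "CREATED"
        self.history = ["CREATED"]  # retained to support metrics later

    def advance(self, new_status):
        if new_status not in ALLOWED[self.status]:
            raise ValueError(
                f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        self.history.append(new_status)

report = FailureReport("FR-001", "crash on empty input")
report.advance("ACCEPTED")
report.advance("CORRECTED")
report.advance("VERIFIED")
print(report.history)  # ['CREATED', 'ACCEPTED', 'CORRECTED', 'VERIFIED']
```

A real tool would add further states (e.g. rejected or duplicate reports); the point is that the status sequence is machine-checked.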
Problem configuration control should be carried out at least from the beginning of the subsystem test phase.
The information supplied by each failure report, indicating when the error was introduced, when it should have been detected, and other details, is a very important input for error cause analysis, whose aim is to define and improve the processes by carrying out preventive actions.

Defect Cause Analysis


This is a defect prevention activity.
The aim of this process is to prevent the injection of errors through continuous change and improvement of the processes used. It consists of analysing the defects detected during the test, installation and product commercialisation phases, in order to determine:
• why they were not detected earlier
• which processes should have detected and solved them earlier
• which processes would avoid introducing such defects
These analyses should involve people from different areas who know the process well and who act proactively and with imagination.
Defect cause analysis is also performed statistically on all the defects detected during the life cycle. The aim is to improve the processes of each area.
The result is very useful for continuous improvement of our prevention and error detection processes. The preventive actions defined will be piloted and introduced in forthcoming projects.

Conclusion
The practices described in this article are a necessary support for the verification and validation activities of a software product. We firmly believe that any software-producing unit should follow the steps described by CMM and ISO 9000/3 in order to produce high quality products.
Alcatel has experience in their use. The results obtained make us confident that the defect detection activities explained here are an excellent mechanism for obtaining a good quality software product.

7.2.3 Workshop Discussion and Conclusions

Four areas or layers were established to focus the discussions in the Workshop.

The four areas of interest used for the exchange of ideas and opinions were the
following:
• Methodology and Process area
• Technology area (tools used/implemented)
• Change Management area
• Business area
The work groups were made up of approximately 8 to 10 people. In this workshop, in order to optimise the size of the working groups, 3 groups were set up (areas A and B were joined into one group).
Prior to the workshop, the people responsible for the organisation (SOCINTEC), with the collaboration of the expert in the area (Miguel del Coso Lampreabe, from ALCATEL España), analysed and processed the information received from the PIEs. Their aim was to obtain preliminary conclusions and findings, which would serve as discussion topics and stimuli in the work group meetings. In addition, 3 people from SOCINTEC and the experts joined the groups with the aim of facilitating discussion.
Summaries of the conclusions presented by each of the groups can be found below.

7.2.3.1 Methodology, Process and Technology Workgroup


The experience of the panel on this subject was limited ... and covered almost nothing on tools (except load testing, effort and memory management)
The attendees felt the need for a wider dissemination of information on commercially available test tools. Such sessions should cover the following aspects: debuggers, preparation of test cases, planning and reporting, load generation for performance tests in extreme conditions, simulators of external events, automatic test generation, etc.

Methodologies as such help but do not go into detail.


Methodologies, and also the reference models for software best practice, say what has to be done but not how precisely each activity should be carried out.

There are some standards ...


Standards address the subject and can provide some orientation (although not sufficient on their own). This is the case for those published by the IEEE:
• IEEE 829, for software testing documents
• IEEE 1008, for software unit tests
• IEEE 1012, to plan verification and validation activities
• IEEE 1028, software audits and reviews

Measurements
It is necessary to use some minimum indicators that let us know the scope of the tests and their status, the number of tested components and the effort dedicated to verification and validation activities, for example.
The use of metrics is important to cover several fundamental questions:
• What is the coverage of a set of test cases?
• What is the status of the tests?
• When can we start to test the system as a whole?
• When do we know that the system is sufficiently tested?

Several metrics exist that are considered to facilitate the above analysis:
• number of tested requirements
• number of specified test cases already executed
• test effort used vs. planned
• number of detected defects
• evolution of the defect ratio by test case
• evolution of the effort ratio per test case

The test plan is enhanced as the design progresses

At the beginning of the project this plan is rather generic; afterwards it is particularised according to the design being carried out, becoming simpler where code inspections have been performed, or more elaborate where complex or critical software elements exist.
If the test plan is realistic, its review will help to maximise the test coverage, optimising the number of test cases, and to control their implementation through the follow-up of its metrics and the evolution of its ratios.

How can we demonstrate that the tests have been carried out?

To close out the test plan, the project manager has to make sure that the tests have been executed correctly. The use of the metrics defined in the plan will help in following up the ending criteria and in demonstrating that the tests have been properly carried out.
The actual values of these metrics should be known by the project manager, both for follow-up and for the analysis of the test ending criteria.
When the product has been subcontracted, a defined plan should be demanded from the subcontractors, including test cases, metrics and the evolution of ratios, their ending criteria and an assessment test plan. In some cases, a suitable acceptance test can help demonstrate a correct testing process.

Tools
Producers and distributors claim that there are tools covering the whole test life cycle. However, doubts were raised about their usefulness outside the development and maintenance of business management applications.

7.2.3.2 Change Management Working Group


More than an engineering problem, it is a question of internal selling to the organisation
In general, subjects regarding process improvement are not always treated in the right way. They are considered only as an internal problem of the engineering department, which is held responsible for their solution and follow-up, instead of being considered as items to be discussed and driven by company management.
Those responsible for the engineering department have to spend more time and effort making company management, of which they are members, aware of the benefits that process improvement and the pursuit of product quality bring. Cost savings for the business and clear information on the ROI of introducing new processes, methodologies and tools should be analysed by management committees. These should be understood as competitive advantages in quality, cost savings and fulfilment of plans, and as part of the financial studies of business profitability.
The introduction of inspection and test practice should be analysed by management, given its high cost (mainly human resources and computer equipment) and the organisation required: an internal sales effort on behalf of the engineering department, based on data (and not only on concepts), is necessary.

During critical periods implementation decisions become obligatory

Generally, companies do not pursue process improvement while the business is making profits, even though during this period it would be relatively simple to modify the processes, as there is enough time and there are appropriate resources. Companies always wait to introduce improved processes until critical moments, when it is more costly and more difficult to do so.
All companies should have a tendency towards improvement, optimisation and permanent process change, not only concerning software development activities but the whole of the company's processes. For this, it is necessary both to create a permanent methods and processes group, and to have managers accept that one of the annual criteria for result evaluation (on which their salaries depend) is the implementation of continuous improvement.

Disseminate the advantages of software engineering at higher executive levels
At present there are, in our country, many debates in which subjects on software, products, methodologies and processes are presented for discussion. Such debates are also held in many countries at a similar level of development. Nevertheless, these debates are not sufficiently efficient and profitable, because management is not involved and does not make decisions about software re-engineering in their companies.
Exercises similar to these work sessions should be prepared and encouraged in the software community, with a closer relationship with the press, whether technical or not, so that company management is made aware of the need for re-engineering and continuous process improvement.

Perceived difference in professional category between software developers and "testers"
There is a perception in big organisations that designers have more knowledge of the designed products while testers know more about markets, and that designers are therefore more specialised and more effective. Hence the perceived difference in category between the two.
However, different organisational methods could help overcome this difference. Rotating a person between designer and tester roles during different stages of his/her professional life, or having each person work as both designer and tester on the same project, could benefit the result of that project. But the road to changing the organisational model is long and difficult, and should be travelled in line with the company's objectives.

Maintenance is done by new employees or is subcontracted

Not all companies embrace the concept of customer satisfaction, under which maintenance tasks should be done by experienced professionals committed to continuously improving the service. Such companies, lacking a vision of the future, hand maintenance tasks over to new employees or subcontract them to less prepared companies. Companies that are aware of this problem, however, use the services of experienced professionals. Negligence in this matter can lead to customer companies complaining to the customer help desk, and to losing the customer's loyalty, which is easy to lose and very difficult to recover.
Some of the metrics that customers use deal with the response time to problems or with the quality of the supplied solutions.

The specialised "tester" is very efficient, but creates barriers for developers

A specialised tester is very efficient in his testing tasks, but when the product is considered as a whole, non-specialised testers are more important. That is why organisational change should lead to converting specialised testers into designers capable of carrying out specialised tests. In addition, some non-specialised testers should be dedicated primarily to more general testing, related to the product as a whole and to its behaviour in its intended context of use.

Data collection during system operation: making developers aware of extended quality
The collection of data throughout all the phases of the software's life cycle is important:
• to analyse the process followed
• to know the quality offered
• to foresee the quality delivered
• to know when the tests should be finished
Most designers are not aware that the project management process is something more than controlling the development of software statements. That is why they do not consider that collecting data on the quality produced is an added value which facilitates knowing when, and with what quality, the product can be offered to the customer. Convincing designers of the importance of delivering the requested data reliably is an important task.

In some cases the independence of the tests is required by regulations

Companies have always had various models for the organisation of test groups.
There are companies with a differentiated test organisation, and others where designers and testers are the same people. There are even others in which the people who specify, define, design and test the system are the same. Each model has its advantages and disadvantages, and the choice has a lot to do with the company's size, the tools used (for design or testing), the delivery cycle of the product (annual, half-yearly) or other considerations.
There are also companies whose organisational model is cyclic: sometimes the emphasis is on hybrid engineers and sometimes on differentiated ones. Each company must answer its own problems at each moment according to its business objectives.
However, for some types of tests, such as qualification or acceptance tests, regulations require total independence of the test team with respect to the participants in the project, be they designers or testers.

Company culture rewards "firemen"

Leaving logical rules aside, companies are good at correcting problems, but not so good at taking actions to avoid introducing them.
Firemen are those who provide solutions to critical problems at specific moments, and they are widely acknowledged and rewarded for their efficiency and for avoiding serious harm to their companies. But the culture of acknowledging firemen demotivates those who deliver high quality products with fewer maintenance problems, thus reducing problems after the product is delivered.
It sometimes happens that the firemen are always test engineers, who are repeatedly recognised and rewarded, in comparison with good designers who do not stand out and are usually rewarded to a lesser extent. Without forgetting the just acknowledgement of firemen, the evaluation processes should give more consideration to all those who are efficient in delivering their products with quality.

Not much training in test techniques and test planning

In small organisations with little emphasis on the quality of their products, it is normal that there is no differentiated test activity, mainly because the projects are small, or because they involve little software/hardware integration or commercial software.
This is one reason why this function is not sufficiently developed and, therefore, why training in testing is insufficient; the importance given to adequate test planning is consequently very low. Such training would make high quality compatible with low cost through the concept of test coverage.
Nor has training in techniques, metrics and test tools been fostered in the past. Concepts such as load techniques, statistical techniques or qualification techniques are very far from being used in our test standards.

7.2.3.3 Business Working Group


Software testing is an unquestioned fact
Test activities were not questioned by any of the participants in the sessions. Nevertheless, the cost of tests is quite high in all companies that use this methodology. Testing costs should be optimised with regard to the types of tests to be implemented and the coverage of the test cases; at the same time, the number of tests should be reduced thanks to the review/inspection activities carried out during the project and their results.

In the quality vs. delivery time trade-off, the type of customer (fixed, sporadic) is fundamental; competitiveness can distort this statement
There are companies that think that the delivery date of a product to be placed on the market is more important than the product's quality, even though disputed market niches are thereby put at risk against the competitors. In other cases, it is convenient to delay the delivery of the product until the planned quality is reached. In each case, the company must apply its own criteria depending on its business objectives. Choosing the delivery date over quality can produce, especially in cases of strong competition, a competitive loss, and therefore a loss of business, which then forces the company to deploy all necessary means to obtain products of very high quality.

It is important that quality metrics are included in the software supply contract
In those cases where customer and supplier have a close relationship over many years, a supply contract should be encouraged so that both parties are committed by contract. One of the terms that should be considered in this contract is the presentation of quality metrics; their numeric objectives should be stricter every year.
Some metrics to be supplied relate to the foreseen number of defects, the maximum period of time to fix them, the minimum performance guaranteed, etc.

Reviews are important, especially in the analysis and design phases

The experiences presented in the session referred to the reliability and profitability of reviews and code inspections. However, there was a clear awareness that more emphasis had to be placed on reviews applied during the analysis and design phases, where the most relevant defects are injected, where fixing them is most profitable, and where doing so avoids additional problems with the product.

The client's collaboration in the supplier's training is profitable

Within a customer/supplier contract, if a software company works for another company, it is important that the latter trains its suppliers in the product's specifications and characteristics, with the aim of producing software better suited to its needs and avoiding defects in product consistency and suitability. Training will always be profitable, also creating loyalty between the two companies. This happens in many cases, but especially between finance companies and their suppliers.

It is fundamental to review the test plan, with the aim of setting the test coverage and reducing the duration
The test plan is a powerful mechanism that defines what should be tested, when and how. Therefore, one of the inputs to the test plan should be the classification of elements as critical, analysing their complexity, reusability and modification percentage, as well as the results of their reviews and inspections. In this way it will be known which elements are more prone to defects, and on this basis the test plan can be redefined.
Through an in-depth review, the test plan can be simplified while keeping the maximum test coverage at minimum cost and in a minimum time period. Any serious effort in this direction would yield a significant reduction in project cost and time, assuring that the delivery criteria are met without impacting the quality objectives.

Inspections: these will not reduce the number of tests to be carried out, but they will affect their duration (because of errors previously detected in the inspections)
Document and code inspections that are correctly performed could, in a coordinated way, reduce the number of test cases to be carried out, since the software has fewer errors. However, project managers do not opt for this reduction; rather, counting on the fact that most of the project will be carried out correctly the first time, they decide on a test plan that is shorter in time and requires less effort. And that is what really occurs, resulting in less effort than in previous projects and in productivity (tests/person-day) and quality (errors/test case) ratios that are much better.

Software factory - information society: is the software factory nearer to the world of industrial production or to the culture industry? (Spielberg vs. Ford)
Much has been debated about whether software development is an art, in which the designers' capabilities are the magic wand, or whether it is closer to an industrial process whose production has to follow clear rules, both in its process and in its output criteria.
During the session, it seemed clear that recent tendencies (ISO, CMM and SPICE) clearly point to the use of well structured processes.

7.2.3.4 Workshop Conclusions


These are the conclusions reached within the workshop:
• The experience of software companies regarding software inspections and tests is very limited, and in most cases it refers to very specific applications, with more emphasis on testing than on inspections. However, especially in big companies, there are implementation projects that have been supported by company management.
• The customer-supplier contract concept is considered highly important and should therefore be encouraged. Customers with long-standing suppliers of software products ought to have a contract with them that assures continuity and facilitates training regarding the customer's peculiarities and processes. Suppliers should guarantee that their process improvement efforts increase yearly, so that their quality criteria and fulfilment of plans agree with the quality plan that the customer manages annually.
• The use of inspection and test practices in software projects will differ depending on whether the company is small or large. Generally, the processes used should be adapted to the characteristics of each company. In this context, a large company that supplies software products to long-standing clients can have quality as its first priority, followed by the delivery date. In contrast, in a small company the delivery date is the first priority, because if the product is not placed on the market on time, the company might not be able to sell it at all.
• However, even in a large company, certain events (Y2K, EURO) have fixed dates and therefore higher priority than quality. Each company should clearly know its priorities in order to decide which activities are carried out and how. To avoid distorting reality, companies should consider what is pre-eminent in situations of great competitiveness in the market, the quality of the product or the delivery date, and adapt their processes accordingly.

• Review activities and inspections should be analysed depending on the results and on the cost/benefit balance. The advantages of this activity should be clearly measured and analysed in terms of the number of detected errors and their detection cost, as well as the transfer of project knowledge to the people involved in the reviews.
• The relevance of the testing activities was not questioned by any of the participants in the sessions. However, their cost is quite high in all companies using this methodology; therefore, those responsible for the software development department should optimise them to the maximum.
• The use of metrics during the test phase is important for follow-up and control, and for analysing whether the ending criteria have been met.
• Because of this lack of experience in our industrial environment, there are not many consulting companies capable of helping to introduce these methodologies in SMEs. These methodologies can and should be promoted and encouraged through work sessions for knowledge dissemination and through the publication and dissemination of results.
In general, it can be affirmed that this workshop has helped advance the participants' awareness of the need to introduce review, inspection and test activities into their methodology, with the aim of improving the quality of the delivered software products in accordance with the company's business objectives.

7.3 Pilot German Workshop


7.3.1 Introduction

The EUREX Pilot Workshop was held in Nuremberg on 26 September 1997. HIGHWARE GmbH, the German partner of EUREX, organised the workshop. 40 participants attended.
The workshop was appended to the CONQUEST'97 conference.
The EUREX Pilot Workshop Technical Program offered a variety of perspectives on software testing, verification, and validation. This area represents one of the continuing "great unknowns" in software development, in the sense that many organisations, especially small and medium enterprises, have no purposeful process addressing testing, verification, or validation.
The EUREX Pilot Workshop included presentations by implementers from several PIEs:
• ASTEP by ABF
• USST by ALCATEL SEL
• STOMP by TechnoData
• IMPACTS2 by DTK
• ATECON & ATOS by DLR
• RESTATE by BOSCH Telecom

• MPCM by Transaction.
The workshop was structured in three sessions. In the first session the PIEs presented their experience; in the second session the expert introduced the topic of object-oriented testing; and the last session was devoted to discussion among all the participants: PIEs, the expert and the external audience.
Tool vendors participated and provided additional views on the subject.

7.3.2 Expert Presentation

Robert V. Binder was the invited expert; he is an internationally recognised expert in the area of testing object-oriented systems.
Mr. Binder has over 23 years of software development experience in a wide range of technical and management roles. He has developed embedded systems and transaction processing applications on platforms ranging from micro-controllers to supercomputers. He is President and founder of RBSC Corporation.
Mr. Binder developed the FREE (Flattened REgular Expression) approach to
testing object-oriented systems and the CSSE (Client/Server Systems Engineering)
methodology.
Mr. Binder has presented papers on software engineering and software process
management at many technical conferences and has given software engineering
seminars to thousands of professionals.
He is Chair of the IEEE Standards Built-in Test for Object-oriented Software
study group and serves on the board of the annual Quality Week conference.

Keynote Address of Robert V. Binder [36]
Software development presents two hard problems that in my view have been insufficiently addressed: requirements and testing. These problems are closely related in that testing of a software system is meant to demonstrate, to some degree of satisfaction, that the requirements for that software system have been met by a given implementation. In spite of the advances that object-oriented programming languages and methodologies offer, testing remains necessary. Developers make mistakes. Complex systems can easily produce unanticipated results. The social and economic costs attributed to software failure continue to rise.
Testing as an element of the software development process is normally governed by economic constraints. That is to say, the degree to which a system is tested is governed by the tension between the desire for reliability and the time and money available to achieve it. We distinguish between a reliability-driven process, which uses testing to demonstrate that a particular reliability goal has been met, and a resource-limited process, which uses time and money to remove

[Footnote 36: R. Binder's presentation is summarised here by L. Consolini.]

as many rough edges from the system as possible. The effect of this trade-off is
evident throughout the life cycle. Consider table 7.4:

Table 7.4 Defect density in relation to the testing environment

Defects/KLOC   Testing Environment
0.1            Best practice
10             No statistical testing
100            No good testing practice

These figures illustrate the cost, in terms of reliability, of the various testing environments.
Testing presents several fundamental technical difficulties. Since we can never
hope to exercise all possible inputs, paths, and states of a system under test, which
should we try? When should we stop? If we must rely on testing to prevent certain
kinds of failures, how can we design systems that are testable as well as reliable
and efficient?
Such issues have been considered in detail for so-called "conventional" software systems. Many answers, both practical and otherwise, have been proposed and debated, and a few have even been subjected to empirical validation. Until recently, less attention has been paid to the testing of object-oriented implementations. More attention is needed, however, because the increased power of object-oriented languages creates new opportunities for error.
Testing of object oriented implementations raises further issues. Each lower
level in an inheritance hierarchy is a new context for inherited features; correct
behaviour at one level in no way guarantees correct behaviour at another level.
Polymorphism with dynamic binding dramatically increases the number of possible execution paths. Static analysis of source code to identify paths (the bedrock of conventional testing) is of relatively little help here. While limiting the scope of effects, encapsulation is an obstacle to the controllability and observability of the implementation state. Components offered for reuse should be highly reliable; extensive
testing is warranted when reuse is intended. However, each reuse is a new context
of usage and re-testing is prudent. It seems likely that more, not less, testing is
needed to obtain high reliability in object-oriented systems.
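The point that a subclass is a new context for inherited features can be sketched in a few lines of Python. The classes and the invariant below are invented for illustration; a test that passes against the base class must be re-run against every subclass, because an override can silently break inherited behaviour:

```python
# Hypothetical example: inherited behaviour must be re-verified in each subclass.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        """Withdraw 'amount'; the class invariant is balance >= 0."""
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class OverdraftAccount(Account):
    def withdraw(self, amount):
        # The override silently drops the guard, breaking the inherited invariant.
        self.balance -= amount

def check_invariant(account_class):
    """A test written for Account, re-run against every subclass."""
    acct = account_class(balance=10)
    try:
        acct.withdraw(20)
    except ValueError:
        pass
    return acct.balance >= 0

print(check_invariant(Account))           # True: the base class keeps its invariant
print(check_invariant(OverdraftAccount))  # False: correct at one level, wrong at another
```

The same test, applied in the new context, exposes a failure that testing the base class alone could never reveal.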
In addition, the role of the tester must be considered. Traditionally, testing has
been viewed as a low-level task, beneath the concern of "real" programmers. It is
becoming increasingly clear that the importance of testing is on a par with that of
design and development. Most implementations can benefit from various forms of
testing throughout the life cycle. It is important to consider, for example, whether
requirements are testable. If a requirement cannot be tested, how will it be judged
completed, much less correctly implemented?

The development process should be organised so as to maximise the effectiveness of testing. This is known as design for testability, and it is particularly important in object-oriented systems because of the problems noted above. Planning for testing and designing for testing can reduce the cost and difficulty of testing. There are six key factors in this process:
• Representation
• Implementation
• Built-in Test
• Test Design
• Test Automation
• Process Implementation
These factors provide a framework for software testability.
The notion of testability has two key aspects: controllability and observability.
To test something, you must be able to control its inputs and observe its outputs;
otherwise, you cannot be sure what has caused a given output. You must be able
to observe cause and effect with certainty.
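The controllability/observability idea can be illustrated with a minimal Python sketch; the function names are invented for this example. A routine whose input is hidden (the system clock) and whose output is hidden (a side effect) is refactored so that input is a parameter and output is a return value:

```python
import datetime

# Hard to test: the input is hidden (system clock), the output is hidden (print).
def greet_untestable():
    hour = datetime.datetime.now().hour
    print("Good morning" if hour < 12 else "Good afternoon")

# Testable: the input is controllable (a parameter) and the output
# is observable (a return value).
def greeting_for(hour):
    return "Good morning" if hour < 12 else "Good afternoon"

# With control and observation, cause and effect can be checked with certainty.
assert greeting_for(9) == "Good morning"
assert greeting_for(15) == "Good afternoon"
```

The first version can only be "tested" by running it at different times of day; the second can be driven through every case deterministically.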
There are a number of obstacles to both controllability and observability. In
particular, a system under test must be embedded in some sort of test framework.
This test framework is itself software that must be designed, developed, managed,
and tested. In general, it is not unreasonable to expect that 100 LOC in a system
under test require 500 LOC in the test frame. This is a startling result, but it serves
to emphasise that the process of testing should be considered part and parcel of the
entire development process.

7.3.3 Workshop Discussion and Conclusions

The discussion concentrated on two major areas:


• People Issues
• Tools

7.3.3.1 People Issues


People issues were recognised as paramount and a list of morals was agreed upon:

• The advantages of testing for project members should be clearly pointed out in
order to get them "onboard".
• It is important to look for success experiences at an early stage.
• As a first experiment, the PIE requires greater effort; the advantage will show in the following projects.
• Improving test quality adds value to the jobs involved and therefore provides motivation and satisfaction to the people doing the testing. As a result, more experienced people join the test groups, which further improves the quality of testing.
• Job rotation between development and test groups is an effective means to
improve satisfaction and understanding.
• It is advisable to seek management support with a business case for improving testing practices grounded in ROI and customer satisfaction rather than technical issues. Evaluating and presenting the maintenance costs saved by increased test quality helps to convince management.
• Code reviews are recognised as one of the most effective quality practices; however, it seems very hard to argue for this method convincingly to management and even to team members.
• For small companies, establishing sound test practices often costs too much effort; as a consequence, errors are detected at the customer's site.

7.3.3.2 Tools
It was recognised that technology should be seen as a supporting aid to a well-defined process. Technology cannot be the first and only concern of a testing improvement effort. A list of key lessons for success in the choice of tools was agreed at the workshop:
• It is necessary for the potential customer to evaluate a tool in his own environment and with his own data. A demonstration by the vendor is not enough.
• A checklist of evaluation criteria establishes a sound basis for communication between the customer and the tool vendor. This common base improves the selection.
• The quality of the documentation is essential; here, quality means readability and conformity with the software.
• The "standing" of the tool vendor (i.e., will the vendor still exist in ten years?) was pointed out as an important selection criterion.

7.4 Lessons Learned from the Workshops

The various workshops reported lessons in several areas. Chief among these were
the importance of managing people and business issues; however, a number of
technical conclusions were reached as well. The editors chose the most relevant
lessons and tied them back to the workshops' conclusions presented previously.

7.4.1 People Issues


7.4.1.1 Lesson 1
Well-defined responsibilities are paramount to the success of testing. Independent testers should perform testing from the point of view of the final user: they are domain specialists and not necessarily developers. On the other hand, developers should be accountable for the internal quality of the system they develop and should be involved in testing automation. The best skill mix to ensure the effectiveness of testing is achieved when developers (who know how to build software and know the internals best) and independent testers (who know how the product will be used) work together.37, 38

7.4.1.2 Lesson 2
There is a persistent training and motivation problem with testing. Few engineers have real testing skills and it is extremely difficult to find personnel motivated to pursue a career in testing. A poor but common practice is to assign those who are not good at programming to testing.
The human factors associated with gathering measurements should be studied in great detail. The measurement objectives should be clear and respected by all who use them. Everyone involved should agree to the use of measurement for process improvement and that the data is not to be questioned afterwards.39, 40

7.4.2 Business Issues


7.4.2.1 Lesson 3
There is still a fear among software companies of investing in testing automation, because of the cost of developing and maintaining the new software needed to implement and run the tests (testware). The set-up and maintenance of a testing automation environment require dedicated staff and a considerable level of investment.
In order to be economically viable, testing should concentrate on the riskiest areas: namely, where the impact of a failure, its visibility to the client, and its probability are highest.41, 42, 43 An analysis of where the risk is highest should be part of test planning.

37 Refer to the third Spanish workshop, chapter 7.2.3: Perception of a professional category difference between software developers and "testers"; a specialised "tester" is very efficient, but creates barriers for developers; in some cases the independence of the tests is requested by regulations.
38 Refer also to the pilot German workshop, chapter 7.3.3, people issues.
39 Refer to the third Spanish workshop, chapter 7.2.3: Company culture to reward "firemen"; not much training in techniques and test planning.
40 Refer also to the second Italian workshop, chapter 7.1.3: A cultural growth on testing is paramount; skills.

7.4.2.2 Lesson 4
The highest ROI can be achieved through re-use of test cases, but to ensure focused re-use you need to link tests and requirements, to identify which tests cover which requirements. This type of traceability can be achieved by extending the usual capabilities of configuration and version management tools.44

7.4.2.3 Lesson 5
It is necessary to define a metrics policy so that an adequate balance can be obtained between the cost of measuring and the benefits sought.
Metrics make it possible for clients to change their perception of processes and services in the long term. The information used should be part of the culture of the organisation.
Metrics have served to increase awareness of the measuring objectives and to transmit the priorities to the organisation.
Metrics are considered very important as a means to align the technological objectives with those of the business. They also help to determine the value that technology adds to the company and to improve the credibility of the technicians.45, 46

7.4.2.4 Lesson 6
The introduction of a software verification and measurement program is a strategic project, and should not be planned exclusively as a classical investment-benefit analysis. If the implementation is to be successful, there must be an adequate plan, organisation and the necessary resources.47, 48

41 Refer to the third Spanish workshop, chapter 7.2.3: Software test is an unquestionable fact.
42 Refer to the second Italian workshop, chapter 7.1.3: Handling schedule pressure and risk.
43 Refer also to the pilot German workshop, chapter 7.3.3.
44 Refer to the second Italian workshop, chapter 7.1.3: The ROI of automated testing.
45 Refer to the third Spanish workshop, chapter 7.2.3: Data collection during system operation: making developers aware of extended quality.
46 Refer also to the pilot German workshop, chapter 7.3.3.
47 Refer to the second Italian workshop, chapter 7.1.3: Developing a testing automation strategy is very important.
48 Refer also to the pilot German workshop, chapter 7.3.3.

7.4.3 Technical Issues


7.4.3.1 Lesson 7
Testing automation must be combined with manual testing, mostly in those do-
mains (such as income tax software) where changes are so frequent and under
such time pressure that investing resources in testing automation is simply not
possible.
Testing automation tools are not a panacea. They tend to be complex and so-
phisticated; moreover, to be really effective testing should be application specific
and consequently a certain amount of tailoring and development is always necessary. Conversely, certain types of testing cannot be done without tools, for example load testing for client/server applications and volume testing for data-intensive systems.49, 50, 51

7.4.3.2 Lesson 8
With new technologies we are taking steps backwards as regards testing. It is very common to have no testing at all of Web applications. It is mistakenly believed that static Web applications do not need any testing, and the staff involved are usually very much on the hacker side. We have to adapt traditional testing methods to this new technological setting and ensure at least a 15% testing effort within a project.52

7.4.3.3 Lesson 9
We should not be over-ambitious in the collection of measurement data. Only the information related to the variables with which you are going to work should be collected, as the validity of the data and its tendency are more important than its precision.
Metrics should be used in an OO software development project:
• During the design and development stage, to validate the quality of the software architecture and of the code.
• They should be calculated at the same time as the corresponding elements become part of configuration management.
The measurement objectives should be taken into account during the whole identification and implementation process. The GQM methodology has been of great use.
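The GQM (Goal-Question-Metric) discipline mentioned above can be pictured as a small tree in which every metric is justified by a question, and every question serves a goal. The goal, questions, and metrics below are invented examples, not the ones used by the PIEs:

```python
# A hypothetical GQM tree: nothing is measured unless it answers a question.
gqm = {
    "goal": "Validate the quality of the software architecture during design",
    "questions": [
        {
            "question": "How complex are the delivered modules?",
            "metrics": ["cyclomatic complexity per method", "class coupling"],
        },
        {
            "question": "Is defect density under control?",
            "metrics": ["defects per KLOC per release"],
        },
    ],
}

def collected_metrics(tree):
    """List only the metrics that trace back to a question of the goal."""
    return [m for q in tree["questions"] for m in q["metrics"]]

print(collected_metrics(gqm))
```

Walking the tree top-down is what keeps the measurement program limited to the variables one is actually going to work with.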

49 Refer to the third Spanish workshop, chapter 7.2.3: Tools.
50 Refer to the second Italian workshop, chapter 7.1.3: Developing a testing automation strategy is very important; Tools.
51 Refer also to the pilot German workshop, chapter 7.3.3: Tools.
52 Refer to the second Italian workshop, chapter 7.1.2: Expert presentations.

When metrics were used as part of software maintenance processes, the size/complexity metrics indicated that defects also increased with the size and complexity of the affected function. There is no evidence that their use improves this process. The use of a workflow tool for collecting measurement data in this process was also tried.
The interest in metrics is centred mainly on tactical applications, whereas Information System and Technology Management is still carried out with qualitative and technological criteria, without applying techniques in common use with other company resources that require specific measurement utilisation.53, 54

7.4.4 Final Conclusions

In conclusion, the most relevant message was to avoid applying ready-made but
inadequate models to your organisation. One should take a balanced approach
where different levels and classes of testing and testing automation coexist. The
needs and the available resources should determine the approach.
It is evident that there is great interest in this subject, but the level of use of
metrics continues to be very limited. In many cases it is apparent that a metrics
program is carried out by personal or group initiative, rather than by the imple-
mentation of structured programs promoted by company management.
The use of metrics in some areas is still at an immature stage. In general, a gap has been observed between the world of research and final users' ability to easily apply these techniques.
Because of this lack of experience, market interest suffered until recently.
There are a limited number of consultancy companies that can offer experience,
knowledge and products within this area.
It is not possible to improve software quality without knowing the quantitative
change in process improvement. Metrics provide a necessary base that makes this
possible.

53 Refer to the third Spanish workshop, chapter 7.2.3: Measurements.
54 Refer also to the pilot German workshop, chapter 7.3.2: Expert presentations.
8 Significant Results

L. Consolini
Gemini, Bologna

The EUREX workshops covered a great deal: methods, tools, skills and new processes have been widely and deeply touched upon, either by the speakers or in the material collected and analysed by the organisers.
It is now time to draw some conclusions from it all.
In chapter 4, inadequate application of product verification methods, techniques
and tools by commercial software developers was lamented. The contrary is true
with the ESSI PIEs covered here: most of them were performed by commercial
software organisations. Evidently, rigorous software verification is rapidly exiting
its traditional niche (those who provide safety critical software) and entering a
world of carefully controlled budgets, tight schedules, and strong competitive
pressures. Until recently, we have been forced to come to terms with defective
commercial software. Customers learned to accept low quality software, almost
never fighting back, while developers continued to patch the code here and there
and testers continued to be frustrated and relegated to the lowest ranks of the de-
velopment team.
What emerged from the EUREX workshops is a realisation that the situation is rapidly changing, in Michele Paradiso's words: "The competitive pressure on high quality software, stringent budgets and aggressive cycle times require increased productivity while sustaining quality in all phases of the software development life cycle. The business imperative for organisations in the 2000s is to gain competitive advantage while reducing time to market and at the same time minimising business risk: all this means getting a new application and/or solution out of the door in a hurry."
Since it is no longer possible to safely ignore (in business terms) the product
quality issue, many companies are now asking the crucial question: What and how
should be improved in our process to achieve and maintain a competitive level of product quality?

M. Haug et al. (eds.), Software Quality Approaches: Testing, Verification, and Validation
© Springer-Verlag Berlin Heidelberg 2001

8.1 Barriers Preventing Change of Practices

In the author's experience, there are several barriers preventing software organisa-
tions from moving more decisively to upgrade their V&V practices. These are
discussed in the following sections.

8.1.1 Ignorance of the Software Product Quality Methods

In each of the EUREX workshops, it was pointed out several times that there is not
enough cultural support for testing and software verification. Software developers
do not learn how to do it and no reward system is in place to motivate and recom-
pense good testers. In many companies testing is not even a "job", it is simply
what remains for developers to do after the code is finished.
This leads us to conclude that training efforts aimed at spreading awareness and
knowledge are critical to really change the "quick and dirty" approach we have
been accustomed to.
Unfortunately an appropriate training offer seems lacking or, at least, it has a
difficult time in reaching many SMEs and practitioners who are still far from
getting the basic education needed to successfully implement and use new emerg-
ing software verification support tools.
In this context the risk is that the first and only contact practitioners have with
verification methodologies is through highly commercially pitched tools. Also, as
the experts made clear in their introduction, many tools are now commercially
available at affordable prices, although their capabilities need a "reality check"
and the people possessing the skills necessary to use them effectively are rare.
Developers and managers alike can be lured into thinking that tools (or even
just one tool) are the silver bullet. The expectations raised by catchy marketing
messages are almost never met. It becomes clear that automating poorly designed
tests makes it only easier to execute them; whereas, the quality level of the prod-
uct remains basically the same.
Designing good tests means designing tests with a high probability to find de-
fects, simple enough to make defects easily reproducible, significant enough to
make fixing the errors behind the defects well worth doing. When automation and
tools come into play, a good test is also a maintainable test, with minimal depend-
ence on what changes frequently in the code.
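The idea of a maintainable test with minimal dependence on what changes frequently can be sketched in Python; the classes and numbers below are invented for illustration. The tests talk to a thin adapter, so when the product's interface changes only the adapter is updated, not every test:

```python
# A minimal sketch (names invented) of isolating tests from frequent change.
class ProductUnderTest:
    # Imagine this signature changes often between releases.
    def compute_price(self, net, vat_rate):
        return round(net * (1 + vat_rate), 2)

class PriceAdapter:
    """The single place that knows the product's current interface."""
    def __init__(self, product):
        self.product = product

    def gross_price(self, net):
        return self.product.compute_price(net, vat_rate=0.20)

def test_gross_price():
    adapter = PriceAdapter(ProductUnderTest())
    # The test expresses intent only; it never touches compute_price directly.
    assert adapter.gross_price(100.0) == 120.0

test_gross_price()
print("test passed")
```

If `compute_price` is renamed or its parameters change, the whole test suite is repaired by editing the adapter alone, which is what keeps automated tests economical to maintain.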
To achieve this "test quality" you need test design skills, experience and a cer-
tain amount of flair. Perhaps you need to become a professional tester: testing as
an "if-we-have-spare-time-occupation" is not enough any more.

8.1.2 Uncertainty about the Return on Investment and Fear of Raising Development Costs to an Unacceptable Level

This is the hardest barrier to overcome. In an industry where we are not used to measuring the results of process change, and even less to quantifying the costs of non-quality, it is generally difficult to tell whether and when change pays off.
On one side you have the costs, the resources and the time that a verification
process change takes out of your pocket, but you seldom have similarly concrete
values on the other side.
One must be very clear about this: you have to go a long way in training, men-
tality change and infrastructure set up before you can really have meaningful data
to measure your results in a way that can speak a clear business message to those
who hold the purse strings. You have to commit yourself to a difficult endeavour
and sustain momentum mostly on your own goodwill and on the comfort that others' successful experiences can give you.
In this respect the ESSI PIEs and the analysis carried out by EUREX can be helpful and inspiring, mostly to assist with choosing a suitable path and to avoid common mistakes.
In particular, all three EUREX workshops warned against an unconsidered, headlong plunge into testing automation. It was clear that automation pays off only if applied in the right way and to the right types of testing. Moreover, testing is not the only verification means and, very probably, not the most effective. EUREX gave inspection-oriented PIEs a forum to present their results and to dispel some myths. EUREX confirmed that there is a whole range of very promising manual or semiautomatic verification techniques that deserve attention because they are effective and focused on early discovery of errors (not to mention their potential for prevention).
Quite naturally manual or semiautomatic verification techniques are more
costly and require higher skill levels, but as more data are being gathered about
the effectiveness of such techniques, they become more and more convincing, at
least for critical code.

8.1.3 Still Not Enough Pressure on Software Producers to Increase Quality Standards

It is unpleasant news, but there is still too much acceptance of low quality by
software consumers. Many software organisations will not fully commit to quality
improvement unless they feel some market pressure to do it. To the contrary,
many workshop participants remarked that many customers are not willing to pay
for higher quality and they set the time and cost targets so low that meeting them
becomes a matter of deciding where to cut.
"How can you care for quality when the fiscal legislation changes overnight and your customers want the new release at no cost and at lightning speed, lest they incur bitter sanctions?" Similar questions arose time and time again at the EUREX workshops. It is hard to give a reasonable answer except by offering very general
advice, such as: "Aim for enough quality, tailor your quality targets to what is
suitable to compete and win in your market!" In other words quality should be
providing value to somebody who recognises it. There is no such thing as absolute
quality.
From this line of reasoning stems the relevance of choosing the right (i.e. ade-
quate and appropriate) software quality strategy, which will then be your specific
strategy, not a blind application of standards and models good for all seasons.

8.2 Best Practices Recommended by Experts

To overcome real-world barriers, EUREX also identified some field-tested "best practices" that can be seen as the distilled recommendations deriving from real experiences. We summarise them as follows:
• investing in the acquisition of new skills;
• formalising the verification process and integrating it with the development
process;
• introducing automation prudently but inevitably;
• measuring results and return on investment.

8.2.1 Investing in the Acquisition of New Skills

It is evident that elevating software verification, and testing in particular, from the lower ranks of software tasks has many educational and cultural implications.
The reported experiences revealed a dramatic lack of structured and effective
courses on product verification integrated within the usual software engineers'
educational curricula. In addition, little investment in training has traditionally
been made to prepare competent testers. The result is a lack of knowledge and,
consequently, a general lack of motivation to undertake a career in testing.
The best return on investment is certainly obtainable through such skill improvement.

8.2.2 Formalising the Verification Process and Integrating it with the Development Process

In the improved process model discussed by one of the experts in Chapter 4, prod-
uct verification ceased to be an indistinct package of work to be carried out at the
end of development; rather, product verification was described as a process struc-
tured into manageable units and critical decision checkpoints. This view is coherent with the unanimous emphasis that the experts put on planning and on making V&V sufficient to achieve the desired quality targets.
Most of the PIEs related more to execution and how to make it effective than to
planning, however many of them realised along the way that to gain real benefits
it was necessary to formalise the new practices and to integrate them into the de-
velopment process.
Some of the PIEs interpreted integration as traceability, particularly with the
requirements process, others explored modifications to the project planning and
estimation process to take V&V into account. Of particular interest was the inte-
gration of regression testing and maintenance to ensure a consistent quality level
over time.

8.2.3 Investing Carefully but Inevitably in Automation

The great steps forward that have been made in automation are certainly part of
the reason for the predominance of product verification over other improvement
areas in ESSI. Both the experts and the software organisations performing the
PIEs are focused on the same theme. In accordance with the diffusion of more
rigorous verification practices into the commercial software industry on the one
hand and the general growth of the competitive pressure on the other, it is clear
that the increase in productivity and reuse promised by automation can be ex-
tremely appealing.
The tool vendors are clearly responding to this market opportunity with a wide
and diversified range of products, which were described in Chapter 4. On the
down side, Chapter 4 also warns against common mistakes and unjustified expec-
tations that usually go hand in hand with automation. The PIEs' experience con-
firms that some bad news mitigates the enthusiasm engendered by the availability
of tools:
• investment in automation can be jeopardised by product evolution if measures are not taken to stabilise test cases with respect to software changes;
• tools need a considerable set-up and tailoring effort, if not the development of in-house integration software;
• automation does not automatically improve the quality of the verification cases that we run against our code. In other words, automating a badly designed set of cases only means doing a bad job faster.
Perhaps the most useful lessons to be derived from the EUREX workshops concern the recommended approaches to the use of automation.

8.2.4 Measuring Results and Return on Investment

Since the investment in rigorous product verification can become significant, it seems wise to integrate a mechanism to measure the results of, and the return on investment from, any verification process improvement initiative. The EUREX PIEs demonstrated that this is both feasible and necessary to motivate and sustain commitment.
As one of the experts pointed out in Chapter 4, "any software process improvement should not be seen as a goal in itself but it pays off if it is clearly linked to the business goals of an organisation". We can now consider again the table provided in Chapter 4 and integrate it with the concrete experience of the PIEs.

Table 8.1 PIE experience related to testing process

Driver | Testing Process Contribution | PIEs experience
Cost | productivity improvement, especially in regression testing | true only if testware maintenance does not eat up all gains
Time | time reduction in testing execution and testing data management activities | generally true; managing testing data can require non-trivial work
Product and service quality | reduction in the number of defects delivered with the product | true only if test design is of good quality; test automation of ineffective tests is useless to this aim
Product and service quality | reduction of the average time necessary for problem fixing | availability of regression testing suites and automatic re-execution helps

8.3 Revisiting the Classic Testing Mistakes

To conclude, it is appropriate to recall from Brian Marick's paper in Chapter 4 some Classic Testing Mistakes that should be carefully avoided. The prevention of future mishandling of this sophisticated discipline is certainly the best message to conclude with:

8.3.1 Mistakes in the Role of Testing

• Thinking the testing team is responsible for assuring quality.


• Thinking that the purpose of testing is to find bugs.
• Not finding the important bugs.
• Not reporting usability problems.
• No focus on an estimate of quality (and on the quality of that estimate).

• Reporting bug data without putting it into context.


• Starting testing too late (bug detection, not bug reduction)

8.3.2 Mistakes in Planning the Complete Testing Effort

• A testing effort biased toward functional testing.


• Under-emphasising configuration testing.
• Putting stress and load testing off to the last minute.
• Not testing the documentation.
• Not testing installation procedures.
• An over-reliance on beta testing.
• Finishing one testing task before moving on to the next.
• Failing to correctly identify risky areas.
• Sticking stubbornly to the test plan.

8.3.3 Mistakes in Personnel Issues

• Using testing as a transitional job for new programmers.


• Recruiting testers from the ranks of failed programmers.
• Testers are not domain experts.
• Not seeking candidates from the customer service staff or technical writing
staff.
• Insisting that testers be able to program.
• A testing team that lacks diversity.
• A physical separation between developers and testers.
• Believing that programmers can't test their own code.
• Programmers are neither trained nor motivated to test.

8.3.4 Mistakes in the Tester-at-Work

• Paying more attention to running tests than to designing them.


• Unreviewed test designs.
• Being too specific about test inputs and procedures.
• Not noticing and exploring "irrelevant" oddities.
• Checking that the product does what it's supposed to do, but not that it doesn't
do what it isn't supposed to do.
• Test suites that are understandable only by their owners.
• Testing only through the user-visible interface.
• Poor bug reporting.
• Adding only regression tests when bugs are found.

• Failing to take notes for the next testing effort.

8.3.5 Mistakes in Test Automation

• Attempting to automate all tests.


• Expecting to rerun manual tests.
• Using GUI capture/replay tools to reduce test creation cost.
• Expecting regression tests to find a high proportion of new bugs.

8.3.6 Mistakes in Code Coverage

• Embracing code coverage with the devotion that only simple numbers can
inspire.
• Removing tests from a regression test suite just because they don't add cover-
age.
• Using coverage as a performance goal for testers.
• Abandoning coverage entirely.

8.4 The EUREX Process

A last word on the EUREX process is in order. It is clear that we are not talking of a scientifically validated way to formulate hypotheses and to check them out. However, EUREX made an effort to achieve a systematic and grounded interpretation of a considerable number of field cases and to derive a set of lessons transferable to a wider community of potential users.
The process for doing this was conceived from scratch. For the most part,
nothing of the sort had been attempted before in the software engineering and
quality field. As a result, it inevitably underwent considerable day-to-day adjust-
ments as it was deployed.
The EUREX approach had to consider different cultures, changing audiences, and contingent difficulties. The interpretation and the value of the data we were gathering were neither always evident nor readable in just one unambiguous way.
However we are fairly convinced of the final results for several reasons:
• the conclusions of the various workshops were uniform; little if any contrast was observed by the editors;
• the PIEs' view - mostly a practitioners' one - was balanced by the experts'
position that was certainly more inspired by what the theory and the literature
say;
• the final lessons have been elaborated on the basis of a rich and vast body of
material: PIE reports, workshop proceedings, experts' papers, and domain-specific
literature.
The confidence we have developed in the EUREX results also makes us confi-
dent that there is a need to publish them and bring them to the attention of the
software community at large, in order to stimulate further discussion and action.
We therefore claim not an academic but a practical, practitioner-oriented
value for our work, very much in line with what the workshop audiences always
asked of us.
Part III

Process Improvement Experiments


9 Table of PIEs

Table 9.1 below lists each of the PIEs considered as part of the EUREX taxonomy
within the problem domain of Testing, Validation and Verification.

Table 9.1 Table of PIEs

Project No   Year of CfP   Acronym   Project Partners   Country

21757 1995 ACIQIM DENKART NV B
10965 1993 AERIDS SAIT DEVLONICS B
10146 1993 ALCAST VOLUNTARY HEALTH INSURANCE BOARD IRL
21222 1995 AMIGO ELIOP, SA E
24148 1996 ARETES ARCHETYPON SA GR
23860 1996 ASTEP ABF - Industrielle Automation GmbH A
23860 1996 ASTEP VOEST-ALPINE STAHL LINZ GmbH A
23951 1996 ASTERIX ANITE SYSTEMS Ltd UK
10464 1993 ATECON DLR, Zettler, CAM D
21823 1995 ATM SVIMSERVICE I
21530 1995 ATOS DEUTSCHE FORSCHUNGSGESELLSCHAFT FÜR LUFT- UND RAUMFAHRT E.V. (DLR) D
10564 1993 AUTOMA DATAMAT INGEGNERIA DEI SISTEMI SPA
24143 1996 AUTOQUAL SAET S.p.A. I
21362 1995 AVAL OBJECTIF TECHNOLOGIE F
21682 1995 AVE MUNICIPALITY OF KAVALA GR
21284 1995 BEPTAM NOKIA TELECOMMUNICATIONS OY SF
21826 1995 CALM CHASE COMPUTER SERVICES Ltd UK
23671 1996 CITRATE NOVABASE Sistemas de Informacao e Bases de Dados S.A. P
21465 1995 CLEANAP LABEN S.p.A. I
24206 1996 CLISERT CREDO GROUP Ltd IRL
24362 1996 CONFITEST TeSSA NV B
21265 1995 DATM-RV FAME COMPUTERS Ltd UK
21306 1995 DOCTES TEKNIKER E
21302 1995 EMINTP NORD-MICRO D
21162 1995 ENG-MEAS ENGINEERING INGEGNERIA INFORMATICA S.p.A. I

M. Haug et al. (eds.), Software Quality Approaches: Testing, Verification, and Validation
© Springer-Verlag Berlin Heidelberg 2001
24266 1996 EXOTEST DASSAULT ELECTRONIQUE F
24157 1996 FCI-STDE PROCEDIMIENTOS - UNO S.L. E
21367 1995 FI-TOOLS TT TIETO TEHDAS OY SF
23887 1996 GRIPS SIMCORP A/S DK
24306 1996 GUI-TEST IMBUS GmbH D
23833 1996 IDEA ISTITUTO NAZIONALE PREVIDENZA SOCIALE I
24078 1996 IMPACTS2 DTK GESELLSCHAFT FÜR TECHNISCHE KOMMUNIKATION mbH D
21733 1995 INCOME FINSIEL S.p.A. I
10482 1993 IQASP LABEIN E
10482 1993 IQASP B.Y.G. SYSTEMS LTD UK
10163 1993 IRMA BRITISH AEROSPACE DEFENCE LTD UK
23690 1996 MAGICIAN MAGIC SOFTWARE ENTERPRISES LTD ISR
21224 1995 METEOR LOG.IN I
10228 1993 MIST GEC-MARCONI AVIONICS LTD UK
10788 1993 ODP FESTO Ges.m.b.H. A
24053 1996 OMP/CAST OM PARTNERS N.V. B
23743 1996 PCFM LUCAS AEROSPACE UK
10438 1993 PET BRÜEL & KJÆR MEASUREMENTS A/S DK
21199 1995 PI' ONION I
24344 1996 PIE-TEST LGTsoft B
23705 1996 PREV-DEV MOTOROLA COMMUNICATIONS ISRAEL ISR
21417 1995 PROVE CAD.LAB S.p.A. I
23834 1996 QUALITAS Management Data, Datenverarbeitungs- und Unternehmensberatungsges. m.b.H. A
23978 1996 RESTATE BOSCH Telecom GmbH D
10494 1993 SDI-WAN TECNOMET PESCARA S.p.A I
10824 1993 SIMTEST DATASPAZIO TELESPAZIO E DATAMAT PER L'INGEGNERIA DEI SISTEMI SPA I
21612 1995 SMUIT ABB Netzleittechnik GmbH D
21394 1995 SPIDER ETRA SA E
10875 1993 SPIMP PHILIPS MEDICAL SYSTEMS NL
23750 1996 SPIP ONYX TECHNOLOGIES ISR
21799 1995 SPIRIT BAAN COMPANY N.V. NL
24193 1996 STOMP TECHNODATA INFORMATIONSTECHNIK GmbH D
21160 1995 STUT-IU OY LM ERICSSON AB SF
23855 1996 SWAT TELECOM SCIENCES CORP. Ltd UK
21385 1995 TEPRIM IBM SEMEA SUD s.d. I
23683 1996 TESTART ISRAEL AIRCRAFT INDUSTRIES CORPORATE R&D (Dept 9100) ISR
21170 1995 TESTING LABEIN E
21216 1995 TESTLIB INTEGRACION Y SISTEMAS DE MEDIA, SA E
23754 1996 TRUST AGUSTA UN'AZIENDA FINMECCANICA S.p.a.
23843 1996 USST ALCATEL SEL AG D
23732 1996 VERA GEC-MARCONI RADAR AND DEFENCE SYSTEMS, Radar Division UK
23732 1996 VERA GEC-MARCONI RESEARCH CENTRE UK
21712 1995 VERDEST TT TIETO OY SF
24153 1996 VISTA ECN NL
10 Summaries of PIE Reports

In the following section, summary reports of the Process Improvement Experi-
ments are presented that have been assigned to the problem domain Software
Quality, Testing, Validation and Verification. These summaries have been taken
preferably from the PIEs' own final reports. Where the final reports have not been
available, excerpts from other reports, the Project Summaries or the Project
Snapshots have been used.
Contact details are provided where they were already publicly available
elsewhere.

10.1 ACIQIM 21757

Automated Code Inspection for Quality Improvement


Business Motivation and Objectives
Since the cost of repairing faults in software rises considerably the later they are
found in the development process, it is essential to keep the number of faults as
low as possible throughout the complete development cycle. Denkart's objective
is to reduce the workload by automating and streamlining the code inspection
process. Our goal is to build an automated framework which can be used in any
software development cycle.

The Experiment
To achieve the above mentioned objective, we will design and build an automated
system which gives the biggest benefit, meaning a reduced workload and the
smallest number of residual faults. During the baseline project we will concurrently
perform manual code reviews and run static and dynamic code analysis tools to
determine the software quality factor. The results of the reviews and the resulting
metrics will be combined to find a correlation. From this correlation report we will
be able to make an automated selection of the sources which have to be manually
reviewed. By using the metric tools we will already exclude part of the faults,
while the automatic selection system will make sure the critical sources get
reviewed.
Though the framework must be applicable to any software development project,
we will use one of Denkart's typical legacy business application migration pro-
jects as the baseline project.

Denkart employs 35 people of which 7 will be involved in the ACIQIM PIE.
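
The correlation-based selection described under "The Experiment" can be sketched in a few lines. This is our own illustration of the idea, not Denkart's actual tooling: the file names, metric values, fault counts and both thresholds are invented for the example.

```python
# Illustrative sketch (not Denkart's system): correlate a static metric
# with faults found in manual review on the baseline project, then use
# the metric to select which sources must still be reviewed by hand.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Baseline data per source file: (complexity metric, review fault count).
baseline = {
    "convert.c": (42, 9),
    "screens.c": (35, 7),
    "fields.c":  (18, 2),
    "util.c":    (7, 0),
}

metric = [m for m, _ in baseline.values()]
faults = [f for _, f in baseline.values()]
r = pearson(metric, faults)

# If the metric correlates strongly with review findings, a threshold
# on the metric can route only the risky sources to manual review.
METRIC_THRESHOLD = 30
to_review = []
if r > 0.8:
    to_review = sorted(name for name, (m, _) in baseline.items()
                       if m >= METRIC_THRESHOLD)
print("correlation %.2f, review manually: %s" % (r, to_review))
```

In this toy data the metric and the review findings correlate almost perfectly, so the selection trusts the metric; on real data a weak correlation would argue for keeping full manual reviews.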

Expected Impact and Experience


By setting up the automated framework for code reviewing and selection, Denkart
will not only have built a mechanism useful for every development process, but
will also gain an understanding of metrics (automatically and manually collected)
and their relation to software quality. Furthermore, it will raise the quality
issue to the level of developers and management, which is important for the
culture change needed to build a quality conscious company.
DENKART NV
Molenweg 107
B-2830 Willebroek
Belgium

10.2 AERIDS 10965

ESSI in Context of Rail Station Information Display System Software
The main interest of SAIT Devlonics in the ESSI action is to put into practice
the concepts available in the quality manual. SAIT Devlonics wants to have a
Quality System which automates certain procedures/methods.
They are:
• to automate as much as possible the software life cycle development;
• to execute Configuration Management on the PC environment;
• to create a library of software components (reusability) and to incorporate it into
the Configuration Management System in order to increase dramatically the ef-
ficiency of software development.
The Application Experiment was performed on the Railway station Information
Display System (AERIDS) software project. The RIDS computes and
dispatches information (train departure times, destinations, etc.) to a set
of display devices for the attention of the train passengers.
The RIDS is a real time distributed system with a centralised database, and its
software application is composed of several programs for an overall estimated size
of 87000 lines of code.
The experiment was done on a part of the project (12000 lines of code) which
could be easily isolated.
SAIT has carried out this pilot project to develop part of the RIDS using a set
of CASE tools which have been integrated to work together and to automate the
entire software development life cycle, including the analysis, design, implementation
and maintenance phases.
A process support group has been put in place in the engineering department with
two basic tasks:
• initiating and sustaining process change,
• supporting the projects as they use methods, standards and technology (normal
operations).
This group serves as a consolidating force for the changes that have already been
made. Without such guidance, lasting process improvement is practically impossi-
ble.
At the end of this experiment, SAIT Devlonics has outlined the following key
lessons learnt:
• the introduction of integrated tool support for software life cycle devel-
opment has made it possible to automate the procedures defined in the SAIT Quality
System (SQS);
• the quality level was improved by greater involvement of quality control
(QC) and the existence of static and dynamic metrics; however, the lack of
transparency between the host and target environments, the weakness of data set
management and the absence of functional testing still remain to be solved;
• the configuration management system is appropriate for document manage-
ment. Indeed, it offers much flexibility for adapting the configuration to the va-
riety of projects developed by SAIT Devlonics. But the facilities available for
the configuration of source code and executables (software component reus-
ability) are insufficient.
To conclude, this experiment has contributed to the SAIT Devlonics efforts to
pass ISO 9001 certification.
The next actions will be carried out to improve software component reusability
and to increase productivity in order to avoid losing competitive power.
SAIT DEVLONICS s.a./n.v.
Chaussee de Ruisbroek, 66
B 1190 BRUSSEL
BELGIUM

10.3 ALCAST 10146

Implementation of an Automated Life Cycle Approach to Software Testing in the Finance & Insurance Sector
The Automated Life Cycle Approach to Software Testing (ALCAST) project
aimed to improve manual testing practices and then automate them in two fi-
nance/insurance sector organisations. The ALCAST consortium consisted of three
companies: the Voluntary Health Insurance Board (VHI), Quay Financial Software
(QFS) and Quality Software Engineering Technologies (Q-SET). The main
interest groups for this project are companies who test software as part of their
product development life cycle, and those who need to automate the process or
parts of it, using tools.
The ALCAST project ran from January 1994 until June 1995. During Phase 1,
software testing practices in both companies were first assessed against current
best practice in the industry. Having identified key areas for improvement, the V
Model was implemented as a process framework and then further enhanced using
the Systematic Test and Evaluation Process (STEP) methodology.
In Phase 2 of the project, the VHI piloted an on-line test environment with
automated defect tracking and change management. In QFS, support for STEP
was included in their existing corporate information system and automation of
both regression testing and static analysis took place. The main lessons learned
were as follows:
• Specific ALCAST lessons:
• Testing should be involved at the project requirements stage.
• The STEP methodology has proved effective when tailored for individual com-
pany needs.
• Unit testing pays, but overheads and administration should be kept to a mini-
mum.
• Test automation is beneficial but has a significant learning curve.
• Metrics should be kept simple and usable.
• Training for best practices is essential to ensure the success of a company wide
implementation.
• General Project Management lessons:
• Improvement must be managed as a mainstream project in a company, with
equal or higher priority than core business projects.
• Expertise in tools and automation should be gathered as a company asset into
teams and used as a resource on projects.
• The initial assessment in the cycle of Assess, Improve and Measure is critical
for gauging the success of the Project.
The next actions will be to run an end-of-project dissemination event in Ireland
(estimated 150 companies attending) and to distribute this report in booklet form to
Q-SET customers (>7000).
The project was regarded as a success in all 3 companies and plans are in place
for company wide implementation.
Members of the ALCAST Project gratefully acknowledge the moral support
and financial help provided by the ESSI Group at the European Commission with-
out which ALCAST would not have happened.
10.4 AMIGO 21222

Achievements of Software Maintenance Improvement Goals
The objective of AMIGO has been to improve the organisation of the SW Mainte-
nance process at ELIOP in a systematic way in order to:
• Improve its effectiveness
• Track the problems and corrective actions efficiently and according to quality
standards
• Introduce some defect prevention activities
• Provide related quality metrics
• Increase the level of satisfaction of the involved people.
This Process Improvement Experiment (PIE) has been carried out by ELIOP as
the single contractor, with the financial support of the European Commission,
within the framework of the European Systems and Software Initiative (ESSI).
ELIOP SA is a Spanish company with 100 employees, 30 of them directly in-
volved in SW engineering. Its main activity is the delivery of Industrial Control
Systems, including software for standard computers or embedded into microproc-
essor-based in-house manufactured equipment. Such systems require real time,
continuous operation, and often they are controlling critical industrial processes.
Software is a very important part of the added value of ELIOP products.
The work performed started with setting up the project organisation, performing a
general approach study, and selecting the organisational and process changes to be
experimented with in the project context. Appropriate metrics and tools were
identified and selected. A precise definition of the experiments was made and the
selected tools were installed and set up. In the second half of the project, after the
necessary training activities, the defined experiments were put into practice. The
final stages of the project included a review of the procedures and the evaluation of
the results of the experiments. Some of the improvements were selected to be
introduced as regular practices in the company and a plan was set up to do so.
The most significant lessons learnt from the project are the following:
• The efforts devoted to maintenance are distributed over many different tasks, and
the causes of software defects are also widely spread, so it is not feasible to
achieve outstanding improvements by acting only on very specific aspects of the
maintenance process.
• A positive Return on Investment has been demonstrated for most of the im-
provements tried.
• Object oriented code using the C++ language raises some important maintenance
problems. It is very convenient to introduce direct/reverse design tools early in
the life cycle, and to start using them first for direct design.
• The experiment has confirmed an initial negative attitude of the software
engineers towards maintenance work. Some improvements introduced
in the management of software defects have achieved outstanding acceptance
from the people involved.

ELIOP, S.A.

10.5 ARETES 24148

Application of Reliability Engineering in Testing During Software Localisation
The ARETES project aimed at the application of the Software Reliability Engi-
neering methodology into the testing performed during the software localisation
process. It is funded by the Commission of the European Communities (CEC) as
an Application Experiment under the ESSI programme: European System and
Software Initiative. The ESSI goal is to promote improvements in Software De-
velopment processes, in order to achieve greater efficiency, higher quality and
increased economy.
Archetypon introduced in ARETES the use of Software Reliability Engineering
techniques in order to evaluate their benefits during the testing of localised
products. Following this initiative, a traditional software testing baseline project
was selected and carried out at Archetypon. The new approach was applied to parts
of the project's life cycle and the results were recorded and compared to the results
of the traditional testing process. The use of Software Reliability Engineering in
Testing has shown important results such as:
• Increase in testing productivity
• Earlier detection of severe failures
• Reduction of customer reported problems
• Identification of the areas of the software that need increased testing effort
• Effective management of the testing resources and improvement of the result-
ing reliability of the application
• Prediction of the level of reliability of the software before the end of the
testing process.
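
At the heart of SRE is the operational profile: test cases are drawn in proportion to how often each operation occurs in real use, so testing effort concentrates where failures would most hurt the reliability the user perceives. A minimal sketch follows; the operations and probabilities are our invented assumptions for a localised desktop product, not ARETES data:

```python
import random

# Invented operational profile: operation -> estimated probability
# of occurrence in field use (probabilities sum to 1.0).
PROFILE = {
    "open_document":  0.40,
    "edit_text":      0.30,
    "save_document":  0.20,
    "print_document": 0.08,
    "change_locale":  0.02,
}

def draw_test_cases(profile, n, seed=0):
    """Select n operations to test, weighted by the operational profile."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return [rng.choices(ops, weights)[0] for _ in range(n)]

cases = draw_test_cases(PROFILE, 1000)
# Frequently used operations dominate the generated test load, so severe
# failures in common operations tend to be detected early.
print(cases.count("open_document"), "vs", cases.count("change_locale"))
```

Rarely used operations still receive some draws, so they are not ignored, only deprioritised in proportion to their estimated field usage.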

The key lessons learnt from this experiment can be summarised as follows:
• Development of an Operational Profile proved to be the most difficult and
time-consuming task.
• Special care should be given to the data collection, since this has a great
impact on the accuracy of the measurements and the results.
• Training of the people involved in SRE projects is very important for the suc-
cess of the experiment.
• SRE is applicable to large-scale projects, in order to absorb the overhead intro-
duced in the initial phases of the project.
The ARETES experiment provided enough quantitative data to make a comparison
of this approach with the traditional approach feasible. The evaluation of the new
approach's usability concluded that the use of Software Reliability techniques is
beneficial in testing a product and provides critical results. Nevertheless, the
application of such techniques is more effective in large-scale projects, due to the
overhead incurred in the early stages of their application.
All the members of the ARETES Project gratefully acknowledge the moral
support and financial help provided by the ESSI Group at the European Commis-
sion, without which ARETES would not have taken place.
ARCHETYPON SA

10.6 ASTEP 23860

Automated Software Test Environments for Process Automation in the Steel Industry
Business Motivation and Objectives
The objective of the experiment is to put into practice an efficient, state-of-the-art
test approach and environment, which is modular and thus scaleable to projects of
different size but of similar base functionality and with similar requirements re-
garding reliability, maintainability, and safety. The experiment shall prove
whether the application of software test methods and tools results in higher quality
software systems, higher efficiency of the testing process, and reduced efforts for
installing and maintaining the target process and production control systems in the
steel industry.

The Experiment
The test approach, consisting of a test concept and a supporting test and simulation
environment, shall cover all system test phases, from the factory test through to
the final acceptance test.
In particular, it is foreseen to provide several sub-projects within the baseline
project with tools supporting test specification, preparation, and execution at
system level, and especially to integrate the customer's engineers in the ex-
periment from the very beginning. In a second step, the test environment will be
extended by an efficient simulation system that allows the operational behaviour,
and the side effects due to changes in the plant configuration as well as in
production, to be determined in advance. The last step will be the introduction of
the test and simulation environment on site for an even better integration of the
end-user.

Expected Impact and Experience


The expected impacts are not only more efficiently tested applications, reduced
installation time and costs, reduced down time, and better maintainability and
training facilities for the end-user, but also, in general, higher quality software
systems.
ABF-Industrielle Automation GmbH
Wienerstrasse 131
4020 Linz
Austria

10.7 ASTERIX 23951

Automated Software Testing for Enhanced Reliability in Execution
Business Motivation and Objectives
The objective of the ASTERIX project is to determine how greater attention to
system-level testing can improve software product quality whilst reducing overall
costs as measured across the full software development lifecycle (including ex-
tended warranty periods).
The resources, measured both in terms of cost and elapsed time, which have to
be allocated under normal circumstances to system-level testing mean that there is
usually a strong temptation to economise in this area. We believe that this situa-
tion will improve significantly only if a high level of automation can be brought to
such testing.
ASTERIX will therefore provide the resources needed to establish whether
such an approach, in conjunction with strong higher management support, will
lead to real benefits on a major representative project.

The Experiment
The baseline project chosen for this experiment is the Envisat-1 Monitoring and
Control Facility (MCF) project. This 2.8 MECU development is being performed
to tight budget and schedule constraints, and can benefit directly from the applica-
tion of the techniques being proposed within ASTERIX.
The ASTERIX experiment will determine whether the additional effort sup-
plied in the system testing phase is repaid in terms of higher product quality at a
reduced overall price, compared with the measured quality on other similar pro-
jects performed by Anite Systems for major clients. This reflects the overall goal
to demonstrate that the approach proposed will result in real gains both for the
customer (through receiving a better quality product, allowing him to reduce op-
erational costs) and for Anite (through reducing the overall lifecycle costs).

Expected Impact and Experience


We see the results of ASTERIX as having wide potential application, and
we welcome the opportunity to contribute to an overall improvement in the ap-
proach of the European software industry in this area. We will therefore take care
to see that the ASTERIX results reach the widest possible European audience.
ANITE SYSTEMS Ltd
DAS House, 3rd floor
Quayside, Temple Back
Bristol BS1 6NH
UK

10.8 ATECON 10464

Application of an Integrated, Modular, Metric Based System and Software Test Concept
The ATECON project provides an important contribution to the systematic devel-
opment of software systems. It has defined a test approach which covers all phases
of testing and is cost effective, efficient, well founded and metric based. Special
attention was given to make the approach scalable, adaptable and tailorable to
projects with different reliability, availability, maintainability and safety require-
ments.
The approach has been applied and validated in seven real-world projects from
different application areas using heterogeneous hardware/software environments
and traditional programming languages like FORTRAN or C.
The project has shown that many practitioners underestimate the power of a
systematic test approach. Not only does the test effort become more predictable
and the software more stable; knowing more about testing techniques and starting
the testing activities earlier in the project can also lead to higher quality systems
and reduced effort.
The results are especially valuable for companies concerned with the profes-
sional development of software systems. For anybody interested in more details
selected results are available in a computer based test approach consulting and
information system (TACIS).
The test approach defined in the ATECON project will be exploited by all pro-
ject partners. It will be integrated into their respective software development stan-
dards and therefore used in future development projects within the organisations.
It is also planned to extend the test approach to cover object oriented system de-
velopment, with new challenges like dynamic linking and overloading.
This project was carried out in the framework of the Community Research Pro-
gramme with a financial contribution by the Commission of European Communi-
ties.
DLR
Oberpfaffenhofen
Germany

10.9 ATM 21823

Testing and Maintenance Activities Supported by Automated Tools
The ATM project represented a good opportunity to improve the testing process,
within a company context characterised by a software development process
already partially improved and formalised. The project focus was the formalisation
of new test procedures and standards, the definition of the testing environment
supporting the test activities, and the definition of consequent changes to the
company Quality System [7]. Moreover, the experiment aimed to provide all the
theoretical and practical knowledge about the methodological and technological
aspects confronted during the PIE execution.
In particular, the ATM project aimed to improve both the quality of the re-
leased software products and the efficiency of their maintenance, by means of the
definition of a testing methodology (standards, procedures, roles, skills) and the
use of automatic testing tools supporting the testing activities. Main project goals
were:
OM1: To introduce inside the company the skills and culture of software vali-
dation, and to set up a validation team composed of a restricted number of highly
skilled people.
OM2: To introduce testing procedures, together with standards, tools etc., into
the company Quality System.
OM3: To define and experiment with techniques and methods for software testing.
OM4: To introduce automated testing tools in order to speed up the implemen-
tation and the execution of tests and to assist technical people during the whole
test life cycle.
By means of the ATM project execution we have been able to experiment with and
to verify on a real project the methodological, technological and organisational
solutions adopted. In particular, the executed activities have permitted us:
• to assess the initial testing process and to define the improvement and meas-
urement plans
• to train the people involved into the experiment on the theoretical and practical
aspects of the software testing
• to define an appropriate test life cycle coherent with the company software life
cycle
• to formalise the test procedure and to define the responsibilities of the involved
roles in the test activities
• to define the new testing environment integrated into the company development
environment
• to experiment with the test procedure and standards, and the test documentation
management system, on a real project
• to determine the most effective organisation for testing activities
• to assess the final testing process in order to measure the achieved improve-
ment
• to verify the usefulness of measures to evaluate the final product quality
Finally, some key lessons have been learnt from the ATM project execution, as a
result of the difficulties and problems met during the experimentation. The main
key lessons learnt concern organisational, technological and business aspects. In
particular, we have been able to verify the importance of training people, the
need for strong support from top management, and the usefulness of trying out
organisational and technical solutions in advance before adopting them.
SVIMSERVICE
Via Massaua "Complesso il Faro", Puglia
70123 Bari
Italy

10.10 ATOS 21530

Application of an Integrated Modular, Model Based, Test Concept for Object Oriented Software Systems
Business Motivation and Objectives
The prime contractor, the Deutsche Forschungsanstalt für Luft- und Raumfahrt
e.V. (DLR), develops high technology software systems for aerospace applica-
tions, for ground support systems and for different research areas. This develop-
ment is increasingly being done using object oriented techniques. While support
for the early phases of the projects is adequate, the area of testing and integrating
object oriented systems is not sufficiently developed. Thus the main interest of the
partners is to extend their software engineering approach by introducing a systematic
test concept for object oriented systems. This project aims at developing and
putting into practice an efficient, state-of-the-art, well founded, and cost-effective test
approach for object oriented systems that covers all test phases (unit, integration,
system and acceptance test phases). The test approach must be scalable to projects
of different sizes and with different reliability, availability, maintainability and
safety requirements. It must be a pragmatic approach to be applied on real-world
projects right away. This concept is to extend the software engineering approach
from the partners (DLR and ConSol) and is a further step towards the ISO 9001
certification. The installation of this defined test process will enable the improve-
ment of the software quality produced by the partners. Therefore, a higher cus-
tomer satisfaction will be reached and the consortium will achieve a quicker reac-
tion to new requirements imposed by the market. This will enable us to reach a
better position in the market, to increase the efficiency and effectiveness and
therefore to reduce costs. Selected results and experiences gained within the ex-
periment will be available to all companies interested in a systematic test approach
for object oriented systems. Additionally, companies aiming at the ISO 9001 certi-
fication will be able to have an inside in the costs, efforts and experiences made by
the ATOS consortium and analyse the possibility of including this test approach in
their own companies.

The Experiment
Within the framework of the entire development life cycle, the focus of this ex-
periment is the definition and application of a systematic test approach for object
oriented systems and the enrichment of the knowledge on testing object oriented
systems of the baseline projects team members and their management. The test
approach generated and applied in this application experiment shall fulfil the fol-
lowing main requirements:
• the test approach shall be based on a modular test concept and on a supporting
test environment
• the concepts and the environment shall consider the interfaces to the different
life cycle phases
• it shall cover all test phases, from the module test through the integration
and system test phases to the acceptance test phase
• the integration of quality assurance activities shall enforce the correct applica-
tion of the test methods, procedures and tools defined within this approach and
result in an ongoing improvement of these concepts and environments
• CASE tools should be used to support the different test activities.
The main activities to be performed within this application experiment are:
• an assessment in the baseline projects to identify the existing test practices and to determine what knowledge of methods and techniques for testing object oriented systems is available in the project teams

• tailor a training program for the project team members
• select the methods and tools to be used in the different test phases
• define and apply concepts for the unit, integration, system and acceptance test
phases
• evaluate and disseminate the results
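The phased test concept listed above (unit, integration, system and acceptance tests executed in life-cycle order) can be sketched roughly as follows; the registry, phase names and sample tests are illustrative assumptions, not part of the ATOS environment:

```python
# Minimal sketch of a modular, phase-ordered test concept
# (hypothetical; names are illustrative, not from the ATOS environment).

PHASES = ["unit", "integration", "system", "acceptance"]

class TestRegistry:
    """Collects test cases per phase and runs them in life-cycle order."""
    def __init__(self):
        self.cases = {phase: [] for phase in PHASES}

    def register(self, phase):
        def decorator(func):
            self.cases[phase].append(func)
            return func
        return decorator

    def run(self):
        results = {}
        for phase in PHASES:          # module test first, acceptance test last
            passed = failed = 0
            for case in self.cases[phase]:
                try:
                    case()
                    passed += 1
                except AssertionError:
                    failed += 1
            results[phase] = (passed, failed)
        return results

registry = TestRegistry()

@registry.register("unit")
def test_addition():
    assert 1 + 1 == 2

@registry.register("integration")
def test_components_cooperate():
    assert "a" + "b" == "ab"

if __name__ == "__main__":
    print(registry.run())
```

Ordering the phases explicitly mirrors the requirement that module testing precedes integration, system and acceptance testing.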

Expected Impact and Experience


The impact and anticipated benefits can be summarised as follows:
• extend the DLR and ConSol company software engineering standard by incor-
porating a systematic and well founded test approach for object oriented sys-
tems
• reach higher customer satisfaction and improve the position in the market through better product quality of the software developed in-house or by subcontractors, making the use of this systematic test concept mandatory
• take a further step towards the ISO 9001 certification of the entire software development process
• introduce a well defined, integrated, modular test concept and environment for
object oriented systems supported by the necessary process and project man-
agement and quality assurance activities
• increase the knowledge and qualification of the test engineers and project man-
agers in well founded test approaches for each test phase and gain hands-on ex-
perience with state-of-the-art test practices, i. e. methods, procedures and tools
supported by external training and consulting

Deutsche Forschungsanstalt für
Luft- und Raumfahrt e.V. (DLR)
Oberpfaffenhofen
D-82234 Weßling
Germany

10.11 AUTOMA 10564

Automated Corrective and Evolutionary Maintenance for Database Intensive Application Products
The AUTOMA experiment concerned the improvement of maintenance activities for complex data management applications, through the formalisation and automation of:
• requirements management
228 10 Summaries of PIE Reports

• configuration management
• regression testing
The project has selected the appropriate tools and technologies, and has used
them to build two complementary experiment scenarios, based on the maintenance
activities of two project groups (one for each partner).
The project has been fully successful in the experimentation with configuration management and requirements management.
In the first case, the whole maintenance line of a complex system has been put
under fully automated control, developing (on top of the selected tool) a CM envi-
ronment and related procedures capable of ensuring full control while avoiding
any extra effort for the maintenance teams (actually contributing to improving the overall efficiency).
On the second aspect, the specifications of another system (in continuous evo-
lution due to changing and increasing user requirements) have been formalised
and are now under tool-supported control.
The results obtained on testing showed some problems. At the beginning, more resistance was experienced on these aspects by the development teams, despite the reduced involvement requested of them (the preparation of test procedures was performed by dedicated resources), due to the difficulty of showing the advantages of the approach.
The preparation phase was nevertheless successful, and allowed interesting lessons to be derived on how to extract and formalise the functional knowledge required to prepare good, effective functional tests.
Once an initial set of automated test procedures had been prepared, its exploitation suffered from the high level of change that the two systems are still experiencing from one release to the next; this has so far prevented a real deployment of automated testing on one of the two systems, while a partial automated testing approach is currently operational for the other.
Despite these difficulties, however, the need to formalise test procedures has injected a radical organisational change into the two maintenance teams, which now handle testing-related activities in a much better way. This is demonstrated by the
comparison of the process assessments conducted before and after the experiment.
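As a rough illustration of the automated regression testing that the experiment formalised, the following sketch compares a program's current output against a stored baseline result; the function names and the baseline content are hypothetical, not taken from the AUTOMA tools:

```python
# Sketch of an automated regression check: compare the current output of
# a maintained function against a recorded "golden" baseline.
# All names and data here are illustrative assumptions.

import json

def run_report(data):
    """Stand-in for the maintained application function under test."""
    return {"total": sum(data), "count": len(data)}

def regression_check(current, baseline_json):
    """Return the list of keys whose values drifted from the baseline."""
    baseline = json.loads(baseline_json)
    return [k for k in baseline if baseline[k] != current.get(k)]

# Baseline recorded from a known-good release.
baseline_json = json.dumps({"total": 6, "count": 3})

drifted = regression_check(run_report([1, 2, 3]), baseline_json)
print("regression OK" if not drifted else f"drifted: {drifted}")
```

Storing baselines alongside the configuration-managed sources is what lets such checks run unattended after each maintenance release.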
DATAMAT INGEGNERIA DEI SISTEMI SPA

10.12 AUTOQUAL 24143

Automation of Quality System


This document reports on the achievements of the ESSI PIE Autoqual in its final state, 18 months after the start of the project.

Autoqual deals with automating the quality system of SAET s.p.a., an Italian
SME, whose business is in electrical engineering, but with a growing part of soft-
ware activities.
To gain control of software production, SAET decided in 1995 to set up an ISO 9001 compliant quality system for its software department. To date, the quality system is paper oriented and has a number of drawbacks: time wasted by both the staff and the quality manager on clerical tasks, and a negative influence on the attitude of staff towards the quality system.
Autoqual aims at automating the clerical tasks of the quality system (document
search and retrieval, communication of documents, access to the quality manual,
access to quality sheets and logs), by exploiting as much as possible the context of
the organisation: a PC for each staff member, PCs connected by a LAN, and MS Office tools on each PC.
The PIE is relevant for any organisation having a quality system supporting the
production of software embedded in larger systems and trying to automate it.
While the experience in automation is generic, the experience gained in supporting
tools will be specific to tools running on networks of PCs.
The main lessons learnt from the PIE are:
• Automation of the quality system of a project-oriented company requires a flexible and highly customisable tool.
• The customisation of the tool requires significant resources in the design, implementation and deployment phases. The workflow analysis effort should not be underestimated.
• With the prerequisites above, an automated quality system, well adapted to the
needs and structure of a company, is a powerful tool. In our case it allows the
project managers and the technical director to exploit quality records to have a
real-time overview of the state of projects.
• Before the PIE, with a paper quality system, quality records were not easily usable, and actually not used except by the quality function.
• While the costs of setting up such an automated system are easy to compute, the benefits are difficult to quantify, but we believe they make the investment worthwhile.
• The automation has some drawbacks too: the automated system needs skilled technical roles to be maintained. If such profiles are not available inside the company, any modification becomes slow and costly.
The Autoqual project is funded by the European Community, under the Esprit
ESSI initiative.
SAET S.P.A.
Viale dell'Industria 14
35030 Rubano (PD)
Italy

10.13 AVAL 21362

Improvement of Validation and Verification Practices


The AVAL project aimed to improve the Verification and Validation techniques used in business and object-oriented software, through the introduction of the ami® method in this field. The improvement targets not only the tools used but, even more, the process in place. The short-term goal is firm control of the quality of the releases delivered to the customer, for example in terms of the residual number of problems in the operational phase.
To perform the experiment, we selected an internal project developing software dedicated to a customer, with O-O and client-server technologies, to control and manage the status of equipment and the teams who support it. Such an approach is also easily applicable to any kind of software, as the process and approach chosen are not specific at all. Dissemination of the practical results achieved by the experiment, and generalisation of the process, internally as well as externally thanks to the competence acquired, are mid-term goals of this improvement project.
At the end of the project, we have identified interesting elements to disseminate:
• A small improvement structure is efficient in terms of overhead, and the know-how gained is transferable to other customers or projects; the structure was highly focused on practitioners and the involvement of the baseline project team, and less on outside support; even in a small team or a small company, process improvement is achievable, especially when centred on practitioners' needs
• The V&V process definition has been improved and is now available to our other internal projects and external customers; defining such a process has shown us the value of complementing our methodological document (which is considered a company asset but is quite a high-level document) with some deeper technical documents containing real examples. Improving V&V techniques is also of great value as it targets starting projects, projects in later phases, as well as projects in a maintenance phase. From a more technical viewpoint, this process is based on defining and implementing the Verification and Validation process early in the life cycle, targeting activities to cover risky areas, and defining traceable test requirements to be used during later stages
• In quantitative terms, a 30% improvement in efficiency has been noted and 87% of defects are well tracked, showing a real, deep involvement of the team. 50% of defects were discovered in the early phases of the project, compared to 70% discovered on previous releases by the customer. Such results were highlighted by the application of the ami® method, which was a powerful communication and analysis instrument; we highly recommend this approach as practical guidance for an improvement project, through defined steps and by

the added value of defining quantitative indicators to follow the improvement project and its results
• Improving the V&V process gave us a key advantage in our certification process: we were able to demonstrate knowledge and mastery of these elements, which are requirements for the certification of a quality system against ISO 9001; therefore, improvement projects should be linked with a certification effort or quality programme, where applicable, to get the best business benefits on these two axes
• The tool introduced has proved efficient, even if some more advanced functions would be useful, and its usage is a key success in terms of repeatability, coverage and visibility of the testing process; in a new project for a different customer, the same tool will be used, as we were able to show the practical results achieved. Testing tools are of great help in the sense that they free resources from running test cases for defining those test cases or the test strategy. In most cases, a good ratio of test automation is reachable (100% of our regression tests); however, testing tools are only part of the solution, and shall be considered after testing strategy definition and early V&V activities planning
• Process improvement requires some resources and a pilot (or baseline) project, first to demonstrate interest and second to produce real examples that can be used by followers; we recommend that practitioners define their first process in a light manner, as it will probably change with real experience. More effort should be put on measures and application, so that in the later phases high-value documents can be shown for the internal dissemination effort. Most of the time, project leaders and practitioners can be more easily convinced with real documents than with solid but empty templates.
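As a toy illustration of the kind of quantitative indicators mentioned above (percentage of defects tracked, percentage discovered in early phases), the following sketch computes them from hypothetical defect records; it is only the arithmetic behind such figures, not the ami® method itself:

```python
# Illustrative computation of simple defect indicators from hypothetical
# defect records (invented data, not AVAL's actual measurements).

defects = [
    {"id": 1, "phase_found": "design", "tracked": True},
    {"id": 2, "phase_found": "coding", "tracked": True},
    {"id": 3, "phase_found": "operation", "tracked": False},
    {"id": 4, "phase_found": "design", "tracked": True},
]

EARLY_PHASES = {"requirements", "design", "coding"}

def indicator_early_detection(defects):
    """Percentage of defects discovered in early life-cycle phases."""
    early = sum(1 for d in defects if d["phase_found"] in EARLY_PHASES)
    return 100.0 * early / len(defects)

def indicator_tracking(defects):
    """Percentage of defects properly recorded in the defect log."""
    tracked = sum(1 for d in defects if d["tracked"])
    return 100.0 * tracked / len(defects)

print(f"early detection: {indicator_early_detection(defects):.0f}%")
print(f"tracking ratio:  {indicator_tracking(defects):.0f}%")
```

Defining such indicators up front is what allows the improvement project to be followed with numbers rather than impressions.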
This improvement action can therefore be qualified as positive, offering us the possibility to show other project actors how a well-defined V&V process can be put in place. Such actors (project leaders, practitioners) were most concerned and convinced by the fact that they can use (directly or indirectly) real work products (those developed by the baseline project), and we therefore consider piloting through a project a real advantage in introducing techniques, tools and processes, even if this requires some time.
OBJECTIF TECHNOLOGIE
28 Villa Baudran
F-94742 ARCUEIL Cedex
FRANCE

10.14 AVE 21682

Acceptance Testing and Verification Engineering


Business Motivation and Objectives
The City of Kavala as well as most peer organisations in Greece, plan quite heavy
IT related investments, ranging from MIS systems replacement (implemented in
obsolete and expensive technology) to technical systems of great complexity (e.g.
GIS, CAD etc.). In this context the suggested PIE (acceptance testing and verifica-
tion) is of strategic importance for the Municipality, in order to ensure correct and
complete systems supporting mission critical operations. The main motivation
standing behind this PIE stems from the observation that a high percentage of IT related projects, following public Calls for Tenders, resulted in a low level of user satisfaction, as they suffered from a lack of specification clarity, user involvement only at a very late stage, and poor definition of the procedures for verifying functional and non-functional requirements.

The Experiment
The objective of this PIE is to integrate into the development process of
the Municipality of Kavala, robust methods & tools for conducting the Acceptance
Testing and Verification & Validation phases of distributed, client/server, transac-
tion oriented applications, connected with large databases of indexed images.
These methods & supporting tools, shall be employed both for in-house develop-
ment as well as in subcontractors' management, and shall cover the functional and non-functional requirements of such systems.
The AVE project shall be an experiment in specifying and conducting acceptance tests and verification procedures for systems delivered by third parties following public Calls for Tenders. The experiment shall cover all related phases, from the Call for Tenders write-up ("setting the rules of the game") down to the actual acceptance testing and systems verification. The experiment shall be conducted according to the guidelines defined by the Information Strategy Plan & Process Assessment study.
The baseline project (Human Resources Management) is considered as a most
critical application for the Prime User and it encompasses a variety of technolo-
gies and characteristics (large database content in tabular and image forms, client/server architecture, distributed nature, GUIs, smart card integration etc.). Issues to be addressed include process sequences, methodology support, the tackling of functional and non-functional issues, data correctness, completeness and standardisation, as well as organisational and contracting issues.

Expected Impact and Experience


The City of Kavala plans to invest very heavily over the next four years in IT.
Protection of this investment can only be achieved by ensuring correct systems
delivery. Given the past experience of user dissatisfaction, skyrocketing maintenance costs and frequent disruption of operations, it became quite clear that a robust, well-defined process, adequately supported by mature methodologies and state-of-the-art software engineering tools, is the key issue. As the bulk of the software to be acquired will be contracted to third parties, acceptance testing and verification is considered to be of principal importance. It should also be noted that the experience gained and lessons learnt are also critical for a very large number of user organisations sharing similar needs and concerns. Therefore, it is believed that a successful project and a well-planned dissemination component will result in a broad and deep impact on a whole class of IT users.
Municipality of Kavala
10, Cyprou str
Kavala 654 03, GREECE

10.15 BEPTAM 21284

Institutionalisation of best practices for test automation and management
This is a final report of ESSI project 21284 "Institutionalisation of best practices
for test automation and management", BEPTAM. The project was supported from planning to end by the Esprit/ESSI programme of the European Commission, both financially and with the expertise of the ESSI office personnel. This support was a major boost for the thoroughness of the project, adding considerably, we believe, to the transferability of the results. For this support and guidance we are grateful.
The expected outcome of the project was originally - and still is - to under-
stand the effects and demands of utilising test automation technology. During the first half of the project we concentrated on learning the issues surrounding test automation: the testing process in general, testing tool support, testing best practice, large system testing and getting organised for test automation.
After evaluating three tools we selected the most suitable one for our needs: QualityWorks from Segue Software, Inc. (MA, USA). We have carried out a piloting project with that technology, and have found that test automation is worth the effort. We have also observed that the available test automation technology seems to be mature enough for production use. The feasible scope of applicability of test automation depends heavily not only on the technical testing environment but also on the capability of the testing organisation and its testing methods.
Nokia Telecommunications Oy
P.O. Box 759
FIN-33101 Tampere
Finland

10.16 CALM 21826

Construction and Application of CALM Library of Reusable Software Components
This report has been produced to communicate the findings of the Process Improvement Experiment (PIE) Construction and Application of CALM Library of Reusable Software Components. The PIE lasted for 26 months, from January 1996 to February 1998, and took over 1500 man-days. The PIE was conducted with the support of the European Commission, which supported over half the effort under the European Systems and Software Initiative (ESSI). DGIII Industry has provided financial support, encouragement and advice.
Without a doubt, the PIE has been a success for Chase. The software develop-
ment process has altered from its original, relatively theoretical and imported
approach to a more controlled and manageable one adapted to our needs.
The benefits have been realised and recognised to the extent that Chase will be continuing this type of activity. In particular, Chase will extend the reuse within the testing environment and continue improvement in the earlier phases of the process.
Chase is committed to continuing process improvements, and is currently set-
ting up an experiment to review the testing and delivery processes. Initial results
from these are expected in the third quarter of 1998, with reusable assets being
available early in the second quarter.
The benefits realised may be summarised in two major areas:
• Chase has changed its culture; it is now starting on the road of continuous process improvement.
• The software development approach has been defined to a better standard and has altered to consider reuse throughout the process.
Of prime importance in terms of the benefit accrued was that reuse should be a
consideration throughout the development process. An important consequence of
this was that reuse should be considered in a much wider context than the usual,
narrow, technological viewpoint of reuse of computer code. This wider-ranging consideration has two main thrusts: the first is how the process can best exploit reuse of computer code throughout; the second is how the deliverables from other parts of the development process can be reused. The latter includes the production of reusable test packs for developed business functions, an often overlooked area which can reap many rewards in terms of reduced costs and error rates in the developed systems.
The Chase development approach has been improved and this will continue. As
part of the continuing drive to improve, Chase is considering the investigation of
process maturity measures to track where improvements should be targeted. It is
very unlikely that a full CMM model would be acceptable or appropriate, but a
10.17 CITRATE 23671 235

pragmatic subset may be identified in order to monitor and track future improve-
ments.
Chase Computer Services Ltd
83-85 Mansell Street
London E1 8AN
United Kingdom

10.17 CITRATE 23671

Competitiveness Increase through Automated Testing


Business Motivation and Objectives
Software products require constant improvement, due to rapidly changing user needs. Successive releases of the products are delivered to the clients, requiring a great amount of time and effort for testing. The costs associated with this activity delay the break-even point of each individual product.
This is a common business concern and the main driver of this experiment,
whose objective is to reduce total production effort, to reduce errors and to reduce
maintenance resources.

The Experiment
The CITRATE PIE will introduce methods and tools for automated testing, in the
software development process of NOVABASE, in a stepwise and continuous way.
The total duration of the experiment will be 18 months, starting in March, 1997.
The baseline project associated with the CITRATE PIE will be the develop-
ment and upgrade of the CSI software product line, in the areas of materials man-
agement and act management, which constitutes an integrated offer for health care
providers.
The testing effort, the number of errors and the maintenance costs will be compared throughout the experiment against the company's current metrics.
NOVABASE employs 80 people, 10 of them involved in the baseline project.

Expected Impact and Experience


Achieving the above objectives represents direct cost savings and an increase in the quality of the products. Higher margins and increased competitiveness will be obtained.
NOVABASE expects the following impact:
• To limit the effort needed to test a software product to 10% of the global effort
in software production.

• To reduce by 50% the total number of errors found after product release, lead-
ing to a higher quality level and a considerable reduction of risk in the imple-
mentation phase.
• To reduce by 50% the effort needed for client support, transferring the available
resources to other productive areas.

NOVABASE
Sistemas de Informação e Bases de Dados S.A.
Av. Antonio Jose de Almeida SF, 6°
1000 Lisboa
Portugal

10.18 CLEANAP 21465

Cleanroom Approach Experiment


Business Motivation and Objectives
In the aerospace market, mastering the development effort with respect to the desired reliability level allows a firm commitment toward the final customers' needs.
The project is aimed at achieving a process improvement on software develop-
ment in safety-critical domains with special regard to better qualification of the
software reliability of the final product through:
• better efficiency and rigorous statistical measurement of the software testing process;
• development of intrinsically low-defect software (Cleanroom).
Measurable objectives shall be identified among process metrics related to time
and efforts spent to detect single defects as well as time and efforts spent to de-
sign, execute and report tests. Also product metrics related to code physical prop-
erties like resulting complexity and size shall be considered and correlated to
failure severity classes. With reference to the current process, some yardsticks are considered.

The Experiment
Context of the Experiment.
The experiment shall be performed in the context of the European Photon Imaging Camera (EPIC) Project, to be flown on the next XMM/ESA spacecraft. The
software for OnBoard Data Handling units is considered the Application Experi-
ment.
Description of the Baseline project.

The software for the OnBoard Data Handling shall assure the telecommand and
telemetry link between the payload low-level controller and the spacecraft central
data handling. As the payload is intended for an operational life of at least 2 years
with extension up to 10 years, the requirement for high reliability is very impor-
tant for the software as well.
In compliance with PSS-05, the software process is governed by a Software Project Management Plan (SPMP), referring to a Software Quality Assurance Plan (SQAP), both approved by ESA. Accordingly, a waterfall process is set up through four main phases: Software Requirements phase (SR), Architectural Design phase (AD), Detailed Design and Production phase (DD), and Transfer phase (TR). Incremental deliveries are not explicitly stated, though several issues of EM, EQM and FM are planned. Different levels of testing are planned, but not by means of a separate validation team, although reviews are managed by QA personnel separated from the development team.
Process Improvement Experiment Steps.
Introduction of a Cleanroom process implies many impacts on traditional software development areas such as the engineering process, quality assurance methods and configuration management, which shall be set up as a rigorous, but not heavyweight, process tool.
Besides an initial assessment and a final evaluation of results, two main steps shall be performed by the experiment:
• Introduction of a Cleanroom process in the software development, where the software life cycle shall be designed for incremental development and accompanied by a suitable quality assurance review plan, able to support the increasing complexity of the released software. Metrics shall be introduced to monitor progress.
• Cleanroom experimentation: software is incrementally developed starting from the most critical components, such as the kernel or operating system, in order to achieve early control of reliability trends and a monitored reliability growth from the software releases for EQM up to the last release for the Flight Model (FM).

Expected Impact and Experience


At the end of the PIE, results shall be matched with the traditional software process by means of quantitative metrics on both product and process. Impacts are expected in the following process areas:
• Software testing: reduction of 25% with respect to the development process; testing coverage measured and focused on the most critical modules;
• Code validation: MTTF equal to the mission extent; statistical 95% confidence level on the software correctness;
• Product and process quality: usual monitoring of process and product metrics; variance on committed delivery time: ± 1 month;
• People Issues and future plans: at the end of the Experiment, the software de-
velopment process shall be provided with organisation-wide quality procedures
for a cleanroom development able to target the required reliability level of the
238 10 Summaries of PIE Reports

final product with reference to the level of safety of each project. Project man-
agers shall be able to use data coming from other similar projects to make more
accurate predictions on resulting reliability level and needed resources.
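The code validation bullet above refers to MTTF and statistical confidence figures; as a rough sketch of the underlying arithmetic (invented failure data and a simple exponential model, not LABEN's actual estimation procedure):

```python
# Toy estimate of MTTF from observed interfailure times, and the
# probability of surviving a mission of a given length under an
# exponential reliability model. The failure data are invented.

import math

def mttf(interfailure_hours):
    """Maximum-likelihood MTTF estimate under an exponential model."""
    return sum(interfailure_hours) / len(interfailure_hours)

def reliability(t_hours, mttf_hours):
    """P(no failure in t) = exp(-t / MTTF) for an exponential model."""
    return math.exp(-t_hours / mttf_hours)

observed = [1200.0, 900.0, 1500.0]   # hours between successive failures
m = mttf(observed)
print(f"MTTF estimate: {m:.0f} h")
print(f"R(100 h) = {reliability(100.0, m):.3f}")
```

In a real Cleanroom setting such point estimates would be accompanied by confidence bounds derived from the statistical usage testing model, which this sketch omits.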

LABEN S.p.A.
S.S. Padana Superiore, 290
20090 VIMODRONE (MI)
Italy

10.19 CLISERT 24206

Best Practice for Client / Server Testing


Business Motivation and Objectives
Software testing is very labour intensive across the industry. As our product grows
in functionality it also increases in complexity. Ideally the product will be flexible
enough to run on different platforms. Credo needs to keep up with the fast chang-
ing technologies of today. We need to organise our testing processes to allow automated testing, easing the costs associated with regression testing, which is the main objective.
The cost of testing is a concern shared across the whole software industry.

The Experiment
The experiment will involve benchmarking manual tests in old projects. The process of testing will be defined and applied to baseline projects. This includes the use of metrics, the design of new test specifications, and the criteria for accepting software for testing. Test automation is a key factor in this experiment.
There are 66 staff in Credo Group; 27 in actual development.

Expected Impact and Experience


Reduced test time for major releases, improved software quality and customer
responsiveness, allowing market expansion.
A complementary goal is to make testing more interesting for those involved,
promoting it as an attractive, alternate career path.
CREDO GROUP Ltd
Longphort House
Earlsfort Centre
Lr. Leeson Street
Dublin 2
Ireland

10.20 CONFITEST 24362

Creating a Solid Configuration- and Test-Management Infrastructure to Improve the Team Development of Critical Software Systems
The experiment could only be carried out with the financial support of the Com-
mission, in the specific programme for research and technological development in
the field of information technologies.
TeSSA Software NV is a developer of critical software systems for different
markets. The baseline project is a typical TeSSA Software NV product, situated in
the market of paypoints.
The experiment, situated in the domain of configuration- and test-management,
has contributed to the aim of being a software production house that delivers qual-
ity systems in time.

Project Goals
By implementing software configuration management, the project manager can optimise the development process; in concrete terms:
• Cost reduction (10-15%)
• Elimination of errors in an early phase of the process (1 instead of 10)
• Quality improvement of delivered programmes
• Reliability increase of installed programmes
• And last but not least, acceleration of the final product delivery (about 10%)
Reaching these goals indirectly results in a better work-atmosphere for pro-
grammers, analysts, project managers and management.
This experiment will also be part of the efforts TeSSA Software NV is making to produce a quality manual and obtain ISO 9000 certification (especially ISO 12207).

Work done
A quality manager was appointed and an internal baseline report was written to situate problems and costs. The global IT company strategy was defined, and the specific requirements of this PIE were defined precisely to fit into this strategy. In the course of this PIE we had to change the global plan a few times. Looking for other existing models, we found SPIRE (ESSI Project 21419), promoting CMM, and BootCheck, two very interesting projects giving a wider frame for the global plan.
The strategic choice between the different tools is part of this PIE and the choice
has been made:
• Version control system and configuration management: PVCS
• Test tool: SQA TeamTest
240 10 Summaries of PIE Reports

One employee was trained in PVCS, another in SQA TeamTest. Both products are installed, we received consultancy on both products, and a global session on test methods was given to everyone in the company. This was an important session to convince everyone of the strategic choices.
In both domains the first procedures were implemented.

Results
At the end of the experiment, every employee agrees that the quality and reliability of the software development process have improved significantly. First figures give a global improvement of 5%. This is less than expected (7 to 10%), but we believe that the positive influence on productivity and reliability will become more and more visible in the next years.
The confidence in this experiment certainly helps to get a better working at-
mosphere.
The responses of our customers prove their confidence in the strategy of our company: we are working hard on the improvement of our internal processes, and they see the first results of the new working methods.

Future Actions
Now that the procedures are consolidated and standardised to support the development cycle internally on the same LAN, the next step will be to extend them to support external employees as well.
With the help of our internal organisation with Lotus Notes Applications, the
proceedings and the procedures are nowadays continuously internally dissemi-
nated.
At this moment we're still looking for opportunities to disseminate our knowl-
edge externally.
TeSSA NV
Clara Snellingsstraat 29
2100 Deurne
Belgium

10.21 DATM-RV 21265

Determination of Appropriate Tools and Methodology as Applied to a Combined RAD-"V" Life Cycle
This report is the summary of the findings and results from ESSI experiment
21265 DATM-RV, Determination of Appropriate Tools and Methodology for a
combined RAD-"V" life cycle. The business motivation behind the project was to
enable Fame Computers Ltd to meet the increasing demands of our customers to
deliver products to market faster, at reduced cost and with ever increasing quality.
An assessment was made of our ISO 9001 certified development processes, via our
internal audit programme, to understand where key processes or tools required
improvement in order to meet these objectives of efficiency and effectiveness.
Whilst compliant with international quality standards there is always potential for
improvement, and the following common process characteristics were assessed as
being candidates for a series of mini improvement experiments to be performed
through the funding of the ESSI programme.
• Projects employed only a standard "V" life cycle and made no use of iterative
or rapid development life cycles and supporting tools, where potentially these
could efficiently reduce cycle time or improve the quality of the requirements
capture process.
• CASE tools were used only to a limited extent to support systems analysis. This
was restricted to classical database entity-relationship analysis, with little potential
for efficient forward engineering into subsequent project phases.
• Limited use was made of Object Oriented analysis and design methods or tools
to support existing Object Oriented implementation using conventional 3GL (C++)
environments, compromising the quality and continuity of the analysis, design and
implementation life cycle.
• Unit testing was performed by individual programmers but was not formally
specified to ensure rigour or analysed for effectiveness with coverage tools, thus
diluting the effect of a valuable early stage of verification and validation.
• System testing was a textual scripting process that was manually executed.
With any test cycle repetition this process would become increasingly inefficient;
therefore, by limiting repetition to save time, the potential effectiveness of this
process for finding system errors was not sufficiently exploited.
• The Quality Management System was manually implemented, with a manual
document control system that reduced the potential effectiveness of the quality
practices and the reuse of existing documentation because of inadequate and
inefficient access to documentary and intellectual assets within the business.
By applying a range of mini experiments to improve these processes it was in-
tended that the business goals of productivity and quality would be more readily
achieved.
The results of the experiments are as follows:
• Projects can now employ a RAD life cycle with 4GL tools providing up to
709% productivity gains with an improved requirements capture and mainte-
nance process.
• CASE tools are now available to support object oriented analysis, providing
productivity gains of 85% over previous methods, with improvement in the quality
of analysis due to formal analysis methods.

• Object Oriented analysis and design methods that show promise of high quality
reusable components developed to better meet requirements by the use of an it-
erative life cycle.
• Rigorous Unit testing that has improved the process to assure the quality of
receipt of third party developed software.
• System testing techniques that have provided a 600% cost reduction by detect-
ing errors earlier in the development life cycle, with the potential for cost re-
duction and increased effectiveness by automation of test repetition.
• An on-line Quality Management System that has reduced asset management
costs by 68% and increased the availability of quality procedures and reusable
documentary assets across the company. A 0.4% increase in productivity per
employee through better practice or document reuse would see the system cost
paid back within one year.
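The payback claim in the last bullet can be sanity-checked with a small back-of-the-envelope calculation. The figures below (head count, loaded cost per employee, system cost) are invented for illustration; only the 0.4% productivity gain comes from the report.

```python
# Hypothetical payback sketch for an on-line QMS. All monetary figures are
# assumptions, not taken from the Fame Computers report.
employees = 100
annual_cost_per_employee = 40_000.0   # assumed loaded cost, any currency
productivity_gain = 0.004             # 0.4% per employee (from the report)
system_cost = 15_000.0                # assumed one-off system cost

# Value recovered per year across the whole workforce.
annual_saving = employees * annual_cost_per_employee * productivity_gain
payback_years = system_cost / annual_saving
print(f"annual saving: {annual_saving:.0f}, payback: {payback_years:.2f} years")
```

Under these assumed figures the saving of 16,000 per year recovers the 15,000 system cost in under a year, which is consistent with the report's claim.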
From these results the ESSI experiment has been a success and has contributed
to achieving the overall business objectives of improved productivity and quality
of our processes and products.
Fame Computers Ltd
Fame House,
Ashted Lock
Aston Science Park,
Birmingham,
UK

10.22 DOCTES 21306

Document management and test procedure improvement


The objective of the DOCTES process improvement experiment was to improve
the competitive position of TEKNIKER in its market. When the project began, a
new objective, namely ISO 9001 certification, emerged for the company, and it
has turned out that the experience acquired by the company during the experiment
has been very useful in fulfilling the requirements of ISO 9001 certification.
The DOCTES project "DOCument management and TESt procedure improvement"
is our approach to the use of systematic procedures in the development of
software products. When analysing the software development process before the
project started, these two areas were identified as essential to the final quality of
our products. We thought that improvements in these two areas would produce
important progress in the overall quality of the process and, therefore, an
improvement in the final quality of our software products.

Within the area of documentation, we have introduced templates and procedures
to plan, develop, check and use the different documents generated during the
development of a software product. Practices such as configuration management
and workgroup technologies have been taken into account.
Within the area of testing, we have made testing a compulsory requirement
from the beginning of the project, with examples and procedures to perform tests
and collect results.
As a result of the experiment, an Intranet service has been created, and the pro-
cedures, templates and examples developed during the experiment have been
published on it. This solution has proved to be interesting and in the near future
other services related to software development, such as experiences with software
tools or new technologies and descriptions of reusable modules, will be added.
We have learned during the experiment that within an improvement process the
achievement of long-term advantages is as important as, and will partly depend
on, the achievement of short-term ones.
The Experiment was carried out within the framework of the Community Re-
search Programme with a financial contribution by the Commission of European
Communities.
TEKNIKER
Avda. Otaola, 20
Eibar (Gipuzkoa)
Spain

10.23 EMINTP 21302

Enhanced Module Integration Test Practice


One of the cost drivers in software design is the testing effort, which can be
estimated to consume 30%-50% of the overall software development effort and
budget. The experiment described in this report demonstrates a significant potential
for savings in time and cost by means of test automation and the introduction of a
new test harness structure that allows tests to run in parallel.
An obvious advantage of parallel testing over the sequential approach is a
significant reduction in time scale, achieved by tests performed in parallel by
multiple engineers. To enable such a parallel approach the technical environment
needs to be modified.
Therefore the experiment Enhanced Module Integration Test Practice (EMINTP)
was conducted with the following highlights:
• define a new method for module integration testing
• prepare the current development environment for the experiment
• perform the alternative method with program samples

• compare, analyse and evaluate the alternative method with the previous ap-
proach
• develop common guidelines for future projects
• disseminate experience
For this experiment, 17 functions of already developed air data computer
software were taken, in order to reflect the baseline for this experiment. For details,
please refer to Annex I.
The experiment started at the beginning of January '96, had a duration of 12
months and consisted of 14 tasks. It was run under the European Software
and Systems Initiative (ESSI) with funding from the European Commission.
Before the experiment the following objectives had been defined:
• Automation of module integration test execution
• Efficient usage of test resources/equipment
• Independence from target hardware
• Reproducibility of each individual test
• Use of the experience in other/future projects

After completion and evaluation of the results, the project accomplishments can be
summarised as follows:
• module integration tests on a host computer system available for multiple users.
• integration tests without any target hardware environment.
• automated test run controlled by a test executive
• automated test protocol generation
• automated test result evaluation
• automated regression testing
• parallel testing by multiple engineers possible
• overall cost reduction for the complete S/W development process achieved by
this test phase
After analysing the results of the module integration tests with the new method, we
can conclude that a reduction of integration test time by up to 35% is feasible. The
envisaged goal was met.
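The time saving from running tests in parallel can be sketched with a simple Amdahl-style estimate: part of the test work remains serial, the rest is split across engineers. The fractions and head counts below are assumptions chosen for illustration; they are not figures from the EMINTP experiment.

```python
# Hypothetical sketch: elapsed time when a fraction of the integration test
# work is parallelised across multiple engineers (Amdahl-style estimate).
def parallel_test_time(total_hours, parallel_fraction, engineers):
    """Elapsed time when `parallel_fraction` of the work is split across engineers."""
    serial = total_hours * (1.0 - parallel_fraction)
    parallel = total_hours * parallel_fraction / engineers
    return serial + parallel

baseline = 100.0   # assumed sequential test effort, in hours
elapsed = parallel_test_time(baseline, parallel_fraction=0.5, engineers=3)
reduction = 1.0 - elapsed / baseline
print(f"elapsed: {elapsed:.0f} h, reduction: {reduction:.0%}")
```

With half of the work parallelisable across three engineers, the elapsed time drops by roughly a third, which is in the same region as the 35% reduction reported above.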
Nord-Micro
Victor-Slotosch-Strasse 20
D-60388 FRANKFURT
Germany

10.24 ENG-MEAS 21162

Introduction of Effective Metrics for Software Production in a Custom Software Development Environment
Measurement is an integral part of total quality management and process im-
provement strategies. We measure to understand and improve our processes. A
software measurement programme allows organisations to improve their under-
standing of their development and support processes, leading to rational, planned
improvements. Measurement programs also provide organisations with the ability
to prioritise and concentrate their efforts on areas needing the greatest improve-
ments.
Motivated by such considerations, the ENG-MEAS Process Improvement
Experiment - conducted by Engineering Ingegneria Informatica S.p.A. with the
support of the Commission of the European Communities within the European
Software and Systems Initiative (ESSI) - dealt with the definition of a company-wide
software metrics database. The project started in January 1996 and ended in June
1997, lasting 18 months.
As a large Italian software house, Engineering Ingegneria Informatica S.p.A.
experiences all the problems typical of software organisations involved in large
turnkey projects for custom software products in rapidly evolving technological
environments. In this scenario, the definition and implementation of a measurement
programme was aimed at increasing the company's capabilities in predicting and
controlling software projects and in ensuring objective assessment of both the
developed software and the software development process.
Since the most successful way to determine what we should measure is to tie
the measurement program to our organisational goals and objectives, the PIE
selected the "Goal-Question-Metric" (GQM) method for tying the measurements
to the goals.
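As an illustration of the GQM idea, a measurement plan can be written down as a small goal-question-metric tree: every metric is collected only because it answers a question tied to an organisational goal. The goal, questions and metrics below are invented examples, not the ones actually defined in the ENG-MEAS experiment.

```python
# Hypothetical sketch of a Goal-Question-Metric (GQM) tree; the contents are
# illustrative assumptions, not the ENG-MEAS measurement plan.
gqm = {
    "goal": "Characterise project productivity from the project manager's viewpoint",
    "questions": [
        {
            "question": "What is the current productivity per project?",
            "metrics": ["function points delivered", "effort in person-days"],
        },
        {
            "question": "Which factors significantly affect productivity?",
            "metrics": ["team experience level", "target platform", "reuse rate"],
        },
    ],
}

# Walking the tree top-down yields the measurement plan: each metric is
# traceable back to the question (and goal) it serves.
for q in gqm["questions"]:
    for m in q["metrics"]:
        print(f"{m!r} -> {q['question']!r}")
```

The top-down traversal is the point of the method: metrics that cannot be traced back to a question and a goal are simply not collected.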
The experiment was intended to characterise the company's process productivity
and defectiveness in terms of certain technological, methodological, organisational
and cultural factors, chosen for their relevance (i.e., those that might have a
significant effect, typically those that are not restricted to a small part of the
lifecycle).
Main choices included:
• introduction of new software sizing techniques, based on Function Points
analysis,
• systematic definition of the measurement plan through the GQM method,
• adoption of adequate statistical procedures for data analysis:
• a procedure for analysing unbalanced datasets that include many nominal and
ordinal scale factors. It is adequate for obtaining company statistical baselines.
In our context, a statistical baseline comprises the average values and variance
of productivity or defect rates for projects developed by the company, allowing
for the effect of significant variation factors,
• an anomaly detection analysis. We used it to identify those projects that
deviate significantly from other projects, either by being extremely good or
extremely bad.
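The anomaly-detection idea can be sketched as a simple outlier test against the company baseline: flag projects whose productivity lies more than two standard deviations from the mean. The project names and figures below are invented for illustration; they are not ENG-MEAS data.

```python
# Hypothetical sketch: flag projects that deviate strongly from the company
# statistical baseline (mean +/- 2 standard deviations). Data are invented.
import statistics

# Productivity in function points per person-month for a set of projects.
productivity = {"P1": 10.2, "P2": 9.8, "P3": 10.5, "P4": 10.1, "P5": 18.0, "P6": 9.9}

mean = statistics.mean(productivity.values())
stdev = statistics.stdev(productivity.values())

# Projects outside the baseline band are either extremely good or extremely bad.
outliers = {name: p for name, p in productivity.items()
            if abs(p - mean) > 2 * stdev}
print(outliers)
```

In practice a real baseline would also allow for the significant variation factors mentioned above (team experience, platform, reuse), rather than comparing raw rates.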

Engineering - Ingegneria Informatica S.p.A.


Corso Stati Uniti, 23/C
35127 Padova
ITALY

10.25 EXOTEST 24266

Experiment of New Testing Strategies


Business Motivation and Objectives
Dassault Electronique manufactures systems which are more and more elaborate.
They require correctness and reliability from the integrated software. Bearing this in
mind, testing strategy and software validation performance are key to the success
of these projects.
The better control of the testing process that we expect to draw from this
experiment will help us to reduce our development lead-times and costs, to improve
our control over risk management, and to reduce our maintenance costs and the
operating costs for our customers.

The Experiment
To achieve these objectives, we will evaluate the potential contribution of statisti-
cal techniques in each and every aspect of the development cycle, from the unit
tests to the acceptance test. These techniques, combined with our test tools (DE-
VISOR and SYLVIE) and methods, are: code quality measurement (using M-
Square), statistical testing and software reliability modelling (using M-elopee).
The experiment will be performed on the embedded software of electronic
equipment for commercial aircraft, developed by a five-person team over a two-
year period. The main idea is to set up a test team using new testing strategies in
addition to the project team.

Expected Impact and Experience


Besides the above objectives, the quantification of our software reliability will
certainly play a great part in motivating the technical teams, by offering them
measurable targets and direct feedback on their work results. Furthermore, it will
also give concrete elements to decision-makers.
If these techniques are demonstrated to be powerful, they will be transferred to
our software development process at large.
DASSAULT ELECTRONIQUE
55 Quai Marcel Dassault
92214 SAINT-CLOUD
France

10.26 FCI-STDE 24157

Formal Code Inspection in Small Technical Development Environments
This report reflects the final results of the FCI-STDE experiment and activities
that have taken place following the experiment. The authors' opinion is based on
the experiment itself and its organisational impacts. The FCI-STDE experiment as
such was possible thanks to support from the European Commission under the
ESSI initiative.
Many lessons may be learnt from the experiment, some related to FCI-STDE
itself, some related to the consequences of setting up similar experiments.
Experimental results confirm the relationship between the cost of inspecting
and testing and the complexity of the object subjected to these procedures. With the
experimental data, taking into account the overall limitations of a small experiment,
a linear cost/complexity tendency fits well. Other tendency lines are either similar
or hard to interpret.
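The linear cost/complexity tendency mentioned above amounts to fitting a least-squares line to inspection cost against object complexity. The data points below are invented for illustration; they are not the FCI-STDE measurements.

```python
# Hypothetical sketch of the cost/complexity tendency analysis: fit a
# least-squares straight line to inspection effort versus object complexity.
def linear_fit(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx
    return a, mean_y - a * mean_x

complexity = [2, 4, 6, 8, 10]           # e.g. cyclomatic complexity per object
cost_hours = [1.1, 1.9, 3.2, 3.9, 5.1]  # assumed inspection effort per object
slope, intercept = linear_fit(complexity, cost_hours)
print(f"cost = {slope:.2f} * complexity + {intercept:.2f}")
```

A good fit of this line is what the report means by a linear tendency; the slope estimates the extra inspection effort per unit of complexity.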
Code inspection is only marginally cost-effective in our environment, whatever
the complexity of the inspected objects, if measured only in the terms of this PIE.
This is dealt with in greater detail within this report.
Young and relatively flat organisations, such as our company, offer little or no
resistance to the introduction of code inspections (experimental practices prove
easy to introduce at full scale). Current literature tends to maintain the opposite.
The introduction of code inspection implies the introduction of new processes,
new deliverables and changes in workflow. Though care must be taken to avoid
human errors in the adoption of these new procedures, they are mostly
straightforward and easily understood.
The lessons learnt by Procedimientos-Uno SL are of interest to other software
enterprises interested in improving their practices. National and international dis-
semination of the results in general has led to active debate and audience
participation. Concrete results may be especially significant to small technically
orientated enterprises.

Procedimientos-Uno S.L.
Juan Lopez Penvalver C.T.LA. P.T.A.
29590 Malaga
Spain

10.27 FI-TOOLS 21367

Application of Object Based CASE Tools in Systems Development Process
Project Goals
The main objective of the experiment is to apply a new method and new tools for
systems development. The method and CASE-tools support object-oriented sys-
tem development, software re-use, iteration and prototyping on various platforms.
The testing-tool supports testing of client-server applications.

Work Performed and Results


At the beginning of the experiment two object-oriented CASE tools (SelectOMT
and Rational Rose) were evaluated, and SelectOMT was selected as the CASE
tool to be taken into use in the baseline project. All the standards and instructions
for using the SelectOMT tool were produced.
The Object-TT analysis method was taken into use in the experiment. Object-TT
is an object-oriented systems development method developed by Tieto. The
method is based on the latest version of the Unified Modeling Language.
The client/server testing tool SQA TeamTest was also tested and taken into use
in the baseline project. Testing was earlier carried out entirely manually, and the
basic idea in bringing a testing tool into the Process Improvement Experiment
(PIE) was to see whether testing tools reduce the number of errors in the completed
application.
The baseline project of the PIE is the development of the Personnel Compe-
tence and Workload System (KUTI). It was defined as one of the most urgent
development tasks in the business process re-engineering project implemented in
Tieto.
The baseline project is suitable for the application experiment for several reasons:
it is an internal project; it is large enough, with several people involved in testing
the method and the tools, making the results of the experiment reliable; and the
KUTI system has interfaces to other systems.
The results and measurements have been collected during the experiment in
each work package. The analyses of the results were done at the end of the ex-
periment.

Main Conclusions
When using an object-oriented CASE-tool from methodological point of view, the
CASE-tool builds a connection between design and implementation phases of the
project.
The personnel in Banking and Financial systems has become aware of compo-
nent libraries. This is a remarkable point both from the business and technical
point of view in the longer run. When we are producing new components it is
crucial for the further use of the components, that they are very well tested.
TT Tieto Oy
Kutojantie 10
Espoo Finland

10.28 GRIPS 23887

Getting the Grip on Software Product Support through Error Prevention Measures
This management summary lists the most important features of the GRIPS project,
as described in this final report. The report is to be considered an early evaluation
of the project, which includes quantitative as well as qualitative assessments of the
project process, project management and results.
• Although we are still waiting for quantitative results in some areas, most of the
project goals have been achieved; the most important being that fewer fatal er-
rors slip through to released customer products.
• The GRIPS project has resulted in several organisational changes, including the
introduction of three new departments.
• There has been a shift towards increased quality and customer focus among the
involved employees.
• The project has resulted in a great number of spin-offs related to or initiated as
a result of the process.
• From a project management point of view, we have learned to be very careful
not to plan projects in detail beforehand.
• SimCorp's reputation has been improved.
• Although the GRIPS project did not evolve as originally planned, and in spite
of the fact that the goals have so far been only partly accomplished, we consider
the project a success.

SimCorp A/S
Kompagnistræde 20-22
1208 Copenhagen
Denmark

10.29 GUI-TEST 24306

Automated Testing of Graphical User Interfaces


Business Motivation and Objectives
The goal of this PIE is to standardise, optimise and automate our methods for
testing Graphical User Interfaces (GUI).
Currently our GUI systems are tested only manually, with high expenditure in
manpower and costs. The completeness and effectiveness of these tests depend
strongly on the intuition of the individual tester; the results are extremely
difficult to quantify and qualify. These deficits will be corrected by the
experiment, resulting in the commercial benefits below:
• Improved calculability of testing costs due to standardised test procedures and
on the basis of the expenditure variables accumulated during the experiment.
• Reduced testing costs and times due to test standardisation and automation.
• Lower guarantee and warranty costs because testing is more thorough and ef-
fective, thus leading to lower residual error rates.
• Lower total development costs and shorter time-to-market.

The Experiment
We will first expand our know-how about formal test specifications, test methods
and the methodology of GUI testing, and ensure that this know-how is state-of-the-art.
On this basis we will select the most promising methods for the GUI test and
demonstrate these within the company.
We will select a commercially available tool which allows us to apply the test
methods selected in Action I to the baseline project. We will procure the selected
test tool and introduce it to those employees involved in the PIE and the baseline
project.
We will evaluate these new testing methods and test tool in the baseline project.
For this purpose, during the test phase of the baseline project, we will conduct
tests both according to our traditional manual methods and utilising the new, semi-
automated or automated methods.
The old and the new methods will be compared. The criteria for this compari-
son will be established at the beginning of the PIE. If this analysis shows the supe-
riority of the new methods, we will begin using these throughout the company at
the conclusion of the PIE.

Expected Impact and Experience


We anticipate the following results:
• Software developers and testers will be familiar with state-of-the-art methods
for the formal specification of GUI Tests.

• A test notation will be selected and put into use which permits the test
specifications to be transformed into the syntax required by the test tools with a
minimum amount of manpower.
• Templates will be available for the efficient specification and notation of the
tests. If necessary, templates will also be available for the specification of the
corresponding system requirements.
• GUI tests will be executable and recordable almost entirely automatically, by
means of tools.
• The test documentation will be generated by the testing tool.
• An appropriate test tool will be available and have been tested in routine use.

IMBDS GmbH
Kleinseebacher Str 9
91096 Moehrendorf
Germany

10.30 IDEA 23833

Improving Documentation, Verification and Validation Activities in the Software Life Cycle
Business Motivation and Objectives
Software produced by INPS is aimed at automating its social security services,
and it is therefore highly affected by continuous changes in the law and the need
for new services to the community. By defining a comprehensive document process
for the software life cycle and well-specified Verification and Validation activities,
we want to improve control over the whole software development process and,
consequently, our ability to adapt our services to customer requirements in a
timely manner.

The Experiment
The intent of this project is to define a set of document standards together with the
definition of clear rules and roles involved in the document flow management.
Moreover, we intend to experiment with Verification and Validation activities to be
executed on the outputs of each phase. Available environments such as Lotus Notes
for document management and a metrication tool such as Metrication (by SPC) for
the collection and analysis of metrics (also derived from V&V activities) will be
tried out in the PIE project.
Major activities in the experiment will be: PIE management, PIE qualification
and monitoring; set-up of the IDEA experiment, inclusive of training, definition of
a methodology defining the document and V&V processes, software tool selection
and acquisition, and preparation of the technological layout; application of the
defined methodology on top of the baseline project (methodology tailoring,
definition of baseline project plans, production of specified documents, application
of defined procedures on software development products, collection and analysis
of metrics); and internal and external dissemination of results.
internal and external dissemination of results.
The baseline project selected for this PIE concerns the re-engineering of software
used to collect requests and to order payments of unemployment indemnities to
specific categories of workers: a limited project, but highly representative of our
typical software production. It will be carried out by a peripheral IT software
development structure (SIR, Bari), which already has considerable experience in
this particular application domain, with the control and supervision of the central
IT structure (DCTI, Roma).

Expected Impact and Experience


We expect to gain advantages such as the reduction of software maintenance
times, a better understanding between different work groups and an easier exchange
of people between them, the availability of standardised documents, etc.
As an indirect goal, we aim to increase attention to well-founded practices of
software engineering in the whole IT department and, more generally, to promote
the introduction of quality concepts into our software development process.
I.N.P.S. Istituto Nazionale
della Previdenza Sociale
via Putignani 108
70122 Bari
Italy

10.31 IMPACTS2 24078

Improvement of Process Architecture through Configuration and Change Management and Enhanced Test Strategies for a Knowledge-based Test Path Generator
Business Motivation and Objectives
DTK is developing a knowledge-based test path generator for safety-relevant
analogue hardware components for use in the railway environment. High
transparency, high traceability back to the developer in the case of incidents and
very high product quality are required. It will be demonstrated how the process
architecture for this test path generator can be improved by applying configuration
& change management methods as well as enhanced test strategies.

The Experiment
The development process shall be significantly improved by focusing on two key
areas of SPI: testing and configuration & change management. Thus the PIE will
deal with the following:
• Introducing configuration & change management techniques and tools,
• Introducing systematic testing methods and procedures, supported by suitable
tools.
The PIE will concentrate on that part of the software that is already multiply
reused under different configurations. This part would benefit most and has the
most significance for the successful evaluation of techniques and tools. It will be
referred to as the baseline project.

Expected Impact and Experience


DTK expects to gain know-how on optimising its software development process
with respect to configuration & change management and testing. Improved effort,
time and cost estimations and planning of future projects will be achieved through
enhanced test strategies, reliable managing of software documents as well as fast
handling of change requests.
DTK Gesellschaft für technische
Kommunikation mbH
Palmaille 82
22767 Hamburg
Germany

10.32 INCOME 21733

Increasing Capability Level with Opportune Metrics and


Tools
This final report illustrates the results of ESSI project no. 21733, a Process
Improvement Experiment (PIE) named INCOME (INcreasing Capability level
with Opportune MEtrics and tools). The project started on 15 January 1996
and had a duration of 21 months.
The goal of the experiment was to demonstrate how the use of an assessment
method such as SPICE [1] and a goal-oriented measurement approach like ami [2],
along with specific tools, can help a medium-to-large, critical and complex software
development project improve its development process in its weakest areas and
maintain ISO 9001 compliance.
Finsiel, Italy's largest IT services and consultancy group, was the prime user in
this experiment and no associated partners were involved. Finsiel's customers
include central and local government departments, leading banks and large
industrial groups.
The baseline project was a CASE tools development project, to which a signifi-
cant number of resources are assigned each year in different geographical sites,
and in which several innovative technologies are used.
The PIE is now completed and can be considered successful from several points of
view:
• the approach followed in the experiment is valid, the adoption of SPICE and
ami has been effective and the two methods appear to be complementary;
• the improvement actions defined and executed in the areas of Project Management,
Testing and Configuration Management led to progress in the baseline project's
development process, as shown by the specific indicators and by the process
assessment performed at the end of the PIE applying the prospective SPICE
standard;
• both the approach and some of the solutions within the improvement actions
can be generalised and reused in a more general context within Finsiel and the
IT community; indeed, a new improvement plan is being defined within a different
Business Unit in Finsiel.

Finsiel S.p.A
v. Matteucci (ang. v. Malagoli)
56100 PISA
ITALY

10.33 MAGICIAN 23690

Magic Development Process Improvement


Business Motivation and Objectives
This project is designed to improve the software testing methodology of Magic
Software Enterprises Ltd. It is part of an overall process to raise the level of
development technology from level one on the SEI CMM scale to level two. Magic's
business is the provision of application development tools supported on a very wide
range of platforms and networks, interfacing to many different database systems.
The principal product of the company is the Magic application development tool,
an interactive, GUI form-based system that is sold worldwide to a broad range of
customers. The purpose of the software testing activity is to ensure forward
compatibility without regression. The objectives are to increase test coverage by
25%, to improve stability by 25%, to reduce QA (Quality Assurance) cycle time
by 15% and to improve reliability by 20%.

In order to reduce the cost associated with the production and maintenance of
each new system version and variant, we need to improve the efficiency and
effectiveness of our software process through the use of an automated and
comprehensive test environment. An external review of our processes recommended
the adoption of an integrated test management system as an essential step in
achieving our quality goals.

The Experiment
This 12 month PIE will enable us to complete the final selection of a suitable test
tool set, its integration into the environment and its trial use in a typical devel-
opment project. The baseline project is planned to be part of a release of MAGIC
Version 8 that will include expanded capability in handling of Web connectivity.
In the PIE, the DB Gateway part of the product will be re-tested using new auto-
matic procedures and compared to the current manual method.
In addition to the work to be done by Magic Software Enterprises itself, KPA
Ltd, an Israeli consultancy specialising in Software Process Improvement will be
used to assist, particularly in reengineering of part of the test process. Consultants
will also be used from the chosen tool supplier to integrate the test tools into the
experimental environment.

Expected Impact and Experience


The results will be broadly disseminated both within the company, within our
industrial association and within the broader European IT users community via a
series of presentations and events. If it is shown that the use of automatic testing has a positive impact, then the tools will be adopted into the company's standard development environment for future product releases. Improvement in the customer-perceived quality of the delivered product, reduction in the manpower required in the testing phase for new product variants such as re-hosting onto different platforms, and better assurance of the validity of different variant combinations will allow a reduction in the uncertainty factor in delivery commitments. It will also lead to a reduction in the effort required for maintenance and ongoing support, the ability to plan future versions with greater confidence, and ongoing management commitment to further process improvement steps.
MAGIC SOFTWARE ENTERPRISES LTD
5 Haplada Street
Or Yehuda
ISRAEL 60218
256 10 Summaries of PIE Reports

10.34 METEOR 21224

Methodology and Technology-Oriented Improvement to Reduce Software Process Deficiencies
This document reports on the final results of the ESSI project "METEOR - A
Methodology and Technology-Oriented Improvement to Reduce Software Process
Deficiencies" (project 21224).
Built around COSTAR, a radio control system developed by LOG.IN and de-
livered to clients after customisation, the project addressed the application area of
technically oriented systems, specifically real-time and distributed applications. It
focused on the design and testing phases of the software development cycle and
aimed at providing the right methodology and technology to produce software
with greater characteristics of interoperability, maintainability and reliability.
The technological part of the experiment was based upon a cultural transition from the proprietary architecture used by the original systems towards an open-system architecture. Training courses were undertaken on real-time software architecture and design, and a new development platform was acquired. The architecture of the original systems was completely reviewed in the light of client-server concepts, with stimulating results. The methodological part of the experiment increased our software testing capabilities thanks to a comprehensive set of training courses and the adoption of two automated testing tools that proved to be the ideal companion of the new development platform. The results of the improvement were measured and are available in a comparison table which compares the current situation with the previous status.
All the people involved in the project agree that much was learnt from it and
seem motivated to keep on improving, looking for the occasion to put the key
lessons into practice outside the project.

10.35 MIST 10228

Measurable Improvement with Specification Techniques


This report describes the results and conclusion obtained from ESSI Application
Experiment 10228, MIST, on the application of the B-Method (ref 6) to the devel-
opment of safety critical, embedded, avionics software. This work was supported
by the Commission of the European Communities under the European Systems and
Software Initiative.
The main goal of the project was to develop procedures for using the B-Method
on the critical software functions of an avionics system while using the conven-
tional development techniques on the rest of the software. An avionics case study
was used to evaluate the new procedures against the conventional procedures.

The main results were:


• For the case study, the overall effort for the development was about the same, but the formal development expended more effort in the earlier stages of the lifecycle.
• The formal development detected errors earlier in the lifecycle.
• A small core of experienced engineers is sufficient to develop the safety func-
tion of moderately sized avionics software.
The procedures developed by the MIST project will be included in the divi-
sional instructions as the standard approach for applying formal methods within
Mission Avionics Division of GEC-Marconi Avionics.
The procedures can be refined during future projects. The interface between formally and informally developed code needs further investigation. The scope of the
formal specification can be expanded to include some non-functional require-
ments, such as timing.
GEC-MARCONI AVIONICS LTD
AIRPORT WORKS
United Kingdom

10.36 ODB 10788

Universal Online Database Manager for Better Project Supervision
The baseline project OnlineDataBase (ODB) is a toolbox for process control systems with an open architecture. The toolbox idea means that the system will never be finished and remains open for further enhancements based on standardised interfaces. Engineering becomes easier, because not everything has to be implemented by ourselves: if there is a program on the market which satisfies our needs, we can easily connect it to the existing ODB architecture via the standardised interfaces. This leads to an open system architecture which can easily be enhanced, adapted, and tailored.
Annex A contains a graphical representation of the baseline project ODB. The upper part of the page shows the standard products ODB is able to communicate with; the lower part shows the several procedures of ODB itself.
Based on this baseline project we developed a Life-Cycle- and Team-Model for enterprises with fewer than 10 employees. This model shall decrease the amount of maintenance for a SW-project and increase the efficiency, speed and transparency of the SW-development process. In the first phase of the project, the old project manager tried to install a formalistic model for the team. It failed due to the fact
that small teams are highly interactive and multifunctional so that conventional
approaches used in large firms do not work.
The new team model is based on 'Programming by Contract', in which the single modules are given to sub- and subsubcontractors. The concept is that each functionality is programmed as a separate task with a small and well-defined interface to the rest of the system (object-oriented philosophy). The implementation inside the module is not so important, provided the programmer follows the rules of quality assurance. This idea follows the strategy of cohesion, coupling and information hiding. In this way we hope to reduce the number of software corrections and the cost of bugs and alterations. Parallel work on the project is also only possible with a concept like this, because each group can simulate the interface of a task which will be developed by another group.
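The contracting idea can be illustrated with a small sketch; every name here (the interface, the stub, the client function) is hypothetical and not taken from the actual ODB toolbox.

```python
# Sketch of the 'Programming by Contract' idea described above: each
# task exposes a small, well-defined interface, and another group can
# develop against a simulated (stub) implementation in parallel.
# All names here are hypothetical, not from the actual ODB system.

class AlarmLoggerInterface:
    """Contract: log(tag, value) stores a reading; count() reports how many."""
    def log(self, tag: str, value: float) -> None:
        raise NotImplementedError

    def count(self) -> int:
        raise NotImplementedError

class AlarmLoggerStub(AlarmLoggerInterface):
    """Simulated module: lets other groups work against the interface
    before the real implementation exists."""
    def __init__(self) -> None:
        self._entries = []

    def log(self, tag, value):
        self._entries.append((tag, value))

    def count(self):
        return len(self._entries)

def record_overtemperature(logger: AlarmLoggerInterface, temp: float) -> bool:
    """Client code written only against the interface contract."""
    if temp > 80.0:
        logger.log("OVERTEMP", temp)
        return True
    return False
```

A group implementing the process-control logic can test `record_overtemperature` against the stub long before the real logger module is delivered, which is exactly what makes the parallel working described above possible.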
In November 1995 our team with the Life-Cycle- and Team-Model was ISO 9001 certified.
FESTO Gesellschaft m.b.H.
Luetzowgasse 14
AUSTRIA

10.37 OMP/CAST 24053

OM Partners / Computer Aided Software Testing


Business Motivation and Objectives
Our software systems are becoming more and more important for the daily opera-
tion of our customers. It is therefore necessary to deliver extremely reliable and
stable software. On top of this, a rapidly evolving environment necessitates short
time to market cycles. In this context, automated software testing provides a way
to perform tests more consistently and more systematically.
The objective of our experiment is therefore to increase software reliability and
stability, and shorten time to market cycles for our generic modules. We want to
decrease the time spent providing customer support solving software anomalies.
This will allow us to implement a great product variety without loss of reliability.

The Experiment
The experiment will consist of the following actions:
• Market survey, selection and acquisition of a software testing tool
• Installation of a measurement method
• Set-up of the test environment
• Experiment with change management of the testing environment in the light of
software evolutions (releases)

The development process of our OMP/Master Production Scheduling software will be used as a baseline project. This system runs on a database, and is strongly
oriented towards its graphical user interface. It consists of approximately 200.000
lines of C-code, some of which is shared with other products.

Expected Impact and Experience


OM Partners n.v. expects to be able to increase even further its software quality
while shortening the time to market of new releases. We intend to decrease the
time spent on customer support due to software anomalies, while providing a large
product customisability. This will be obtained thanks to a reduction of the time
spent on executing a test session, resulting in higher test frequency.
After finishing the experiment, we foresee the application of the newly ac-
quired know-how on the development process of our other products. To this end,
work instructions will be established.
A future process improvement might consist in the implementation of a help
desk tool, customer satisfaction being a continuous concern of OM Partners n.v.
OM PARTNERS N.V.
Michielssendreef 40-42
2930 Brasschaat
Belgium
http://www.ompartners.be

10.38 PCFM 23743

Proof by Construct Using Formal Methods


Business Motivation and Objectives
Commercial negotiations in the Aerospace business are changing towards includ-
ing system development costs within the system production price. To remain
competitive we must therefore reduce our development costs for the same, if not
better, quality.
V&V typically accounts for 41% of our total software production costs, and code corrections for typically 32% of our total maintenance costs. Aiming to reduce these costs by 10% will represent an achievable and worthwhile contribution towards this business goal.
These types of cost are common to all software producers, although their sever-
ity will depend upon the level of quality and certification required. This experi-
ment will provide a practical illustration for a real time, safety critical control
system that can be interpreted by other applications for their particular needs. No
special skills will be required for this, only an understanding of the application.

The Experiment
The technical objectives of the experiment are to integrate design and V&V by
formalising the definitions and terms used in the design and to enforce the neces-
sary constructs/constraints to ensure that these formal definitions and terms will
always be correct in the code.
The experiment starts by producing formal definitions for commonly used defi-
nitions and terms within our projects. These are then used to specify and prove
part of the baseline design in parallel with the baseline project design and V&V.
The results from both can then be directly compared and the benefits quantified.
The experiment is resourced from the project teams to ensure that the process is
practical and acceptable to the ultimate users.
The baseline project will be a real time, safety critical control system for an
aerospace application.
Lucas Aerospace, York Road, design, develop and manufacture real time,
safety critical control systems. The site employs 880 people of which 130 are
involved with engineering software.

Expected Impact and Experience


The experiment is expected to reduce the V&V and code correction costs by at least 10%. This will immediately increase our competitiveness and justify the need for further improvements. Experience in how to use formal methods in a manner that is practical and acceptable to the project teams (users), and that contributes to the personal development of engineers, is also expected.
LUCAS AEROSPACE
York Road
Birmingham, B28 8LN
UK

10.39 PET 10438

The Prevention of Errors through Experience-driven Test Efforts
Through a rigorous analysis of problem reports from previous projects the compa-
nies behind the PET project have achieved a step change in the testing process of
embedded real-time software. The measurable objectives have been to reduce the
number of bugs reported after release by 50%, and reduce the hours of test effort
per bug found by 40%. Both of these goals have been met. The actual numbers achieved were 75% fewer bugs reported and a 46% improvement in test efficiency.
The problem reports have been analysed and the bugs in them categorised according to Boris Beizer's categorisation scheme [BB]. We have found that bugs in
embedded real-time software follow the same pattern as other types of software
reported by Boris Beizer.
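The categorisation step can be sketched as a simple tally over problem reports; the report data and category labels below are invented for illustration, and only the idea of computing per-category shares follows the text.

```python
# Sketch of categorising problem reports by a Beizer-style taxonomy
# and computing the share of each bug category. The sample reports
# are invented; only the idea of the analysis follows the PET text.
from collections import Counter

def category_shares(reports):
    """reports: list of (report_id, category) pairs -> {category: percent}."""
    counts = Counter(cat for _, cat in reports)
    total = sum(counts.values())
    return {cat: round(100.0 * n / total, 1) for cat, n in counts.items()}

# Hypothetical problem-report data, not the actual PET data set.
sample_reports = [
    (1, "requirements"), (2, "requirements"), (3, "structural"),
    (4, "data"), (5, "requirements"), (6, "implementation"),
    (7, "structural"), (8, "requirements"),
]
```

Running such a tally over several projects' problem reports is what yields percentages like the 36% requirements-related share quoted below.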
We have also found that the major cause of bugs reported (36%) is directly related to requirements, or can be derived from problems with requirements. Im-
proved tracking of requirements through the development process has been
achieved through the introduction of a life-cycle management CASE tool. Unfor-
tunately the customisation of the life-cycle management tool took longer than
expected, so no actual numbers on the positive effect of the tool are available at
present, but it is expected that the integration and system testing phases can be
combined, resulting in a major reduction of testing effort.
The second largest cause of bugs (22%) stems from lack of systematic unit test-
ing, allegedly because of the lack of tools for an embedded environment. We have
found that tools do exist to assist this activity, but their application requires some
customisation. We have introduced a unit testing environment based on EPROM
emulators enabling the use of symbolic debuggers and test coverage tools for
systematic unit testing. The unit testing methods employed were static and dynamic analysis.
We have demonstrated that the number of bugs that can be found by static and
dynamic analysis is quite large, even in code that has been released. The results
we have found are applicable to the software community in general, not only to
embedded real-time software, because the methods and tools are generally avail-
able. Finally a cost/benefit analysis of our results with static and dynamic analysis
indicates that there could be an immediate payback on tools and training already
on the first project.
The efficiency of static analysis in finding bugs was very high (only 1.6 hours/bug). Dynamic analysis was found to be less efficient (9.2 hours/bug), but still represented a significant improvement over finding bugs after release (14 hours/bug). We achieved a test coverage (branch coverage) for all units in the product of 85%, which is considered best practice for most software, e.g. non-safety-critical software.
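The quoted efficiency figures translate into effort savings by simple arithmetic, sketched below; the hours-per-bug values come from the text, while the bug count in the usage example is hypothetical.

```python
# Comparing the cost of finding bugs before vs. after release, using
# the hours-per-bug figures quoted above (1.6 static analysis,
# 9.2 dynamic analysis, 14 after release).
HOURS_PER_BUG = {"static": 1.6, "dynamic": 9.2, "after_release": 14.0}

def effort_saved(n_bugs: int, method: str) -> float:
    """Hours saved by finding n_bugs with `method` instead of after release."""
    return n_bugs * (HOURS_PER_BUG["after_release"] - HOURS_PER_BUG[method])
```

For a hypothetical batch of 10 bugs, static analysis saves 10 × (14 − 1.6) = 124 hours compared with finding the same bugs after release, which illustrates why the payback on tools and training can appear already on the first project.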
The PET project has been funded by the Commission of the European Commu-
nities (CEC) as an Application Experiment under the ESSI programme: European
System and Software Initiative.
Mr. Otto Vinter
Brueel & Kjaer Measurements A/S
Skodsborgvej 307,
Denmark

10.40 PI3 21199

Process Improvement in Internet Service Providing


This document is the Final Report for the ESSI Project 21199 PI3 (Process Im-
provement in Internet Service Providing) executed by Onion Communications -
Technologies - Consulting, whose headquarters are established in Brescia, Italy.
The main message of the report is a positive one, indicating the successful set-
up and initial deployment of Process Improvement practices in a SME involved in
sw development for telematic services.
Improvements have focused on testing and configuration management practices, and the key lesson learnt is that the major breakthrough lies not simply in technological innovation but in a combination of organisational and methodological measures.
Communities most likely to be interested in the technical details of this report
are managers of SMEs involved in technical sw development and especially
focusing their attention on testing and configuration management process areas.
The goals of the project were to improve the SW capabilities of an organisation
involved in the fields of communications, technologies and consulting, whose SW
development processes were at maturity level 1 at the beginning of the PIE.
The largest pay-off should be obtained by addressing low maturity findings
first; for this reason the introduction of mature methods and tools was performed
in the key process areas of testing and configuration management.
The work done in the PIE included:
• the selection and procurement of selected methods and tools;
• the training on technology and underpinning methods;
• the set-up of a Quality Manual for the company;
• the definition of rules on how to apply the selected methods and tools onto the
baseline projects;
• the application of selected methodologies and tools within the two pilot baseline projects;
• the collection and analysis of measurement data with the derivation of quantita-
tive cost-benefit analysis;
• the performance of a final process assessment;
• the dissemination of experiences gained, both within the company and to the wider software engineering community, through a number of conferences and workshops.
These activities have provided tangible benefits at several levels: technical
soundness of projects, company visibility, staff motivation, process standardisa-
tion and business operation.
With respect to the main business goals of the software producing unit, the
quantitative improvements that have been observed can be summarised as follows:

• Product quality: Better (increase) by 17%
• Time-to-market: Better (reduction) by 10%
• Cost: Better (reduction) by 9%

For the future the company is aiming at continuing Process Improvement activi-
ties, especially in the following directions:
• full deployment of the enhanced practices to the daily routine work of all pro-
jects;
• definition of life cycle and methodologies/tools for Rapid Application Development;
• completion of the Quality Management System for the sake of ISO 9001 registration.
The PI3 Project was run under the auspices of the CEC DG III within the scope
of the ESSI Initiative of the ESPRIT Fourth Framework. This support has proven
to be extremely important in ensuring the overall success of the initiative, which
lies in the mainstream of the company's core business.
ONION
Via L. Gussalli 11
25131 BRESCIA
ITALY
http://net.onion.it/

10.41 PIE-TEST 24344

Introducing a Testing Method


Business Motivation and Objectives
Most GUI development tools support an incremental life cycle. As subsequent
versions of a system are released, testing and maintenance become the most effort
demanding issues. The cause for this is that working components of the system
have to be re-tested when new modules are released or when other components are
changed. This calls for automated regression testing and in general for a more
profoundly understood testing method and supporting tools. This testing method
should also integrate a defect tracking mechanism, guaranteeing that discovered
bugs are managed efficiently and ultimately get solved.
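A testing method that couples automated regression runs with a defect tracking mechanism might look roughly like the following sketch; the record fields and test names are invented and do not reflect the actual SQA tool set.

```python
# Minimal sketch of a regression run that files a defect record for
# every failing check, so discovered bugs are tracked until solved.
# Field names and the in-memory 'tracker' are illustrative only.

def run_regression(checks, tracker):
    """checks: {name: zero-argument callable returning True on pass}.
    Appends a defect record for each failure; returns pass/fail counts."""
    passed = failed = 0
    for name, check in sorted(checks.items()):
        if check():
            passed += 1
        else:
            failed += 1
            tracker.append({"test": name, "status": "open"})
    return passed, failed
```

Because the checks are callable, previously working components can be re-run unattended whenever new modules are released, which is the regression problem the paragraph above describes.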
This PIE is aimed at introducing such a testing method into the development process.
This concern will gain increasing importance in the wider community of profes-
sional developers using a similar development method. In particular it is a step
towards ISO 9001 certification.
The objectives are mainly to reduce maintenance costs and to test more effi-
ciently.

The Experiment
We will introduce SQA test tools and provide training and consulting to the em-
ployees involved with the base project. Especially people from QA will be in-
volved, since they will be the key to reducing maintenance costs.
Special attention will be given to the testing method, as this is the only guarantee that we will use the tool as efficiently as possible.
LGTsoft currently employs 9 people, 8 of whom are involved in software development.

Expected Impact and Experience


We expect to reduce maintenance costs and structure our testing efforts. As a spin
off we expect to get more motivated and skilled employees. We should also obtain
a documented testing method.
LGTsoft
Moorseelsesteenweg 18
8800 Roeselare
Belgium

10.42 PREV-DEV 23705

Establish Defect Prevention Mechanisms and Test Sampling Strategy to Achieve High Quality in Reduced Cycle Time
Business Motivation and Objectives
Motorola Communications Israel Ltd. is an industrial company developing and
manufacturing radio equipment and radio related products. MCIL is at the stage
where products with embedded software are released to market with ever-growing
quality. However, with new complex systems, the overheads required to achieve
high quality are becoming substantial and development time is lengthening.
Correcting errors in later stages of the development is much more complicated
and much more expensive and time consuming than correcting the same errors in
earlier stages. It is intuitively understood that the avoidance of errors is the best
way to optimise the development process and shorten the development time.

The Experiment
The experiment will investigate patterns of errors that commonly occur in the
development process and will define and implement techniques to prevent these
errors from occurring. Success in this area will reduce the amount of re-work
caused by errors and shorten the development time.

The PIE will define and implement defect prevention methods for each of the
phases of the development life cycle and will conclude a strategy for determining
the amount of testing needed in a defect prevention environment.

Expected Impact and Experience


The experiment will yield a better knowledge of common error types and establish
mechanisms to update this knowledge. The results of the experiment may be rele-
vant to a wide range of companies developing telecommunications and real time
software. The experience and mechanisms implemented in the experiment can
benefit these companies and expedite their defect prevention programme activities.
Motorola Communications Israel Ltd.
3 Kremenetski St.
67899 Tel Aviv
Israel

10.43 PROVE 21417

Quality Improvement through Verification Process


Cad.Lab S.p.A., a CAD/PDM systems producer based in Italy, carried out the
Process Improvement Experiment (PIE) PROVE to improve the quality of its
software products by implementing a measurable verification process in parallel
with the software development cycle. Two approaches were experimented with: dynamic verification (testing) and static verification (inspection).
The goal of high software quality is obvious: to produce software that works
flawlessly, but the quality has to be reached without hindering development; thus
the verification process had to be compatible with other priorities like time-to-
market and adding leading-edge features to the product.
As product complexity increases and customers' demand for high quality soft-
ware grows, the verification process is becoming a crucial one for software pro-
ducers. Unfortunately, even if verification techniques have been available for a
few years, little experience in their application can be found among commercial
software producers. For this reason we believe that our experience will be of significant relevance for a wider community, not least because it could demon-
strate the feasibility of a structured and quantitative approach to verification in a
commercial software producer whose products sell on the national and interna-
tional market.

By setting up a verification method and supporting it with an automated infrastructure, we were able to demonstrate the following results on a baseline project, based on our flagship product for three-dimensional design, Eureka:
• fewer errors escape from our ordinary quality control
• more reliability is assured in the subsequent releases through which our product evolves
• verification activities are more productive because they can rely on a replicable set of procedures which now form the core of our non-regression quality control
• quantitative data on the correctness of the product are gathered and analysed continuously
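The replicable non-regression procedure mentioned above can be sketched as a comparison of current results against stored reference results; the result format and check names below are invented for illustration.

```python
# Sketch of a replicable non-regression check: results of the current
# release are compared against reference ("golden") results recorded
# from a verified earlier release. The result format is illustrative,
# not the actual Eureka verification infrastructure.

def non_regression(reference: dict, current: dict) -> list:
    """Return the names of checks whose result changed or disappeared."""
    regressions = []
    for name, expected in reference.items():
        if current.get(name) != expected:
            regressions.append(name)
    return sorted(regressions)
```

Because the reference results are stored, the same procedure can be replayed on every subsequent release, and the size of the returned list gives exactly the kind of continuous quantitative data the last bullet describes.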
Some key sentences summarise the lessons that we consider most valuable for
whoever will repeat a similar experiment:
"A cultural growth on testing is paramount"
"Developers won't accept change which does not provide a clear return on their
effort investment"
"Provide senior managers the results which are useful to pursuing their business
strategy"
PROVE received the financial contribution of the European Commission under
the ESPRIT/ESSI Programme. The project lasted 21 months.
Cad.lab spa
Via Ronzani 7/29
40033 Casalecchio di Reno (BO)
Italy

10.44 QUALITAS 23834

Quality Implementation and Testing Application System


Business Motivation and Objectives
Given the fact that the baseline project concerns the client/server version of a
standard banking product which is used by hundreds of financial institutions, it is
imperative that the solution offers not only sophisticated functionality but also
superior quality with respect to reliability and security. This is an ambitious goal
in light of the fact that state-of-the-art technology is used, software development
cycles chronically suffer from time constraints, and demand among customers for
a great variety of hardware and software requirements is high. Testing the right
things at the right time to the right extent is the key to reducing failures at the
customer sites and minimising the cost of fixing software problems during the
development process.

Another objective of the project, which is important to any software solution provider, is to meet the market needs by ensuring that the functional specifications
of the software product are exactly tailored to the customers' requirements. Also,
the software provider needs to guarantee that the planned functionality has been
realised technically in a way that any variations and combinations possible are
taken into account.

The Experiment
The implementation of a wider and deeper testing approach is meant to help us
achieve the objectives stated above. There are three areas for improving the cur-
rent test process which are reflected in the three experiments of the QUALITAS
project:
I. Enhanced document testing by running intensified reviews on early development results, i.e. functional specifications and design documents.
II. Systematic functional testing and test-case determination is to be used to en-
sure that the functional requirement specification document is complete, that there
are no versioning problems, and that implemented functions work as intended.
III. Automated installation and system integration testing is to be performed for
each delivery to a customer site. Each delivered product is to come with an auto-
mated installation test suite guaranteeing that the product works properly in the
customer's software and hardware environment.
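An automated installation test suite of the kind described in item III can be sketched as a checklist of environment probes, each yielding a pass/fail verdict; the individual checks shown are generic examples, not the actual CORONA CS suite.

```python
# Sketch of an automated installation test suite: a series of probes
# run after installation at the customer site, each returning a
# pass/fail verdict. The individual checks are generic examples only.
import sys
import tempfile

def check_python_version():
    # Example probe: the runtime meets a minimum version requirement.
    return sys.version_info >= (3, 0)

def check_temp_writable():
    # Example probe: the environment allows writing temporary files.
    try:
        with tempfile.NamedTemporaryFile() as f:
            f.write(b"probe")
        return True
    except OSError:
        return False

def run_install_tests(checks):
    """Run every probe; return (all_ok, {probe_name: result})."""
    results = {fn.__name__: bool(fn()) for fn in checks}
    return all(results.values()), results
```

Shipping such a suite with each delivery lets the product verify its own environment, which is what "guaranteeing that the product works properly in the customer's software and hardware environment" amounts to in practice.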
The experiments will be conducted in the context of the realisation of a new
version of CORONA CS, a client/server solution for the automatic reconciliation
of accounts.
Management Data Vienna has 132 employees, with about 50% of them being
employed in the software development unit. A staff of 15-20 is involved in the
CORONA CS team.

Expected Impact and Experience


Management Data expects to benefit from the optimised quality of the software product, leading to an estimated cost reduction of about 20-40% owing to the decrease in the number of problem-fixing releases from 4 to 1. This will bring about further cost reductions, e.g. in the support effort and in
other MD software projects. Besides, project-management procedures and co-
operation between the product owners and the developing unit will be optimised;
customer satisfaction will rise, installation time and effort will be reduced, which
altogether is sure to boost our organisation's image as a reliable and highly profes-
sional software provider.
Management Data, Datenverarbeitungs-
und Unternehmensberatungsges.m.b.H.
Althanstrasse 21-25
1090 Vienna
Austria

10.45 RESTATE 23978

Reuse of System Test Cases through Applying a TTCN Environment
Business Motivation and Objectives
Market openness, technological variety and rising customer expectations are forc-
ing vendors in the field of telecommunications to come to market faster with
higher quality products. Bosch Telecom as a developing and manufacturing com-
pany in this area is obliged to react to these requirements.
Currently used system test methods, notations and environments are expensive,
difficult to use and very time consuming. With the ISO standard 9646 including
TTCN (Tree and Tabular Combined Notation) a growing technology has been
established to overcome most of these problems.
The main objective of the experiment is to reduce effort and development time
by the automation of the test derivation process and by the reuse of formal test
case descriptions.

The Experiment
The experiment will use TTCN to introduce a formal notation for specifying ab-
stract system test cases which are then automatically transformed into executable
programs. These may be used and reused at system test but also during develop-
ment and maintenance. Thus the experiment consists of two steps.
The first step is the system testing of a product feature with TTCN, by means of
the specification, translation and execution of TTCN test suites.
The second step is the integration of a TTCN test system in the development
environment, and the use of that system together with the test suites arising from
the first step.
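The transformation of abstract test cases into executable programs can be sketched as follows; this is a rough Python analogy, not actual TTCN notation, and the message names and toy system are invented.

```python
# Rough analogy (not actual TTCN) of turning an abstract test case --
# a sequence of send/expect steps -- into an executable test against
# a system under test, ending in a TTCN-style verdict. The step
# format, message names and toy 'system' are all invented.

ABSTRACT_TEST_CASE = [
    ("send", "SETUP"),
    ("expect", "SETUP_ACK"),
    ("send", "RELEASE"),
    ("expect", "RELEASE_ACK"),
]

def execute(test_case, system):
    """Run the abstract steps against `system` (a message -> reply
    callable); return a verdict string."""
    last_reply = None
    for kind, msg in test_case:
        if kind == "send":
            last_reply = system(msg)
        elif kind == "expect" and last_reply != msg:
            return "fail"
    return "pass"

def toy_system(message):
    # Stand-in for the real network element: acknowledges every message.
    return message + "_ACK"
```

Because the abstract test case is plain data, the same description can be replayed at system test and again during development and maintenance, which is the reuse the two steps above aim at.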
The baseline project will be the comparable test tasks of the private communi-
cation network division performed with the presently prevailing development and
test methods. The software department employs approx. 120 employees, 80 of
them involved in the release development of the communication network which
includes the baseline project.

Expected Impact and Experience


Bosch Telecom expects to gain expertise in using a formal test specification lan-
guage and an automated test environment to reduce testing effort, thus shortening
development by 10% (e.g. 150 KECU per release) and reducing test effort during
customer-specific adaptation or maintenance (e.g. 5 KECU each).
The main expected experience is winning developer acceptance of a formal language and investigating the possibilities for using TTCN in the integration phase.

BOSCH Telecom GmbH
Kleyerstrasse 94
60326 Frankfurt am Main
Germany

10.46 SDI-WAN 10494

Software Development Improvement within a Group of SMEs, Focussed on Wide Area Network Management
The stated objective of the SDI-WAN ESSI project was, for the Tecnomet group,
to achieve a fully repeatable maturity grade (level 2 in SEI CMM and Bootstrap
grading) in its software development process, in order to allow for higher produc-
tivity and quality.
The work done within the SDI-WAN project started from the initial Bootstrap assessment (which pointed out the weaknesses of the software laboratory and defined the most effective actions to be undertaken) and went on with two partially independent activities. The first was the definition of the software development cycle: the resulting model was the four-phase simplified model detailed in [16], which was validated through its application in the PACMAN project. The second was the selection of the CASE tool supporting the methodology and helping the software designer in an effective adoption of the methodology itself.
While the newly defined development cycle was applied to the PACMAN project, a set of widespread training events was organised, attended by all the personnel of the Tecnomet Group and in particular by the software designers.
A final Bootstrap assessment was performed at the end of the project, which
was focused on the PACMAN project, in order to measure the actual improvement
introduced by methodology innovations defined during the SDI-WAN project.
The results achieved within the project are definitely positive, since the defined development cycle model has been coupled with a methodology for the analysis and design phases. Moreover, all the needed standards have been defined to support the methodology, being either templates to easily prepare the required documentation (i.e., software requirements, test case specification, test report) or standards automatically provided by the CASE tool (i.e., GRADE).
The next actions will be the application of both the development cycle model
and the analysis and design methodology to the other projects of the software
laboratory.
Tecnomet Pescara Spa
Via Volturno 135
Italy
270 10 Summaries of PIE Reports

10.47 SIMTEST 10824

Automated Testing for the Man Machine Interface of a Training Simulator
The project is funded by the EEC in the frame of the ESSI programme; the goal of ESSI is to promote improvements in the software development process industry, to achieve greater efficiency, higher quality and greater economy.
This initiative required selecting a baseline project (i.e. an existing development activity) upon which a new approach to some part of the life cycle could bring significant results in the above-mentioned areas.
Dataspazio has successfully proposed SIMTEST, the evaluation of automated testing (i.e. testing supported by a software tool) of the real-time simulator of the SAX satellite (SAXSIM).
The experiment is meant to evaluate the differences between a "traditional" approach to testing (i.e. manual execution of test procedures coded by the developer) and the automated approach (i.e. automatic execution of test procedures where part of the code is automatically obtained by describing the expected behaviour of the test).
The automation of integration and test has shown important results:
• less expensive testing
• more reliable procedures
• test repeatability.

The main lessons learned from this experiment can be summarised as follows:
• it is better to take the use of these tools into account at a very early stage of system design, as they may require architectural and/or implementation peculiarities;
• automatic test procedures can be more exhaustive and can produce better coverage w.r.t. the traditional approach. This is paid for with extra effort in their preparation, although capture and playback features can ease the MMI test coding. An economical return is mainly possible for systems which undergo many changes during their operational life and for which non-regression testing is a significant cost;
• the quality and robustness of the software produced are better assured, as deeper and more intensive tests can easily be implemented and run at low cost;
• the repeatability of the tests is really guaranteed using an automated tool, as there is no way to misunderstand or not fully perform a test step, as during manual test execution;

• if properly planned, the testing tool can do much more than just test the system: it can effectively be used to encapsulate the application under development, reproducing the context environment and facilitating the integration.
As a result of the experiment, our Company is carrying out an evaluation of the use of the same approach for a new development in a similar application; the quantitative data collected during the experiment are the basis for this decision.
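The capture-and-playback idea mentioned in the lessons above can be sketched minimally: interactions with the system are recorded once, then replayed automatically so that any divergence from the captured outputs flags a regression. The API below is hypothetical, not the actual SIMTEST tool.

```python
# Minimal capture/playback sketch for non-regression testing.
# "system" is any callable mapping an input command to an output;
# the Recorder and replay function are invented for illustration.

class Recorder:
    def __init__(self, system):
        self.system = system
        self.log = []          # captured (input, output) pairs

    def send(self, command):
        output = self.system(command)
        self.log.append((command, output))
        return output

def replay(system, log):
    """Re-execute recorded inputs and compare against captured
    outputs; any mismatch is reported as a regression."""
    failures = []
    for command, expected in log:
        actual = system(command)
        if actual != expected:
            failures.append((command, expected, actual))
    return failures

# Capture against version 1 of a (toy) simulator ...
v1 = lambda cmd: cmd.upper()
rec = Recorder(v1)
rec.send("start telemetry")

# ... and replay against a later version: an empty list means no regression.
print(replay(v1, rec.log))
```

This is why the economical return is greatest for systems with long operational lives: the recording cost is paid once, while each subsequent release reuses the replay for free.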
Dataspazio - Telespazio e Datamat
per l'Ingegneria dei Sistemi SpA

10.48 SMUIT 21612

Systematic Module and Interface Testing


The software process improvement experiment SMUIT was executed by ABB Netzleittechnik, Ladenburg. ABB Netzleittechnik offers its customers a network control system, called S.P.I.D.E.R., integrating functions like energy management, low and medium voltage distribution, and supervisory control and data acquisition. The development of S.P.I.D.E.R. is distributed over three sites in three different countries (Germany, Sweden, Switzerland). In order to manage its software projects, ABB Netzleittechnik has developed a software process model which takes into account all major aspects of software projects: organisation, planning, realisation and control. While experience showed that the results of the early development activities (e.g. functional specification) are of reasonable quality, major problems and deficits existed in module testing and testing of interactive user interfaces. A particular weakness concerned the early detection of errors and the efficiency of testing. These findings were also confirmed by a CMM assessment conducted in February 1995. Consequently ABB Netzleittechnik started an SPI initiative. The PIE SMUIT was a part of this initiative. The main objectives of the PIE were:
• Definition and implementation of a systematic test process for modules and
user interfaces
• Definition of a tool environment supporting the test activities
• Measurement of test activities and the collection of relevant data
• Collection of experiences on systematic testing for internal and external dis-
semination
SMUIT is now closed. In its first half a testing process was defined. Furthermore, template documents for test planning, test specification writing, and test reporting were prepared. Constrained by the development environment of ABB Netzleittechnik, a CAST tool for module test was evaluated and selected. After having trained the project team both in testing methods and in using the selected tool, the tool was customised to the specific needs of S.P.I.D.E.R. development and integrated in the existing tool environment.
The second half of the project was dominated by two circumstances:
• We were not able to identify modules that could be tested separately. The reason for this was that the S.P.I.D.E.R. system is highly integrated, with strong relationships between its parts. Furthermore we found that we had underestimated the effort needed to develop test environments in terms of test drivers and test stubs.
• ABB Netzleittechnik had to reduce its effort for SMUIT. As a consequence some team members were no longer available.
Based on these circumstances, the project team decided to re-plan the project. Having in mind our former experience concerning test environment development, we had to realise that we were no longer able to perform dynamic tests as planned in the project programme. Consequently we decided not to invest much in preparing and performing dynamic tests. Instead we concentrated on static code analysis, as we expected that its results could give good hints in terms of complexity and testability of S.P.I.D.E.R. Furthermore our decision was driven by the wish to save most of the investment made in the first half of the project and to obtain substantial results valuable for ABB Netzleittechnik as well as for other software development organisations.
Nevertheless we regard the results obtained during the PIE as very valuable for ABB Netzleittechnik as well as for a wider community. Consequently we present in this report mostly results and experience gained in applying our systematic approach to static code analysis. Applying this approach, we were able to make the quality of S.P.I.D.E.R. modules more transparent to the management. We could provide quality attributes (e.g. complexity) to the management, offering a better basis for making the planning of maintenance and adaptation projects more accurate. This can result in the future in more accurate schedules and budgets. Furthermore our results can serve as a sound basis to improve decision processes.
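The kind of static measure discussed above can be illustrated with a deliberately crude sketch: a cyclomatic-complexity count (1 plus the number of decision points) over source code. The commercial tool used in the PIE is not named in the report; this only shows the principle, using Python's standard `ast` module on Python source as a stand-in.

```python
# Hedged sketch: approximate cyclomatic complexity as
# 1 + number of branching constructs found in the syntax tree.
# A real static-analysis tool computes this per function and on
# the project's actual implementation language.

import ast

def cyclomatic_complexity(source: str) -> int:
    decisions = (ast.If, ast.For, ast.While, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, decisions) for n in ast.walk(tree))

module = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(module))   # 1 + two if-statements = 3
```

A module scoring far above its peers is a candidate "software tumour" in the sense used below: a spot where maintenance effort and test effort concentrate, which is exactly the transparency the quality attributes gave to management.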
The technical and non-technical results of this PIE can be summarised as follows:
• Code analysis is a good means to detect weaknesses and "software tumours" in
software systems.
• Code analysis is able to reduce the effort needed to maintain new or unknown
software components.
• The results obtained by code analysis have to be interpreted by a human.
• Every software development environment shall contain a tool supporting code
analysis.

• You need sufficient theoretical and technical know-how to apply code analysis
systematically.
• To get all benefits from code analysis it has to be integrated in the organisa-
tion's software process model.
• Code analysis must not be applied to measure the capability of developers.
• The results of code analysis serve as an input to support management decision
processes.
• Code analysis will only be accepted and applied by developers, if they get
benefits from it.
The know-how gained during the PIE is "stored" by means of a "test office" which was established and is working at ABB Netzleittechnik. The main reason behind the concept of a test office was that the core competence and knowledge needed to systematically perform our testing and analysis procedure should be concentrated. The test office offers method and tool know-how as well as practical support concerning all testing activities to all S.P.I.D.E.R. development teams.
The software process improvement project reported here was funded as a so-called Process Improvement Experiment (PIE) by the ESSI programme of the European Community (ESSI stands for European Systems and Software Initiative).
ABB Netzleittechnik GmbH
Postfach 1140
D-68520 Ladenburg
Germany

10.49 SPIDER 21394

Software Process Improvement Directed to Errors Reduction
Business Motivation and Objectives
ETRA's main business is complex turn-key control systems (including software, hardware, sensors, and telecoms) in the domains of traffic, transport and public services (lighting, etc.). The company's concern for quality and improvement, particularly as regards software production, makes it absolutely important to continue the stepped process successfully started with the ESSI 10493 MACRO project. There is a special interest in reducing maintenance and error costs by between 15 and 20% by achieving a detailed definition and implementation of the Test, Installation and Maintenance phases as well as of Configuration Mgmt., Requirements Mgmt. and Errors Mgmt.

The Experiment
ETRA's products typically have a long life cycle, of the order of five to ten years. This fact, together with their complexity, makes Maintainability, Errors Management (EM), Tests, Configuration Management (CM) and Requirements Management (RM) crucial issues. Up to now, none of these issues have been satisfactorily tackled.
The pilot application to be used in SPIDER as the Baseline Project (BP) to carry out the PIE will be the kernel of ETRA's Traffic Control System.
The User Requirements, Analysis, Design and Coding of the BP will be revis-
ited, and the processes and documentation standards defined in MACRO will be
applied.
The testing, installation and maintenance phases will be formalised and the cor-
responding procedures defined, so that they are implemented, in co-ordination
with the user of the system, within the frame of the BP.
In parallel with the above, the definition and implementation of the EM, RM and CM processes will be carried out. Special attention will be paid to the measurement of cost effectiveness and of the level of errors reduction.
ETRA employs 70 people, 25 of whom are involved in the development unit.

Expected impact and experience

• To extend an improvement culture throughout the group of companies of which ETRA is the head, and to develop a mechanism to incrementally improve all the phases of the life cycle on the basis of the feedback provided by the errors analysis.
• To improve the profile of the Software Department, facilitating a future ISO 9000 certification.

ETRA SA
Tres Forques, 147
46014 Valencia
Spain

10.50 SPI 23750

Software Projects Improvement Process


Business Motivation and Objectives
Due to ever-growing user needs, software development complexity has reached a level where automated tools and methodologies have to be used. Our software development process needs reinforcement, especially in the areas of testing and configuration management.
The PIE objectives are part of a larger programme of improving our software development processes and practices. The software improvement process, although a goal by itself, is driven by a wider, more important organisational and business goal: positioning us as a provider of high quality software products.

The Experiment
The experiment will include selecting tools and methodologies for configuration
management and testing. The tools and methodologies will be used by the baseline
project for 9 months. During this time data will be gathered and the results of the
experiment will be evaluated according to this data. Based on the evaluation,
changes will be introduced (if necessary) and a plan for assimilation will be de-
veloped.
The experiment will be performed at our office on a client/server project of about 15 man-years. Onyx Technologies employs 90 full-time employees, 70 of them involved in software engineering, 6 of whom work as part of the baseline project.

Expected Impact and Experience


The expected impact includes: increase in customer satisfaction, reduction in the
number of defects in the developed products, reduction in human resources needed
for maintenance and increase in productivity.
ONYX TECHNOLOGIES
10 Kehilat Venezia St
Tel Aviv 61241
Israel

10.51 SPIRIT 21799

Software Process Improvement: Recondition Information Technology
SPIRIT is a subproject of the software process improvement project of Baan Development. Baan's business can be characterised as that of a provider of enterprise resource planning (ERP) software packages (standard software). These products come under the names TRITON (old) and BAAN (new).
Motivations for SPIRIT were: Baan Development was assessed at CMM level 1 (1994); the global release of a complex product requires improved reliability: TIME & QUALITY; major customers appreciate commitment to Process Improvement.
276 10 Summaries of PIE Reports

The approach of SPIRIT was a comparison between the old Software Development Process (SDP), as used for the development of the Manufacturing package in the TRITON 3 release, and the improved SDP, as used for the development of that package for the BAAN V release.
The aims of SPIRIT were:
• Uncontrolled delay: from 18% to < 5%
• Bad fixes per 100 calls: from 4 to 2
• Post-release defects per Kloc: from 2 to < 1
The enormous growth of Baan had some impact on SPIRIT: an extension of the duration (from 18 to 24 months) appeared necessary due to a rescheduling of the baseline project, and the second aim (bad fixes) had to be adapted because of a reorganisation of Baan Development. Moreover the approach of SPIRIT required adaptation because insufficiently reliable data on TRITON 3 was available.
This document describes the four topics highlighted below:
Project goals: SPIRIT aims to demonstrate that improvement of the Software
Development Process (SDP) results in predictable SW-delivery and quality.
Work done: SPIRIT incorporates assessment, development and implementation
of software development process (SDP) practices following the CMM approach
for development of BAAN V Manufacturing.
Results achieved: SPIRIT reveals that we currently have the capabilities for CMM level 2 and are approaching CMM level 3.
Significance: SPIRIT enables organisations to benefit from our key lessons: how to structure an SDP to enable controlled development in parallel with further process improvements; how to make an SDP description accessible to facilitate its application in daily practice; how to measure the kind of objectives defined for SPIRIT and immediately feed back the measured results to all persons involved, so that they are enabled and motivated to adjust in time.
The Commission of the European Community (CEC) funded SPIRIT as a process improvement experiment (PIE), under European Systems and Software Initiative (ESSI) project number 21799.
Baan Company N.V
P.O.Box 143
3770 AC Barneveld
The Netherlands

10.52 STOMP 24193

Systematic Test of Multi-platform Product


Business Motivation and Objectives
• cost reduction for software maintenance (50% reduction of error-costs within
the first 6 months after release shipment)
• establish tool supply for software tests

• build up know-how for employees in software testing


• cost reduction of 50% for the porting of software to UNIX-platforms
• improve methods and procedures for software engineering

The Experiment
• definition of procedures for systematic tool-based software testing
• build up database with test cases
• training of staff in test methods and tools
• compare new test method with existing test procedure
• detect higher amount of defects before shipment to the customers

Expected Impact and Experience


• improve existing phase model for software development
• higher qualification of staff concerning software testing
• employees realise benefits of systematic software testing with tools
• achieve higher standards in software quality
• continuous improvement of software test process
• reuse of test case database in further releases for other platforms

TECHNODATA
INFORMATIONSTECHNIK GmbH
Postfach 1346
71266 Renningen
Germany

10.53 STUT-IU 21160

Statistical Usage Testing for Industrial Use


Software testing is a very important part of software development. Total costs for
testing are approximately equal to total costs for software design. Customer re-
quirements for product quality are very high. A new software testing method,
statistical usage testing or STUT, is expected to provide essential improvements to
the quality and productivity of the test process.
The STUT-IU (statistical usage testing for industrial use) project, in which the STUT method was experimented with, is now finished. Evaluation results regarding the applicability of the STUT method to the baseline project are now available. The results of the project are likely to interest people within and outside Ericsson who have knowledge of black-box software test methods or who are interested in how to implement software process changes. The goal of the STUT-IU project was to demonstrate and evaluate the applicability of the STUT method on an industrial scale. In the project, the STUT method was adapted, in its applicable parts, to the baseline test process.
In the STUT method a new concept, the Function Usage Model (FUM), is used. It has proved to be a powerful way to understand and describe the functionality and to use it as a basis for test case specification. Usage modelling has obvious benefits when the function can be described by a black-box interface and when a graphical view of the function helps to learn the functionality. Results show that the FUM must be a compromise between real usage and the test requirements of Function Test. Also, a good STUT tool is essential in order to be able to industrialise the STUT method properly. When the STUT method was adapted to our baseline test process, it was noticed that the role of reliability estimates was not as great as was supposed earlier.
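A usage model of the kind described above is commonly realised as a Markov chain: transition probabilities reflect expected real usage of the function, and test cases are random walks through the model, so the most frequent usage paths are tested most often. The states and probabilities below are invented for illustration and are not Ericsson's actual FUM.

```python
# Illustrative Function Usage Model as a Markov chain.
# State names, probabilities and the tiny API are assumptions.

import random

# state -> list of (next_state, probability)
usage_model = {
    "idle": [("dial", 0.9), ("end", 0.1)],
    "dial": [("talk", 0.7), ("idle", 0.3)],
    "talk": [("idle", 0.8), ("end", 0.2)],
}

def generate_test_case(model, start="idle", rng=random):
    """Walk the usage model from start until the terminal state,
    returning the visited states as one statistical test case."""
    path = [start]
    while path[-1] != "end":
        states, weights = zip(*model[path[-1]])
        path.append(rng.choices(states, weights=weights)[0])
    return path

case = generate_test_case(usage_model, rng=random.Random(1))
print(case[0], case[-1])   # every case starts at "idle" and ends at "end"
```

Generating many such walks yields a statistically representative test suite, which is also what makes reliability estimation from the observed failures possible in principle, even though the report found that role smaller than expected.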
In the STUT-IU project it has been proved that the STUT method is technically applicable to the baseline project. Evaluation results indicate that slightly better quality is obtained with the STUT method than with the baseline test method. They also show that it is beneficial to apply the STUT method to the network and traffic function classes in the short term. For these function classes productivity has increased a little. As the figures in the productivity calculations are based on rather little data, they should only be interpreted as indications, not as statistically strong conclusions.
In the next baseline project the STUT method will be applied to the network function class, and the applicability of the STUT method will be studied in more detail for the traffic function class. A new STUT tool that fulfils the required criteria will be taken into use. In the long term, essential improvements should be made to test execution support so that wider application of the STUT method would be profitable.
Oy LM Ericsson Ab
FIN-02420 Jorvas
FINLAND

10.54 SWAT 23855

Software Automated Testing in a Real Time Environment


Business Motivation and Objectives
TSc is a supplier of international telecommunications equipment which contains elements of integrated hardware, software and mechanical engineering. The software element continues to grow in importance and size and is now often the only differentiator in the market. With this increasing dependence on software, TSc wants to continue to increase its software reliability and quality. Improved software testing is a key element required, and the SWAT automated software testing project is part of a wider drive within TSc to deliver improved software. This will give external business benefits of improved time to market by increasing the speed of testing, increasing the number of defects found and reducing the time taken to fix them by providing better test feedback.

The Experiment
The achievement of the above objectives requires the improvement of the software testing phase by performing the following tasks:
1. Perform a "Health Check" on the existing Automatic Test System and update it.
2. Measure software complexity and feed the results into the test creation activity.
3. Compare manual and automatic testing on the baseline project.
This SWAT PIE involves automating the test phase of a separate baseline software development project (typically a future release of Central Unit software). The baseline project will be selected on the basis of its suitability for use in the SWAT PIE. This baseline software requires a team of around 6 engineers over an extended period, which involves Alpha and Beta test phases. A 50% reduction in the length of these phases and an increase in the number of defects found are the SWAT PIE aims. TSc employs around 470 people, of which 45 are involved in software development.

Expected Impact and Experience


The current test process relies heavily on manual testing. By automating testing,
an increase in the reliability of the product software will be achieved, concurrently
with a decrease in the time taken to carry out the testing. The decrease in the test
time will allow the product to reach the market place earlier, leading to commer-
cial advantage. Additionally the discovery of software defects earlier in the soft-
ware lifecycle leads to reduced defect repair costs and improved customer percep-
tions. The experience gained will be applicable to other real-time software
applications within TSc and other interested companies.
Telecom Sciences Corp. Ltd
Victoria Place
Airdrie
Scotland, ML6 9BL, UK

10.55 TEPRIM 21385

Test Process Improvement: Library of Reusable Test Cases, Centralised Test Documentation Management, Metrics
In order to gain cost reduction, greater quality (product, service) and better customer satisfaction, the TEPRIM PIE aimed to improve the testing process by implementing:
• An electronic test documentation management system, based on a centralised repository of all the test data (product specification, test plan, test cases, errors, cost, etc.);
• The automatic recording and automatic re-execution of test cases;
• The review of metrics and measurements according to the product evaluation model defined in ISO/IEC 9126 and the implementation of process metrics.
In fact the testing phase is, for us, the most labour-intensive, time-consuming phase in the software project life cycle; in addition, testing effort is about 30% of the total project effort and the activities are based on the knowledge of few specialists.
The TEPRIM project ran from January 1996 till June 1997 and all project activities were successfully completed. The TEPRIM project results are addressed to software companies whose core activities are software development, software maintenance and related services. In particular, the test data model and the specific testing environment can easily be reproduced in companies which use the AS/400 or PC platform for software development and maintenance and the PC platform for testing activities.
Some figures, diagrams and graphical images have been included in the annexes to help the processing of the report.
Here is a brief summary of the work done and the results achieved:
• Initial process assessment of the Engineering processes (ENG.5 Integrate and test software; ENG.7 Maintain system and software) according to the SPICE model. The conformity profile (weaknesses and strengths) of the selected PIE processes was identified and specific improvement steps were established.
• A final formal assessment of the same processes (ENG.5 and ENG.7) according to the SPICE model. The comparison of the initial assessment with the final assessment has brought out, in an objective way, the improvement obtained from the PIE execution.
• Identification of adequate software testing tools (available on the market) and
the building up of a new testing environment on PC platform. The new testing
environment is based on interactive testing tools for testing recording/playback,
tools for test data documentation and data error tracking.
• Identification of a new test data model and the building up of an electronic documentation management system, working in a C/S architecture, with a repository of all the test data records. A total of 40 data items were selected to cover test plan, test case information, test execution data and test errors.
• Execution of two pilots, with 450 test cases in electronic, reusable form, 170 test cases recorded and re-executable, and test errors stored and all test data usable for statistics and improvement.
• Identification, experimentation and validation of a Quality profile (set of metrics) for the Business Management System (BMS) product following the AMI approach and ISO/IEC 9126.
• Acquisition of advanced skills on testing tools and methodologies, on SPICE
assessment model and on process/product metrics (AMI approach).
• Introduction of a workgroup organisation for testing management activities.
• Specific internal and external dissemination activities were implemented to give wide visibility to the experiment, including the availability of a web page (http://max.tno.it/esssiteprim/teprim.htm).
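The centralised repository idea behind the work items above can be sketched as a single record type linking test plan, test case, execution data and errors, from which process metrics are derived. The field names below are invented for illustration; the actual TEPRIM data model comprised 40 data items.

```python
# Hypothetical sketch of a centralised test-data repository record
# and one simple process metric computed over the repository.

from dataclasses import dataclass

@dataclass
class TestRecord:
    plan_id: str        # test plan the case belongs to
    case_id: str        # test case identifier
    executed: bool      # execution data
    errors_found: int   # errors logged against this execution

def error_density(repository):
    """Process metric: errors found per executed test case."""
    executed = [r for r in repository if r.executed]
    if not executed:
        return 0.0
    return sum(r.errors_found for r in executed) / len(executed)

repository = [
    TestRecord("BMS-P1", "TC-001", True, 2),
    TestRecord("BMS-P1", "TC-002", True, 0),
    TestRecord("BMS-P1", "TC-003", False, 0),
]
print(error_density(repository))   # 2 errors over 2 executed cases = 1.0
```

Keeping all test data in one repository is what makes such metrics cheap to compute across releases, which is the basis for the statistics-and-improvement use mentioned in the pilot results.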
The results achieved were considered very positive and the use of the new testing environment was extended to other software product development projects. Plans are in place for Company-wide implementation of TEPRIM and for further dissemination activities on both internal and external levels.
The TEPRIM project has been funded by the Commission of the European Communities (CEC) as a Process Improvement Experiment under the ESSI programme.
IBM SEMEA SUD s.r.l

10.56 TESTART 23683

Improvement of Software Testing Phase, Especially with Respect to Requirements Management and Change Control
Business Motivation and Objectives
Israel Aircraft Industries (IAI) develops, manufactures and maintains embedded systems for the aerospace and large electronics systems market. Software is an essential part of all IAI's advanced products. We are interested in enhancing the efficiency and effectiveness of the software testing phase, a major quality factor and cost driver in complex mission-critical systems, especially with respect to automation of requirements coverage and change control.
The objective of this experiment is to reduce the time and effort required for testing activities without reducing product quality, thus reducing the cost of development to improve our competitiveness and shortening time to market while improving customer satisfaction.

The Experiment
The aim of the proposed Process Improvement Experiment (PIE) is to improve the
software testing phase at the component and at the integration level. The experi-
ment especially focuses on the relation between testing and requirements coverage
which is central to achieving satisfaction of our customers needs. This experiment
will refer to following aspects of the development process:
1. Software testing during the various software testing phases, to obtain satisfactory coverage within project time and budget constraints, using the Logiscope tool (or similar).
2. Management of system requirements allocated to software using the RTM tool (or similar).
3. Management of requirements changes using a database application developed at IAI (RCR). This will be used to control requirements changes: description of the desired change, cost estimation of the change, impact and risk assessment of the change, and recording of the resolution. The output of this change control process will drive the changes to the baseline requirements database in RTM.
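The change-record fields listed in item 3 can be sketched as a data structure with the gating rule that only resolved, approved changes flow back to the requirements baseline. This is a hypothetical illustration; the real IAI RCR application is a database, and these field names are assumptions.

```python
# Hypothetical change-control record mirroring the fields above:
# description, cost estimate, impact/risk assessment, resolution.

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    req_id: str                 # baseline requirement affected (in RTM)
    description: str            # desired change
    cost_estimate_days: float   # estimated effort of the change
    impact: str                 # impact and risk assessment
    resolution: str = "open"    # approved / rejected / open

def approved_changes(requests):
    """Only approved records drive updates to the baseline
    requirements database."""
    return [r for r in requests if r.resolution == "approved"]

backlog = [
    ChangeRequest("SRS-042", "extend telemetry rate", 12.0, "medium risk",
                  resolution="approved"),
    ChangeRequest("SRS-107", "new display page", 30.0, "high risk"),
]
print([r.req_id for r in approved_changes(backlog)])
```

Filtering on the resolution field is what keeps the requirements baseline stable: unresolved or rejected changes never reach RTM, so test coverage is always measured against an agreed baseline.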
The baseline project is an advanced avionics system enhancement, including hardware modifications as well as extensive software development of its five main components. The development effort is presently estimated at over 20 man-years. This baseline project is typical for IAI and reflects the commercial market trend of our products, which demands high quality, increasingly complex functionality and short development time.

Expected Impact and Experience


IAI expects to increase test coverage of requirements, increase the percentage of
code exercised in testing and reduce integration testing cycle time. Consequently
we expect to reduce software development costs while increasing product quality,
especially in mission critical systems.
In the PIE we want to demonstrate the following Measurable Objectives:
• Increase test coverage of requirements by 15%;
• Reduce integration testing cycle time by 5%;
• Reduce software testing costs by 10%;
• Increase percentage of code exercised in testing by 15%.

Israel Aircraft Industries


Corporate R&D (Dept 9100)
Ben Gurion Airport
70100 Lod, Israel

10.57 TESTING 21170

Testing Methodology for Technical Applications


Business Motivation and Objectives
In February 1995 LABEIN was awarded ISO 9001 registration for its software development activities. One of the main comments of the ISO 9001 auditors concerning LABEIN's Quality System was related to the testing procedures, which could be subject to much improvement. The lack of a good testing methodology had been recognised for some time to be a source of high cost in obtaining commercial levels of quality, and is frequently the cause of budget and/or schedule slips. The project objective is to improve the testing process at LABEIN so that it can be done quicker and cheaper, as well as to improve the quality of the final product.

The Experiment
The objective above will be achieved by developing the appropriate testing
procedures; selecting the most suitable tools to help developers during the
testing phase, both in the testing itself and in test management and accounting;
refining the method through its application to the pilot project; and
disseminating the method both internally and externally. The baseline project
will be the SRS I and II projects. These are parts I and II of a technical
decision-support system running on UNIX workstations connected on-line with a
SCADA (Supervisory Control And Data Acquisition) system controlling a large
electricity transportation network. SRS-I is a 10 man-year system completed
during the first quarter of 1995, while SRS-II, the second part of SRS, is a
4 man-year project scheduled to last from July 1995 through November 1996.
Most of the data from
project SRS-I are already available, such as effort spent in testing and bugs
reported, among others. Since SRS-II includes new functionality plus the
deployment of the system at a new customer, real quality data such as bugs
found and number of complaints will be available for use in this PIE, and the
improvement in the testing phases of SRS-II due to the PIE can easily be
compared with earlier data from SRS-I.

Expected Impact and Experience


The main expected benefit of the experiment is the availability of a Methodology
which provides support in the accurate dimensioning and management of the test-
ing activities in software development projects, as well as tool support for these
activities. Personnel productivity is expected to increase through decreased rework
and earlier problem detection. At the same time, an increase in the quality of
the products will produce important savings in the maintenance phase, since
fewer bugs will be left in the system, which means important savings for the
customers and an improved commercial image of the developer. Also, since much
of the repetitive testing is expected to become automated, the maintenance
phase of products is expected to be cheaper and of better quality.
LABEIN
Parque Tecnologico Ed 101
48170 Zamudio (Bizkaia)
Spain

10.58 TESTLIB 21216

Use of Object Oriented Test Libraries for Generic Re-use of Test Code in Type
Approval Computer Controlled Test Systems

TestLib aimed to explore the use of C++ object-oriented test libraries for the
automated generation of reusable test software code in automated test systems
for telecommunication devices. The outcomes of this experiment are directly
applicable to a very wide community involved in instrumentation control
software development or measurement automation under computer control. By
applying the techniques involved in this experiment, the software engineering
and test engineering specialities interact within the test software development
process in a highly efficient way through task specialisation, allowing
development effort to be reduced by 30 to 40 percent. Project goals included:
• The automation of test software generation by using high level and friendly
syntax command language and a high level code compiler.
• The automated generation of a relational data base for allocating and handling
all test code components and definitions.
• The automated integration of generated test code within a Test Engine Core
application software, developed under the baseline project, to obtain the final
run-time test execution environment.
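The first goal, compiling a high-level, friendly-syntax command language into executable test steps, can be sketched as follows. The `SET`/`MEASURE` syntax and the parameter names are invented for illustration; they are not the actual TestLib command language.

```python
# Sketch of a high-level test-command language being "compiled" into
# executable test steps. The SET/MEASURE syntax is hypothetical and is
# not the actual TestLib command language.

def compile_script(script):
    """Translate 'SET <param> <value>' / 'MEASURE <param>' lines into steps."""
    steps = []
    for line in script.strip().splitlines():
        cmd, *args = line.split()
        if cmd == "SET":
            steps.append(("set", args[0], float(args[1])))
        elif cmd == "MEASURE":
            steps.append(("measure", args[0]))
        else:
            raise ValueError(f"unknown command: {cmd}")
    return steps

# A three-line test script for a hypothetical instrument:
script = """
SET frequency 900.0
SET level -60.0
MEASURE ber
"""
print(compile_script(script))
# [('set', 'frequency', 900.0), ('set', 'level', -60.0), ('measure', 'ber')]
```

A real compiler of this kind would emit C++ test code and database entries rather than tuples, but the translation step is the same idea.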
The work carried out in this experiment has unequivocally demonstrated that all
these goals were 100% achievable, assuring the mentioned development effort
reduction in future systems and rationalising the development organisational
structure. The resulting products of the development work performed, which
support this statement, are as follows:
• A library for test repertoire generation and handling has been delivered.
• A library for test sequence definitions, including the procedures for
generation of an arbitrary test flow sequence, has been delivered. These
procedures are based on the development of a high-level command language and
its associated compiler, also delivered.

• A library for the automated generation of data bases has also been produced.
• A library of generic instrumentation drivers has been generated, and an
importation tool for incorporating drivers of specific instruments from
different vendors, developed according to the emerging VXI plug&play industry
standard, has also been developed.
• A library for the automation of measurement software code generation in C++
has been delivered.
• The automation of the integration process for all the required modules into the
Test Engine Core (base line project) to generate, without user intervention, the
final run-time test application software.
• The encapsulation and integration of TestLib within the standardized,
commercially available CASE tool HP SoftBench, widely used in Unix (Sun, HP)
software development environments.
Due to the technical success of this project, the contractor has decided to
invest in further software development to create a commercial product based on
the proven technology used by TestLib. The product is planned for release into
the test & measurement automation market before the end of next year.
Integracion y Sistemas de Medida, SA
C/Esquilo, 1
28230 LAS ROZAS (Madrid)
Spain

10.59 TRUST 23754

Improvement of the Testing Process Exploiting Requirements Traceability

Business Motivation and Objectives
In the aerospace industry, product costs and time-to-market are today two key
competitive levers. The helicopter is a very software-intensive product, in
which the avionics software contributes more than 30% of the product costs and
40% of the overall time-to-market. The software Testing and Validation
activities significantly contribute to these high avionics software costs (50%)
and lead-time (40%).
The overall objective of the TRUST PIE is to improve the Testing and Valida-
tion Process, in order to reduce the related costs by at least 15% and to cut the
avionics software development time by at least 10%. The emphasis is on improv-
ing the process, rather than on just automating some of the testing/validation ac-
tivities.

The Experiment
In order to achieve this objective, requirements traceability will be exploited
as the means to directly and unambiguously relate subsets of testing and
validation sequences to specific subsets of requirements. The goal is to keep
track of what should be tested, and how, when requirements change or when fixes
to faults in a product Release/Variant have to be propagated to all the other
relevant active Releases/Variants.

Expected Impact and Experience


In the context above the expected impacts are:
• reduction of the development costs of new versions/variants
• improvement of the testing effectiveness and consistent reduction of testing
effort
• improvement of the global software life cycle efficiency due to better traceabil-
ity support

Participants
• Agusta - Un'Azienda Finmeccanica SpA: helicopter manufacturer, responsible
for the development of both the mechanical and the avionics systems
• TXT Ingegneria Informatica: a software house with broad experience in
software engineering practice and product development, involved as an external
assistance provider.

AGUSTA UN'AZIENDA FINMECCANICA S.p.A.
Via Giovanni Agusta 520
21017 Cascina Costa di Samarate
Italy

10.60 USST 23843

Usage Specification and Statistical Testing


USST - ESPRIT Project 23843 - is supported by the European Commission
within the European Software and Systems Initiative (ESSI) as a Process Im-
provement Experiment (PIE).
The objective of the USST experiment was to introduce Statistical Testing
based on a Usage Specification in Alcatel's test process for public switching
systems. We expected improved and predictable reliability of the delivered
system and improved test efficiency.
By concentrating the test effort on the intended use, USST helps to overcome
the cost-coverage dilemma, which is common to testing of all real systems.
Usage Specification means modelling the intended use of the system as a Markov
chain. From the Markov chain, a tool generates test cases by Monte Carlo
simulation.
The generated tests were applied to two steps of the test process:
• During functional tests the focus was on covering the whole model by tests in
order to remove as many defects as possible.
• During system test the focus was on statistically generated tests in order to
measure the reliability of the tested features.
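The approach can be illustrated with a minimal sketch: the intended use is modelled as a Markov chain over usage states, and test sequences are drawn from it by Monte Carlo simulation. The states and transition probabilities below are invented for illustration; they are not Alcatel's usage model.

```python
import random

# Hypothetical usage model of a switching system, as a Markov chain:
# each state maps to (next state, transition probability) pairs.
usage_model = {
    "idle": [("dial", 0.8), ("idle", 0.2)],
    "dial": [("talk", 0.7), ("hangup", 0.3)],  # 0.3: call abandoned
    "talk": [("hangup", 1.0)],
}

def generate_test(model, start="idle", end="hangup"):
    """Draw one test sequence from the usage model by Monte Carlo simulation."""
    state, sequence = start, [start]
    while state != end:
        next_states, weights = zip(*model[state])
        state = random.choices(next_states, weights=weights)[0]
        sequence.append(state)
    return sequence

print(generate_test(usage_model))  # always starts at 'idle', ends at 'hangup'
```

Because test sequences are drawn with the same probabilities as real usage, frequently used paths are tested most, which is what makes reliability measurement from the test results statistically meaningful.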
The experiment has shown the applicability of USST in the field of telecom
applications. The method, however, is transferable to other application
domains. The effectiveness of the tests in finding defects is significantly
better than that of conventional "hand-crafted" tests. The quality of features
tested by USST is very high; it exceeds the quality of comparable features that
were tested with the "traditional" approach.
However, the experiment has also shown some areas for improvement:
• For complicated features and feature interactions a sophisticated modelling
notation is necessary, "flat" state machines aren't sufficient.
• A smooth integration of tools is desirable. This would allow full automation of
tests: generation, execution and evaluation.
• The usage model could be a valuable input for SW design; therefore it should
be created early in the SW life cycle.

Alcatel SEL AG
Lorenzstraße 10
70435 Stuttgart
Germany

10.61 VERA 23732

Verification, Evaluation, Review and Analysis


The aim of the VERA (Verification, Evaluation, Review and Analysis) experiment
was to investigate the balance between the use of review techniques and testing
techniques to provide cost-effective error detection for real-time software systems.
The experiment involved the analysis of existing metrics for code reviews and
Fagan inspections, experiments with automated review techniques supported by
Rational's Ada Analyzer, and experiments with automated testing techniques
supported by Rational's TestMate. Data from the baseline project, including the
integration test phase, has been used to determine the cost-effectiveness of
the different review and test activities. Analysis has also been undertaken to
determine
which metrics best indicate where verification effort should be concentrated.
Results indicate that:
• Fagan inspections are two and a half times more effective at finding major
defects than code reviews.
• Static analysis can find 26% more major defects than code reviews.
Furthermore, only 5% of the defects detected through static analysis were also
detected by code review.
• The benefits of TestMate for unit test case generation are limited within the
baseline project.
• The people involved are important to the effectiveness of the process.
The conclusion of the VERA experiment is that the most cost-effective error
detection process for real-time software is static analysis coupled with Fagan
inspections.
This project was funded by ESSI under the PIE programme and will be of in-
terest to the radar control and processing software, real-time software, reliability
and embedded Ada software communities.
GEC-Marconi Radar and Defence Systems, Radar Division
Glebe Rd
Chelmsford, Essex CM1 1QW
UK

10.62 VERDEST 21712

Software Version Control, Documentation and Test Management

Project Goals
The objective of the PIE is to purchase and implement version control, document
management and automated software testing tools, in order to improve the
software design and development process in our company and in our associated
partner organisations.

Work Performed and Results


The project achieved its functional goals on schedule. The expected technical,
business, organisational and skill goals were mostly achieved. The actual cost
was about 70% of the estimated total cost.

The experiment project was divided into three phases.


The first phase of the experiment project focused on analysing our traditional
practices and defining the target ways to manage the practices and methods. We
also defined specifications of the tools to be used, then selected the software
tools for the experiment project and installed them for our use. The result of
the first phase was the summary documentation of the defined practices and
tools.
The second phase of the experiment project focused on the implementation
activities. This included more detailed definitions of workflow management and
of how these definitions should be brought into use in the daily operations and
routines of the baseline project. This phase also included user training. We
also obtained preliminary results concerning testing and version management.
The third and last phase of the experiment project covered document management
based on an Intranet solution, again including user training. Result
measurement, as well as comparison of the results with other projects, was also
included in this project phase.
TT Tieto Oy
Kutojantie 10
Espoo
Finland

10.63 VISTA 24153

Visualisation Software Testing Automation


The objective of the PIE was to introduce automated testing and management of
the testing process into the software life cycle at ECN. In particular, the
project is targeted at visualisation and graphical user interface software.
Automating the testing process is expected to lead to considerable time and
cost savings, and thus to a shorter time to market and a lower cost price for
ECN visualisation and user interface software packages. Furthermore, quality
improvements are expected.
Our experience at the beginning of the project showed that, as for every
activity to be automated, a more precise definition of the activities and roles
in the testing process, in particular in relation to the state of development,
is required first. Testing tends to be postponed to stages close to delivery,
where the effort to fix is most costly. Defining the process of test
structuring (design of test-ware) and the management of defect repair appeared
as important as automating the execution of tests. With these findings,
evaluation criteria were formulated for the test automation packages to be used
in the baseline project. An integrated management and test-replay tool-set was
found that matched our requirements. Engineers were trained in using the tools,
and a plan for introducing the tools in a running project, the baseline, was
formulated. Furthermore, the role of testing in our CMM-based software process
model was defined as a separate chapter in our system handbook.
Development activities in the software packages originally to be used in the
base-line project were postponed due to changed market conditions for nuclear
geometry viewing front-ends and back-ends. Therefore, another project was
chosen to serve as a base-line. Adjusting the planning for VISTA to the
planning for this project introduced a delay with respect to the original VISTA
time schedule.
An initial inventory of the testing process and error statistics in the
base-line showed that internal pre-delivery testing was undervalued. With the
aid of tool consultants, test-sets and test-scripts were built according to the
test specification. Structured validation was introduced in version deliveries
in the baseline project. Techniques developed in our baseline project have been
used to structure tests in another project, concerned with the analysis of
large time-series of measurements. This application was less complex in terms
of underlying hardware architecture and allowed us to demonstrate the value of
the new techniques much more quickly.
Test tools are now in common use at large software development firms. At some
firms, software test engineers outnumber software engineers. Introducing test
tools on the scale of our organisation (18 software developers) raises
problems. The complete set of tool functionality is large, and the view of the
application-under-test is significantly different from the software engineer's
view. Given the size of our software engineering group, this forced us to make
careful choices and not to be too ambitious in adopting tool functionality. As
a last activity in work package II, deliveries of releases of the baseline
project software are made after automated regression testing using the tools.
The results of our work package III, evaluation, indeed indicate the value of
structuring and automating the testing process. Furthermore, the tools seem
very promising for managing defect repair and for analysing the delivery
process. In our organisation, the qualitative evaluation and quantification of
our experiences and further dissemination have been performed. The value of the
new techniques for a number of typical application types was determined; an
application potential matrix is given per application type. Our experience is
that the size and scope of projects suitable for setting up a test organisation
are confined to those where application domain knowledge can be shared over two
teams. For all application types, structuring the test process is useful; for
large model-based client/server systems with large GUIs and for user-intensive
data-driven applications, automation of test execution and management is most
appropriate.
Looking at the system development process, testing automation should be set up
apart from the software design; testing and design engineers should consider
projects from different perspectives. Quantitatively, the amount of data
collected is too small to justify conclusions about the business advantage.
Qualitative information gathered from the participating engineers in the
project is very positive.
This report is an extension of the MTR of July 1998 [1]. Emphasis has been laid
in this report on the evaluation phase of the project and on an assessment of
the usability of the new procedures and techniques for software development
groups in R&D organisations comparable to ECN.
A first international dissemination of the results was given as part of an EPIC
workshop (Eindhoven, 28 April 1998). At EuroSTAR 1999 a final presentation of
the project will be given. A paper describing our experiences to the software
engineering community is in preparation [2].
ECN
P.O. box 1
1755 ZG Petten
The Netherlands

10.64 STAR 27378

Systematic Testing And Reviewing


B-kin Software is a small independent software development company,
specialising in process and information control applications for industry:
industrial process control, documentation and quality information management,
knowledge management, workflow, groupware and collaborative systems. Half of
its activity is custom applications, while the other half is a commercial
product, B-kin Links. This product is built entirely on object-oriented
technology with a true component architecture and an OODBMS. B-kin Links is our
core for the rapid development of collaborative applications for
information-item management, where items represent any material or immaterial
object characterised by some information and some behaviour.
B-kin Software is committed to Quality Assurance, with the goal of improving
the quality of its products and optimising the use of resources. This strategy
is market-driven: not only is a key customer interested in our testing,
requesting an improved process able to show statistical evidence that the
software covers the required functionality with fewer errors than the maximum
specified by contract, but it is also an important internal issue to reach an
adequate level of quality in B-kin Links.
This quality strategy was initiated last year with an improvement project aimed
at establishing the requirements management process. Following the success of
that project, this new, complementary project is now proposed to assure that
the agreed requirements are delivered as they were agreed, and to demonstrate
that to the customer in the delivery process.
The experiment has two objectives: to improve the efficiency of error detection
and removal along the project life cycle, by combining testing procedures at
module and integration level with a review method for each development product,
from requirements to delivery; and to assure a committed level of quality in
the delivered product, by applying statistical control of errors related to the
approved requirements.
Establishing a cost-benefit analysis through a controlled experiment will give
us the objective information needed to validate the procedures, methods and
tools defined to carry out a complete testing process in a baseline project.
With all other practices maintained as they are today during the experiment, we
will be able to measure the effect of the testing process on the key features
of quality, schedule and cost of the resulting software product.
For the PIE measurements we have defined internal quality, committed quality
and testing cost in the following way, according to the main objective of the pro-
posal:
• Internal Quality will be measured as the ratio of errors discovered in the
acceptance test to the errors reported and removed in the software test phase.
The current measure that will be used for reference is 25% to 40%, with a
figure of 37%, measured in our last project, as the control reference for the
pilot project.
• Committed Quality will be measured as the ratio of errors discovered in our
standard six-month guarantee period to the errors reported in the acceptance
test. The current measure that will be used for reference is 15% to 75%.
• Testing Cost will be measured as the actual time used in testing activities
(planning and execution), as a ratio over the total development cost. Our last
project, used as control, gives a reference cost of 27% (9 man-months) over a
total cost of 33 man-months.
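Each measure above reduces to a simple ratio between counts from successive phases. A minimal sketch, using the 9-of-33 man-month figure quoted in the text; the error counts behind the two quality ratios are hypothetical, chosen only to match the control figures:

```python
def pct(part, whole):
    """Return part/whole as a percentage rounded to the nearest integer."""
    return round(100 * part / whole)

# Testing cost: testing effort over total development effort
# (figures quoted in the text: 9 man-months of 33).
print(pct(9, 33))    # 27

# Internal quality: acceptance-test errors over software-test-phase errors
# (hypothetical counts chosen to match the 37% control figure).
print(pct(37, 100))  # 37

# Committed quality: guarantee-period errors over acceptance-test errors
# (hypothetical counts at the low end of the 15%-75% reference range).
print(pct(6, 40))    # 15
```

Tracking the same three ratios on the pilot project then makes the before/after comparison direct.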
The proposed target is to reach a radical reduction in the number of errors
delivered to the customer. To that end, two complementary actions have been
proposed: to improve the module and integration test phase, and to perform an
acceptance test phase able to demonstrate the satisfaction of the requirement
specifications without the need to perform an exhaustive test of each
individual requirement, relying instead on statistically controlled parameters.
The original workplan defined for the project contains four main activities.
Now, at the middle of the project, we have completed the first one and are
working on the baseline project:
• Training and definition of the formal procedures, methods and tools that need
to be put in place to establish an operational and efficient Systematic Testing
process.
• Application of the Test planning and execution procedures on the base line
project.

• Analysis of results, conclusions and review of the procedures, methods and
tools used during the baseline project. Decision, based on internal and
external user feedback, on their adoption as general practice.
• Internal and external dissemination of project results.
Dissemination actions are also being carried out at the local level; at the
international level it has not yet been possible to arrange any participation,
although we have applied to the European SEPG conference to be held in
Amsterdam in June 1999.
At the middle of the project we have very good expectations for the results, as
we can anticipate that it is possible to control our development process and
work products better by doing things in a different way. The extra cost of
inspection and test procedures can be kept under control so that it is
compensated by less corrective work, and we expect a moderate effort reduction
over the complete life cycle. We also expect an important reduction in errors,
as they are detected and corrected early, when they have less impact; moreover,
we see noticeable signs of an effective reduction of errors in the final
product greater than our initial forecasts.
The main results of the STAR project are: a comprehensive procedure to plan
testing and inspection, linked to the work done in the requirements analysis
phase; an integrated process to review and test; a means to record testing
results; statistical process control of intermediate software development
processes; and a final validation test, statistically controlled to give a
defined level of confidence. All these artefacts, in ongoing use on the pilot
project, raise very good expectations of reaching the proposed objectives on
quality and costs.
Of course, at this moment we have no definitive results and all details are
provisional. We have introduced continuous monitoring and statistical follow-up
of testing and reviewing tasks, based on SPC charts, so that we can collect
data that could demonstrate a significant improvement in the final number of
errors and in user satisfaction and, as a final result, better quality of the
final product.
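A common SPC chart for defect counts of this kind is the c-chart: the centre line is the mean count per cycle, and the control limits are that mean plus or minus three times its square root. A minimal sketch, with hypothetical per-cycle defect counts:

```python
import math

# Hypothetical defect counts per review/test cycle (not STAR project data).
defects = [12, 9, 14, 11, 8, 30, 10, 13]

c_bar = sum(defects) / len(defects)    # centre line (mean count per cycle)
sigma = math.sqrt(c_bar)               # for count data, sigma = sqrt(c_bar)
ucl = c_bar + 3 * sigma                # upper control limit
lcl = max(0.0, c_bar - 3 * sigma)      # lower control limit, floored at 0

out_of_control = [i for i, c in enumerate(defects) if not lcl <= c <= ucl]
print(out_of_control)  # [5]: cycle 5 (count 30) is above the UCL
```

A point outside the limits flags a cycle whose defect count is unlikely to be ordinary variation, so that cycle's work product or review is worth investigating.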
B-kin Software, S.L.
Paseo del Puerto, 20-A
E-48990 Getxo
Spain
Index

Advanced Systems Engineering 86 application 68


Aerospace Industry product costs 107 systems 53
AFNOR 24 Baseline project 121, 128, 133
Agruss, Chris 79 subsystems 101
Agusta S.p.A. 107 Bazzana, Gualtiero 1, 95, 142, 143, 159
ALCAST 83 Becart, Olivier VIII
Alcatel Espana 1, 170, 171 Beizer, Boris 84, 167
ALCATEL SEL 190 Bergel, H. 96, 159
Amting, Corinna V, VIII Bergman, Lars VIII
Apache Server API 154 Best Practice 4
APIs 76 achievements 5
Application Experiment 11 Binder, Robert V. 191
Architecture 52, 94, 119, 147, 149 Black-Box GUI Testing 156
dynamic web-based applications 155 BOSCH Telecom 190
software 197 BoundsChecker 137
Arnold, Tom 79 Braunschober, Wilhelm VIII
Asset management 92 Brooks, Jim 79
Association for Computing Machinery 85 Business applications 158
ATECON 132,133,190
ATOS 190 C 49, 132, 152, 154
ATS 127,128 C++ 49,121,127,129,146,219,241,
projects 131 284,285
software 127 Caja Madrid 170
Automatic tests 177, 180 Capture & playback tools 50
Automation view 35 Categorisation 19
Automotive industry 3 CCC Software 24
Avionics 107 Centre de Recherche Public Henri Tudor
software 107, 109 24
software costs 107 Centre for Software Engineering, Ireland
software development process 24
lead-time of 109 Certification of software development
software development time 108 process 22
software testing [09 CGI programming 154
Civil standards 116
Bach, James 79 Classic Testing Mistakes 57,204
Banking Commercial software 34

products 57 Distribution channels 21


Commercialisation 177 Distribution of time 131
phases 181 Doctor HTML 152
product 177 Domain knowledge 40
Commercialised software 177 Domain of testing 142
Commitment of management 42 Dunn, R. 84
Communication
encrypted 150 Editions HIGHWARE sarl 20
mechanisms 181 Education 13,200
Company management 184 Eisenbieg1er, Him VIII
Component editor 154 Encryption 150
Computer Aided Design tool 10 Engineering department 184
Computer control 127 ERP systems 159
Computer controllable instrumentation Error database 106
130 Error detection efficiency 173
Configuration Management 10,90,159 ESI
Consolini, Luisa VIII, 1,33,83,87,141, European Software Institute 23
142,163,164,199 ESPITI
Consumer electronics industry 3 European Software Process
Contract quality metrics 187 Improvement Training Initiative 22
CP (checkpoint) 37 Goals 23
Cross-fertilisation 14 Project 17, 23
CSIRO-Macquarie University Joint User Survey 17
Research Centre 86 ESPRIT
CSS Validator 151 Programme V, 8
CSSE methodology 191 ESSI
Customer Help Desk 185 Actions 22
Customer requirements 170 Dissemination Actions 17, 20, 27
Customer satisfaction 40, 185 European Systems and Software
Customer/Supplier contract 188, 189 Initiative V, 3
Cusumano, M. 80 Initiative VII, VIII
Czyzewski, Paul 79 PIE Prime Users 19
PIEs 88, 199,201
Daiqui, Sylvia 87 Pilot Phase 8
Data Flow Analysis 47 Process Improvement Experiments
Defect Cause Analysis 181 (PIEs) 17
Defect detection activities 170, 172 Program 21
Definition Type Document (DTD) 143 Projects 29
Delivery review 177 Tasks 14
Design phase 130 ETNOTEAM 24
Desk checking 47 EUREX
Develop & Verify Product phase 37 Analysis 30
Development process 116 European Experience Exchange project
Dissemination Action 11,21,22 V,17

PIEs 204 Goal Question Metrics approach (GQM)


Pilot Workshop 190 90
Project 25, 30 Gomez, Iñaki VIII
Software Best Practice Reports 20 Gomez, Jon VIII
Taxonomy 25 Graham, Dorothy 85
Workshops VIII, 18,35,36,199,201, Graphical User Interface (GUI) 120, 136,
202,203 137
European Commission 8 optimise testing methods 137
European enterprises systems 137
software development capabilities of V test automation 137, 139
European Industry 19
European Network of SPI Nodes 21 Handbook of Software Reliability
European publications 21 Engineering 80
European SMEs 17 Handbook Of Walkthroughs, Inspections,
European software developers 4 and Technical Reviews 83
European software industry VII, 23 Hardware/software environments 132
Experience/User Networks 12 Haug, Michael VII, 1, 17, 25
Extranet-based business-to-business Helicopter 107
transactions 158 development process 109
Hetzel, Bill 83
Fagnoni, E. 95, 143, 159 HIGHWARE France 26, 28
Failure Report 178 HIGHWARE Germany 26, 27
Falk, J. 80 Hoffman, Doug 79
FCI-STDE 122, 124 Holmes, Brian VIII
Fifth Framework of European Research V Hovemeyer, D. 96
Formal Code Inspection 121 HP SoftBench 126
adoption of 124 HTML compliance 153
Forschungszentrum Karlsruhe 23, 24
FORTRAN 132 Iannino, A. 80
Fourth Framework IBM Semea Sud 1,35,36
of European Research V IEEE Computer 85,96, 159
Programme 8 Imbus GmbH 136
Fouts, Peggy 79 Implementation database 112
FREE approach 191 Incremental test suites 103
Freedman, Daniel P. 83 Indicator 92, 93
Friedman, M. 80 Industrial
Functional specification document 37 benefits 11
Functional testing 37 competitiveness 7
FZI Forschungszentrum Informatik 20 consultation group VII
environment 190
Garment industry 3 sectors 4, 7
GEMINI 20, 33, 160 strength 39
German PIEs 26 Industry Standards 128
Gilb, Tom 83, 85 Information Infrastructure 7

Information Society 7
Technology programme V Macquarie University 86
Instituto Portugues da Qualidade 24 Manual GUI testing 89
Instrumentation software 130 MARl 20,24
Integra Sys 170 Marick, Brian 1,35,57,80,83,84,204
Integracion y Sistemas de Medida 126 Market share 40, 169
Integration McCabe & Associates 83
integration software 203 McConnell, Steve 80
of product verification 88 McGraw, G. 96
of test concept 133 Metric Plan definition 45
of verification process 35 Microsoft
Internet 89,90,141,143,145,150 Explorer 153
applications 90, 143 Visual C++ Development System 121
realm 90 Visual C++ Visual Studio 137
service 87 Milanese, Fabio 35,46
software 142 Miller, K. 81
usage 150 MKS Web Integrity 154
Internet Engineering Task Force W3C 151 Moore, Geoffrey A. 80
Internet/Intranet 50, 91 Morell, L. 81
applications 91, 142 Mosley, Daniel J. 84
remote access 53 Musa, J. 80
Intranet services 94 Myers, Glenford J. 56
Intranet/Extranet applications 144, 146,
150, 158 Netscape 143
INTRASOFT SA 24 Mozil1a 153
Italian PIEs 141 Server API 154
New Procedures
Jalote, Pankaj 84 adoption of 116
Java 49, 158 Nguyen, H.Q. 80
applets 73 Nielsen, Jakob 80
development environments 59 Norsk Regnesentral 24
test tools 154 Nyman, Noel 79

Kaner, Cem 79, 80, 84, 86 Object Orientation 20


Kolmel, Bernhard VIII Object Orientation Paradigm 129
Object Oriented Programming techniques
Lampreabe, M. del Coso 1,170,171 10
Lawrence, Brian 79 Okumoto, K. 80
Libraries 87, 127 Olsen, Eric W. VIII, I, 17,25
instrumentation 128 Operating systems
Lindemuth; Tom 79 UNIX 50
Load Manager 154 Orci, Terttu VIII
Logging 99 Organisational
Lyu, Michael R. 80 change 119,185

implications 35 Quality Management 30, 33, 34


issues 150 classification 30
level 124 System 136
Quality Manual 90
Papini, Elisabetta VIII Quality Plan 189
Paradiso, Michele 1, 35, 36, 199 of a project 171, 174
Perl 154 Quality Profile
Perry, William 56 Identification of product 45
Petersen, Eric 79 Quality Week conference 191
Pettichord, Brett 79, 80, 86
PI3 90,94 Radio base stations 137
PIE communities RAMS analysis activities 133
European network of 13 Rapid Application Development (RAD)
PIE organisations VIII 145
PIE Users 18, 19 RBSC Corporation 191
Piotti, M. 95, 159 Reduction of test execution cost 45
Planning the testing effort 57 Regional Organisations
Platinum Final Exam 154 of ESPITI 23
Politecnico di Milano 46 of EUREX 18
Preventive Actions 181 Responsibilities of EUREX consortium
Pritsker, Drew 79 26
Problem Configuration Control 180, 181 Regional Workshops 19,25
Problem Domains 19 Regression Tests 180
Procedimentos Uno 170 Relational Databases 51
Process group 184 Releases/Variants 108, 110
Process Improvement Experiment VII, 10, Reliable Software Technologies 86
21,22,25 Repository Database 52
distribution of 28, 30 Requirement construction 113
program 19 Requirement Database 110, 111, 112
Process Improvement Plan 121 Queries 115
Product and Process Metrics 45 Requirement inheritance 111
Product Development Process 36 Review Test Metrics 121
Product variant cost estimate 108 Robotics 132
Program Inspections 47 Rohen, Mechthild VIII
Programmable instrument 128 Role of Testing 57
Programming languages 132, 134, 180 Rothman, Johanna 81
Project Management RST Test tools 154
classification 28 Rumi, Gianni 95, 159
Project Manager 183 Runtime Analyser & Error Detection tools
Project Quality Plan 171,174 137
PROVE 83,97, 142
Security Policy 150
Quality characteristics 142 Selby, R. 80
Quality control activities 34 Semiautomatic verification techniques 201

Senior Management 6 Software Process Improvement VII, VIII,


SERC 24 17,19
Server Extension Programming 154 actions 18
SGML parser 153 benefits 22
Simulators 176, 177 business driven 17
Sip Consultoría y formación 24 institutions 21
SISSI marketing of 22
Case Studies 21 marketing plan 21
Marketing Plan 22 needs 23
Mechanisms of 21 programmes 18
SISU 24 Software Process Management 191
Small or Medium Enterprises (SME) 17 Software Process Model 101
Software Software Process Self-Assessment 91
customer-specific 89, 136 Software Product Quality 172
safety critical 34 Software projects 171
Software Best Practice 5, 7, 8, 10, 11, 17, Software Quality 84, 142, 169, 170, 171,
19 172
Adoption of 3, 5, 11 analysis 133
Dissemination Actions 11 assurance 34, 136, 163
Initiative 10 Software reliability 39
Reports 19,25 Software Requirements
Software components 128 Document 110
Software developers 7, 146, 163 Engineering 86
of technical applications 120 Software science knowledge 130
Software development 130, 170 Software specific international standards
Software Development Factory 94 42
Software development practices Software suppliers 12
adoption of improved 11 Software Testing 109
Software development process 3, 9 Software Testing Labs 86
stepwise improvement 5 Software Verification 97,107,141,196,
Software Engineering 7,20,126,142,191 199,202
modules 133 process 35
practice 107, 146 support tools 200
processes 43 Software Verification & Validation 36
seminars 191 Source Code Coverage 49
skills 130 Spanish PIEs 169
Software engineers 6, 202 Spell Checker
Software generation 127 multi-lingual 153
Software industry 12, 141 SQL Server 51
Software inspection 83 Stand Alone Assessment 8
Software life cycle 116 Standard HTML Editors 144
Software module 176 Static Analysis 47
Software organisations 33 Static verification 33
Static Web Sites 148

Stock exchange systems 53 operating systems 50


Strazzere, Joe 79 Uriarte, Manu de VIII
Subsystem test 181 User Documents
Svoboda, Melora 79 generation of 173
Systematic Test and Evaluation Process Using WWW Test Pattern 96,159
methodology (STEP) 83
Validation Process 108, 109
Talbot, David VII, VIII Vallet, Gilles VIII
TCL 154 Verification and Validation
TCP Wrapper 150 activities 34,36,37
Technological areas/criteria 25, 198 planning 34
Technology Management 198 practices 35
Telecommunication 136 software 84
companies 170 Verification of User's Documentation 177
devices 126, 127 Versions/Variants
Televisions 3 product 108
Test software 109
Apportionment 118 Virtual User Generator 158
Database 178 Visentin, F. 95, 159
Items 116 Visual Basic 49, 146
Sequences 108,110 Visual Load Testing Controller 158
Testbeds 126 Visual programming environment 128
Tester's Network 86 Voas, J. 80,81
Testing
dynamic Web-based applications 154 W3C software 151
plan 178 Walkthrough 47
server side 154 Web
software 107, 141 applications 149, 150, 159
techniques 45 client 153
Web static applications 151 editors 153
Testpoints 71 interactions 158
Testware 54, 162, 204 interface 105
Third Party Tester 137 projects 142
Traceability 108, 116 Scenario Wizard 158
TRUST 108 service 154
TXT Ingegneria Informatica 107 syntactic testing 149
Type Approval Test Systems 127 technology 151,158
testing 142, 143, 151, 153
Ullman, R. 84 validators 153
University of Arkansas 152 Web Site Garage 153
University of Iceland 24 Web Suite 154
University of Keio 151 Web-Edit Professional 153
UNIX 152 Weblint 152
environment 54 Weinberg, Gerald M. 83,85

White Box Deep Cover 154 Test Pattern 152,153


Width Checker 154 WYSIWYG Editors 144
Word processor 52
plug-ins 111 XML pages 151
Workarounds 63
World Wide Web 63 Young, Stephanie 79
Consortium 151 Yourdon, E. 96
Wössner, Hans VIII
WWW Zimmermann, Rainer VII, VIII
Test Environment 92