
About Software Quality Assurance ...

by Georg Sehl, privat@GeorgSehl.de, written in July 2001, last updated in December 2001

Prolog: Trapped in the Debugging Box


While testing system, middleware and business software for several years as a Quality Assurance Engineer (I also spent some time in software development and did implementation projects on customer sites), I frequently wrote down my experience with Quality Assurance in order to understand what I had been doing and why it sometimes hadn't worked. This is the most recent version. If I become a QA Engineer again in my next life (would this be a blessing or a curse?), I will eventually write a book - provided there are still software bugs. Because QA engineers are masters in the art of negative thinking, I will start this tiny paper by describing the problems, pitfalls and contradictions of day-to-day Quality Assurance, before arguing in favour of a comprehensive, project-based approach to Quality Assurance for small to medium-size software development projects in the next chapter. I will focus on topics related to my experience and leave other cool topics (for example XP) to more competent and experienced authors.

So let's start with my conviction that Quality Assurance doesn't fit well into a departmental and hierarchical organisation. In such an environment it gets harder to take care of the cross-functional and cross-departmental aspects of Quality Assurance (for example, to strive for defect prevention instead of pure defect detection). Trapped within organisational and hierarchical boundaries (i.e. trapped within a box), it's tough to practise out-of-the-box thinking. QA comes on board too late and leaves too early to have an impact either on the quality of specifications, design and schedule, or on the field introduction. Even worse, a QA department may have the negative side effect that other departments reduce their quality efforts. Often QA departments are pure debugging departments: the last barrier, the very last line of defence between a development team suspected of producing lousy and buggy software and the annoyed customers. The mission of such a debugging department may be: find as many bugs as possible in the time left until the deadline. That's a rather narrow, defensive and disheartening approach to Quality Assurance. Let's have a closer look at some problems that a Quality Assurance department may face:

- As mentioned above, QA doesn't have any influence on the quality of the specification, the design and the schedule - it comes on board too late. If the specification, design or schedule are messed up, QA will have failed before it even starts to work. The currency used to pay for an unrealistic schedule is always quality: if the schedule and the deadlines aren't realistic, quality and functionality have to suffer. I'm also convinced that one of the most important tasks of QA is proper risk management; I will discuss this topic below.

- The existence of a QA department may have the secondary effect of reducing quality efforts in other departments. As long as QA doesn't have the power to make a real difference with respect to quality, other departments may neglect the topic, because it seems to be the job of the QA department. For example, development may skip the smoke tests and deliver untested software to QA which crashes already during installation. Given aggressive schedules, that's often development's only chance to meet a deadline.

- If there are two or more levels of hierarchy within the QA department, communication with other departments becomes slow and cumbersome. Two levels of hierarchy within QA mean that someone responsible for the whole software project is three levels away from daily business. Furthermore, many levels of hierarchy always imply some politics, which distracts energy from the tasks at hand and may lead to bad decisions.

- Because of departmental boundaries, QA won't share in the skills, knowledge and experience of development and field people. This will, for example, prevent QA from testing efficiently whether some internal changes to the software (new interfaces, new structure, ...) cause problems - QA often won't even be informed about these changes. The quality of the bug reports also depends on QA's insider knowledge (with respect to development and to the field). If QA people are able to recognise that several bugs may be due to a new interface or to a screwed-up class definition, that is a very helpful hint for development. Without (basic) knowledge of development and some insight into the customers' business - QA should act as the eye of the customer! - QA people are limited to simple functional tests, not to mention defect prevention. That's not a challenging job for QA people; even worse, it's possible to run QA with low-skilled testers while the intelligence is put into the test plans and processes (the QA manager may have the only interesting job within a QA department). Why is it unusual or even unthinkable that QA people change (temporarily) to development or, say, support a pilot customer - and vice versa?

- There are employees and managers with a "Can Do" and others with a "Can't Do" attitude. While it's common sense that a "Can Do" attitude is the preferable one (e.g. taking risks and challenges), it's better for QA people to be of the "Can't Do" type than of the "Can Do" one ("Sure we will meet the deadline ...", "Bugs? No, there aren't any bugs ..."). Therefore "Can't Do" people will tend to cluster in QA. What may be bad is that QA will develop a negative "Can't Do" and "that-will-never-work" culture, which will cut it off from the world outside QA. While all the other guys are running ambitious projects, QA stands grumbling aside. This negative and cynical culture is more likely to develop within a QA department than within a project team.

- There are no common tools and processes for development and QA; for example, QA has no access to development's Configuration Management tools to trace changes in the software.

- People are more committed to their department than to their projects.

Another aspect may be less obvious, but I think it's crucial and worth some discussion: it's harder for a debugging department to show output and achievements (i.e. to justify its existence and the existence of QA managers) than it is for other departments. How can a QA department report results? "Reduce the number of bugs" is a fuzzy mission - it's difficult to measure and it doesn't sound very challenging. Furthermore, as long as there are bugs in the field, the debugging department has to defend itself. A debugging department often stands in the shadow of development and may have a lower status than other parts of the organisation. Especially if QA is embedded in a cover-your-back kind of corporate culture, it will be in a very defensive position.

But wait: there are ways to measure QA output and achievements (and present them to management). You may cut your tests into small pieces (called test cases), count them and measure the test coverage: the number of test cases performed divided by the number of all test cases to be run. You may count your bugs, normalise them using something like the number of lines of code or modules, calculate the number of bugs found per day or per week, and create impressive pie charts, trend charts or whatever charts. You may use sophisticated bug reporting and tracking systems, with huge databases containing thousands, millions or billions of bugs (perhaps in the future there will be some smart data mining technology available to make sensible use of this huge amount of archaeological data). You may even enter the realm of software development by developing automated tests and test frames (by the way: who will test the code written by QA?). As you may have realised from the irony in these sentences, I don't recommend doing so. My concern is that there will be too many negative side effects if too much emphasis is put on methods, processes, charts and spreadsheets introduced to improve the status of the testing department (and its managers). For example, if test coverage matters, the testers are seduced into preferring the fast and easy test cases and leaving the more time-consuming and sophisticated ones for later. 90 % test coverage may sound impressive, but the remaining 10 % may be the sophisticated and time-consuming ones - perhaps these 10 % are the only ones worth running. If the number of bugs found matters, the testers may be seduced into reporting as many bugs as possible. This may be great if these bugs are all relevant, but it's more probable that the fraction of less important bugs will increase and that the testers will spend less time on pre-analysis. Because every reported bug creates some administrative overhead, more reported bugs will result in more administrative work for development and QA - regardless of the relevance of the reported bugs. Finally, creating and maintaining (!) automated tests may be more expensive than running them manually again and again.

Distracting QA's focus from quality and redirecting it to secondary targets is only one group of negative side effects. There are two others. First, there are elements such as intuition and experience in the testing business which are difficult to organise, to measure and to manage. You may suppress these elements with a perfect organisation of your testing business. One tiny example: if you find a bug when running a test case, it seems sensible to create a link between the bug number and the test case which exposed the bug. So ... to retest bug x, you have to run test case y. That sounds smart. But let's assume you find a bug without running a documented test case, or by doing some additional steps while running a test - maybe because of your intuition, or maybe because you have just found other bugs in this area and you know that bugs tend to cluster. Now you really get into a mess! How do you document this? Write a new test case? Modify an existing one? Look for a test case which (by chance) matches the steps you've just performed? If there are already hundreds of tests, that will be time-consuming and will interrupt fluid and flexible testing. There is no way to let the results of previous tests influence the next one. You won't have this problem if you exactly follow your test plans - but do we need skilled and qualified testers to do so? That's my second concern: each step towards better processes and formalisms may degrade the role of the testers; they become interchangeable pieces of a debugging machine. If each tiny test step is documented ("Tick the xyz button"), skilled testers may be replaced by well-trained monkeys. This may sound a little extreme, but there are two dynamics which may lead you towards such an extreme situation. First: if some of the testers get the feeling that their work is being mechanised and that they are becoming superfluous and interchangeable, they will eventually quit. To reduce the costs of fluctuation and to become less vulnerable to its consequences, the obvious move is to remove even more intelligence and skill from the testers and put it into the processes, e.g. by investing more effort in the documentation of test cases. As a consequence, more testers may leave - a death spiral starts to turn. Second: testing done by unskilled testers who are controlled by smart processes and management will deliver some benefits, for example better control, more flexibility (i.e. shifting resources on short notice) and easy career opportunities. Depending on the corporate culture of your company, these benefits may compensate for or even outweigh bad software quality.
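To make the arithmetic behind these vanity metrics concrete, here is a minimal sketch - the function names and figures are my own illustration, not any particular tool's API - that computes the coverage and bug-rate numbers described above; exactly the kind of figures that look impressive on a chart while saying little about real quality:

```python
# A minimal sketch of the metrics discussed above; names and numbers are illustrative.

def test_coverage(performed: int, total: int) -> float:
    """Test coverage: test cases performed divided by all test cases to be run."""
    return performed / total if total else 0.0

def bugs_per_kloc(bugs_found: int, lines_of_code: int) -> float:
    """Bug count normalised by size, here per 1000 lines of code."""
    return bugs_found / (lines_of_code / 1000)

def bugs_per_week(bugs_found: int, days_of_testing: int) -> float:
    """Bug-finding rate: the raw material for trend charts."""
    return bugs_found / (days_of_testing / 5)

# 90 % coverage looks impressive - but says nothing about whether the
# remaining 10 % are the only test cases worth running.
print(f"coverage:  {test_coverage(180, 200):.0%}")       # coverage:  90%
print(f"bugs/KLOC: {bugs_per_kloc(120, 80_000):.2f}")    # bugs/KLOC: 1.50
print(f"bugs/week: {bugs_per_week(120, 30):.1f}")        # bugs/week: 20.0
```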
The sad thing is that no basic skills are required to do some kind of low-level testing, while, for example, an unskilled software developer who isn't familiar with any programming language is unthinkable. Well ... some of these problems may not be typical only of QA departments; they may show up in any other part of an organisation. But because of the weak, defensive and isolated position described above, QA may be more susceptible to them. Quality Assurance, however, is more than finding bugs in the time left until the deadline by exactly following detailed test plans ("Click the Start button ..."). It's more than defect detection (neglecting defect prevention). It can be a challenging job for highly skilled QA people - the people who do this job really matter! Quality Assurance is more about workmanship, perhaps even about art, than it is about painting-by-numbers. It's more about communication than about making marks in spreadsheets. And it's more about project teams and collaboration than about departments. How can Quality Assurance be integrated into the whole process of software development - starting at the specification and ending at customer sites - and make a difference there? Here's my vision.

The Core Team


Let's assume a new version of your successful product DoSomething is to be released. Marketing has ideas about new and improved functionality, there are requests from some of your major customers, the field people add ideas from projects on customer sites, support is grumbling about old bugs which should finally be fixed, and development has brilliant ideas about improving the software's infrastructure. Furthermore, it's a medium-sized project, i.e. some dozen developers, testers, technical writers and support people will join it. These are the most difficult projects to manage: applying all the methods, tools and processes designed for large-scale projects would be overkill and slow the project down to stasis. Even worse, you can't rely on statistics to estimate your efforts: if 30 people are estimated to work on a (sub-)project for six months, the deviation from your estimate will be much smaller than if only 2 people are estimated to work for 2 months. For example, somebody out of the 30 leaving or getting sick would be far less disastrous than one of the two becoming unavailable. On the other hand, running a project with more than 2 people requires some organisation. So there's a fine line between organisational overkill and chaos! After this emotive introduction, let's get down to business: the first steps will be

- writing the specification,
- determining what quality means with respect to this new version of the product,
- analysing field bugs from the previous version,
- discussing and fixing the processes,
- creating the schedule, and
- doing some risk management.

To do so I suggest establishing a core team of four people: one developer, one tester, one technical writer and somebody from support/field. These four people will act as project and team leaders as soon as more people come on board during the project; their task is to take care of the prerequisites for each team and to coach the team members. For larger projects it may be useful to start with more than four people, but there have to be four named team leaders from development, QA, technical writing and field/support. The basic idea of this approach is to start with a "lean" team and to avoid early overstaffing of the project. For example, as long as there aren't any specifications available, developers can't start coding. The QA team leader will involve more and more testers as functions get implemented. The technical writers can start after the specification has been finished. And the support and field people will become active as soon as there is a "prototype" to be shown at fairs or to those customers whose requests have found their way into the specification. Even more important: the six tasks mentioned above require a lot of intense communication within the core team - and with respect to communication, a small team is more efficient than a huge one. Let's have a look at each of these six tasks:

Specification: A specification mustn't be a ton of paper; it's much more important that the four core team members share a seamless common understanding. A specification which nobody will read and which ends up as shelf-ware, because it's too thick and cumbersome, won't have any positive impact. Furthermore, the specification must be clear and unambiguous to each new team member who comes on board later on. I won't discuss the object-oriented approach (OOA and OOD) or the use of specification languages (such as UML) here. But if these techniques are applied, it is mandatory to involve QA as soon as possible - otherwise QA will be limited to black-box testing later on. Any smart new technique for transforming specifications into tests will be useless without the participation of QA people.

Quality: I've mentioned above that quality is more than reducing the number of bugs. Important aspects of quality are, for example, a product's uniqueness, the positive impact it has on the business of the customers, and its usability and user-friendliness. These aspects, which may differ slightly for each product, have to find their way into the specification and will be tested by QA later on.
What does quality mean with respect to the product at hand? Which aspects of quality are crucial, and which are less important? What are the criteria for delivery?

Analyse field bugs: Learning from old bugs is a very efficient way to improve quality and make life easier, but it's not a trivial job. Recognising error patterns and probable causes requires experience and intuition; you need at least a glimpse of an idea of what you are looking for in order to find a pattern. And it's a job for developers and QA and technical writers and support, because everybody has a different point of view and a different way of looking at errors. Development will be able to recognise whether the bugs cluster around a module, interface or class definition, the technical writers may realise that many user errors are due to vague and misleading documentation or to bad usability, QA will see the holes and gaps in their testing, and support will know which errors are the most common ones.

Processes: It wouldn't be a good idea to invent software development processes from scratch for each project, but it's sensible to adjust your processes for each one. Especially in the software business, even processes are subject to perpetual change. For a small project some basic processes - including configuration and change management - will be sufficient. For example, if you expect 100-200 bugs, a spreadsheet will be sufficient to track them, while for a huge number of expected bugs you will need a database. Because process obsession is a widespread management fad, it may be difficult to start with "lean" processes, but it's worthwhile. Processes have to make a project transparent to all participants and stakeholders; too many or too sophisticated processes may mask the daily business and its problems. A process is mandatory until it is changed by agreement of all stakeholders. So there has to be a process for changing processes - and this process is absolutely mandatory!

Schedule: If you are creating plans and schedules, the underlying assumptions, prerequisites and expectations are much more important than the milestones and deadlines themselves. As soon as one assumption, prerequisite or expectation turns out to be wrong, you have to rethink and update your schedules. That's the first step towards an early-warning system and effective risk management (a small sketch of such an assumption-carrying plan follows after these six tasks). For example, determining a date by asking some guys for their estimates and taking some (weighted) mean value won't work well - you would realise too late that the date wasn't met and, even worse, you wouldn't understand why. Therefore it's crucial that the way a plan was made (i.e. its assumptions, prerequisites and expectations) is clear and transparent to every stakeholder! That's why open communication and teamwork are extremely important in the planning phase. By the way: in spite of milestones, schedules, deadlines and project plans, it's important to remain open-minded - things may also happen in your favour. An awkward function may be replaced by a simple one, a solution to a problem may be easier than planned, somebody may have a brilliant idea. This may happen - eventually.

Risk management: Because risk analysis is one of the main tasks of QA, I will discuss this topic in detail below. For now I want to stress that risk management means living with risks, not avoiding them: projects which make a difference are always risky. Often the most risky projects are the only ones worth doing.
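As promised above, here is a minimal sketch of a plan entry that carries its own assumptions, so that invalidating one of them immediately flags the plan for rework - the early-warning idea described under "Schedule". All names and dates here are hypothetical, my own illustration rather than a prescribed format:

```python
# A hypothetical milestone record that makes its planning assumptions explicit.
from dataclasses import dataclass, field

@dataclass
class Milestone:
    name: str
    due: str                                  # planned date, e.g. "2001-10-15"
    assumptions: dict[str, bool] = field(default_factory=dict)

    def invalidate(self, assumption: str) -> None:
        """Mark an assumption as broken - the trigger to rethink the plan."""
        self.assumptions[assumption] = False

    def needs_replanning(self) -> bool:
        return not all(self.assumptions.values())

m = Milestone(
    "feature freeze", "2001-10-15",
    assumptions={
        "both senior developers stay on the project": True,
        "the new parser library is available by September": True,
    },
)
m.invalidate("the new parser library is available by September")
print(m.needs_replanning())  # True - rethink the schedule, don't just miss the date
```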
I've mentioned above that QA tends to play a defensive, risk-avoiding role, especially if it's embedded in a defensive corporate culture or - even worse - in a corporate culture of fear. Risk management means an offensive approach to risks. Why do I emphasise that software development should be team-based? What are the benefits of this core team idea? In the first chapter I showed some scepticism about QA departments, but some sympathy for teams and team-based organisations. The reason is quite simple: teams need a common goal, a common understanding of "who we are". While a sports team wants to win - no matter whether it's a world or a backyard championship - a team in business life wants to do a good and successful job, i.e. there is a natural sense for quality. The best way to kill a team is to tell them: "I know this software is still rubbish, but we have to get it out by next week. We only need to make sure it doesn't crash immediately." Forcing professionals to do unprofessional work will only create cynicism and destroy the basis of team formation.
Departments are neutral with respect to quality: it depends on the (corporate) environment whether people act as professionals or as cynics. If the existence of Quality Assurance is only justified by the buggy and error-prone software delivered by those lousy developers (who weren't granted the time to do a better job, for example to do some testing of their own), and if Quality Assurance is only supposed to reduce the number of bugs to a less annoying level, development and QA will end up as disillusioned cynics without any team spirit. The core team idea avoids departmental boundaries and corporate politics; it enforces communication between the main stakeholders and exploits synergies. For example, within the core team the main task of development is to estimate the costs and problems of implementing new functions. A "simple" function may be expensive to implement or may not fit well into the structure of the software. Development is the only instance that can deliver this kind of information. QA may raise the topic of how to avoid the typical bugs of previous versions in the new release. If there is an error-prone part of the software, now is the time to redesign it. If QA needs better traces, logs, test points and test hooks, that will be a topic for the specification too. Furthermore, being involved in the project from the very beginning enables QA to do one of its main tasks: to make a statement about the quality of the product as soon as possible. Therefore QA has to create a short preliminary test plan as soon as the specification is finished. If QA starts testing late and leaves important topics untouched until two weeks before the deadline (this may happen if test coverage is the most important goal), you will run into avoidable risks. The domain of the technical writers will be to check the usability and consistency of the product. If it's difficult to describe a new function in an understandable and user-friendly way, this function won't be very usable. If it's hard to summarise the basic ideas, concepts and benefits of a product at the beginning of the manual, your product may be a bunch of functions, but not a consistent product. A good specification is also a good basis for the manual. The support and field people are the representatives of the customers. They will know the best real-life war stories. Being involved in the whole development process, they will also gain deep and early insight into the product itself. After the core team has finished its work, the four teams will start to grow and evolve. I will now focus on my domain: the Quality Assurance part.

About Quality Assurance


Let's assume all stakeholders (development, QA, writers, support and other field people) have agreed on the specification, schedule and common processes; field bugs have been analysed, quality requirements defined and some risk management already done. The development team has been put together and has started designing and coding. It's now time to form the QA team and focus - besides reviewing the design - on one of the main tasks of QA: to make a statement about the quality of the product as soon as possible after the first release. This statement is needed for further planning and project management; one of the worst things that could happen is to discover serious quality problems too late. Therefore QA has to develop tests that are comprehensive and can be done in a limited time - a contradictory task. To do so I suggest writing tests which imitate typical user tasks, i.e. starting with task-based, use-case and scenario testing. These kinds of tests are cross-functional, so pure functional testing - what QA usually spends most of its time on - is neglected during this early phase of testing. Neglecting functional tests may sound a little revolutionary, but there are two reasons why you can afford to do so. First, cross-functional testing will reveal many functional bugs incidentally - and these bugs will be the important ones from a customer's point of view! Second, finding functional bugs is mainly the job of the development team. If a developer checks a new function after he or she has coded it, most functional bugs will be found and fixed in development - provided that development has been granted the time to do so. This idea of giving developers the time to test their own code is both trivial and revolutionary. I argued before that no developer who has spent a lot of time and effort to become an IT professional, and who expects to get some professional satisfaction from his job, wants to deliver shoddy software. Why not give him the chance to do a professional job? The domain of Quality Assurance is the more complex bugs: those which show up under stress and load, in a special environment or after a special sequence of actions - so let's focus QA on this kind of bug. By the way, there's another advantage of starting with task-based, use-case and scenario testing: there will soon be a software build which may be shown (but not handed over!) to customers, e.g. at fairs. This will provide early feedback from customers and may lead to further improvements. Of course there will only be some tested, straightforward ways to run the product at this point, but that's sufficient for the product to be shown by some experienced guys who know which functions they are allowed to call without unveiling bugs. The idea of starting early with the more complex kinds of tests may raise the objection that there won't be 100 % test coverage for functional tests. This leads to one of the most crucial questions in Quality Assurance: how much testing - or how much quality - is enough? When will QA have done its job? How should quality be measured? I think the most feasible and pragmatic approach is to decide on these questions, i.e. to agree on the areas to be tested, on the depth for each test area and on the criteria for good(-enough) quality. Because development, technical writers, support and field folks are involved in laying down the job and targets for QA, such a decision should be sensible and realistic - for example, the support and field people are the best advocates for quality, because they would suffer themselves if quality were shoddy.
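To make the distinction between task-based and isolated functional testing concrete, here is a minimal sketch of a scenario test. The DoSomething class below is a trivial stand-in for the real product (its API is entirely hypothetical); only the shape of the test matters - one walk through a typical user task incidentally exercises many functions:

```python
# A sketch of a task-based (scenario) test. DoSomething is a stand-in
# for the product under test; its methods are hypothetical.
import unittest

class DoSomething:
    """Trivial stand-in for the product under test."""
    def login(self, user, password): self.user = user
    def load_records(self, month): return [{"month": month, "amount": 42}]
    def create_report(self, records): return {"pages": [records]}
    def print_report(self, report): self.last_error = None

class TestMonthlyReportScenario(unittest.TestCase):
    """Scenario: a user logs in, builds last month's report and prints it.
    This one task touches login, data access, the report engine and
    printing - many functions are exercised incidentally."""

    def test_monthly_report_workflow(self):
        app = DoSomething()
        app.login("demo_user", "demo_password")
        records = app.load_records(month="2001-11")
        report = app.create_report(records)
        self.assertGreater(len(report["pages"]), 0)   # the user gets a real report
        app.print_report(report)
        self.assertIsNone(app.last_error)             # the whole task ran cleanly

if __name__ == "__main__":
    unittest.main()
```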
As I mentioned in the last chapter, one of the six tasks of the core team is to determine what quality means for a given product. Based on that work and on the specification, QA will create a first plan covering what should be tested, which methods should be used and what the exit criteria should be. This plan will be discussed with development, tech writers, support and field people - an iterative and ongoing process. By the way, it's hard for members of the development project to criticise the criteria for shipment after they were involved in laying down these criteria. To document their tests and make them repeatable, QA people design test cases (by the way: German testers tend to translate "test case" as "Testfall" - a very odd and bureaucratic word). Writing good test cases is a rather demanding task, because good test cases should fulfil the following requirements:

- the idea, goal and target of the test must be clear;
- they have to be easy to maintain and update - for example, there shouldn't be descriptions of details which change frequently;
- all requirements and prerequisites are described;
- because development and support will review them, test cases have to be understandable not only to Quality Assurance people.

A good test case looks like an onion: while the outer skins contain all the information an expert tester or a reviewer from development and support requires to perform or to review the test (idea, goal, use case, scenario, ...), the inner skins give more detailed information on how to run it plus additional technical information (a small sketch of this onion structure follows the skills discussion below). That's another argument for task-based, use-case and scenario testing - it's easier to write and maintain test cases for these kinds of tests than it is for isolated functional tests. For example, a well-written task-based test will remain more or less unchanged if the GUI is redesigned, while a detailed functional test ("Open the Tools box, click on ...") has to be rewritten completely. To write sensible tests, QA people need good writing and presentation skills. Here's a list of all the skills Quality Assurance Engineers require to do their job:

- Testers have to be at least as skilled as an expert user, not only as a novice one. What an expert user is depends on the product; e.g. a network management program has to be tested by people who know the daily business of, and have similar skills to, a network administrator.
- Testers require domain knowledge, i.e. they have to know the domain the software was written for. E.g. for testing accounting or banking software you need some accounting and banking knowledge.
- QA people ought to have a broad and comprehensive knowledge of IT (for example different operating systems, networks, Internet, databases, programming languages, OOA/OOD, ...).
- Good writing skills are mandatory, for example for writing test cases and error reports. Writing understandable and repeatable test scripts, which also contain information about the idea and intention of the test, the expected results and the technical background, isn't easy. There's also a fine line between too-detailed test scripts, which are difficult to maintain, and too-superficial ones.
- Analytical skills are absolutely mandatory. QA people need to work in a systematic and pragmatic way.
- Quality Assurance is related to development, support, technical writing and field service. Therefore a QA engineer should have some knowledge and experience in at least one of these areas. There also shouldn't be any borders or constraints that prevent or discourage QA people from switching to development, technical writing, field service or support - and vice versa.
- Last, but not least, QA people need to be good communicators and team players. They must be diplomatic (but sometimes tough), convincing and able to present.

The most important skill is the ability to choose the relevant tests from the huge bulk of tests which are thinkable and imaginable. There won't be enough time to test everything, but there is always some danger of missing important bugs. You'll need a contradictory mixture of technical and business knowledge, of experience, intuition and pragmatism to be a good tester. Don't establish an "up or out" scheme, i.e. one where you can't stay in QA if you don't move up in the hierarchy (many consulting companies follow such a scheme). QA shouldn't become a dead-end job for the majority of the testers. If there's only one career path in QA, i.e. to become a manager, joining QA or staying in QA won't be an attractive option. And you can't expect good quality assurance if QA is only a transit camp for people on their way to becoming developers, managers or field people.
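Here is the promised sketch of the onion structure: a minimal, hypothetical test case record (not any particular tool's format) whose outer fields carry what a reviewer from development or support needs, while the inner fields hold the technical detail for the tester who runs it:

```python
# A hypothetical test case record illustrating the "onion" layering.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Outer skins: enough for an expert tester or a reviewer from
    # development/support to understand and review the test.
    idea: str                  # what the test is about and why it matters
    goal: str                  # what "passed" means from the user's point of view
    scenario: str              # the use case or user task being imitated

    # Inner skins: the detail needed to actually run the test. Kept coarse on
    # purpose - details that change frequently (exact GUI paths) don't belong here.
    steps: list[str] = field(default_factory=list)
    expected: str = ""
    technical_notes: str = ""  # environment, test data, traces to enable, ...

monthly_report = TestCase(
    idea="Reports are the product's main customer-visible output.",
    goal="A user can produce and print last month's report without errors.",
    scenario="Log in, load November's records, build the report, print it.",
    steps=["log in as a normal user", "load records for 2001-11",
           "create the report", "print it"],
    expected="A non-empty report is printed; no error is logged.",
    technical_notes="Run once against the smallest and the largest test data set.",
)
```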
It's impossible to write something about Quality Assurance without mentioning test automation, which has been a buzzword for a long time. I don't want to discuss this topic in detail now, but I offer five statements:

1. Test automation means software development, even if you only intend to write some scripts. Automated tests have to be much more stable, reliable and robust than the software to be tested. Therefore you need QA people with a development background, and you need methods from software development, to create sound automated tests.
2. Automated tests have to be maintained. If the costs of maintenance are too high, test automation won't pay off.
3. Test automation is always an investment which (hopefully) pays off later on. You have to be granted the resources to make it - that's a management decision. Therefore support from management is a crucial prerequisite for starting test automation.
4. Only simple routine tests (smoke tests, regression tests) can be automated with reasonable effort (a tiny example follows at the end of this chapter). Therefore the main benefit of test automation is to free resources for more demanding manual tests. It's very hard to recognise error patterns automatically, and it's expensive to write tests which are able to do so. Human beings are much more effective at pattern recognition than computers - that's why graphical user interfaces have become popular. Evaluating the output of a GUI-based product automatically is especially odd, because that product was designed to be used by a human with his or her superior visual pattern recognition.
5. Test automation projects are usually smaller than product development projects, but require higher flexibility and early results (i.e. usable tests). Therefore test automation may be the right kind of project to experiment with slim and flexible development methods and processes such as XP, Scrum and Crystal.

Let me finish this chapter with some more philosophical remarks about what Quality Assurance is about - or may be about. Quality Assurance never starts with a clean slate or happens in a cleanroom; QA reacts sensitively to the environment it's embedded in. For example, the way we do Quality Assurance today is influenced by its roots and history. These roots and this history may be a valuable heritage or a burden. For example, there are concepts of software development that have their origin in industrial production. Think of an assembly line with some guys at the end, holding a long checklist in their hands and checking the final product. The idea that software may be produced like any other industrial product was popular for several years. While in industrial production there are discussions about whether Taylorism is still up to date and whether the assembly line may be replaced by better concepts, it still survives somewhere in software development (think of the waterfall model). There are also different approaches to testing depending on whether the product has a command line or a more modern graphical user interface, and whether the software runs on a host, in a client-server environment or is internet-based. Testing modern software (graphical user interface, internet-based, OO) in an old style (e.g. command line interfaces, host, ...) won't work well. QA culture is also subject to change: there's an ancient legend about the Black Team at IBM in the sixties, where the new QA group developed its own culture of being nasty and hideous destroyers and code breakers (hence they dressed in black). But I think that this kind of QA culture won't fit our time anymore. There may be an objection that what I suggest isn't an independent QA, i.e. testers and developers may become too familiar with each other and so may accept bugs they shouldn't accept. But if testers become more familiar with developers, the developers also become more familiar with testers - there is a chance that developers will adapt to the thinking and attitudes of QA. Therefore you may add a strong personality to the skills required of QA people. Second, there is still a need for an independent Quality Assurance in the case of large-scale projects, where software from several internal and external development teams has to be integrated and tested.
But that's another story - my point is that especially small- to medium-size development projects are difficult to manage, because you can't use the full range of clumsy methods, processes and tools described in tons of books and taught in many seminars and workshops. If speed and flexibility matter, running a software development project with developers, testers, writers and support people integrated from the very beginning seems to me a more sensible and effective approach. The graphic on the following page suggests that there are three options for placing Quality Assurance within the development process chain. The first two options - Quality Assurance close to or part of the development or field organisation - don't require an independent QA department.
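As a tiny illustration of the "simple routine tests" worth automating (statement 4 above), here is a hedged sketch of an automated smoke test. The product command, its output and the paths are hypothetical, and note how defensively even this trivial check has to be written - an automated test must be more robust than the software it tests:

```python
# A minimal automated smoke test sketch; the "dosomething" command and its
# output are hypothetical stand-ins for the product under test.
import subprocess

def smoke_test() -> bool:
    """Start the product, ask for its version, and check it responds at all."""
    try:
        result = subprocess.run(
            ["dosomething", "--version"],      # hypothetical CLI of the product
            capture_output=True, text=True, timeout=30,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False                           # not installed or hung: smoke failed
    return result.returncode == 0 and "DoSomething" in result.stdout

if __name__ == "__main__":
    print("smoke test passed" if smoke_test() else "smoke test FAILED")
```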


Project and Risk Management


As I mentioned before, I'm addressing small- to mid-size projects, which are difficult to manage because you have to find a way between organisational overkill and chaos. To do so I suggest applying a lean version of Eliyahu Goldratt's Critical Chain project management approach, combined with proper risk management (which I stole from Tom DeMarco) - the two fit together seamlessly. The basic idea of the Critical Chain concept is to remove (hidden) time buffers from single tasks, collect them and put them into a global buffer at the end of the project. By adding buffers to each (sub-)task you lose time whenever a buffer isn't needed. Pooling all buffers into a global one makes the whole project more transparent: in particular, you can now determine with one look at the global buffer whether the project is on track or whether the buffer is being used up too fast. Why should this work better? The more traditional approach is to divide the project into sub-projects and tasks, look for the critical path, define milestones and check whether each milestone is reached in time. There are also plenty of project management tools that support this approach. But there's a hidden disadvantage to this model: because the milestones have to be reached in time, the project members will add buffers to each task. If a project leader asks a team member how long it will take to finish task X, the answer may be 5 days, although 3 days would be sufficient - the team member added a hidden buffer in case there are unforeseen problems. The project leader may add another two days to be on the safe side. If everything runs smoothly, 4 days are lost. This game is fostered by an environment in which defensiveness, risk avoidance or even intimidation are part of the corporate culture. It's important to be aware that there are no fixed milestones anymore after applying Critical Chain: each time part of the global buffer is used, the dates of all remaining milestones shift, so there are no fixed deadlines attached to milestones. That doesn't matter, because now the consumption of the global buffer is the critical observable. This must be clearly communicated the first time this approach is applied, otherwise there will be confusion and misunderstandings. Furthermore, there's another disadvantage of the milestone concept, which has an unfavourable effect especially in smaller projects. Because there are often parallel tasks, a team member may be involved in more than one task at the same time (that's less probable in large-scale projects). Frequent task switches are not free; avoiding them is another way to increase effectiveness. By focusing on buffers instead of milestones you gain more degrees of freedom to avoid fragmenting your team members' time. Put in a nutshell, the basic ideas of a project run in Critical Chain style are (a small buffer calculation follows this list):

- estimate the duration of each task without any buffer, i.e. the probability of finishing the task in time should be 50 %; you needn't hang your head in shame if you use some time from the buffer - that's what the buffer was created for;
- avoid fragmentation of tasks; multitasking is fine for computers, but not for team members;
- if a task is finished earlier than estimated, inform the team members who need the output of this task - think of a relay race or an early warning system.

Beside the global project buffer placed at the end of the project, there may be additional feeding buffers for all paths which feed the critical path or chain.
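To make the buffer idea concrete, here is a toy calculation in the spirit of Critical Chain. The numbers are invented, and the 50 %-of-removed-safety sizing is one common rule of thumb rather than Goldratt's definitive formula:

```python
# A toy Critical Chain buffer calculation; all numbers are illustrative.

# "padded" = the safe estimate a team member would report (days),
# "aggressive" = the 50 %-probability estimate with the hidden buffer removed.
tasks = {
    "write specification": {"padded": 10, "aggressive": 6},
    "implement parser":    {"padded": 15, "aggressive": 9},
    "task-based tests":    {"padded": 10, "aggressive": 7},
}

chain = sum(t["aggressive"] for t in tasks.values())                          # 22 days of real work
removed_safety = sum(t["padded"] - t["aggressive"] for t in tasks.values())   # 13 days of hidden padding

# Rule of thumb: pool roughly half the removed safety as one visible
# project buffer at the end of the chain, instead of hiding it per task.
project_buffer = removed_safety / 2                                           # 6.5 days

print(f"critical chain: {chain} days, project buffer: {project_buffer} days")

# Tracking the project now means watching buffer consumption, not milestones:
buffer_used = 2.0
print(f"buffer consumed: {buffer_used / project_buffer:.0%}")                 # 31%
```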
How do you determine the size of the buffer? That's one output of risk management. The gist of risk management is (a small sketch follows below):

- to list each risk - there should also be an ongoing process of discovering new risks;
- to estimate the potential impact and the likelihood of each risk;
- to find an indicator for each risk which warns you in time that the risk is going to materialise;
- to plan proactively (buzzword!) what you will do if a risk materialises, or how you can reduce its potential impact in advance.

Adjusting the project buffer with respect to the risks is one very obvious action to mitigate the impact of materialised risks.
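Here is a minimal sketch of such a risk list - my own illustration, and cruder than DeMarco's method. Each entry carries likelihood, impact and an early-warning indicator; the expected exposure (likelihood times impact) gives a rough basis for adjusting the project buffer:

```python
# A toy risk register; entries and numbers are purely illustrative.
risks = [
    # (risk, likelihood, impact in days, early-warning indicator)
    ("key developer leaves",       0.2, 20, "notice period starts"),
    ("new parser library is late", 0.4, 10, "no beta delivered by September"),
    ("test hardware arrives late", 0.3,  5, "no shipping confirmation by August"),
]

# Expected exposure: likelihood times impact, summed over all risks.
exposure = sum(likelihood * impact for _, likelihood, impact, _ in risks)
print(f"risk reserve: {exposure:.1f} days")   # risk reserve: 9.5 days

# One obvious (and crude) mitigation: enlarge the project buffer accordingly.
project_buffer = 6.5 + exposure               # base buffer from the sketch above
print(f"adjusted project buffer: {project_buffer} days")   # 16.0 days
```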

Two points are important. First, risk management means planning for failure; therefore it may be difficult to sell to stakeholders and managers - it may even clash with the corporate culture. Nobody likes to talk about (potential) failure. Second, it costs something, i.e. the most optimistic date (no problems, project runs absolutely smoothly) will shift. Without risk management the project may end on the 1st of June at the earliest - that's the blue-eyed, fairy-tale best-case scenario. But it's more probable that it will end in September, while the worst case would be ... summer next year. With risk management the best case will be the 1st of July - so you pay four weeks for risk management, while the worst case shrinks to October. That's a major improvement! So risk management means dealing with and reducing uncertainty. I haven't talked about processes and metrics (how to measure progress, quality, etc.) up to now, because sometimes I have an odd feeling when talking about them. So let me do this now, starting with processes. I admit that I have three problems with processes. First, "process" has become a very popular buzzword which by now means everything; it has often lost its original meaning (who remembers it?). Second, process thinking establishes a layer of abstraction - a process layer - between the daily business and management. As a manager you no longer deal with the people reporting to you; instead you own and control processes. No wonder processes become a matter of company politics - for example, if you want to move up, you have to define and own a process. Third, there is a hidden but obvious message in process obsession: people don't matter and are interchangeable. I believe the best way to deal with processes is to look at them as a service and as part of the project infrastructure, which should make work smoother, more focused and more successful. Effective processes are an important prerequisite in project work - they shouldn't end up as a boring buzzword, a management fad or an obsession. Concerning my odd feelings about metrics: if you want to measure something, you need a model describing the dependencies between the things you want to improve or optimise and the observables you are able to measure. That's not a trivial task; it's very close to science (creating models and proving or disproving them by measurements is the daily business of scientists). Increasing the population of storks in an area doesn't increase the birth rate, although there's a correlation between the number of storks and the number of births. Mixing up observables and the underlying causes is one risk when dealing with metrics. Another aspect is that measurements made over a period of time will only be comparable if the environment doesn't change dramatically. For example, if you improve one aspect of a process, you have to keep all other aspects unchanged to see the benefits or problems of this improvement. That's not very realistic in a fast-changing environment. If there are a lot of changes (new and different products and projects, new technology, changes in processes, ...), the metrics will also be subject to frequent change, which makes choosing and using metrics more difficult. On the other hand, if you run a project in an iterative way, you will be able to adjust your metrics and to learn with each iteration - a strong argument for an iterative and spiral approach. One final thought: metrics are useless as long as there are no options and actions to react to negative or alarming measurements.
During a running project, metrics are only sensible in the context of risk management. To summarise: metrics depend on models, require a stable environment to be comparable and aren't useful without any scope for action. To avoid misunderstandings: processes and metrics aren't something negative, but they are demanding and there's some chance of misinterpreting or abusing them. They won't make project management easier, but more focused and successful. They are never an end in themselves. Software testing - as part of the art of developing sound and successful software - is a vivid and living profession, which has changed and grown over recent years. I'm sure it will continue to grow and mature in the years to come.

