
Critical acclaim for

Steve McConnell’s CODE COMPLETE

“Every half an age or so, you come across a book that short-circuits the
school of experience and saves you years of purgatory.... I cannot
adequately express how good this book really is. Code Complete is
a pretty lame title for a work of brilliance.”
PC Techniques
“Microsoft Press has published what I consider to be the definitive book
on software construction: Code Complete by Steve McConnell. This is a
book that belongs on every software developer’s shelf.”
Software Development
“Every programmer should read this outstanding book.”
Computer
“If you are or aspire to be a professional programmer, this may be the
wisest $35 investment you’ll ever make. Don’t stop to read the rest of
this review: just run out and buy it. McConnell’s stated purpose is to
narrow the gap between the knowledge of industry gurus and common
commercial practice.... The amazing thing is that he succeeds.”
IEEE Micro
“Code Complete should be required reading for anyone...in software
development.”
C Users Journal
“I’m encouraged to stick my neck out a bit further than usual and
recommend, without reservation, Steve McConnell’s Code Complete....
My copy has replaced my API reference manuals as the book that’s
closest to my keyboard while I work.”
Windows Tech Journal
“This well-written but massive tome is arguably the best single volume
ever written on the practical aspects of software implementation.”
Embedded Systems Programming
“This is the best book on software engineering that I have yet read.”
.EXE Magazine
“This book deserves to become a classic, and should be compulsory
reading for all developers, and those responsible for managing them.”
Program Now
PUBLISHED BY
Microsoft Press
A Division of Microsoft Corporation
One Microsoft Way
Redmond, Washington 98052-6399

Copyright © 1996 by Steve McConnell

All rights reserved. No part of the contents of this book may be reproduced or
transmitted in any form or by any means without the written permission of the publisher.

Library of Congress Cataloging-in-Publication Data


McConnell, Steve.
Rapid Development : Taming Wild Software Schedules / Steve
McConnell.
p. cm.
Includes index.
ISBN 978-1-556-15900-8
1. Computer software--Development. I. Title.
QA76.76.D47M393 1996
005.1'068--dc20 96-21517
CIP

Printed and bound in the United States of America.

14 15 16 17 18 19 MLML 06 05 04 03 02 01

Distributed in Canada by Penguin Books Canada Limited.

A CIP catalogue record for this book is available from the British Library.

Microsoft Press books are available through booksellers and distributors worldwide. For further informa-
tion about international editions, contact your local Microsoft Corporation office or contact Microsoft Press
International directly at fax (425) 936-7329. Visit our Web site at www.microsoft.com/mspress. Send
comments to mspinput@microsoft.com.

AT&T is a registered trademark of American Telephone and Telegraph Company. Apple and Macintosh are
registered trademarks of Apple Computer, Inc. Boeing is a registered trademark of The Boeing Company.
Borland and Delphi are registered trademarks of Borland International, Inc. FileMaker is a registered
trademark of Claris Corporation. Dupont is a registered trademark of E.I. Du Pont de Nemours and
Company. Gupta is a registered trademark of Gupta Corporation (a California Corporation). Hewlett-
Packard is a registered trademark of Hewlett-Packard Company. Intel is a registered trademark of Intel
Corporation. IBM is a registered trademark of International Business Machines Corporation. ITT is a
registered trademark of International Telephone and Telegraph Corporation. FoxPro, Microsoft, MS-DOS,
PowerPoint, Visual Basic, Windows, and Windows NT are registered trademarks and Visual FoxPro is a
trademark of Microsoft Corporation. Powersoft is a registered trademark and PowerBuilder is a trademark
of PowerSoft Corporation. Raytheon is a registered trademark of Raytheon Company. Other product and
company names mentioned herein may be the trademarks of their respective owners.

The example companies, organizations, products, domain names, e-mail addresses, logos, people, places,
and events depicted herein are fictitious. No association with any real company, organization, product,
domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

Acquisitions Editor: David J. Clark


Project Editor: Jack Litewka
Contents

Case Studies ix
Reference Tables x
Preface xiii

PART I EFFICIENT DEVELOPMENT


1 Welcome to Rapid Development 1
What Is Rapid Development? • Attaining Rapid Development
2 Rapid-Development Strategy 5
General Strategy for Rapid Development • Four Dimensions of
Development Speed • General Kinds of Fast Development • Which
Dimension Matters the Most? • An Alternative Rapid-Development
Strategy • Further Reading
3 Classic Mistakes 29
Case Study in Classic Mistakes • Effect of Mistakes on a Development
Schedule • Classic Mistakes Enumerated • Escape from Gilligan’s
Island • Further Reading
4 Software-Development Fundamentals 51
Management Fundamentals • Technical Fundamentals • Quality-
Assurance Fundamentals • Following the Instructions • Further
General Reading
5 Risk Management 81
Elements of Risk Management • Risk Identification • Risk Analysis •
Risk Prioritization • Risk Control • Risk, High Risk, and Gambling •
Further Reading

PART II RAPID DEVELOPMENT


6 Core Issues in Rapid Development 109
Does One Size Fit All? • What Kind of Rapid Development Do You
Need? • Odds of Completing on Time • Perception and Reality • Where
the Time Goes • Development-Speed Trade-Offs • Typical Schedule-
Improvement Pattern • Onward to Rapid Development • Further Reading


7 Lifecycle Planning 133
Pure Waterfall • Code-and-Fix • Spiral • Modified Waterfalls • Evolution-
ary Prototyping • Staged Delivery • Design-to-Schedule • Evolutionary
Delivery • Design-to-Tools • Commercial Off-the-Shelf Software •
Choosing the Most Rapid Lifecycle for Your Project • Further Reading
8 Estimation 163
The Software-Estimation Story • Estimation-Process Overview • Size
Estimation • Effort Estimation • Schedule Estimation • Ballpark Schedule
Estimates • Estimate Refinement • Further Reading
9 Scheduling 205
Overly Optimistic Scheduling • Beating Schedule Pressure •
Further Reading
10 Customer-Oriented Development 233
Customers’ Importance to Rapid Development • Customer-Oriented
Practices • Managing Customer Expectations • Further Reading
11 Motivation 249
Typical Developer Motivations • Using the Top Five Motivation
Factors • Using Other Motivation Factors • Morale Killers •
Further Reading
12 Teamwork 273
Software Uses of Teamwork • Teamwork’s Importance to Rapid
Development • Creating a High-Performance Team • Why Teams
Fail • Long-Term Teambuilding • Summary of Teamwork
Guidelines • Further Reading
13 Team Structure 297
Team-Structure Considerations • Team Models • Managers and Technical
Leads • Further Reading
14 Feature-Set Control 319
Early Project: Feature-Set Reduction • Mid-Project: Feature-Creep
Control • Late Project: Feature Cuts • Further Reading
15 Productivity Tools 345
Role of Productivity Tools in Rapid Development • Productivity-Tool
Strategy • Productivity-Tool Acquisition • Productivity-Tool Use •
Silver-Bullet Syndrome • Further Reading
16 Project Recovery 371
General Recovery Options • Recovery Plan • Further Reading


PART III BEST PRACTICES


Introduction to Best Practices 390
Organization of Best-Practice Chapters • Summary of Best-Practice
Candidates • Summary of Best-Practice Evaluations
17 Change Board 403
18 Daily Build and Smoke Test 405
19 Designing for Change 415
20 Evolutionary Delivery 425
21 Evolutionary Prototyping 433
22 Goal Setting 445
23 Inspections 447
24 Joint Application Development (JAD) 449
25 Lifecycle Model Selection 465
26 Measurement 467
27 Miniature Milestones 481
28 Outsourcing 491
29 Principled Negotiation 503
30 Productivity Environments 505
31 Rapid-Development Languages (RDLs) 515
32 Requirements Scrubbing 525
33 Reuse 527
34 Signing Up 539
35 Spiral Lifecycle Model 547
36 Staged Delivery 549
37 Theory-W Management 559
38 Throwaway Prototyping 569
39 Timebox Development 575
40 Tools Group 585
41 Top-10 Risks List 587


42 User-Interface Prototyping 589


43 Voluntary Overtime 599

Bibliography 609
Index 625

Case Studies

2-1. Rapid Development Without a Clear Strategy 6


2-2. Rapid Development with a Clear Strategy 25
3-1. Classic Mistakes 29
4-1. Lack of Fundamentals 52
5-1. Lack of Contractor Risk Management 82
5-2. Systematic Risk Management 103
6-1. Wandering in the Fuzzy Front End 124
7-1. Ineffective Lifecycle Model Selection 134
7-2. Effective Lifecycle Model Selection 159
8-1. Seat-of-the-Pants Project Estimation 164
8-2. Careful Project Estimation 200
9-1. A Successful Schedule Negotiation 229
10-1. The Requirements Club 234
10-2. The Requirements Club Revisited 246
11-1. A Disheartening Lunch with the Boss 250
11-2. A Highly Motivational Environment 270
12-1. You Call This a Team? 274
12-2. A High-Performance Team 277
12-3. Typical Team-Member Selection 282
12-4. A Second High-Performance Team 294
13-1. Mismatch Between Project Objectives and
Team Structure 297
13-2. Good Match Between Project Objectives and
Team Structure 315
14-1. Managing Change Effectively 342
15-1. Ineffective Tool Use 346
15-2. Effective Tool Use 368
16-1. An Unsuccessful Project Recovery 372
16-2. A Successful Project Recovery 385

Reference Tables

2-1. Characteristics of Standard Approaches to Schedule-Oriented Development 18
2-2. Code-Like-Hell Approach Compared to This Book’s
Approach 24
3-1. Summary of Classic Mistakes 49
5-1. Levels of Risk Management 84
5-2. Most Common Schedule Risks 86
5-3. Potential Schedule Risks 87
5-4. Example of a Risk-Assessment Table 92
5-5. Example of a Prioritized Risk-Assessment Table 95
5-6. Means of Controlling the Most Common
Schedule Risks 98
5-7. Example of a “Top-10 Risks List” 101
6-1. Approximate Activity Breakdown by Size of Project 122
7-1. Lifecycle Model Strengths and Weaknesses 156
8-1. Estimate Multipliers by Project Phase 169
8-2. Function-Point Multipliers 176
8-3. Example of Computing the Number of Function Points 177
8-4. Example of a Risk-Quantification Estimate 180
8-5. Example of a Case-Based Estimate 181
8-6. Example of a Confidence-Factor Estimate 182
8-7. Exponents for Computing Schedules from Function Points 185
8-8. Shortest Possible Schedules 190
8-9. Efficient Schedules 194
8-10. Nominal Schedules 196
8-11. Example of a Single-Point–Estimation History 197
8-12. Example of a Range-Estimation History 198
9-1. Scheduling History of Word for Windows 1.0 208
11-1. Comparison of Motivators for Programmer Analysts vs.
Managers and the General Population 252


11-2. Team Performance Ranked Against Objectives That Teams Were Told to Optimize 256
12-1. Practical Guidelines for Team Members and Leaders 295
13-1. Team Objectives and Team Structures 301
15-1. Example of Savings Realized by Switching from a 3GL to a 4GL
for 50 Percent of a 32,000 LOC Project 361
15-2. Example of Savings Realized by Switching from a 3GL to a 4GL
for 100 Percent of a 32,000 LOC Project 362
III-1. Summary of Best-Practice Candidates 396
III-2. Summary of Best-Practice Evaluations 400
26-1. Examples of Kinds of Measurement Data 470
26-2. Example of Time-Accounting Activities 472
28-1. Vendor-Evaluation Questionnaire 497
28-2. Contract Considerations 498
30-1. Differences in Office Environments Between Best and Worst
Performers in a Programming Competition 512
31-1. Approximate Function-Points to Lines-of-Code
Conversions 517
31-2. Approximate Language Levels 519
36-1. Example of a Staged-Delivery Schedule for a
Word Processor 552
37-1. Project Stakeholders and Their Objectives 560
37-2. Steps in Theory-W Project Management 562


Preface

Software developers are caught on the horns of a dilemma. One horn of the
dilemma is that developers are working too hard to have time to learn about
effective practices that can solve most development-time problems; the other
horn is that they won’t get the time until they do learn more about rapid
development.
Other problems in our industry can wait. It’s hard to justify taking time to
learn more about quality when you’re under intense schedule pressure to
“just ship it.” It’s hard to learn more about usability when you’ve worked 20
days in a row and haven’t had time to see a movie, go shopping, work out,
read the paper, mow your lawn, or play with your kids. Until we as an in-
dustry learn to control our schedules and free up time for developers and
managers to learn more about their professions, we will never have enough
time to put the rest of our house in order.
The development-time problem is pervasive. Several surveys have found that
about two-thirds of all projects substantially overrun their estimates (Lederer
and Prasad 1992, Gibbs 1994, Standish Group 1994). The average large
project misses its planned delivery date by 25 to 50 percent, and the size of
the average schedule slip increases with the size of the project (Jones 1994).
Year after year, development-speed issues have appeared at the tops of lists
of the most critical issues facing the software-development community
(Symons 1991).
Although the slow-development problem is pervasive, some organizations
are developing rapidly. Researchers have found 10-to-1 differences in pro-
ductivity between companies within the same industries, and some research-
ers have found even greater variations (Jones 1994).
The purpose of this book is to provide the groups that are currently on the
“1” side of that 10-to-1 ratio with the information they need to move toward
the “10” side of the ratio. This book will help you bring your projects un-
der control. It will help you deliver more functionality to your users in less
time. You don’t have to read the whole book to learn something useful; no
matter what state your project is in, you will find practices that will enable
you to improve its condition.


Who Should Read This Book?


Slow development affects everyone involved with software development,
including developers, managers, clients, and end-users—even their families
and friends. Each of these groups has a stake in solving the slow-develop-
ment problem, and there is something in this book for each of them.
This book is intended to help developers and managers know what’s pos-
sible, to help managers and clients know what’s realistic, and to serve as an
avenue of communication between developers, managers, and clients so that
they can tailor the best possible approach to meet their schedule, cost, qual-
ity, and other goals.

Technical Leads
This book is written primarily with technical leads or team leads in mind. If
that’s your role, you usually bear primary responsibility for increasing the
speed of software development, and this book explains how to do that.
It also describes the development-speed limits so that you’ll have a firm
foundation for distinguishing between realistic improvement programs and
wishful-thinking fantasies.
Some of the practices this book describes are wholly technical. As a tech-
nical lead, you should have no problem implementing those. Other practices
are more management oriented, and you might wonder why they are in-
cluded here. In writing the book, I have made the simplifying assumption
that you are Technical Super Lead—faster than a speeding hacker; more
powerful than a loco-manager; able to leap both technical problems and
management problems in a single bound. That is somewhat unrealistic, I
know, but it saves both of us from the distraction of my constantly saying,
“If you’re a manager, do this, and if you’re a developer, do that.” Moreover,
assuming that technical leads are responsible for both technical and man-
agement practices is not as far-fetched as it might sound. Technical leads are
often called upon to make recommendations to upper management about
technically oriented management issues, and this book will help prepare you
to do that.

Individual Programmers
Many software projects are run by individual programmers or self-managed
teams, and that puts individual technical participants into de facto techni-
cal-lead roles. If you’re in that role, this book will help you improve your
development speed for the same reasons that it will help bona fide tech-
nical leads.


Managers
Managers sometimes think that achieving rapid software development is
primarily a technical job. If you’re a manager, however, you can usually do
as much to improve development speed as your developers can. This book
describes many management-level rapid-development practices. Of course,
you can also read the technically oriented practices to understand what your
developers can do at their level.

Key Benefits of This Book


I conceived of this book as a Common Sense for software developers. Like
Thomas Paine’s original Common Sense, which laid out in pragmatic terms
why America should secede from Mother England, this book lays out in
pragmatic terms why many of our most common views about rapid devel-
opment are fundamentally broken. These are the times that try developers’
souls, and, for that reason, this book advocates its own small revolution in
software-development practices.
My view of software development is that software projects can be optimized
for any of several goals—lowest defect rate, fastest execution speed, great-
est user acceptance, best maintainability, lowest cost, or shortest develop-
ment schedule. Part of an engineering approach to software is to balance
trade-offs: Can you optimize for development time by cutting quality? By
cutting usability? By requiring developers to work overtime? When crunch
time comes, how much schedule reduction can you ultimately achieve? This
book helps answer such key trade-off questions as well as other questions.
Improved development speed. You can use the strategy and best prac-
tices described in this book to achieve the maximum possible development
speed in your specific circumstances. Over time, most people can realize dra-
matic improvements in development speed by applying the strategies and
practices described in this book. Some best practices won’t work on some
kinds of projects, but for virtually any kind of project, you’ll find other best
practices that will. Depending on your circumstances, “maximum develop-
ment speed” might not be as fast as you’d like, but you’ll never be completely
out of luck just because you can’t use a rapid-development language, are
maintaining legacy code, or work in a noisy, unproductive environment.
Rapid-development slant on traditional topics. Some of the practices
described in this book aren’t typically thought of as rapid-development prac-
tices. Practices such as risk management, software-development fundamen-
tals, and lifecycle planning are more commonly thought of as “good
software-development practices” than as rapid-development methodologies.


These practices, however, have profound development-speed implications that in many cases dwarf those of the so-called rapid-development methods.
This book puts the development-speed benefits of these practices into con-
text with other practices.
Practical focus. To some people, “practical” means “code,” and to those
people I have to admit that this book might not seem very practical. I’ve
avoided code-focused practices for two reasons. First, I’ve already written
800 pages about effective coding practices in Code Complete (Microsoft Press,
1993). I don’t have much more to say about them. Second, it turns out that
many of the critical insights about rapid development are not code-focused;
they’re strategic and philosophical. Sometimes, there is nothing more prac-
tical than a good theory.
Quick-reading organization. I’ve done all I can to present this book’s
rapid-development information in the most practical way possible. The first
400 pages of the book (Parts I and II) describe a strategy and philosophy
of rapid development. About 50 pages of case studies are integrated into that
discussion so that you can see how the strategy and philosophy play out
in practice. If you don’t like case studies, they’ve been formatted so that
you can easily skip them. The rest of the book consists of a set of rapid-
development best practices. The practices are described in quick-reference
format so that you can skim to find the practices that will work best on your
projects. The book describes how to use each practice, how much sched-
ule reduction to expect, and what risks to watch out for.
The book also makes extensive use of marginal icons and text to help you
quickly find additional information related to the topic you’re reading about,
avoid classic mistakes, zero in on best practices, and find quantitative sup-
port for many of the claims made in this book.
A new way to think about the topic of rapid development. In no other
area of software development has there been as much disinformation as in
the area of rapid development. Nearly useless development practices have
been relentlessly hyped as “rapid-development practices,” which has caused
many developers to become cynical about claims made for any development
practices whatsoever. Other practices are genuinely useful, but they have
been hyped so far beyond their real capabilities that they too have contrib-
uted to developers’ cynicism.
Each tool vendor and each methodology vendor wants to convince you that
their new silver bullet will be the answer to your development needs. In no
other software area do you have to work as hard to separate the wheat from
the chaff. This book provides guidelines for analyzing rapid-development
information and finding the few grains of truth.


This book provides ready-made mental models that will allow you to assess
what the silver-bullet vendors tell you and will also allow you to incorpo-
rate new ideas of your own. When someone comes into your office and says,
“I just heard about a great new tool from the GigaCorp Silver Bullet Com-
pany that will cut our development time by 80 percent!” you will know how
to react. It doesn’t matter that I haven’t said anything specifically about the
GigaCorp Silver Bullet Company or their new tool. By the time you finish
this book, you’ll know what questions to ask, how seriously to take
GigaCorp’s claims, and how to incorporate their new tool into your devel-
opment environment, if you decide to do that.
Unlike the authors of other books on rapid development, I’m not asking you to put all of
your eggs into a single, one-size-fits-all basket. I recognize that different
projects have different needs, and that one magic method is usually not
enough to solve even one project’s schedule problems. I have tried to be
skeptical without being cynical—to be critical of practices’ effectiveness but
to stop short of assuming that they don’t work. I revisit those old, overhyped
practices and salvage some that are still genuinely useful—even if they aren’t
as useful as they were originally promised to be.
Why is this book about rapid development so big? Developers in the
IS, shrink-wrap, military, and software-engineering fields have all discovered
valuable rapid-development practices, but the people from these different
fields rarely talk to one another. This book collects the most valuable prac-
tices from each field, bringing together rapid-development information from
a wide variety of sources for the first time.
Does anyone who needs to know about rapid development really have time
to read 650 pages about it? Possibly not, but a book half as long would have
had to be oversimplified to the point of uselessness. To compensate, I’ve
organized the book so that it can be read quickly and selectively—you can
read short snippets while you’re traveling or waiting. Chapters 1 and 2 con-
tain the material that you must read to understand how to develop products
more quickly. After you read those chapters, you can read whatever inter-
ests you most.

Why This Book Was Written


Clients’ and managers’ first response to the problem of slow development
is usually to increase the amount of schedule pressure and overtime they
heap on developers. Excessive schedule pressure occurs in about 75 percent
of all large projects and in close to 100 percent of all very large projects
(Jones 1994). Nearly 60 percent of developers report that the level of stress
they feel is increasing (Glass 1994c). The average developer in the U.S. works
from 48 to 50 hours per week (Krantz 1995). Many work considerably more.


In this environment, it isn’t surprising that general job satisfaction of software developers has dropped significantly in the last 15 years (Zawacki 1993), and
at a time when the industry desperately needs to be recruiting additional
programmers to ease the schedule pressure, developers are spreading the
word to their younger sisters, brothers, and children that our field is no fun
anymore.
Clearly our field can be fun. Many of us got into it originally because we
couldn’t believe that people would actually pay us to write software. But
something not-so-funny happened on the way to the forum, and that some-
thing is intimately bound up with the topic of rapid development.
It’s time to start shoring up the dike that separates software developers from
the sea of scheduling madness. This book is my attempt to stick a few fin-
gers into that dike, holding the madness at bay long enough to get the job
started.

Acknowledgments
Heartfelt thanks go first to Jack Litewka, the project editor, for making the
creation of this book a thoroughly constructive and enjoyable experience.
Thanks also go to Kim Eggleston for the book’s design, to Michael Victor for
the diagrams, and to Mark Monlux for the terrific illustrations. Sally
Brunsman, David Clark, Susanne Freet, Dean Holmes, Wendy Maier, and
Heidi Saastamoinen also helped this project go smoothly. Literally dozens
of other people contributed to this book in one way or another; I didn’t have
personal contact with any of them, but I appreciate their contributions, too.
(Chief among these, I am told, are layout artist Jeannie McGivern and pro-
duction manager Jean Trenary of ArtSource and Microsoft Press’s proof/copy-
edit platoon supervised by Brenda Morris: Richard Carey, Roger LeBlanc,
Patrick Forgette, Ron Drummond, Patricia Masserman, Paula Thurman,
Jocelyn Elliott, Deborah Long, and Devon Musgrave.)
Microsoft Corporation’s technical library provided invaluable aid in digging
up the hundreds of books and articles that laid the foundation for this book.
Keith Barland spearheaded that effort, making my research efforts much less
arduous and time-consuming than they otherwise might have been. Other
people at the library who helped included Janelle Jones, Christine Shannon,
Linda Shaw, Amy Victor, Kyle Wagner, Amy Westfall, and Eilidh Zuvich.
I expound on the virtue of reviews in several places in this book, and this
book has benefited greatly from extensive peer reviews. Al Corwin, Pat
Forman, Tony Garland, Hank Meuret, and Matt Peloquin stuck with the
project from beginning to end. Thanks to them for seeing that the book you

hold in your hands doesn’t look very much like the book I originally set out
to write! I also received valuable comments from Wayne Beardsley, Duane
Bedard, Ray Bernard, Bob Glass, Sharon Graham, Greg Hitchcock, Dave
Moore, Tony Pisculli, Steve Rinn, and Bob Stacy—constructive critics, all.
David Sommer (age 11) came up with the idea for the last panel of Figure
14-3. Thanks, David. And, finally, I’d like to thank my wife, Tammy, for her
moral support and good humor. I have to start working on my third book
immediately so that she will stop elbowing me in the ribs and calling me a
Two-Time Author!
Bellevue, Washington
June 1996

3
Classic Mistakes

Contents
3.1 Case Study in Classic Mistakes
3.2 Effect of Mistakes on a Development Schedule
3.3 Classic Mistakes Enumerated
3.4 Escape from Gilligan’s Island

Related Topics
Risk management: Chapter 5
Rapid-development strategy: Chapter 2

SOFTWARE DEVELOPMENT IS A COMPLICATED ACTIVITY. A typical software project can present more opportunities to learn from mistakes than
some people get in a lifetime. This chapter examines some of the classic mis-
takes that people make when they try to develop software rapidly.

3.1 Case Study in Classic Mistakes


The following case study is a little bit like the children’s picture puzzles in
which you try to find all the objects whose names begin with the letter “M”.
How many classic mistakes can you find in the following case study?

Case Study 3-1. Classic Mistakes

Mike, a technical lead for Giga Safe, was eating lunch in his office and look-
ing out his window on a bright April morning.
“Mike, you got the funding for the Giga-Quote program! Congratulations!” It
was Bill, Mike’s boss at Giga, a medical insurance company. “The executive
committee loved the idea of automating our medical insurance quotes. It also

loved the idea of uploading the day’s quotes to the head office every night
so that we always have the latest sales leads online. I’ve got a meeting now,
but we can discuss the details later. Good job on that proposal!”
Mike had written the proposal for the Giga-Quote program months earlier, but
his proposal had been for a stand-alone PC program without any ability to
communicate with the head office. Oh well. This would give him a chance
to lead a client-server project in a modern GUI environment—something he
had wanted to do. They had almost a year to do the project, and that should
give them plenty of time to add a new feature. Mike picked up the phone and
dialed his wife’s number. “Honey, let’s go out to dinner tonight to celebrate...”
The next morning, Mike met with Bill to discuss the project. “OK, Bill. What’s
up? This doesn’t sound like quite the same proposal I worked on.”
Bill felt uneasy. Mike hadn’t participated in the revisions to the proposal, but
there hadn’t been time to involve him. Once the executive committee heard
about the Giga-Quote program, they’d taken over. “The executive committee
loves the idea of building software to automate medical insurance quotes. But
they want to be able to transfer the field quotes into the mainframe computer
automatically. And they want to have the system done before our new rates
take effect January 1. They moved the software-complete date you proposed
up from March 1 to November 1, which shrinks your schedule to 6 months.”
Mike had estimated the job would take 12 months. He didn’t think they had
much chance of finishing in 6 months, and he told Bill so. “Let me get this
straight,” Mike said. “It sounds like you’re saying that the committee added a
big communications requirement and chopped the schedule from 12 months
to 6?”
Bill shrugged. “I know it will be a challenge, but you’re creative, and I think
you can pull it off. They approved the budget you wanted, and adding the
communications link can’t be that hard. You asked for 36 staff-months, and
you got it. You can recruit anyone you like to work on the project and increase
the team size, too.” Bill told him to go talk with some other developers and
figure out a way to deliver the software on time.
Mike got together with Carl, another technical lead, and they looked for ways
to shorten the schedule. “Why don’t you use C++ and object-oriented design?”
Carl asked. “You’ll be more productive than with C, and that should shave a
month or two off the schedule.” Mike thought that sounded good. Carl also
knew of a report-building tool that was supposed to cut development time
in half. The project had a lot of reports, so those two changes would get them
down to about 9 months. They were due for newer, faster hardware, too, and
that could shave off a couple weeks. If he could recruit really top-notch de-
velopers, that might bring them down to about 7 months. That should be close
enough. Mike took his findings back to Bill.

“Look,” Bill said. “Getting the schedule down to 7 months is good, but it isn’t
good enough. The committee was very clear about the 6-month deadline. They
didn’t give me a choice. I can get you the new hardware you want, but you
and your team are going to have to find some way to get the schedule down
to 6 months or work some overtime to make up the difference.”
Mike considered the fact that his initial estimate had just been a ballpark guess
and thought maybe he could pull it off in 6 months. “OK, Bill. I’ll hire a couple
of sharp contractors for the project. Maybe we can find some people with
communications experience to help with uploading data from the PC to the
mainframe.”
By May 1, Mike had put a team together. Jill, Sue, and Tomas were solid, in-
house developers, and they happened to be unassigned. He rounded out the
team with Keiko and Chip, two contractors. Keiko had experience both on
PCs and the kind of mainframe they would interface with. Jill and Tomas had
interviewed Chip and recommended against hiring him, but Mike was im-
pressed. He had communications experience and was available immediately,
so Mike hired him anyway.
At the first team meeting, Bill told the team that the Giga-Quote program was
strategically important to the Giga Safe Corporation. Some of the top people
in the company would be watching them. If they succeeded, there would be
rewards all around. He said he was sure that they could pull it off.
After Bill’s pep talk, Mike sat down with the team and laid out the schedule.
The executive committee had more or less handed them a specification, and
they would spend the next 2 weeks filling in the gaps. Then they’d spend 6
weeks on design, which would leave them 4 months for construction and
testing. His seat-of-the-pants estimate was that the final product would con-
sist of about 30,000 lines of code in C++. Everyone around the table nodded
agreement. It was ambitious, but they’d known that when they signed up for
the project.
The next week, Mike met with Stacy, the testing lead. She explained that they
should begin handing product builds over to testing no later than September
1, and they should aim to hand over a feature-complete build by October 1.
Mike agreed.
The team finished the requirements specification quickly, and dove into design.
They came up with a design that seemed to make good use of C++’s features.
They finished the design by June 15, ahead of schedule, and began coding
like crazy to meet their goal of a first-release-to-testing by September 1. Work
on the project wasn’t entirely smooth. Neither Jill nor Tomas liked Chip, and
Sue had also complained that he wouldn’t let anyone near his code. Mike
attributed the personality clashes to the long hours everyone was working.
Nevertheless, by early August, they reported that they were between 85-percent and 90-percent done.
In mid-August, the actuarial department released the rates for the next year,
and the team discovered that they had to accommodate an entirely new rate
structure. The new rating method required them to ask questions about ex-
ercise habits, drinking habits, smoking habits, recreational activities, and other
factors that hadn’t been included in the rating formulas before. C++, they
thought, was supposed to shield them from the effects of such changes. They
had been counting on just plugging some new numbers into a ratings table.
But they had to change the input dialogs, database design, database access,
and communications objects to accommodate the new structure. As the team
scrambled to retrofit their design, Mike told Stacy that they might be a few
days late releasing the first build to testing.
The team didn’t have a build ready by September 1, and Mike assured Stacy
that the build was only a day or two away.
Days turned into weeks. The October 1 deadline for handing over the feature-
complete build to testing came and went. Development still hadn’t handed
over the first build to testing. Stacy called a meeting with Bill to discuss the
schedule. “We haven’t gotten a build from development yet,” she said. “We
were supposed to get our first build on September 1, and since we haven’t
gotten one yet, they’ve got to be at least a full month behind schedule. I think
they’re in trouble.”
“They’re in trouble, all right,” Bill said. “Let me talk to the team. I’ve prom-
ised 600 agents that they would have this program by November 1. We have
to get that program out in time for the rate change.”
Bill called a team meeting. “This is a fantastic team, and you should be meeting
your commitments,” he told them. “I don’t know what’s gone wrong here, but
I expect everyone to work hard and deliver this software on time. You can
still earn your bonuses, but now you’re going to have to work for them. As
of now, I’m putting all of you on a 6-day-per-week, 10-hour-per-day sched-
ule until this software is done.” After the meeting, Jill and Tomas grumbled
to Mike about not needing to be treated like children, but they agreed to work
the hours Bill wanted.
The team slipped the schedule two weeks, promising a feature-complete build
by November 15. That allowed for 6 weeks of testing before the new rates
went into effect in January.
The team released its first build to testing 4 weeks later on November 1 and
met to discuss a few remaining problem areas.

Tomas was working on report generation and had run into a roadblock. “The
quote summary page includes a simple bar chart. I’m using a report genera-
tor that’s supposed to generate bar charts, but the only way it will generate
them is on pages by themselves. We have a requirement from the sales group
to put the text and bar charts on the same page. I’ve figured out that I can
hack up a report with a bar chart by passing in the report text as a legend to
the bar-chart object. It’s definitely a hack, but I can always go back and
reimplement it more cleanly after the first release.”
Mike responded, “I don’t see where the issue is. We have to get the product
out, and we don’t have time to make the code perfect. Bill has made it crys-
tal clear that there can’t be any more slips. Do the hack.”
Chip reported that his communications code was 95-percent done and that
it worked, but he still had a few more tests to run. Mike caught Jill and Tomas
rolling their eyes, but he decided to ignore it.
The team worked hard through November 15, including working almost all
the way through the nights of the 14th and 15th, but they still didn’t make their
November 15 release date. The team was exhausted, but on the morning of
the 16th, it was Bill who felt sick. Stacy had called to tell him that develop-
ment hadn’t released its feature-complete build the day before. Last week he
had told the executive committee that the project was on track. Another project
manager, Claire, had probed into the team’s progress, saying that she had
heard that they weren’t making their scheduled releases to testing. Bill thought
Claire was uptight, and he didn’t like her. He had assured her that his team
was definitely on track to make their scheduled releases.
Bill told Mike to get the team together, and when he did, they looked defeated.
A month and a half of 60-hour weeks had taken their toll. Mike asked what
time today they would have the build ready, but the only response he got was
silence. “What are you telling me?” he said. “We are going to have the feature-
complete build today, aren’t we?”
“Look, Mike,” Tomas said. “I can hand off my code today and call it ‘feature
complete’, but I’ve probably got 3 weeks of cleanup work to do once I hand
it off.” Mike asked what Tomas meant by “cleanup.” “I haven’t gotten the
company logo to show up on every page, and I haven’t gotten the agent’s
name and phone number to print on the bottom of every page. It’s little stuff
like that. All of the important stuff works fine. I’m 99-percent done.”
“I’m not exactly 100-percent done either,” Jill admitted. “My old group has
been calling me for technical support a lot, and I’ve been spending a couple
hours a day working for them. Plus, I had forgotten until just now that we were
supposed to give the agents the ability to put their names and phone num-
bers on the reports. I haven’t implemented the dialogs to input that data yet,
and I still have to do some of the other housekeeping dialogs, too. I didn’t
think we needed them to make our ‘feature-complete’ milestone.”

Now Mike started to feel sick, too. “If I’m hearing what I think I’m hearing,
you’re telling me that we’re 3 weeks away from having feature-complete
software. Is that right?”
“Three weeks at least,” Jill said. The rest of the developers agreed. Mike went
around the table one by one and asked the developers if they could com-
pletely finish their assignments in 3 weeks. One by one, the developers said
that if they worked hard, they thought they could make it.
Later that day, after a long, uncomfortable discussion, Mike and Bill agreed
to slip the schedule 3 weeks to December 5, as long as the team promised to
work 12-hour days instead of 10. Bill said he needed to show his boss that
he was holding the development team’s feet to the fire. The revised sched-
ule meant that they would have to test the code and train the field agents
concurrently, but that was the only way they could hope to release the soft-
ware by January 1. Stacy complained that that wouldn’t give QA enough time
to test the software, but Bill overruled her.
On December 5, the Giga-Quote team handed off the feature-complete Giga-
Quote program to testing before noon and left work early to take a long-
awaited break. They had worked almost constantly since September 1.
Two days later, Stacy released the first bug list, and all hell broke loose. In
two days, the testing group had identified more than 200 defects in the Giga-
Quote program, including 23 that were classified as Severity 1—“Must Fix”—
errors. “I don’t see any way that the software will be ready to release to the
field agents by January 1,” she said. “It will probably take the test group that
long just to write the regression test cases for the defects we’ve already dis-
covered, and we’re finding new defects every hour.”
Mike called a staff meeting for 8 o’clock the next morning. The developers
were touchy. They said that although there were a few serious problems, a
lot of the reported bugs weren’t really bugs at all but were misinterpretations
of how the program was supposed to operate. Tomas pointed to bug #143
as an example. “The test report for bug #143 says that on the quote summary
page, the bar chart is required to be on the right side of the page rather than
the left. That’s hardly a Sev-1 error. This is typical of the way that testing
overreacts to problems.”
Mike distributed copies of the bug reports. He tasked the developers to re-
view the bugs that testing had assigned to them and to estimate how much
time it would take to fix each one.
When the team met again that afternoon, the news wasn’t good. “Realistically,
I would estimate that I have 2 weeks’ worth of work just to fix the bugs that
have already been reported,” Sue said. “Plus I still have to finish the referen-
tial integrity checks in the database. I’ve got 4 weeks of work right now, total.”

Tomas had assigned bug #143 back to testing, changing its priority from Sev-1
to Sev-3—“Cosmetic Change.” Testing had responded that Giga-Quote’s sum-
mary reports had to match similar reports generated by the mainframe policy-
renewal program, which were also similar to preprinted marketing materials
that the company had used for many years. The company’s 600 agents were
accustomed to giving their sales pitches with the bar chart on the right, and it
had to stay on the right. The bug stayed at Sev-1, and that created a problem.
“Remember the hack I used to get the bar chart and the report to print on the
same page in the first place?” Tomas asked. “To put the bar chart on the right,
I will have to rewrite this particular report from scratch, which means that I
will have to write my own low-level code to do the report formatting and
graphics.” Mike cringed, and asked for a ballpark estimate of how long all that
would take. Tomas said it would take at least 10 days, but he would have to
look into it more before he would know for sure.
Before he went home for the day, Mike told Stacy and Bill that the team would
work through the holidays and have all the reported defects fixed by Janu-
ary 7. Bill said he had almost been expecting this one and approved a 4-week
schedule slip before leaving for a monthlong Caribbean cruise he had been
planning since the previous summer.
Mike spent the next month holding the troops together. For 4 months, they
had been working as hard as it was possible to work, and he didn’t think he
could push them any harder. They were at the office 12 hours a day, but they
were spending a lot of time reading magazines, paying bills, and talking on
the phone. They seemed to make a point of getting irritable whenever he
asked how long it would take to get the bug count down. For every bug they
fixed, testing discovered two new ones. Bugs that should have taken minutes
to fix had projectwide implications and took days instead. They soon realized
there was no way they could fix all the defects by January 7.
On January 7, Bill returned from his vacation, and Mike told him that the
development team would need another 4 weeks. “Get serious,” Bill said. “I’ve
got 600 field agents who are tired of getting jerked around by a bunch of
computer guys. The executive committee is talking about canceling the project.
You have to find a way to deliver the software within the next 2 weeks, no
matter what.”
Mike called a team meeting to discuss their options. He told them about Bill’s
ultimatum and asked for a ballpark estimate of when they could release the
product, first just in weeks, then in months. The team was silent. No one would
hazard a guess about when they might finally release the product. Mike didn’t
know what to tell Bill.

After the meeting, Chip told Mike that he had accepted a contract with a dif-
ferent company that started February 3. Mike began to feel that it would be
a relief if the project were canceled.
Mike got Kip, the programmer who had been responsible for the mainframe
side of the PC-to-mainframe communications, reassigned to help out on the
project and assigned him to fix bugs in the PC communications code. After
struggling with Chip’s code for a week, Kip realized that it contained some
deep conceptual flaws that meant it could never work correctly. Kip was
forced to redesign and reimplement the PC side of the PC-to-mainframe com-
munications link.
As Bill rambled on at an executive meeting in the middle of February, Claire
finally decided that she had heard enough and called a “stop work” on the
Giga-Quote program. She met with Mike on Friday. “This project is out of
control,” she said. “I haven’t gotten a reliable schedule estimate from Bill for
months. This was a 6-month project, and it’s now more than 3 months late
with no end in sight. I’ve looked over the bug statistics, and the team isn’t
closing the gap. You’re all working such long hours that you’re not even
making progress anymore. I want you all to take the weekend off; then I want
you to develop a detailed, step-by-step report that includes everything—and
I do mean everything—that remains to be done on that project. I don’t want
you to force-fit the project into an artificial schedule. If it’s going to take an-
other 9 months, I want to know that. I want that report by end-of-work
Wednesday. It doesn’t have to be fancy, but it does have to be complete.”
The development team was glad to have the weekend off, and during the next
week they attacked the detailed report with renewed energy. It was on Claire’s
desk Wednesday. She had the report reviewed by Charles, a software engi-
neering consultant who also reviewed the project’s bug statistics. Charles rec-
ommended that the team focus its efforts on a handful of error-prone modules,
that it immediately institute design and code reviews for all bug fixes, and that
the team start working regular hours so that they could get an accurate mea-
sure of how much effort was being expended on the project and how much
would be needed to finish.
Three weeks later, in the first week in March, the open-bug count had ticked
down a notch for the first time. Team morale had ticked up a notch, and based
on the steady progress being made, the consultant projected that the software
could be delivered—fully tested and reliable—by May 15. Since Giga Safe’s
semi-annual rate increase would go into effect July 1, Claire set the official
launch date for June 1.

Epilogue
The Giga-Quote program was released to the field agents according to plan
on June 1. Giga Safe’s field agents greeted it with a warm if somewhat skep-
tical reception.

The Giga Safe Corporation showed its appreciation for the development team’s
hard work by presenting each of the developers with a $250 bonus. A few
weeks later, Tomas asked for an extended leave of absence, and Jill went to
work for another company.
The final Giga-Quote product was delivered in 13 months rather than 6, a
schedule overrun of more than 100 percent. The developer effort, including
overtime, consisted of 98 staff-months, which was a 170-percent overrun of
the planned 36 staff-months.
CROSS-REFERENCE: For a table of ballpark estimates such as these for projects of various sizes, see Section 8.6, “Ballpark Schedule Estimates.”

The final product was determined to consist of about 40,000 nonblank, noncomment lines of code in C++, which was about 33 percent more than Mike’s seat-of-the-pants guess. As a product that was distributed to 600 in-house sites, Giga-Quote was a hybrid between a business product and a shrink-wrap product. A product of its size and type should normally have been completed in 11.5 months with 71 staff-months of effort. The project had overshot both of those nominals.
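For readers who want to check the arithmetic, the overrun figures above all reduce to one formula. Here is a minimal sketch in Python (illustrative only; the planned and actual figures come from the epilogue, and the overrun_percent helper is a name invented for this sketch):

    # Overrun expressed as a percentage of the planned amount
    def overrun_percent(actual, planned):
        return (actual - planned) / planned * 100.0

    print(overrun_percent(13, 6))          # schedule: ~117%, "more than 100 percent"
    print(overrun_percent(98, 36))         # effort: ~172%, the cited 170-percent overrun
    print(overrun_percent(40000, 30000))   # code size: ~33% more than the 30,000-line guess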

3.2 Effect of Mistakes on a Development Schedule


Michael Jackson (the singer, not the computer scientist) sang that “One bad
apple don’t spoil the whole bunch, baby.” That might be true for apples, but
it isn’t true for software. One bad apple can spoil your whole project.
A group of ITT researchers reviewed 44 projects in 9 countries to examine
the impact of 13 productivity factors on productivity (Vosburgh et al. 1984).
The factors included the use of modern programming practices, code diffi-
culty, performance requirements, level of client participation in requirements
specification, personnel experience, and several others. They divided each
of the factors into categories that you would expect to be associated with low,
medium, and high performance. For example, they divided the “modern
programming practices” factor into low use, medium use, and high use. Fig-
ure 3-1 on the next page shows what the researchers found for the “use of
modern programming practices” factor.
The longer you study Figure 3-1, the more interesting it becomes. The gen-
eral pattern it shows is representative of the findings for each of the produc-
tivity factors studied. The ITT researchers found that projects in the categories
that they expected to have poor productivity did in fact have poor produc-
tivity, such as the narrow range shown in the Low category in Figure 3-1. But
productivity in the high-performance categories varied greatly, such as the
wide range shown in the High category in Figure 3-1. Productivity of projects
in the High category varied from poor to excellent.


CROSS-REFERENCE: For more discussion of this specific graph, see Section 4.2, “Technical Fundamentals.”

[Figure 3-1 is a box plot. Its vertical axis is percent of nominal productivity, from -100 to +200, where 0 represents average (nominal) productivity. Its horizontal axis groups projects by use of modern programming practices as a percentage of the total system: Low (0–25%), Medium (26–75%), and High (76–100%). For each group, the plot shows the minimum, 25th percentile, mean, 75th percentile, and maximum.]

Figure 3-1. Findings for “Use of Modern Programming Practices” factor (Vosburgh et al. 1984). Doing a few things right doesn’t guarantee rapid development. You also have to avoid doing anything wrong.
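To make the five plotted values concrete, here is a minimal sketch in Python with entirely hypothetical productivity samples (the study’s raw data isn’t given here); it computes the minimum, 25th percentile, mean, 75th percentile, and maximum for each category of practice use:

    import statistics

    # Hypothetical percent-of-nominal productivity values per category
    samples = {
        "Low (0-25%)":     [-40, -25, -10, 0, 10],
        "Medium (26-75%)": [-30, -5, 10, 40, 90],
        "High (76-100%)":  [-50, 0, 30, 110, 200],
    }

    def nearest_rank(sorted_vals, q):
        # Nearest-rank percentile; adequate for an illustration
        return sorted_vals[round(q * (len(sorted_vals) - 1))]

    for name, vals in samples.items():
        vals = sorted(vals)
        print(name,
              "min:", vals[0],
              "25th:", nearest_rank(vals, 0.25),
              "mean:", statistics.mean(vals),
              "75th:", nearest_rank(vals, 0.75),
              "max:", vals[-1])

Note how, as in the study, the hypothetical High category spans the widest range, from well below nominal productivity to far above it.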

CROSS-REFERENCE: For more on the role that mistakes play in rapid development, see Section 2.1, “General Strategy for Rapid Development.”

That projects that were expected to have poor productivity do in fact have poor productivity shouldn’t surprise you. But the finding that many of the projects expected to have excellent productivity actually have poor productivity just might be a surprise. What this graph and other graphs like it throughout the book show is that the use of any specific best practice is necessary but not sufficient for achieving maximum development speed. Even if you do a few things right, such as making high use of modern programming practices, you might still make a mistake that nullifies your productivity gains.
When thinking about rapid development, it’s tempting to think that all you
have to do is identify the root causes of slow development and eliminate
them—and then you’ll have rapid development. The problem is that there
aren’t just a handful of root causes of slow development, and in the end
trying to identify the root causes of slow development isn’t very useful. It’s
like asking, “What is the root cause of my not being able to run a 4-minute
mile?” Well, I’m too old. I weigh too much. I’m too out of shape. I’m not
willing to train that hard. I don’t have a world-class coach or athletic facil-
ity. I wasn’t all that fast even when I was younger. The list goes on and on.
When you talk about exceptional achievements, the reasons that people don’t
rise to the top are simply too numerous to list. The Giga-Quote team in Case
Study 3-1 made many of the mistakes that have plagued software develop-
ers since the earliest days of computing. The software-development road is

mined with potholes, and the potholes you fall into partially determine how
quickly or slowly you develop software.
In software, one bad apple can spoil the whole bunch, baby. To slip into
slow development, all you need to do is make one really big mistake; to
achieve rapid development you need to avoid making any big mistakes. The
next section lists the most common of those big mistakes.

3.3 Classic Mistakes Enumerated


Some ineffective development practices have been chosen so often, by so
many people, with such predictable, bad results that they deserve to be called
“classic mistakes.” Most of the mistakes have a seductive appeal. Do you need
to rescue a project that’s behind schedule? Add more people! Do you want
to reduce your schedule? Schedule more aggressively! Is one of your key contributors aggravating the rest of the team? Wait until the end of the project
to fire him! Do you have a rush project to complete? Take whatever devel-
opers are available right now and get started as soon as possible!

Figure 3-2. The software project was riddled with mistakes, and all the king’s
managers and technical leads couldn’t rescue the project for anyone’s sake.


Developers, managers, and customers usually have good reasons for making
the decisions they do, and the seductive appeal of the classic mistakes is part
of the reason these mistakes have been made so often. But because they have
been made so many times, their consequences have become easy to predict.
And classic mistakes rarely produce the results that people hope for.
This section enumerates about two dozen classic mistakes. I have person-
ally seen each of these mistakes made at least once, and I’ve made more than
a few of them myself. Many of them crop up in Case Study 3-1. The com-
mon denominator of these mistakes is that you won’t necessarily get rapid
development if you avoid these mistakes, but you will definitely get slow de-
velopment if you don’t avoid them.
If some of these mistakes sound familiar, take heart—many other people have
made them too. Once you understand their effect on development speed you
can use this list to help with your project planning and risk management.
Some of the more significant mistakes are discussed in their own sections in
other parts of this book. Others are not discussed further. For ease of refer-
ence, the list has been divided along the development-speed dimensions of
people, process, product, and technology.

People
Here are some of the people-related classic mistakes.
CROSS-REFERENCE: For more on the uses and misuses of motivation, see Chapter 11, "Motivation."
1: Undermined motivation. Study after study has shown that motivation probably has a larger effect on productivity and quality than any other factor (Boehm 1981). In Case Study 3-1, management took steps that undermined morale throughout the project—from giving a hokey pep talk at the beginning to requiring overtime in the middle, from going on a long vacation while the team worked through the holidays to providing end-of-project bonuses that worked out to less than a dollar per overtime hour at the end.
2: Weak personnel. After motivation, either the individual capabilities of the
team members or their relationship as a team probably has the greatest in-
fluence on productivity (Boehm 1981, Lakhanpal 1993). Hiring from the
bottom of the barrel will threaten a rapid-development effort. In the case
study, personnel selections were made with an eye toward who could be
hired fastest instead of who would get the most work done over the life of
the project. That practice gets the project off to a quick start but doesn’t set
it up for rapid completion.
CROSS-REFERENCE: For more on creating effective teams, see Chapter 12, "Teamwork."
3: Uncontrolled problem employees. Failure to deal with problem personnel also threatens development speed. This is a common problem and has been well understood at least since Gerald Weinberg published The Psychology of Computer Programming in 1971. Failure to take action to deal with a problem employee is the most common complaint that team members have about their leaders (Larson and LaFasto 1989). In Case Study 3-1, the team knew that Chip was a bad apple, but the team lead didn't do anything about it. The result—redoing all of Chip's work—was predictable.
CROSS-REFERENCE: For more on heroics and commitment-based projects, see Section 2.5, "An Alternative Rapid-Development Strategy," "Commitment-Based Scheduling" in Section 8.5, and Chapter 34, "Signing Up."
4: Heroics. Some software developers place a high emphasis on project heroics (Bach 1995). But I think that they do more harm than good. In the case study, mid-level management placed a higher premium on can-do attitudes than on steady and consistent progress and meaningful progress reporting. The result was a pattern of scheduling brinkmanship in which impending schedule slips weren't detected, acknowledged, or reported up the management chain until the last minute. A small development team and its immediate management held an entire company hostage because they wouldn't admit that they were having trouble meeting their schedule. An emphasis on heroics encourages extreme risk taking and discourages cooperation among the many stakeholders in the software-development process.
Some managers encourage heroic behavior when they focus too strongly on
can-do attitudes. By elevating can-do attitudes above accurate-and-some-
times-gloomy status reporting, such project managers undercut their ability
to take corrective action. They don’t even know they need to take correc-
tive action until the damage has been done. As Tom DeMarco says, can-do
attitudes escalate minor setbacks into true disasters (DeMarco 1995).
CROSS-REFERENCE: For alternative means of rescuing a late project, see Chapter 16, "Project Recovery."
5: Adding people to a late project. This is perhaps the most classic of the classic mistakes. When a project is behind, adding people can take more productivity away from existing team members than it adds through new ones. Fred Brooks likened adding people to a late project to pouring gasoline on a fire (Brooks 1975).
CROSS-REFERENCE: For more on the effects of the physical environment on productivity, see Chapter 30, "Productivity Environments."
6: Noisy, crowded offices. Most developers rate their working conditions as unsatisfactory. About 60 percent report that they are neither sufficiently quiet nor sufficiently private (DeMarco and Lister 1987). Workers who occupy quiet, private offices tend to perform significantly better than workers who occupy noisy, crowded work bays or cubicles. Noisy, crowded work environments lengthen development schedules.
CROSS-REFERENCE: For more on effective customer relations, see Chapter 10, "Customer-Oriented Development."
7: Friction between developers and customers. Friction between developers and customers can arise in several ways. Customers may feel that developers are not cooperative when they refuse to sign up for the development schedule that the customers want or when they fail to deliver on their promises. Developers may feel that customers are unreasonably insisting on unrealistic schedules or requirements changes after the requirements have been baselined. There might simply be personality conflicts between the two groups.


The primary effect of this friction is poor communication, and the second-
ary effects of poor communication include poorly understood requirements,
poor user-interface design, and, in the worst case, customers’ refusing to
accept the completed product. On average, friction between customers and
software developers becomes so severe that both parties consider canceling
the project (Jones 1994). Such friction is time-consuming to overcome, and
it distracts both customers and developers from the real work of the project.
CROSS-REFERENCE: For more on setting expectations, see Section 10.3, "Managing Customer Expectations."
8: Unrealistic expectations. One of the most common causes of friction between developers and their customers or managers is unrealistic expectations. In Case Study 3-1, Bill had no sound reason to think that the Giga-Quote program could be developed in 6 months, but that's when the company's executive committee wanted it done. Mike's inability to correct that unrealistic expectation was a major source of problems.
In other cases, project managers or developers ask for trouble by getting
funding based on overly optimistic schedule estimates. Sometimes they prom-
ise a pie-in-the-sky feature set.
Although unrealistic expectations do not in themselves lengthen develop-
ment schedules, they contribute to the perception that development sched-
ules are too long, and that can be almost as bad. A Standish Group survey
listed realistic expectations as one of the top five factors needed to ensure
the success of an in-house business-software project (Standish Group 1994).
9: Lack of effective project sponsorship. High-level project sponsorship is
necessary to support many aspects of rapid development, including realis-
tic planning, change control, and the introduction of new development prac-
tices. Without an effective executive sponsor, other high-level personnel in
your organization can force you to accept unrealistic deadlines or make
changes that undermine your project. Australian consultant Rob Thomsett
argues that lack of an effective executive sponsor virtually guarantees project
failure (Thomsett 1995).
10: Lack of stakeholder buy-in. All the major players in a software-development effort must buy in to the project. That includes the executive sponsor, team leader, team members, marketing staff, end-users, customers, and anyone else who has a stake in it. The close cooperation that rapid development demands occurs only when you have complete buy-in from all stakeholders; without it, the precise coordination that a rapid-development effort requires is impossible to attain.
11: Lack of user input. The Standish Group survey found that the number-one reason IS projects succeed is user involvement (Standish Group 1994). Projects without early end-user involvement risk misunderstanding their requirements and are vulnerable to time-consuming feature creep later in the project.


CROSS-REFERENCE: For more on healthy politics, see Section 10.3, "Managing Customer Expectations."
12: Politics placed over substance. Larry Constantine reported on four teams that had four different kinds of political orientations (Constantine 1995a). "Politicians" specialized in "managing up," concentrating on relationships with their managers. "Researchers" concentrated on scouting out and gathering information. "Isolationists" kept to themselves, creating project boundaries that they kept closed to non-team members. "Generalists" did a little bit of everything: they tended their relationships with their managers, performed research and scouting activities, and coordinated with other teams through the course of their normal workflow. Constantine reported that initially the political and generalist teams were both well regarded by top management. But after a year and a half, the political team was ranked dead last. Putting politics over results is fatal to speed-oriented development.
13: Wishful thinking. I am amazed at how many problems in software de-
velopment boil down to wishful thinking. How many times have you heard
statements like these from different people:
“None of the team members really believed that they could complete
the project according to the schedule they were given, but they thought
that maybe if everyone worked hard, and nothing went wrong, and they
got a few lucky breaks, they just might be able to pull it off.”

“Our team hasn’t done very much work to coordinate the interfaces
among the different parts of the product, but we’ve all been in good
communication about other things, and the interfaces are relatively
simple, so it’ll probably take only a day or two to shake out the bugs.”
“We know that we went with the low-ball contractor on the database
subsystem, and it was hard to see how they were going to complete the
work with the staffing levels they specified in their proposal. They didn’t
have as much experience as some of the other contractors, but maybe
they can make up in energy what they lack in experience. They’ll prob-
ably deliver on time.”

“We don’t need to show the final round of changes to the prototype to
the customer. I’m sure we know what they want by now.”
“The team is saying that it will take an extraordinary effort to meet the
deadline, and they missed their first milestone by a few days, but I think
they can bring this one in on time.”

Wishful thinking isn’t just optimism. It’s closing your eyes and hoping some-
thing works when you have no reasonable basis for thinking it will. Wish-
ful thinking at the beginning of a project leads to big blowups at the end of
a project. It undermines meaningful planning and may be at the root of more
software problems than all other causes combined.


Process
Process-related mistakes slow down projects because they squander people’s
talents and efforts. Here are some of the worst process-related mistakes.
CROSS-REFERENCE: For more on unrealistic schedules, see Section 9.1, "Overly Optimistic Scheduling."
14: Overly optimistic schedules. The challenges faced by someone building a 3-month application are quite different from the challenges faced by someone building a 1-year application. Setting an overly optimistic schedule sets a project up for failure by underscoping the project, undermining effective planning, and abbreviating critical upstream development activities such as requirements analysis and design. It also puts excessive pressure on developers, which hurts long-term developer morale and productivity. This was a major source of problems in Case Study 3-1.
CROSS-REFERENCE: For more on risk management, see Chapter 5, "Risk Management."
15: Insufficient risk management. Some mistakes are not common enough to be considered classic. Those are called "risks." As with the classic mistakes, if you don't actively manage risks, only one thing has to go wrong to change your project from a rapid-development project to a slow-development one. The failure to manage such unique risks is a classic mistake.
CROSS-REFERENCE: For more on contractors, see Chapter 28, "Outsourcing."
16: Contractor failure. Companies sometimes contract out pieces of a project when they are too rushed to do the work in-house. But contractors frequently deliver work that's late, that's of unacceptably low quality, or that fails to meet specifications (Boehm 1989). Risks such as unstable requirements or ill-defined interfaces can be magnified when you bring a contractor into the picture. If the contractor relationship isn't managed carefully, the use of contractors can slow a project down rather than speed it up.
CROSS-REFERENCE: For more on planning, see "Planning" in Section 4.1.
17: Insufficient planning. If you don't plan to achieve rapid development, you can't expect to achieve it.
CROSS-REFERENCE: For more on planning under pressure, see Section 9.2, "Beating Schedule Pressure," and Chapter 16, "Project Recovery."
18: Abandonment of planning under pressure. Project teams make plans and then routinely abandon them when they run into schedule trouble (Humphrey 1989). The problem isn't so much in abandoning the plan as in failing to create a substitute, and then falling into code-and-fix mode instead. In Case Study 3-1, the team abandoned its plan after it missed its first delivery, and that's typical. The work after that point was uncoordinated and awkward—to the point that Jill even started working on a project for her old group part of the time and no one even knew it.
19: Wasted time during the fuzzy front end. The “fuzzy front end” is the time
before the project starts, the time normally spent in the approval and bud-
geting process. It’s not uncommon for a project to spend months or years
in the fuzzy front end and then to come out of the gates with an aggressive
schedule. It’s much easier and cheaper and less risky to save a few weeks
or months in the fuzzy front end than it is to compress a development sched-
ule by the same amount.


CROSS-REFERENCE: For more on shortchanging upstream activities, see "Effects of Overly Optimistic Schedules" in Section 9.1.
20: Shortchanged upstream activities. Projects that are in a hurry try to cut out nonessential activities, and since requirements analysis, architecture, and design don't directly produce code, they are easy targets. On one disastrous project that I took over, I asked to see the design. The team lead told me, "We didn't have time to do a design."
HARD DATA
The results of this mistake—also known as "jumping into coding"—are all too predictable. In the case study, a design hack in the bar-chart report was substituted for quality design work. Before the product could be released, the hack work had to be thrown out and the higher-quality work had to be done anyway. Projects that skimp on upstream activities typically have to do the same work downstream at anywhere from 10 to 100 times the cost of doing it properly in the first place (Fagan 1976; Boehm and Papaccio 1988). If you can't find the 5 hours to do the job right the first time, where are you going to find the 50 hours to do it right later?
21: Inadequate design. A special case of shortchanging upstream activities
is inadequate design. Rush projects undermine design by not allocating
enough time for it and by creating a pressure cooker environment that makes
thoughtful consideration of design alternatives difficult. The design emphasis
is on expediency rather than quality, so you tend to need several ultimately
time-consuming design cycles before you can finally complete the system.
CROSS-REFERENCE: For more on quality assurance, see Section 4.3, "Quality-Assurance Fundamentals."
HARD DATA
22: Shortchanged quality assurance. Projects that are in a hurry often cut corners by eliminating design and code reviews, eliminating test planning, and performing only perfunctory testing. In the case study, design reviews and code reviews were given short shrift in order to achieve a perceived schedule advantage. As it turned out, when the project reached its feature-complete milestone it was still too buggy to release for 5 more months. This result is typical. Shortcutting 1 day of QA activity early in the project is likely to cost you from 3 to 10 days of activity downstream (Jones 1994). This shortcut undermines development speed.
CROSS-REFERENCE: For more on management controls, see "Tracking" in Section 4.1 and Chapter 27, "Miniature Milestones."
23: Insufficient management controls. In the case study, few management controls were in place to provide timely warnings of impending schedule slips, and the few controls that were in place at the beginning were abandoned once the project ran into trouble. Before you can keep a project on track, you have to be able to tell whether it's on track in the first place.
CROSS-REFERENCE: For more on premature convergence, see "Premature convergence" in Section 9.1.
24: Premature or overly frequent convergence. Shortly before a product is scheduled to be released, there is a push to prepare the product for release—improve the product's performance, print final documentation, incorporate final help-system hooks, polish the installation program, stub out functionality that's not going to be ready on time, and so on. On rush projects, there is a tendency to force convergence early. Since it's not possible to force the product to converge when desired, some rapid-development projects try to force convergence a half dozen times or more before they finally succeed. The extra convergence attempts don't benefit the product. They just waste time and prolong the schedule.
CROSS-REFERENCE: For a list of commonly omitted tasks, see "Don't omit common tasks" in Section 8.3.
25: Omitting necessary tasks from estimates. If people don't keep careful records of previous projects, they forget about the less visible tasks, but those tasks add up. Omitted effort often adds about 20 to 30 percent to a development schedule (van Genuchten 1991).
CROSS-REFERENCE: For more on reestimation, see "Recalibration" in Section 8.7.
26: Planning to catch up later. One kind of reestimation is responding inappropriately to a schedule slip. If you're working on a 6-month project, and it takes you 3 months to meet your 2-month milestone, what do you do? Many projects simply plan to catch up later, but they never do. You learn more about the product as you build it, including more about what it will take to build it. That learning needs to be reflected in the reestimated schedule, as the sketch below illustrates.
Another kind of reestimation mistake arises from product changes. If the product you're building changes, the amount of time you need to build it changes too. In Case Study 3-1, major requirements changed between the original proposal and the project start without any corresponding reestimation of schedule or resources. Piling on new features without adjusting the schedule guarantees that you will miss your deadline.
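As a rough illustration of the milestone arithmetic above (the 6-month, 2-month, and 3-month figures come from the question; the proportional slip rule is an assumption of this sketch, not a formula stated in this chapter), assume that the slip ratio observed at the first milestone applies to the remaining work as well:

    #include <stdio.h>

    /* Hypothetical recalibration sketch: assume the slip ratio observed
       so far applies to the remaining work too. */
    int main(void)
    {
        double planned_total  = 6.0;   /* planned project length, months  */
        double planned_so_far = 2.0;   /* planned time to first milestone */
        double actual_so_far  = 3.0;   /* actual time to first milestone  */

        double slip_ratio = actual_so_far / planned_so_far;        /* 1.5 */
        double recalibrated = actual_so_far +
            (planned_total - planned_so_far) * slip_ratio;         /* 9.0 */

        printf("Recalibrated schedule: %.1f months, not %.1f\n",
               recalibrated, planned_total);
        return 0;
    }

Under that assumption, the 6-month project becomes a 9-month project. Planning to catch up amounts to betting that the remaining work will somehow go faster than the work you've already done.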
CROSS-REFERENCE: For more on the Code-and-Fix approach, see Section 7.2, "Code-and-Fix."
27: Code-like-hell programming. Some organizations think that fast, loose, all-as-you-go coding is a route to rapid development. If the developers are sufficiently motivated, they reason, they can overcome any obstacles. For reasons that will become clear throughout this book, this is far from the truth. This approach is sometimes presented as an "entrepreneurial" approach to software development, but it is really just a cover for the old Code-and-Fix paradigm combined with an ambitious schedule, and that combination almost never works. It's an example of two wrongs not making a right.

Product
Here are classic mistakes related to the way the product is defined.
28: Requirements gold-plating. Some projects have more requirements than
they need, right from the beginning. Performance is stated as a requirement
more often than it needs to be, and that can unnecessarily lengthen a soft-
ware schedule. Users tend to be less interested in complex features than mar-
keting and development are, and complex features add disproportionately
to a development schedule.
CROSS-REFERENCE: For more on feature creep, see Chapter 14, "Feature-Set Control."
29: Feature creep. Even if you're successful at avoiding requirements gold-plating, the average project experiences about a 25-percent change in requirements over its lifetime (Jones 1994). Such a change can produce at least a 25-percent addition to the software schedule, which can be fatal to a rapid-development project.
CROSS-REFERENCE: For an example of the way that developer gold-plating can occur even accidentally, see "Unclear or Impossible Goals" in Section 14.2.
30: Developer gold-plating. Developers are fascinated by new technology and are sometimes anxious to try out new features of their language or environment or to create their own implementation of a slick feature they saw in another product—whether or not it's required in their product. The effort required to design, implement, test, document, and support features that are not required lengthens the schedule.
31: Push-me, pull-me negotiation. One bizarre negotiating ploy occurs when
a manager approves a schedule slip on a project that’s progressing slower
than expected and then adds completely new tasks after the schedule
change. The underlying reason for this is hard to fathom, because the man-
ager who approves the schedule slip is implicitly acknowledging that the
schedule was in error. But once the schedule has been corrected, the same
person takes explicit action to make it wrong again. This can’t help but
undermine the schedule.
32: Research-oriented development. Seymour Cray, the designer of the Cray
supercomputers, says that he does not attempt to exceed engineering lim-
its in more than two areas at a time because the risk of failure is too high
(Gilb 1988). Many software projects could learn a lesson from Cray. If your
project strains the limits of computer science by requiring the creation of new
algorithms or new computing practices, you’re not doing software develop-
ment; you’re doing software research. Software-development schedules are
reasonably predictable; software research schedules are not even theoreti-
cally predictable.
If you have product goals that push the state of the art—algorithms, speed,
memory usage, and so on—you should assume that your scheduling is highly
speculative. If you’re pushing the state of the art and you have any other
weaknesses in your project—personnel shortages, personnel weaknesses,
vague requirements, unstable interfaces with outside contractors—you can
throw predictable scheduling out the window. If you want to advance the
state of the art, by all means, do it. But don’t expect to do it rapidly!

Technology
The remaining classic mistakes have to do with the use and misuse of modern
technology.
CROSS-REFERENCE: For more on the silver-bullet syndrome, see Section 15.5, "Silver-Bullet Syndrome."
33: Silver-bullet syndrome. In the case study, there was too much reliance on the advertised benefits of previously unused technologies (report generator, object-oriented design, and C++) and too little information about how well they would do in this particular development environment. When project teams latch onto a single new practice, new technology, or rigid process and expect it to solve their schedule problems, they are inevitably disappointed (Jones 1994).
CROSS-REFERENCE: For more on estimating savings from productivity tools, see "How Much Schedule Reduction to Expect" in Section 15.4.
34: Overestimated savings from new tools or methods. Organizations seldom improve their productivity in giant leaps, no matter how many new tools or methods they adopt or how good they are. Benefits of new practices are partially offset by the learning curves associated with them, and learning to use new practices to their maximum advantage takes time. New practices also entail new risks, which you're likely to discover only by using them. You are more likely to experience slow, steady improvement on the order of a few percent per project than you are to experience dramatic gains. The team in Case Study 3-1 should have planned on, at most, a 10-percent gain in productivity from the use of the new technologies instead of assuming that they would nearly double their productivity.
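To put "a few percent per project" into perspective (a hypothetical calculation of mine, not data from the sources cited above), even a steady 3-percent gain compounds to only about a 34-percent improvement over ten projects, nowhere near a doubling:

    #include <stdio.h>

    /* Hypothetical illustration: small per-project productivity gains
       compound slowly. 3% per project over 10 projects yields ~1.34x. */
    int main(void)
    {
        double productivity = 1.0;
        for (int project = 1; project <= 10; project++) {
            productivity *= 1.03;
            printf("After project %2d: %.2fx\n", project, productivity);
        }
        return 0;
    }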
CROSS-REFERENCE: For more on reuse, see Chapter 33, "Reuse."
A special case of overestimated savings arises when projects reuse code from previous projects. This kind of reuse can be a very effective approach, but the time savings is rarely as dramatic as expected.
35: Switching tools in the middle of a project. This is an old standby that
hardly ever works. Sometimes it can make sense to upgrade incrementally
within the same product line, from version 3 to version 3.1 or sometimes even
to version 4. But the learning curve, rework, and inevitable mistakes made
with a totally new tool usually cancel out any benefit when you’re in the
middle of a project.
CROSS-REFERENCE: For more on source-code control, see "Software Configuration Management" in Section 4.2.
36: Lack of automated source-code control. Failure to use automated source-code control exposes projects to needless risks. Without it, if two developers are working on the same part of the program, they have to coordinate their work manually. They might agree to put the latest versions of each file into a master directory and to check with each other before copying files into that directory. But someone invariably overwrites someone else's work. People develop new code to out-of-date interfaces and then have to redesign their code when they discover that they were using the wrong version of the interface. Users report defects that you can't reproduce because you have no way to re-create the build they were using. On average, source code changes at a rate of about 10 percent per month, and manual source-code control can't keep up (Jones 1994).
Table 3-1 contains a complete list of classic mistakes.


Table 3-1. Summary of Classic Mistakes

People-Related Mistakes
1. Undermined motivation
2. Weak personnel
3. Uncontrolled problem employees
4. Heroics
5. Adding people to a late project
6. Noisy, crowded offices
7. Friction between developers and customers
8. Unrealistic expectations
9. Lack of effective project sponsorship
10. Lack of stakeholder buy-in
11. Lack of user input
12. Politics placed over substance
13. Wishful thinking

Process-Related Mistakes
14. Overly optimistic schedules
15. Insufficient risk management
16. Contractor failure
17. Insufficient planning
18. Abandonment of planning under pressure
19. Wasted time during the fuzzy front end
20. Shortchanged upstream activities
21. Inadequate design
22. Shortchanged quality assurance
23. Insufficient management controls
24. Premature or overly frequent convergence
25. Omitting necessary tasks from estimates
26. Planning to catch up later
27. Code-like-hell programming

Product-Related Mistakes
28. Requirements gold-plating
29. Feature creep
30. Developer gold-plating
31. Push-me, pull-me negotiation
32. Research-oriented development

Technology-Related Mistakes
33. Silver-bullet syndrome
34. Overestimated savings from new tools or methods
35. Switching tools in the middle of a project
36. Lack of automated source-code control

3.4 Escape from Gilligan’s Island


A complete list of classic mistakes would go on for pages more, but those
presented are the most common and the most serious. As Seattle University’s
David Umphress points out, watching most organizations attempt to avoid
these classic mistakes seems like watching reruns of Gilligan’s Island. At the
beginning of each episode, Gilligan, the Skipper, or the Professor comes up
with a cockamamie scheme to get off the island. The scheme seems as
though it’s going to work for a while, but as the episode unfolds, something
goes wrong, and by the end of the episode the castaways find themselves
right back where they started—stuck on the island.


Similarly, most companies at the end of each project find that they have made
yet another classic mistake and that they have delivered yet another project
behind schedule or over budget or both.

Your Own List of Worst Practices


Be aware of the classic mistakes. Create lists of “worst practices” to avoid on
future projects. Start with the list in this chapter. Add to the list by conduct-
ing project postmortems to learn from your team’s mistakes. Encourage other
projects within your organization to conduct postmortems so that you can
learn from their mistakes. Exchange war stories with your colleagues in other
organizations, and learn from their experiences. Display your list of mistakes
prominently so that people will see it and learn not to make the same mis-
takes yet another time.

Further Reading
Although a few books discuss coding mistakes, there are no books that I
know of that describe classic mistakes related to development schedules.
Further reading on related topics is provided throughout the rest of this book.

50
Index

Special Characters “Automatic Programming: Myths and Prospects”


3GLs (third-generation languages), 365 (Rich and Waters), 366
switching from a, 360–62 autonomy, sense of
4GLs (fourth-generation languages), 365 of team members, 286–87
switching to a, 360–62 work itself as motivation and, 259
90-90 problem, 580 avoiding risks, 97. See also risk management

A B
Aburdene, Patricia, 258 Babich, W., 68
accountability, team structure and, 302 Bach, James, 41
accuracy Baker, F. Terry, 305, 306
of data, measurement and, 476 ballpark schedule estimates, 185–97
precision vs., 173 efficient schedules, 192–94
achievement motivation, 255 first thing to do with, 196–97
adherence to plan, overly optimistic scheduling nominal schedules, 194–96
and, 211 shortest possible schedules, 188–92
Advances in Software Inspections, 77 tables of, 186–88
air-traffic-control workstation software, 320 barn raising. See Amish barn raising team
Albrecht, Allan J., 174, 187 baseline report, 475
all-out rapid development, 20 Basili, Victor R., 12, 14, 74, 360, 469, 475, 556
Amish barn raising team, 273–74, 277, 279, 280, Baumert, John, 57
286, 287, 292 Bayer, Sam, 287
Andriole, Steven J., 573 Becoming a Technical Leader (Weinberg), 271
Applied Software Measurement (Jones), 204, 479 Belbin, Meredith, 284
appreciation (recognition) Belz, F. C., 161
failure of teams and lack of, 290 Bentley, Jon, 68, 436
lack of, as morale killer, 267–68 Bersoff, Edward H., 68
motivation and gestures of, 262–63 best practices in rapid development, 390–401.
“Article of Understanding” (with customers), 325 See also specific practices
Art of Software Testing, The (Myers), 76 chapters in this book on, 390
Assessment and Control of Software Risks (Jones), organization of, 392–94
106, 161, 204, 343, 369, 608 sections in, 394
assuming risks, 98 reasons for excluding candidates for, 395–96
Auerbach, Red, 51 summary of best-practice candidates, 394–400
August, Judy, 452, 455, 461, 463 summary of evaluations of, 400–401
Australian Computer Society Software summary tables on
Challenge, 61 “chance of first-time success” entry, 393
automatic programming, 366–67 “chance of long-term success” entry, 393–94

625
Index

best practices in rapid development, summary C


tables on, continued cancelation of projects, 319
“effect on schedule risk” entry, 393 Capability Maturity Model, The (Carnegie Mellon
“improvement in progress visibility” University/Software Engineering Institute),
entry, 393 28, 59
“major interactions and trade-offs” entry, 394 Card, David N., 12, 74, 249, 531, 532
“major risks” entry, 394 Carnegie Mellon University, 28
“potential reduction from nominal schedule” Carroll, Paul B., 23, 343, 488
entry, 392 case-based estimating, 181
sample table, 390–91 case studies
Bieman, James M., 431, 435, 436, 438, 439, 441, change management, effective, 342–43
443, 571, 597 classic mistakes, 29–37
Biggerstaff, Ted, 535 estimation
Bock, Douglas, 360 careful project estimation, 200–202
Boddie, John, 314, 338, 339, 388, 603, 604 seat-of-the-pants, 164
Boehm, Barry W., 12–13, 40, 44, 45, 62, 71–73, 84, fundamentals, lack of, 52–55
94, 106, 123, 124, 161, 181–83, 192, 203, 212, fuzzy front end, 124–25
217, 249, 257, 262, 263, 276, 277, 331, 335, lifecycle models
417, 462, 484, 511, 518, 559, 568, 585, 601 effective selection of lifecycle model, 159–61
Boeing, 256 ineffective selection of, 134–36
Booch, Grady, 67 motivation
Brady, Shelia, 543, 603 disheartening lunch with the boss, 250–51
breaking the build, penalty for, 410 highly motivational environment, 270–71
Brodman, Judith G., 474 productivity tools
Brooks, Frederick P., Jr., 41, 207, 222, 305, 311, effective tool use, 368–69
317, 348, 349, 365, 369, 378, 411, 418, 534 ineffective tool use, 346–47
Brooks, Ruven, 348 recovery of projects
budget, unrealistic expectations about, evolution- successful project recovery, 385–87
ary prototyping and, 436 unsuccessful project recovery, 372–73
buffers, project overrun and, 94 requirements, 234–36, 246
Bugsy (film), 344 risk management
build group, establishing a, 408–9 contractor, lack of, 82–83
building a program, 406. See also daily build and systematic, 103–5
smoke test schedule negotiation, successful, 229–30
Building Quality Software (Glass), 76 strategy for rapid development
Burlton, Roger, 55, 428 clear, 25–27
burnout lack of a clear, 6
overly optimistic scheduling and, 217 team structure
signing up and, 544 good match between project objectives and
business software, defined, 22 team structure, 315–16
business-team structure, 304 mismatch between project objectives and
Butler Cox Foundation, 293 team structure, 297–99
buy-in by stakeholders, lack of, 42
Bylinsky, Gene, 207

626
Index

case studies, continued cohesiveness of teams, performance and, 277


teamwork Comer, Ed, 531, 532
high-performance team, 277–78 commercial software, defined, 22
poor group dynamics, 274–75 commitment
selection of team members, 282–83 to productivity tool selection, 358
CASE tools, 363, 364, 366 to the team, 285
Caswell, Deborah L., 478 commitment-based approach. See code-like-hell
categories, estimation by, 178 approach (commitment-based approach)
“chance of first-time success” entry, in communication
best-practice summary tables, 393 failure of teams and, 290
“chance of long-term success” entry, in among team members, 286
best-practice summary tables, 393–94 team structure and, 302
change Communications of the ACM, 317
designing for (see designing for change) Complete Guide to Software Testing, The
managing a high-performance team and, 288 (Hetzel), 76
change analysis, 338–39 “Component Software” (Udell), 538
change-control boards, 339–41, 403 Confessions of a Used Program Salesman
Cherlin, M., 293 (Tracz), 538
chief-programmer team structure, 305 confidence factors, estimation and, 181–82
Chow, Tsun S., 76 Connell, John, 443
classic mistakes, 8, 29–50 Constantine, Larry L., 28, 43, 61, 67, 294, 296, 317
awareness of, 50 Constantine on Peopleware (Constantine), 28, 317
case study, 29–37 construction
effect on development schedule, 37–39 customer-oriented, 242–43
people-related, 40–43 as development fundamental, 64–65
process-related, 44–46 further reading on, 67–68
product-related, 46–47 Conte, S. D., 479
recovery of projects and, 379–80 contractors. See also outsourcing
technology-related, 47–48 failure of, as classic mistake, 44
Clements, Paul C., 423 risks associated with, 89
CNA Insurance Company, 461 contracts, outsourcing, 493, 495, 498–99
Coad, Peter, 67 control
COCOMO model, 182 estimation vs., 170
code-and-fix lifecycle model, 140 evolutionary prototyping and diminished,
strengths and weaknesses of, 156 436–37
Code Complete (McConnell), xvi, 67, 75, 343, 414, loss of
423, 432, 524 outsourcing and, 500
code-like-hell approach (commitment-based signing up and, 544
approach), 22–25, 184 Controlling Software Projects (DeMarco), 59,
as classic mistake, 46 131, 203
disadvantages of, 23–24 convergence, premature or overly frequent
successes and advantages of, 22–23 as classic mistake, 45–46
this book’s approach compared to, 24–25 overly optimistic scheduling and, 213–15
code reading, 74

627
Index

cost D
lowest, as rapid development look-alike, 114 daily build and smoke test, 405–14
trade-offs among product, schedule, and, 126 bottom line on, 413–14
Costello, Scott H., 207, 231 further reading on, 414
cost ratios, 72 interactions with other practices, 413
Cox, Charles, 468, 474 managing risks of, 412–13
Cray, Seymour, 47 side effects of, 413
“Creating New Software Was Agonizing Task for summary of, 405
Mitch Kapor Firm” (Carroll), 343 time savings produced by, 406–7
creativity, overly optimistic scheduling and, 216–17 using, 407–12
creativity team, 300 adding code to the build, 409
crisis, miniature milestones in response to a, 484 broken builds, checking for, 407–8
crowded work environments, 41 build and smoke testing even under
Crunch Mode (Boddie), 388 pressure, 411
Curtis, Bill, 12, 213, 249 build group, establishing a, 408–9
custom applications, productivity tools and, 350 daily building of the product, 407
customer-oriented development, 16, 233–47 daily smoke test, 408
importance of customers to rapid development, holding area for code that’s to be added
236–38 to the build, 409–10
customer-oriented practices, 238–43 keys to success in, 414
construction, 242–43 kinds of projects that can use this process,
design, 242 411–12
further reading on, 247 penalty for breaking the build, 410
planning, 239 releasing builds in the morning, 410–11
requirements analysis, 239–42 requiring developers to smoke test before
requirements practices, 338 adding code to system, 409
customer relations Davis, Alan M., 68, 366
daily build and smoke test and, 407 DBMS-oriented applications, productivity tools
overly optimistic scheduling and, 213 and, 350
Theory-W management and, 561 deadlocks, breaking, in principled negotiations,
customers 227
developers’ understanding with, 325 Debugging the Development Process (Maguire), 28,
expectations of, 243–45 231, 608
unrealistic, 120–21, 243, 245 Dedene, Guido, 493, 496, 502
friction between developers and, 41–42 defect rates, 69–71
identifying, 116, 239 of error-prone modules, 72
risks associated with, 88–89 defects, preventing, 71
customer-satisfaction surveys, 241, 242 DeGrace, Peter, 143, 161
Cusumano, Michael, 116, 255, 270, 272, 290, 324, DeMarco, Tom, 12, 28, 41, 59, 118, 131, 174,
327, 341, 407, 414, 432, 536 186, 203, 217, 231, 249, 271, 276, 288, 290,
cycles, short release, 339 293–95, 317, 334, 506, 513, 543, 603, 608
Deming, W. Edwards, 265
demonstration prototyping, 591

628
Index

design. See also designing for change developers, continued


customer-oriented, 242 recovery of projects and, 379
as development fundamental, 62–63 reuse and, 535
evolutionary prototyping and poor, 438–39 Developing Products in Half the Time (Smith and
further reading on, 66–67 Reinertsen), 131
inadequate, as classic mistake, 45 development environment, risks associated
risks associated with, 91 with, 88
“Design and Code Inspections to Reduce Errors in development speed. See speed of development
Program Development” (Fagan), 77 De Vreese, Jean-Pierre, 493, 496, 502
designing for change, 415–23 diagnosis of defects, daily build and smoke test
bottom line on, 422 and, 406
further reading on, 423 Discipline for Software Engineering, A
interactions with other practices, 422 (Humphrey), 28
managing the risks of, 421–22 documentation
side effects of, 422 graphically oriented, 329
summary of, 415 reuse and, 534
using, 416–21 Dreger, Brian, 204
change plans, 419 drop-dead date, fixed, 114–15
defining families of programs, 419–21 Dunn, Robert H., 62
identifying areas likely to change, 416–17 Dunsmore, H. E., 479
information hiding, 417–19 Dupont, 293, 586
keys to success in, 423 Dyer, William G., 296
object-oriented design, 421 Dynamics of Software Development (McCarthy),
Design Objectives and Criteria (Boeing), 256 317, 388, 414, 432
design-to-schedule lifecycle model, 149–50
strengths and weaknesses of, 157 E
design-to-tools lifecycle model, 152–53 “effect on schedule risk” entry, in best-practice
strengths and weaknesses of, 157 summary tables, 393
developers efficient development, 19–20
as bad negotiators, 221 customer relationships and, 237
estimates prepared by, 178 tilted toward best schedule, 20
feedback on measurement data to, 474–75 efficient schedules
friction between customers and, 41–42 ballpark estimates for, 192–94
gold-plating by, 47 shortest possible schedules and, 193
hygiene factors for, 265–66 effort estimation, 182–83
increased attachment to specific features, 328 Elements of Friendly Software Design, The (Heckel),
in JAD sessions, 454 317
miniature milestones of, 485 embedded software, 22
motivations of, 251–54 Emery, Fred, 287, 288
not involving, in decisions that affect them, as Emery, Merrelyn, 287, 288
morale killer, 268–69 employees. See personnel (employees; staff)
overly optimistic scheduling and relationships empowerment, sense of, team members’, 287
between managers and, 217 encapsulation, planned reuse and, 534

629
Index

end-users. See also customer-oriented estimates (estimation), continued


development; customers plus-or-minus style, 180
lack of involvement, 42 presentation styles for, 179–82
risks associated with, 88 previous projects as basis for, 178
Erikson, W. J., 12, 217, 249 principled negotiation method and, 227–29
Ernst and Young, 61 probability of loss, 93
error-prone modules, 72 ranges for, 180
error rates. See defect rates recalibration of schedules, 199–200
estimate-convergence graph, 168 reestimation mistakes, 46
estimates (estimation), 163–204 refinement of, 197–200
accuracy vs. precision and, 173 risk quantification and, 180–81
allowing time planning for, 178 schedule, 183–85
ballpark schedule, 185–97 size, 174–82
efficient schedules, 192–94 definition of size, 175
first thing to do with, 196–97 function-point estimation, 174
nominal schedules, 194–96 size of loss, 93
shortest possible schedules, 188–92 software tools for, 179
tables of, 186–88 staged delivery and, 552
case-based, 181 story of, 165–72, 221
case studies tips on, 177–79
careful project estimation, 200–202 using several different techniques for, 179
seat-of-the-pants estimation, 164 by walk-through, 178
by categories, 178 evolutionary delivery, 151–52, 425–32
changing practices of, 179 further reading on, 432
commitment-based scheduling and, 184 interactions with other practices, 431
confidence factors and, 181–82 managing the risks of, 429
construction example, 166 side effects of, 430
control vs., 170 strengths and weaknesses of, 157
convergence between reality and, 171–72 summary of, 425
cooperation and, 171 using, 427–29
developer-based, 178 keys to success in, 432
as development fundamental, 55 release order, 428–29
effort, 182–83 when to use, 429
function-point, 174–77 evolutionary prototyping lifecycle model, 147–48,
further reading on, 203–4 433–43
as gradual refinement process, 165–69 bottom line on, 441–42
at low level of detail, 178 evolutionary delivery and, 425, 426, 428–31
off-the-cuff, 177 further reading on, 443
omission of common tasks and, 179 interactions with other practices, 441
omitting necessary tasks from, as classic managing the risks of, 435–40
mistake, 46 diminished project control, 436–37
overview of the process of, 173–74 feature creep, 439
padding of, 184 inefficient use of prototyping time, 440

630
Index

evolutionary prototyping lifecycle model, managing features (feature set), creeping (creeping require-
the risks of, continued ments), continued
poor design, 438–39 staged delivery and, 555
poor end-user or customer feedback, 437 timebox development and, 581
poor maintainability, 439 user-interface prototyping and, 591
poor product performance, 437 versioned development, 330–31
unrealistic performance expectations, 437 further reading on, 343–44
unrealistic schedule and budget late-project feature cuts, 341–42
expectations, 436 mid-project changes, 331–41
side effects of, 440 effects of change, 335–36
strengths and weaknesses of, 156 killer-app syndrome, 332
summary of, 433–34 methods of change control, 338–41
user-interface prototyping and, 591–92 sources of change, 331–32
using, 435 stopping changes altogether, 336–37
keys to success in, 442 unclear or impossible goals, 332–35
expectations recovery of projects and, 382–84
measurements and, 468 timebox development and, 580
unrealistic, 42 feature-team structure, 307
of customers, 120–21 Federal Aviation Administration (FAA), air-traffic-
after JAD session, 460 control workstation software, 320
about performance, evolutionary prototyping feedback
and, 437–38 end-user or customer, evolutionary prototyping
about schedule and budget, evolutionary and, 437
prototyping and, 436 on measurement data, 474–75
about schedule and budget, throwaway team structure and, 302
prototyping and, 572 Fenton, Norman, 474, 475
Exploring Requirements (Gause and Weinberg), 66 firmware, 22
external environment, risks associated with, 90 first-order estimation practice, Jones’s, 185
Fisher, Roger, 231, 562
F flow time, productivity environments and, 506
Fagan, M. E., 45, 77, 123, 212 focus groups, 241, 242
Faller, Benoît, 530, 534 fourth-generation languages. See 4GLs
families of programs/products, 420–21 Fox, Christopher J., 534, 535
features (feature set), 319–44 Frakes, William B., 534, 535
case study: managing change effectively, 342–43 Freedman, Daniel P., 77
creeping (creeping requirements), 46–47, 123, Freeman, Peter, 538
319 (see also requirements (requirements friction
specifications or analysis)) customer relationships and, 238
evolutionary prototyping and, 439 between developers and customers, 41–42
kinds of controls, 320–21 Function Point Analysis (Dreger), 204
outsourcing and, 492 function-point estimation, 174–77
reducing the feature set, 321–31 further reading on, 204
scrubbing (entirely removing) features, function-point multipliers, 176
329–30

631
Index

fundamentals of software development, 8, 15, H


51–79, 395 Hackman, J. Richard, 258, 272
case study: lack of fundamentals, 52–55 Hall, Tracy, 474, 475
following the instructions on, 77–78 Hamilton, Scott, 14
further reading on, 66–68 Handbook of Walkthroughs, Inspections
management, 55–60 and Technical Reviews (Freedman and
quality assurance (see quality assurance) Gerald), 77
technical, 60–66 Hatley, Derek J., 66
fuzzy front end, 124 Hawthorne effect, 264–65
case study, 124–25 Hawthorne Works of the Western Electric
wasted time during, 44 Company, 263
Heckel, Paul, 317
G Henry, S. M., 529, 534
Gaffney, J., 187 Herbsleb, James, 537
gambles, 103 heroics, 41
overly optimistic scheduling and, 216 Herzberg, Frederick, 254, 265, 272
Gates, Bill, 208, 290 Hetzel, Bill, 52, 56, 57, 76, 351
Gause, Donald C., 66 High Output Management (Grove), 60
Getting to Yes (Fisher and Ury), 222, 231 high risk, 102–3
Gibbs, W. Wayt, xiii, 14, 320, 343 Highsmith, Jim, 287
Gilb, Tom, 47, 59, 71, 74, 76, 82, 106, 204, 231, hiring practices, 40
428, 430, 432, 489, 558 human resources, code-like-hell approach and, 24
Glass, Robert L., xvii, 76, 82, 206, 215, 216, 352, Humphrey, Watts S., 28, 44, 211, 502
367, 369, 416 Hunter, Richard, 262, 321, 540
goals Hutchens, David H., 74
setting, 445 hygiene factors
achievement motivation and, 255–57 morale and, 265
measurement and, 469 motivation and, 265–66
unclear or impossible productivity environments and, 506
mid-project changes and, 332–35
minimal-specification approach and, 326 I
gold-plating Iansiti, Marco, 116, 207, 231
minimal-specification approach and, 326–27 IBM, 70, 72, 174
timebox development and, 581 Santa Teresa office facility, 509, 512
Gordon, V. Scott, 431, 435, 436, 438, 439, 441, 443, “IBM’s Santa Teresa Laboratory-Architectural
571, 597 Design for Program Development”
Grady, Robert B., 478, 479 (McCue), 513
Graham, Dorothy, 74, 76 idle periods, long-term teambuilding and, 293–94
Grant, E. E., 12, 217, 249 IEEE Software (magazine), 479, 538
granularity of data, 471 “Impacts of Life Cycle Models on Software
Gray, T. E., 12, 217, 277 Configuration Management” (Bersoff and
Griss, 531, 534 Davis), 68
groups, teams distinguished from, 276 implementation, risks associated with, 91
Grove, Andrew S., 60, 265
growth, possibility for, as motivation, 257–58

632
Index

“improvement in progress visibility” entry, in Joint Application Design (August), 463


best-practice summary tables, 393 Joint Application Development. See JAD
incentives, motivation and, 262 Joint Application Development (Wood and
incremental development practices, 152 Silver), 463
ineffective development practices, 3. See also Joint Requirements Planning. See JRP
classic mistakes Jones, Capers, xiii, xvii, 12, 42, 45, 46, 48, 57, 66,
inefficient schedule-oriented practices, 20–21 67, 69–73, 81, 106, 123, 161, 163, 174, 179,
information 185, 192, 204, 207, 211, 215, 217, 227, 244,
about risks, 97 258, 297, 305, 309, 319, 331, 338, 339, 343,
team structure and, 302 353, 356, 360, 363, 364–65, 369, 370, 422,
information hiding, 63 441, 461, 469, 476, 479, 506, 512, 524, 531,
designing for change and, 417–19 534, 537, 581, 596, 601, 606, 608
planned reuse and, 534 Jones’s first-order estimation practice, 176, 185
in-house software, defined, 22 Joos, Rebecca, 531
In Search of Excellence (Peters and Waterman), JRP (Joint Requirements Planning), 452
247, 263, 272, 317 jumping into coding, as classic mistake, 45
Inside RAD (Kerr and Hunter), 262, 321
inspections, 74 K
further reading on, 76–77 Karten, Naomi, 247
integration of code Katzenbach, Jon, 275, 296
daily build and smoke test and, 406 Kemerer, Chris, 187, 188
staged delivery and, 552 Kerr, James, 262, 321, 540, 542
interactive software, defined, 22 Kidder, Tracy, 23, 540, 542, 544, 545
interdependence among team members, 286 Kitson, David H., 57, 62, 66, 70
International Function Point User Groups Klein, Gary, 366
(IFPUG), 174 Klepper, Robert, 360
Kohen, Eliyezer, 63
J Korson, Timothy D., 417
JAD (Joint Application Development), 241, 242, Krantz, Les, xvii
324–25, 449–63
bottom line on, 461 L
further reading on, 463 LaFasto, Frank M. J., 41, 256, 257, 279, 285, 288,
interactions with other practices, 461 291, 294, 296, 299, 300, 302, 317
managing the risks of, 460 Lakhanpal, B., 40, 277, 293
requirements specifications and, 450, 461–62 languages. See 3GLs (third-generation languages);
savings produced by, 450 4GLs (fourth-generation languages); rapid-
side effects of, 460–61 development languages (RDLs)
using, 451–59 Laranjeira, Luiz, 197
design phase, 451–52, 457–59 large teams, 311–13
keys to success in, 462 Larson, Carl E., 41, 256, 257, 279, 285, 288, 291,
planning phase, 451–57 294, 296, 299, 300, 302, 317
job feedback, work itself as motivation and, 259 late projects, adding people to, 41
job satisfaction, defined, 254 leaders (leadership), 283
Johnson, Donna L., 474 JAD session, 454
recovery of projects and, 377–78

633
Index

leads, technical, 313–14 management (managers), continued


Lederer, Albert L., xiii, 178, 319 manipulation by, as morale killer, 266–67
legacy-systems reengineering, outsourcing, 494 motivations of, compared to developers, 251–52
lifecycle planning (lifecycle models), 16, 133–43, overly optimistic scheduling and relationships
151–52, 239. See also specific models between developers, 217
case studies recovery of projects and, 377–78
effective selection of lifecycle model, 159–61 technical leads and, 313–14
ineffective selection of lifecycle model, management controls, insufficient, as classic
134–36 mistake, 45
choosing the most rapid model, 154–58 management fundamentals, 55–60
code-and-fix model, 140 estimation and scheduling, 55
design-to-schedule, 149–50 further reading on, 58–60
design-to-tools, 152–53 measurement, 58
evolutionary prototyping, 147–48 planning, 56
incremental development practices, 152 tracking, 56–58
modified waterfall model, 143–47 management risks, 87–88
off-the-shelf software, 153–54 Manager’s Guide to Software Engineering, A
selection of, 465 (Pressman), 59, 75
staged delivery, 148–49 Managing a Programming Project (Metzger),
strengths and weaknesses of, 156–58 56, 59
waterfall model, 136–39 Managing Expectations (Karten), 247
lines-of-code measurements, 477 Managing Software Maniacs (Whitaker), 247
Lister, Timothy, 12, 28, 41, 217, 231, 249, 271, 276, Managing the Software Process (Humphrey), 502
288, 290, 293, 295, 334, 506, 511, 513, 543, manipulation by management, as morale killer,
603, 608 266–67
long-term rapid development, 217 Marciniak, John J., 493, 502
loss of control Marcotty, Michael, 68
outsourcing and, 500 Martin, James, 131, 293, 308, 317, 366, 452–55,
signing up and, 544 461, 463, 584, 586, 587
Lyons, Michael L., 253 Masters, Stephen, 57, 62, 66, 70
maturity of productivity tools, 356–57
M Mayo, Elton, 263
Maguire, Steve, 28, 68, 231, 270, 543, 603, 604, 608 MBTI (Myers-Briggs Type Indicator) test, 253
maintainability, evolutionary prototyping and McCarthy, Jim, 127, 242, 307, 317, 339, 341, 388,
poor, 439 407, 414, 432, 482
“major interactions and trade-offs” entry, in McConnell, Steve, xiv, 67, 75, 343, 414, 423, 432,
best-practice summary tables, 394 523, 524
“major risks” entry, in best-practice summary McCue, Gerald M., 513
tables, 394 McDermott, Tim, 530, 534
management (managers). See also Theory-W McGarry, Frank, 12, 249, 277, 360, 469, 475,
management 530, 534
of high-performance teams, 288–89 measurement, 467–79
inappropriate involvement of technically inept, benefits of using, 468
as morale killer, 268 bottom line on, 478

634
Index

measurement, continued Mills, Harlan D., 12, 249, 305, 306, 543, 603
as development fundamental, 58 miniature milestones, 380–82, 481–89
expectations and, 468 bottom line on, 488
further reading on, 478–79 fine-grain control and, 483
interactions with other practices, 477 further reading on, 489
managing the risks of, 476–77 generic task lists vs., 487
side effects of, 477 interactions with other practices, 488
using, 469–76 managing risks of, 487
accuracy of data, 476 motivation and, 483
analysis vs. measurement, 473–74 schedule risk and, 483
baseline report, 475 side effects of, 487
feedback on measurement data, 474–75 status visibility and, 482–83
Goals, Questions, Metrics practice, 469 summary of, 481
granularity, 471–73 using, 484–87
keys to success in, 478 assessing progress and recalibrating or
limitations, 475–76 replanning, 486–87
measurement group, 469–70 developers’ own mini milestones, 485
Pareto analysis, 473 “done” and “not done” as only two
what to do with collected data, 473–75 statuses, 485
what to measure, 470–73 early or in response to a crisis, 484
measurement group, 469 exhaustive set of milestones, 486
Measures for Excellence (Putnam and Myers), 203 keys to success in, 489
Method for Assessing the Software Engineering for short-term planning, 486
Capability of Contractors, A (Humphrey and size limit, 485
Sweet), 502 minimal specifications, 323–29
Metzger, Philip W., 56, 59, 221, 543, 603 benefits of, 325–26
Microsoft keys to success in using, 328–29
case study: motivation of developers at, 270–71 lack of support for parallel activities and, 327
late-project feature cuts at, 341 risks of, 326–28
“Microsoft Corporation: Office Business Unit” wrong reason for using, 328
(Iansiti), 231 Modern Structured Analysis (Yourdon), 66
Microsoft Secrets (Cusumano and Selby), 272, modularity, 63
414, 432 “Modular Structure of Complex Systems, The”
Microsoft Windows NDS tree 3.0, development of, (Parnas, Clements, and Weiss), 423
388, 408, 542–44 MOI model, 289
Microsoft Word for Windows 1.0 Moore, Dave, 270
optimistic scheduling practices and, 207–9 morale
Microsoft Visual C++, 242 daily build and smoke test and, 407
milestones defined, 254
major, 484 factors that lower, 265–69
miniature (see miniature milestones) measurement and, 468
military software, 22 minimal-specification approach and, 325–26
Millington, Don, 236, 328 outsourcing and, 500
project recovery and, 377, 379

635
Index

morale budget at Microsoft (case study), 270
More Programming Pearls (Bentley), 68
motivation(s), 14
    achievement as, 255–57
    case studies
        disheartening lunch with the boss, 250–51
        highly motivational environment, 270–71
    commitment-based approach and, 23
    defined, 254
    of developers, 251–54
    further reading on, 271–72
    heavy-handed campaigns to improve, as morale killer, 269
    miniature milestones and, 483
    minimal-specification approach and, 325–26
    morale killers, 265–69
        excessive schedule pressure, 267
        heavy-handed motivation campaigns, 269
        hygiene factors, 265–66
        inappropriate involvement of technically inept management, 268
        lack of appreciation for developers, 267
        low quality, 269
        management manipulation, 266–67
        not involving developers, 268–69
        productivity barriers, 269
    overly optimistic scheduling and, 216
    performance reviews and, 265
    personal life as, 260–61
    pilot projects and, 263–65
    possibility for growth as, 257–58
    rewards and incentives and, 262–63
    technical-supervision opportunity as, 261
    timebox development and, 581
    undermined, as classic mistake, 40
    work itself as, 258–60
motivation (motivation factors), 249–72
mutual trust, teamwork and, 285
Myers, Glenford, 73, 74, 76
Myers, Ware, 14, 182, 192, 518
Myers-Briggs Type Indicator (MBTI) test, 253
Mythical Man-Month, The (Brooks), 305, 317, 369, 418

N
Naisbitt, John, 258, 508
NASA, 478
    Software Engineering Laboratory, 74, 277, 352, 421, 469, 474, 475, 530, 533
negotiation (negotiating skills)
    estimates and, 221
    principled (see principled negotiation method)
    push-me, pull-me, 47
    separating the people from the problem, 222–23
noisy, crowded offices, 41
nominal schedules, 194–96
"No Silver Bullets-Essence and Accidents of Software Engineering" (Brooks), 348

O
Object Oriented Analysis and Design (Booch), 67
object-oriented design
    designing for change and, 421
    information hiding, 417–19
    planned reuse and, 534
Object-Oriented Design (Coad and Yourdon), 67
object-oriented programming, 367
Object-Oriented Rapid Prototyping (Connell and Shafer), 443
O'Brien, Larry, 358, 370
office space. See productivity environments
offshore outsourcing, 496
off-the-shelf software, 153–54
    strengths and weaknesses of, 157
O'Grady, Frank, 294
Oldham, Greg R., 258, 272
Olsen, Neil, 18
"One More Time: How Do You Motivate Employees?" (Herzberg), 272
"On the Criteria to Be Used in Decomposing Systems into Modules" (Parnas), 423
opportunistic efficiency, minimal-specification approach and, 326
optimistic schedules, overly, as classic mistake, 44
optimistic scheduling. See scheduling (schedules), overly optimistic
organization of teams, 13
organization risks, 87–88
outsourcing, 491–502
    bottom line on, 501
    feature creep and, 492
    further reading on, 502
    interactions with other practices, 501
    managing the risks of, 499–500
    planning and, 493
    requirements specification and, 492
    reusable components and, 492
    side effects of, 501
    staffing flexibility and, 492
    summary of, 491
    time savings with, 492
    using, 493–99
        communication with the vendor, 493
        contract considerations, 498–99
        contract management, 493
        double standards for outsourced work, 495
        keys to success in, 501–2
        kinds of arrangements, 495–96
        legacy-systems reengineering, 494
        management plan including risk management, 493
        offshore outsourcing, 496–97
        technical resources, 493–94
        unstable requirements, 494
        vendor evaluation, 497–98
overrun, project, 94
overtime
    desire for free, as rapid-development look-alike, 115
    voluntary (see voluntary overtime)
ownership, achievement motivation and, 255

P
padding of estimates, 184
Page-Jones, Meilir, 67
Papaccio, 45, 62, 71, 212, 335, 585
Pareto analysis, 473
Parnas, David L., 417, 419, 420, 422, 423
Peat Marwick, 82
penalty for breaking the build, 410
people (peopleware issues). See also personnel (employees; staff)
    classic mistakes related to, 40
    as dimension of development speed, 11, 12–14
    recovery of projects and, 376–79
Peopleware (DeMarco and Lister), 28, 231, 271, 513, 608
perception of development speed, 119–21
    overcoming the perception of slow development, 121
performance
    product, evolutionary prototyping and, 437
    team structure and monitoring of, 302
performance reviews, motivation and, 265
per-person-efficiency trade-offs, 127
Perry, Dewayne E., 476
personal life, as motivation, 260–61
personnel (employees; staff). See also people (peopleware issues)
    evaluations of, measurements misused for, 476
    long-term teambuilding and, 293
    outsourcing and staffing flexibility, 492
    problem, 40–41
        failure of teams and, 291–92
        recovery of projects and, 377
    risks associated with, 90
    weak, 40
Peters, Chris, 255, 324, 413, 536
Peters, Thomas J., 247, 258, 263, 265, 271, 272, 317
Pfleeger, Shari Lawrence, 441, 533–36
philosophy, best practices and, 395
Pietrasanta, Alfred M., 14, 474
Pirbhai, Imtiaz A., 66
planned reuse, 531–35
planning (plans), 486
    abandonment of, under pressure, as classic mistake, 44
    change, 419
    customer-oriented, 239
    as development fundamental, 56
    insufficient, as classic mistake, 44
    outsourcing and, 493
    overly optimistic scheduling and adherence to, 211
    overly optimistic scheduling and quality of, 211
    risk-management, 85, 96–97
Plauger, P. J., 28, 66–67
point-of-departure spec, 324
politics placed over substance, 43
postmortems, interim, 102
"potential reduction from nominal schedule" entry, 392
Practical Guide to Structured Systems Design, The (Page-Jones), 67
Practical Software Metrics for Project Management and Process Improvement (Grady), 479
Prasad, Jayesh, xiii, 178, 319
precision, accuracy vs., 173
predictability, as rapid-development look-alike, 113
premature convergence, overly optimistic scheduling and, 213–15
premature releases, daily build and smoke test and, 412–13
presentation styles, for estimates, 179–82
Pressman, Roger S., 59, 75, 79
principled negotiation method, 222–29, 503
    degrees of freedom in planning a software project and, 225–26
    focusing on interests, not positions, 224–25
    insisting on using objective criteria, 227–29
    inventing options for mutual gain, 225–27
    separating the people from the problem, 222–23
Principles of Software Engineering Management (Gilb), 59, 71, 106, 204, 231, 428, 430, 432, 489, 558
prioritization of risks, 85, 94–96
problem-resolution team, 300
process
    abuse of focus on, 14–15
    classic mistakes related to, 44–46
    as dimension of development speed, 11, 14–16
    recovery of projects and, 379–82
    risks associated with, 91
product. See also features (feature set)
    as dimension of development speed, 11, 17
    risks associated with, 89–90
    trade-offs among schedule, cost, and, 126–27
product characteristics, 17
product families, 420–21
productivity
    barriers to, 269
        failure of teams and, 290
    classic mistakes and (see classic mistakes)
    factors that affect, 37–38
    lines-of-code measurement of, 477
    motivation and, 40
    peopleware issues and, 12, 13
    per-person-efficiency trade-offs and, 127
    team
        permanent-team strategy, 292
        variations in, 276–77
    variations in, 14
        among groups (teams), 12
        among individuals, 12
productivity environments, 505–13
    bottom line on, 511
    further reading on, 513
    interactions with other practices, 511
    managing the risks of, 510–11
    side effects of, 511
    summary of, 505
    using, 508–9
        keys to success in, 513
productivity tools, 345–70
    acquisition of, 353–58
        criteria for tool selection, 356–58
        risks of setting up a tools group, 355
        tools person or group, 354–55
    case studies
        effective tool use, 368–69
        ineffective tool use, 346–47
    defined, 346
    further reading on, 369–70
    maturity of, 356–57
    role in rapid development, 348–52
        areas of special applicability, 349–50
        limitations of productivity tools, 350–51
        ultimate role of productivity tools, 351–52
    silver-bullet syndrome (see silver bullets (silver-bullet syndrome))
    strategy for using, 352–53
    use of, 358–63
        how much schedule reduction to expect, 360–63
        training, 359
        when to deploy, 358–59
product size, 17
professional-athletic-team model, 309–10
program families, 420–21
Programming on Purpose (Plauger), 66
Programming on Purpose II (Plauger), 28
Programming Pearls (Bentley), 68
"Project Pathology: A Study of Project Failures" (Thomsett), 388
project recovery. See recovery of projects
prototyping. See also evolutionary prototyping lifecycle model; throwaway prototyping; user-interface prototyping
prototyping, rapid, use of term, 434
prototyping, demonstration, 591
Psychology of Computer Programming, The (Weinberg), 272
Putnam, Lawrence H., 14, 182, 192, 203, 518

Q
quality
    lines-of-code measurement of, 477
    low
        daily build and smoke test and, 406
        as morale killer, 269
        overly optimistic scheduling and, 215–16
    trade-offs, 127
quality assurance
    fundamentals of, 69–77
        defect rates, 69–71
        error-prone modules, 72
        further reading on, 75–77
        technical reviews, 73–75
        testing, 72–73
    purposes of, 15
    shortchanged, as classic mistake, 45
Quality Software Management (Weinberg), 58, 218, 388, 608

R
RAD (Rapid Application Development), 366
Rapid Application Development (Martin), 131, 317, 366, 463, 587
rapid development. See also speed of development
    approaches that contribute to, 130
    core issues in, 109–31
        further reading, 131
        kind of rapid development needed, 111–16
        odds of completing on time, 116–19
        perception and reality of development speed, 119–21
        trade-off decisions, 126–27
        typical schedule-improvement pattern, 128–30
        variations in development practices, 109–11
        where most of the time is spent, 122–25
    customers' importance to, 236–38
    defined, 2
    general kinds of, 18–21
        all-out rapid development, 20–21
        efficient development, 19–20
        efficient development tilted toward best schedule, 20
    look-alikes, 113–15
    trade-offs among different approaches to, 11, 18
    ways of attaining, 2–3
rapid-development languages (RDLs), 515
    bottom line on, 522–23
    examples of, 516
    interactions with other practices, 522
    keys to success in, 523
    managing the risks of, 520
    side effects of, 522
    using, 519
rapid prototyping, use of term, 434
"Rapid Prototyping: Lessons Learned" (Gordon and Bieman), 443
Raytheon, 14, 15, 21
RDLs. See rapid-development languages
"Realities of Off-Shore Reengineering" (Dedene and De Vreese), 502
real-time systems, 22
recalibration of schedules, 199–200
recognition. See appreciation (recognition)
recovery of projects, 371–88. See also features (feature set), late-project feature cuts and mid-project changes
    case studies
        successful project recovery, 385–87
        unsuccessful project recovery, 372–73
    characteristics of projects in trouble, 371–72
    fundamental approaches to, 373–74
    philosophy of this book's approach to, 374
    plan for, 375–85
        first steps, 376
        people issues, 376–79
        process, 379–82
        product, 382
        timing, 385
reestimation mistakes, 46
Reifer, Donald J., 493, 502
Reinertsen, D. G., 131
Reinventing the Corporation (Naisbitt and Aburdene), 258
release cycles, short, 339
requirements (requirements specifications or analysis), 61–62, 124
    case studies, 234–36, 246
    creeping (feature creep) (see features (feature set), creeping (creeping requirements))
    customer-oriented, 239–42, 338
    detailed, 321–23
    further reading on, 66
    gold-plating, 46
    JAD and, 450, 461–62
    minimal specification, 323–29
        benefits of, 325–26
        keys to success in using, 328–29
        lack of support for parallel activities, 327
        omission of key requirements, 326
        risks of, 326–28
        unclear or impossible goals, 326
        wrong reason for using, 328
    outsourcing and, 492, 494
    recovery of projects and, 382–84
    risks associated with, 89
    scrubbing (entirely removing), 329–30, 525
research-oriented development, 47
resources, targeting, 16
reuse (reusable components), 527–38
    bottom line on, 537
    further reading on, 538
    interactions with other practices, 536–37
    keys to success in using, 537–38
    managing the risks of, 535–36
    outsourcing and, 492
    side effects of, 536
    summary of, 527
    using, 528–35
        opportunistic reuse, 528–31
        planned reuse, 531–35
rewards
    case study: at Microsoft, 270–71
    motivation and, 262
rework
    amount of time spent on, 123
    avoidance of, 15
    customer relationships and, 237
Rich, Charles, 366, 367
Rifkin, Stan, 468, 474
risk analysis, 85, 91–94
risk assessment, elements of, 85
risk, assuming, 98
risk control, 85, 96–102
risk exposure, 92
risk identification, 85–91
    complete list of schedule risks, 86–91
        contractor risks, 89
        customer risks, 88–89
        design and implementation risks, 91
        development environment risks, 88
        end-user risks, 88
        external environment risks, 90
        organization and management risks, 87–88
        personnel risks, 90
        process risks, 91
        product risks, 89–90
        requirements risks, 89
        schedule creation risks, 87
    general risks, 85–86
    most common schedule risks, 86
risk management, 8, 15, 81–106. See also managing-risks sections of best-practice Chapters 17–43
    case studies, 82–83
        systematic risk management, 103–5
    disadvantages of, 82
    elements of, 84
    further reading on, 106
    insufficient, as classic mistake, 44
    levels of, 84
    outsourcing and, 493
risk-management planning, 85, 96–97
risk monitoring, 85, 100–102
risk officer, 102
risk prioritization, 85, 94–96
risk quantification, estimation and, 180–81
risk resolution, 85, 97–100
risks, 102
    customer-related, 237
roles, clear, team structure and, 302
Ross, Rony, 484, 559, 568
Rothfeder, Jeffrey, 82
runaway prevention, as rapid-development look-alike, 113
Rush, Gary W., 461
Russell, Glen W., 74

S
Sackman, H., 12, 217, 249
Saiedian, Hossein, 14
sashimi model, 143–44
schedule compression, 191–92
schedule constraints, 112–13
schedule creation risks, 87
schedule estimation, 183–85
schedule-oriented practices, 3, 9, 10. See also speed of development; visibility-oriented practices
    focusing only on, 10
    focusing too much on single, 5
    kinds of, 4
schedule pressure
    beating, 220–29
    build and smoke testing even under, 411
    excessive, as morale killer, 267
schedule recalibration, 199–200
schedule risk, miniature milestones and, 483
schedule-risk-oriented practices. See fundamentals of software development
scheduling (schedules), 205–31
    beating schedule pressure, 220–29
        principled negotiation, 222–29
    case study: a successful schedule negotiation, 229–30
    commitment-based, 184
    as development fundamental, 55
    further reading on, 231
    overly optimistic, 207–20
        accuracy of schedules and, 210
        adherence to plan and, 211
        bottom line and, 218–20
        burnout and, 217
        creativity and, 216–17
        customer relations and, 213
        developer-manager relationships and, 217
        effects of, 210–15
        example of, 207
        gambling and, 216
        long-term rapid development and, 217
        motivation and, 216
        premature convergence, 213–15
        project focus and, 213
        quality and, 215–16
        quality of project planning and, 211
        root causes of, 209–10
        underscoping the project and, 212
    overly optimistic, as classic mistake, 44
    principled negotiation method and, 222–29
    probability of meeting, 116–19
    timebox development and, 580
    trade-offs among product, cost, and, 126
Scherr, Allen, 285, 540, 542
Scholtz, 367, 421, 534
Schulman, Edward, 12, 217, 255, 276
scientific software, 22
scribe, in JAD sessions, 455
scrubbing (entirely removing) features, 329–30, 525
search-and-rescue team model, 307–8
Seewaldt, T., 12, 217, 277
Selby, Richard, 74, 116, 255, 270, 272, 290, 324, 327, 341, 407, 414, 432, 536
Shafer, Linda, 443
Shaw, 436
Shen, V. Y., 479
Sherman, Roger, 138
shortest possible schedules, 188–92
    efficient schedules and, 193
Showstopper! (Zachary), 272, 388
shrink-wrap software, defined, 22
Shultz, Scott, 586
signing up, 539–45
    bottom line on, 545
    in different environments, 542
    further reading on, 545
    giving people a choice about, 541
    interactions with other practices, 544–45
    keys to success in using, 545
    managing the risks of, 543–44
    shaky requirements and, 542
    side effects of, 544
    at the team level, 541
    vision and, 540
Silver, Denise, 463
silver bullets (silver-bullet syndrome), 47–48, 363–69, 520–21
    biting the bullet, 367–68
    identifying, 365–67
Sims, James, 452
size estimation, 174–82
    definition of size, 175
    function-point estimation, 174
size of product, 17
skill variety, work itself as motivation and, 258
skunkworks-team structure, 306–7
Smith, Douglas, 275, 296
Smith, P. G., 131
smoke test. See daily build and smoke test
Sobol, Marion G., 366
software, types of, 22
Software Acquisition Management (Marciniak and Reifer), 493, 502
Software Configuration Management (Babich), 68
Software Configuration Management (Bersoff), 68
software configuration management (SCM)
    as development fundamental, 65–66
    further reading on, 68
software development. See fundamentals of software development; rapid development
software engineering, further reading on, 79
Software Engineering (Pressman), 75, 79
Software Engineering (Sommerville), 75, 79
Software Engineering Economics (Boehm), 203
Software Engineering Institute, 14, 28, 55, 57, 59, 66, 70, 374, 502
Software Engineering Laboratory (NASA), 74, 277, 352, 421, 469, 474, 475, 530, 533
Software Engineering Metrics and Models (Conte et al.), 479
"Software Engineering Under Deadline Pressure" (Costello), 231
Software Implementation (Marcotty), 68
Software Inspection (Gilb and Graham), 76
Software Measurement Guidebook (NASA), 478
Software Metrics (Grady and Caswell), 478
"Software Productivity Research Programming Languages Table" (Jones), 524
Software Risk Management (Boehm), 106, 161
"Software Risk Management: Principles and Practices" (Boehm), 106
"Software's Chronic Crisis" (Gibbs), 343
Software Sizing and Estimating (Symons), 204
Sommerville, Ian, 75, 79
Soul of a New Machine, The (Kidder), 540, 542, 545
source-code control, lack of automated, as classic mistake, 48
specifications. See requirements (requirements specifications or analysis)
speed of development. See also rapid development
    classic mistakes and, 37–39
    dimensions of, 11–18
        people, 12–14
        process, 14–16
        product, 17
        technology, 17–18
    relative importance of, 21–22
    typical pattern for improving, 128–30
spiral lifecycle model, 141–43, 547
    strengths and weaknesses of, 156
sponsorship of projects, lack of effective, 42
staff. See personnel (employees; staff)
staged delivery lifecycle model, 148–49, 425, 426, 549–58
    benefits of, 550–52
    bottom line on, 558
    further reading on, 558
    interactions with other practices, 557
    keys to success in using, 558
    managing the risks of, 555–56
    rigidity of, 557
    side effects of, 556–57
    strengths and weaknesses of, 157
    summary of, 549
    using, 552–55
        developer focus, 553–54
        kinds of projects, 554–55
        technical dependencies, 553
        theme releases, 554
Stahl, Leslie Hulet, 161
Standish Group, xiii, 42, 61, 211, 236, 243, 319
Stapleton, Jennifer, 236, 328
startup costs, long-term teambuilding and, 293
statistics, overreliance on, 475
status-oriented office improvements, lost productivity from, 510
status visibility
    measurement and, 468
    miniature milestones and, 482–83
    signing up and, 543
    user-interface prototyping and, 591
Staudenmayer, Nancy A., 476
storyboards, minimal specification and, 324
Strategies for Real-Time System Specification (Hatley and Pirbhai), 66
strategy for rapid development
    alternative (commitment-based or code-like-hell approach), 22–25
    case studies
        absence of a clear, 6
        clear, 25
    in general, 8–10
stress, software errors and, 215
Structured Design (Yourdon and Constantine), 67
SWAT team model, 308–9
Sweet, W. L., 502
Symons, Charles, xiii, 171, 179, 192, 204, 351
synergy, 18
systems software, defined, 22

T
tactical-execution team, 300–301
task identity, work itself as motivation and, 258
task significance, work itself as motivation and, 258–59
team identity
    lack of, 289
    sense of, 281
teams (team projects). See also teamwork
    organization of, 13
    productivity differences among, 12, 13
    staff selection for, 13
team structure, 297–317
    best for rapid development, 302–3
    case studies
        good match between project objectives and team structure, 315–16
        mismatch between project objectives and team structure, 297–99
    clear roles and accountabilities and, 302
    effective communication and, 302
    fact-based decision making and, 302
    kinds of, 300–301
    managers and technical leads, 313–14
    models of, 304–13
        business team, 304
        chief-programmer team, 305–6
        feature team, 307
        large teams, 311–13
        professional athletic team, 309
        search-and-rescue team, 307–8
        skunkworks team, 306–7
        SWAT team, 308–9
        theater team, 310–11
    monitoring of individual performance and providing feedback, 302
teamwork, 273–94
    case studies
        high-performance team, 277–78
        poor group dynamics, 274–75
        selection of team members, 282–83
    creating a high-performance team, 278–89
        challenging work, 280–81
        characteristics of a high-performance team, 278–79
        commitment to the team, 285
        competent team members, 282–84
        effective communication, 286
        failure of teams, 289–92
        high level of enjoyment, 288
        interdependence among members, 286
        long-term teambuilding, 292–94
        managing a high-performance team, 288–89
        mix of roles and skills, 283–84
        mutual trust, 285
        results-driven structure, 282
        sense of autonomy, 286–87
        sense of empowerment, 287
        sense of team identity, 281
        shared vision or goal, 279
        small team size, 287–88
    groups distinguished from teams, 276
    importance to rapid development, 276–78
    software uses of, 275–76
    summary of guidelines, 294–95
Teamwork (Larson and LaFasto), 316
technical fundamentals, 60–66
    construction, 64–65
    design, 62–63
    requirements management, 61–62
    software configuration management (SCM), 65
technical leads, 313–14
technical reviews
    comment on, 74–75
    as development fundamental, 73–75
    further reading on, 76–77
technical-supervision opportunities, as motivation, 261
technology
    classic mistakes related to, 47–48
    as dimension of development speed, 11, 17–18
"Ten Commandments of Tool Selection, The" (O'Brien), 370
Tesch, Deborah B., 366
testing
    as development fundamental, 72–73
    further reading on, 76
Thayer, Richard H., 59
theater team model, 310–11
theme for the product, minimal specification and, 324
theme releases, staged delivery and, 554
Theory-W management, 239, 376, 559–68
    bottom line on, 568
    further reading on, 568
    interactions with other practices, 567–68
    managing the risks of, 566–67
    side effects of, 567
    summary of, 559
    using, 561–66
        keys to success in, 568
        kinds of projects that can use Theory-W, 566
        manager's role, 566
        step 1: establish win-win preconditions, 562–64
        step 2: structure a win-win software process, 564
        step 3: structure a win-win software product, 565
"Theory-W Software Project Management: Principles and Examples" (Boehm and Ross), 568
third-generation languages. See 3GLs
Third Wave Project Management (Thomsett), 106
Thomsett, Rob, 42, 106, 253, 293, 294, 317, 388
Thriving on Chaos (Peters), 258
throwaway prototyping, 433, 569–73
    evolutionary prototyping and, 441
    user-interface prototyping and, 591–92
timebox development, 575–83
    bottom line on, 582
    further reading on, 583
    interactions with other practices, 582
    managing the risks of, 581
    side effects of, 581
    using, 577–80
        entrance criteria, 579
        keys to success in, 583
        timebox team, 580
        variations, 580
tools group, 585
    acquisition of productivity tools and, 354–56
    summary of, 585
Top-10 Risks List, 100–102, 587
Townsend, Robert, 287
tracking, as development fundamental, 56–58
Tracz, Will, 534–36, 538
trade-offs
    per-person-efficiency, 127
    quality, 127
    among schedule, cost, and product, 126–27
training, in productivity tool use, 357
transferring risks, 97
trust
    failure of teams and lack of, 290
    among team members, 285
Turner, Albert J., 556
turnover
    long-term teambuilding and, 293
    overly optimistic scheduling and, 217
Tutorial on Software Design Techniques (Parnas), 423
Tutorial: Software Engineering Project Management (Thayer), 59
Tutorial: Software Quality Assurance (Chow, ed.), 76

U
Udell, John, 538
Umphress, David, 49
underscoping, overly optimistic scheduling and, 212
unrealistic expectations, 42
    of customers, 120–21
    after JAD session, 460
    about performance, evolutionary prototyping and, 437–38
    about schedule and budget
        evolutionary prototyping and, 436
        throwaway prototyping and, 572
upstream activities, shortchanged, 45
Ury, William, 231, 562
user-interface prototyping, 324, 589–97
    bottom line on, 597
    interactions with other practices, 596
    managing the risks of, 595–96
    side effects of, 596
    summary of, 589
    using, 591–95
        choosing a prototyping language, 592
        end-user feedback and involvement, 593–94
        evolutionary vs. throwaway prototyping, 591–92
        finished product, 595
        keys to success in, 597
        prototype as a Hollywood movie facade, 593
user manual, as specification, 324

V
Vaishnavi, Vijay K., 417
Valett, J., 12, 249, 277, 469
van Genuchten, Michiel, 46, 184, 199, 210, 486
vendors. See outsourcing
version 1 of productivity tools, 356
version 2
    features, 339
    of productivity tools, 357
version 3 of productivity tools, 357
versioned development, 330–31
visibility-oriented practices, 10
vision
    failure of teams and lack of, 289
    managing a high-performance team and, 288
    sharing a, in high-performance teams, 279
    signing up and, 540
vision statement, minimal specification and, 324
Visual C++, Microsoft, 242
voluntary overtime, 599–608
    bottom line on, 606–7
    further reading on, 608
    interactions with other practices, 606
    managing the risks of, 605–6
    side effects of, 606
    summary of, 599
    using, 600–605
        caution about too much overtime, 604–5
        developer-pull vs. leader-push approach, 600–601
        how much overtime to ask for, 603
        keys to success in, 607
        out-of-control projects, 603
        overtime should not be mandatory, 601–3
Vosburgh, J. B., 37, 38, 60, 240, 241, 256, 319, 335
Votta, Lawrence G., 476

W
Waligora, Sharon, 530, 534
walkthroughs, 73
    estimating by, 178
wasted effort, minimal-specification approach and avoidance of, 326
wasted time, during fuzzy front end, 44
waterfall lifecycle model, 136–39
    modified, 143–47
        with risk reduction, 146
        sashimi model, 143
        with subprojects, 145
    strengths and weaknesses of, 156
Waterman, Robert H., Jr., 247, 263, 271, 272, 317
Waters, Richard, 366, 367
Weinberg, Gerald M., 12, 40–41, 58, 66, 77, 217, 218, 221, 255, 271, 272, 276, 296, 388, 600, 601, 608
Weiss, David M., 423, 469
"What Are the Realities of Software Productivity/Quality Improvements" (Glass), 369
"When the Rubber Hits the Road: A Guide to Implementing Self-Managing Teams" (Thomsett), 317
Whitaker, Ken, 247, 267, 554
Why Does Software Cost So Much? (DeMarco), 231, 317
"Why Is Technology Transfer So Hard?" (Jones), 370
Wicked Problems, Righteous Solutions (DeGrace and Stahl), 161
Wiener, Lauren Ruth, 321
Windows NT 3.0, development of, 388, 408, 542–44
win-win projects, 239
WinWord (Microsoft Word for Windows) 1.0
    optimistic scheduling practices and, 207–9
Wirth, Niklaus, 319
Wisdom of Teams, The (Katzenbach and Smith), 275
wishful thinking, 43
    schedules and, 221
Witness (film), 273, 296
Wood, Jane, 463
Word for Windows 1.0, optimistic scheduling practices and, 207–9
work environments. See also productivity environments
    noisy, crowded, 41
Work Redesign (Hackman and Oldham), 272
Wosser, 531, 534
Writing Solid Code (Maguire), 68

Y
Yourdon, Edward, 66–67, 73, 337, 423

Z
Zachary, Pascal, 23, 271, 272, 388, 408, 412, 413, 542, 544
Zawacki, Robert, xviii, 259, 351
Zelkowitz, 351