The demand for increased efficiency and effectiveness of our software
processes places measurement demands beyond those traditionally
practiced by the software engineering community. Statistical and
process thinking principles lead to the use of statistical process control
methods to determine consistency and capability of the many
processes used to develop software. This paper presents several
arguments and illustrations suggesting that the software community
examine the use of control charts to measure the stability and
capability of software processes as a basis for process improvement.
Over the past decade, the concepts, methods and practices associated with process
management and continual improvement have gained wide acceptance in the
software community. These concepts, methods and practices embody a way of
thinking, a way of acting and a way of understanding the data generated by
processes that collectively result in improved quality, increased productivity and
competitive products. The acceptance of this "process thinking" approach has
motivated many to start measuring software processes that are responsive to
questions relating to process performance. In that vein, traditional software
measurement and analysis methods that compare "planned versus actual" values are
not sufficient for measuring or predicting process performance.
So we believe the time has come to marry, if you will, the "process thinking" with the
"statistical thinking" when measuring software process behavior.
When we combine these principles, we develop a capability to understand the
"reliability" of human processes, establish bounds on management expectations,
understand patterns and causes of variations, and validate the metric analyses used to
forecast and plan our software development activities (Keller 99). Since software
engineering is a human-intensive activity, Keller contends that humans will fail, but
the issues are how often and which root causes can be eliminated or minimized.
Establishing bounds on management expectations entails distinguishing variations
due to people problems from variations due to process problems, since fixing the
wrong problem could be catastrophic. It is also critical to recognize which variations
are signals requiring action and which are merely noise in the process.
Understanding patterns and causes of variations enables us to understand what
parameters represent stability and what "stable" means in a particular environment.
Using measurement analysis for forecasting and planning is paramount because
repeatability and predictability of processes are key to effective forecasting and
planning.
If we examine the basis for these "process thinking" and "statistical thinking"
concepts, we find that they are founded on the principles of statistical process
control. These principles hold that by establishing and sustaining stable levels of
variability, processes will yield predictable results. We can then say that the
processes are under statistical control. Controlled processes are stable processes,
and stable processes enable you to predict results. This in turn enables you to
prepare achievable plans, meet cost estimates and scheduling commitments, and
deliver required product functionality and quality with acceptable and reasonable
consistency. If a controlled process is not capable of meeting customer requirements
or other business objectives, the process must be improved or retargeted.
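The notion of "establishing and sustaining stable levels of variability" can be made concrete with an individuals/moving-range (XmR) chart, a common way to compute natural process limits from a single stream of measurements. The sketch below is illustrative only: the function name, the inspection-effort data, and the use of Python are assumptions, not taken from this paper.

```python
def xmr_limits(values):
    """Centerline and 3-sigma natural process limits for an XmR chart.

    The process standard deviation is estimated from the average moving
    range using the conventional bias factor d2 = 1.128 for subgroups
    of size 2.
    """
    n = len(values)
    center = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128          # estimated process standard deviation
    return center, center - 3 * sigma, center + 3 * sigma

# Hypothetical effort (hours) for ten successive code inspections.
effort = [4.2, 5.1, 3.8, 4.7, 5.0, 4.4, 4.9, 3.9, 4.6, 5.2]
center, lcl, ucl = xmr_limits(effort)

# The process is stable (under statistical control) if every observed
# point falls within the natural process limits.
stable = all(lcl <= x <= ucl for x in effort)
```

If the process stays stable, the same limits serve as a prediction: future inspections can reasonably be expected to fall within them, which is what makes achievable plans and estimates possible.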
When we relate these notions of process and statistical thinking to the operational
process level, we realize a key concern of process management is process
performance: how is the process performing now (effectiveness, efficiency), and how
can it be expected to perform in the future? In the context of obtaining quantified
answers to these questions, we can address this issue by decomposing the question
of process performance into three parts.
First, we should be concerned about process performance in terms of compliance: is
the process being executed properly, are the personnel trained, are the right tools
available, and so on? For if the process is not in compliance, we know there is little
chance of consistent or satisfactory performance.
If a process is compliant, the next question is: is the process performance
(execution) reasonably consistent over time? Are the effort, cost, elapsed time,
delivery, and quality consumed and produced by executing the process consistent?
Realizing that variation exists in all processes, is the variation in process performance
within predictable limits?
Finally, if the process performance is consistent, we ask the question: is the process
performing satisfactorily? Is it meeting the needs of interdependent processes and
the needs of the customers? Is it effective and efficient?
Historically, software organizations have addressed the question of compliance by
conducting assessments that compare an organization's software process against
a standard (e.g., the Capability Maturity Model). Such an assessment provides a
picture of the process status at a point in time and indicates the organization's
capacity to execute various software processes according to the standard's criteria.
However,
it does not follow that the process is executed consistently or efficiently merely
because the assessment results satisfied all the criteria.
The questions of process consistency, effectiveness, and efficiency require
measurement of process behavior as it is being executed over some reasonable time
period. Other disciplines have addressed this issue by using statistical process
control methods, specifically Shewhart control charts, and their successful use
suggests it is time to examine how statistical process control techniques can help
address our software process issues. In so doing, we find that Shewhart's control
charts provide a statistical method for distinguishing between variation caused by
normal process operation and variation caused by anomalies in the process.
Additionally, Shewhart's control charts provide an operational definition for
determining process stability or consistency and
predictability as well as quantitatively establishing process capability to meet criteria
for process effectiveness and efficiency.
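These two uses of a control chart, separating common-cause from assignable-cause variation and then checking capability against customer requirements, can be sketched as follows. All data, limit values, and function names below are hypothetical illustrations, not material from the paper.

```python
def assignable_causes(values, lcl, ucl):
    """Indices of points outside the natural process limits.

    Variation within the limits is common-cause noise inherent in the
    process; points outside the limits signal assignable causes that
    warrant investigation and corrective action.
    """
    return [i for i, x in enumerate(values) if not lcl <= x <= ucl]

def is_capable(lcl, ucl, lower_spec, upper_spec):
    """A stable process is capable when its natural process limits fall
    entirely within the customer's specification limits."""
    return lower_spec <= lcl and ucl <= upper_spec

# Hypothetical defect densities (defects/KLOC); the natural limits come
# from a previously established stable baseline.
densities = [2.1, 1.8, 2.4, 2.0, 3.9, 2.2, 1.9]
lcl, ucl = 1.0, 3.5

signals = assignable_causes(densities, lcl, ucl)   # observation 4 breaches ucl
capable = is_capable(lcl, ucl, lower_spec=0.0, upper_spec=3.0)
```

In this illustration the chart flags one signal for investigation, and even after that signal is resolved the process is not capable (its upper natural limit exceeds the specification), so the process itself must be improved or retargeted, exactly the distinction drawn above.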
We use the term software process to refer not just to an organization's overall
software process, but to any process or subprocess used by a software project or
organization. In fact, a good case can be made that it is only at subprocess levels
that true process management and improvement can take place. Thus, we view the
concept of software process as applying to any identifiable activity that is undertaken
to produce or support a software product or service. This includes activities such as
planning, estimating, designing, coding, testing, inspecting, and reviewing.