
Information & Management 44 (2007) 130–141

www.elsevier.com/locate/im

The effects of optimistic and pessimistic biasing on software project status reporting

Andrew P. Snow a, Mark Keil b, Linda Wallace c,*

a McClure School of Information and Telecommunication Systems, Ohio University, United States
b Department of Computer Information Systems, J. Mack Robinson College of Business, Georgia State University, United States
c Department of Accounting and Information Systems, Virginia Polytechnic Institute and State University, United States
* Corresponding author at: Department of Accounting and Information Systems, Pamplin College of Business, Virginia Tech, 3007 Pamplin Hall 0101, Blacksburg, VA 24060, United States. Tel.: +1 540 231 6328; fax: +1 540 231 2511. E-mail address: wallacel@vt.edu (L. Wallace).

Received 11 April 2005; received in revised form 3 October 2005; accepted 26 October 2006
Available online 3 January 2007
doi:10.1016/j.im.2006.10.009

Abstract
Anecdotal evidence suggests that project managers (PMs) sometimes provide biased status reports to management. In our
research project we surveyed PMs to explore possible motivations for bias, the frequency with which bias occurs, and the strength of
the bias typically applied. We found that status reports were biased 60% of the time and that the bias was twice as likely to be
optimistic as pessimistic. By applying these results to an information-theoretic model, we estimated that only about 10–15% of
biased project status reports were, in fact, accurate and these occurred only when pessimistic bias offset project management status
errors. There appeared to be no significant difference in the type or frequency of bias applied to high-risk versus low-risk projects.
Our work should provide a better understanding of software project status reporting.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Project management and scheduling; Reporting bias; Software project management; Traffic light reporting; Information theory

1. Introduction

Accurately discerning the status of a software project is difficult. Project managers (PMs) often make mistakes when estimating their project's status (e.g., [1,2,11]). We refer to such mistakes here as reporting errors, which may be attributed to many factors; e.g., it has been shown that PMs tend to anchor on their initial project estimates and are not likely to make changes to their estimates when reporting the status at a later time [16]. Furthermore, software is intangible, extremely complex, and difficult to observe, all of which increases the likelihood that status reports will be erroneous [21,24].

In addition to reporting error, some PMs may intentionally skew project status when reporting to senior management. We refer to this as reporting bias, which is postulated here to be either optimistic or pessimistic. Optimistic bias occurs when the PM reports a project to be in a better situation than s/he truly believes. It can ultimately lead to situations where senior management is suddenly faced with the grim reality that a project is either in serious trouble or even unrecoverable. While there is anecdotal evidence of optimistic bias (e.g., [15]), there has been no reported effort to determine why a PM might intentionally apply optimistic bias.

The other type, pessimistic bias, occurs when a project is reported to be in worse shape than the PM

considers it to be. One motivation for this bias might be to acquire additional organizational resources; another might be a desire to be cautious, because the PM recognizes the difficulty of accurately ascertaining a project's status. Alternatively, the PM may wish to be considered a hero when the project is finished successfully. Such bias might be damaging if it diverts skills and resources from more critical projects. The frequency or magnitude with which either type of bias occurs is unknown, and we thought it should be investigated.

The extent to which PMs apply bias to their status perceptions before reporting to executive management may also vary with project risk levels. High-risk projects are more likely to experience problems or failures [22,31]. The failure of a high-risk project may come as a shock to companies if the PM has optimistically biased its status report (e.g., [8]). The differences in reporting bias among high- and low-risk projects have apparently not been studied previously and were worthy of investigation.

The purpose of our research was thus four-fold, to: (1) identify reasons why PMs chose to apply either optimistic or pessimistic bias to their status reports; (2) investigate how frequently optimistic and pessimistic bias occurred in project status reporting for both high- and low-risk projects; (3) investigate the magnitude with which optimistic and pessimistic bias was applied in high- and low-risk projects; and (4) determine how both optimistic and pessimistic reporting biases combined with reporting error to affect project status reports.

2. Background

Large numbers of software projects are completed over budget and behind schedule [33]. Controlling software projects is a difficult task that becomes more challenging in the absence of accurate information about project status. The reporting errors made by PMs can be significant [3,9,34], but reporting bias may be even more important in contributing to status reporting inaccuracies [27]. We note that our investigation of bias was limited to intentional biasing of project status reports on the part of the PM; others have examined how unintentional cognitive biases have impacted decision makers in software development projects (e.g., [4,17]).

Literature on a phenomenon called the mum effect offers one explanation for why a PM might apply optimistic bias: the human tendency to avoid transmitting bad news [20]. Smith et al. [25] suggested that this may contribute to the large number of software projects that fail even though senior management was not aware that they were in danger. Alternatively, Sabherwal et al. [23] found that top-level managers might continue to commit resources to a failing project if they felt that their future in the company depended on its success or they feared some type of punishment if it was unsuccessfully terminated. The same rationale might apply to PMs at a lower level.

Of course, software PMs sometimes apply pessimistic bias to status reports. In doing so, they may distract senior management's attention and resources away from projects that are truly in trouble.

The lack of literature regarding the factors that drive software PMs to apply bias led us to our first question:

1. Why do PMs choose to apply either optimistic or pessimistic bias to their status reports?

However, this does not show how often these biases are applied by software PMs. To address this question, we asked software PMs to estimate how frequently they believe their fellow PMs apply bias. If the biasing of status reports were widespread, then it would be important that executives were made aware of it and introduced controls to deal with the problem.

Furthermore, by gathering data on both low- and high-risk projects, we could determine whether the estimates of bias frequency depended on the level of project risk. Previous research has shown that there were significant differences between high- and low-risk projects (e.g., [32]); therefore, we might expect that the extent to which a PM made errors in assessing project status, and the bias that the PM applied before reporting to executive management, would vary with project risk. High-risk projects may be more likely to have biased status reports. If so, top-level executives should more heavily scrutinize the status reports of high-risk projects. We tried to determine if this was the case in our second question:

2. What are software PMs' estimates of the frequency of each type of bias occurring for low- and high-risk projects?

However, the answer to this will not provide insight into the degree of bias that PMs apply. We have found no research articles on the level of either optimistic or pessimistic bias in project status reporting and, though there is anecdotal evidence of optimistic biasing, there is apparently nothing published on pessimistic biasing. In order to get a more accurate sense of the extent to which PMs apply bias, we obtained estimates of the frequency, magnitude, and type of bias applied by software PMs. Additionally, from a senior management

perspective, it is useful to know whether the expected level of bias is the same or different for low-risk versus high-risk software projects. Hence, we asked the question:

3. What is the expected level of bias, and is it the same for low- and high-risk software projects?

We also wanted to understand the extent to which reporting bias distorted the accuracy of the status report. Status errors and reporting bias combine to affect the accuracy of a project status report. There are three distinct possibilities of how error and bias combine: (1) the project is said to be better than it really is, (2) the project is said to be worse than it actually is, or (3) the bias and error offset each other, resulting in an essentially accurate status report. A better understanding of the combined effects on status reports can help executive management determine how to interpret the reports. This leads to the final question:

4. How does the full range of optimistic and pessimistic bias combine with reporting error to affect reported status on high-risk projects?

In our exploratory research, the effects of reporting bias and reporting error were investigated using a two-stage probabilistic model that predicted the accuracy of software project management status reports, given varying levels of reporting error and bias [26]. This information theory model (see Fig. 1) contained three variables: true status (T), perceived status (P), and reported status (R). Perceived status and true status are different if the PM makes an error (E) in assessing current status, and reported status is different from perceived status if the PM biases (B) his or her perception before reporting the status.

[Fig. 1. Two-stage status model.]

The model shows true status information passing through an error channel, and perceived status information through a bias channel. The combined effect is report distortion, resulting in reported status being different from true status. Previous work [28] showed a potential for large differences between true and reported status when varying levels of optimistic bias were applied.

Earlier research was limited because it relied on simulated reported-status results generated over a range of contrived optimistic bias levels. No empirical data were gathered to determine the extent to which PMs actually applied optimistic or pessimistic bias. In order to get a better sense of the extent to which bias affected project status reporting, it was important to obtain estimates from actual software PMs.

3. Data collection

Measures of bias frequency, bias strength, reporting error, and true status estimates were required in order to answer the research questions. Subjects were asked to estimate how often PMs apply optimistic bias, no bias, or pessimistic bias before reporting perceived status. Subjects were also asked to estimate the level, or strength, of any applied bias and to furnish their bias estimates under the assumption that a project had just finished the design phase and the development stage was just beginning. This point of the project was chosen because it is an important point in the lifecycle where a decision will be made to approve or disapprove the next phase. Accurate status reports are particularly important at this time. The information was elicited for both low-risk and high-risk projects using the survey questions shown in Appendix A.

The subjects were also told that the three major goals of the project were (1) budget, (2) schedule, and (3) functionality/quality. They were asked to consider only three project status levels—Green, Yellow, and Red (traffic light reporting) [5,7,12]. Then they were asked to code a questionnaire using Green for a project in which the three goals were being substantially met, Yellow for a project where two of the goals were being met, and Red for a project where one or no goals were being achieved.
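To make this coding scheme concrete, the following minimal Python sketch maps a hypothetical count of goals currently being met (budget, schedule, functionality/quality) onto the Green/Yellow/Red statuses described above; the function name and input are our own illustration, not part of the survey instrument.

```python
def traffic_light_status(goals_met: int) -> str:
    """Map the number of major goals being met (0-3) to a traffic-light status.

    Green: all three goals substantially met; Yellow: two goals met;
    Red: one or no goals met.
    """
    if goals_met >= 3:
        return "Green"
    if goals_met == 2:
        return "Yellow"
    return "Red"

# Example: a project meeting budget and schedule but missing functionality/quality.
print(traffic_light_status(2))  # -> "Yellow"
```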
Data previously elicited from a panel of five software project risk experts using the traffic light reporting scheme were used for the reporting error estimates and the true status estimates. The qualification level of the experts and the protocol used for collecting estimates were described by Snow and Keil [28].

To gain insight into why software PMs apply bias, we included two open-ended questions asking respondents to give three reasons why PMs apply optimistic bias and three reasons why they apply pessimistic bias (Questions 4 and 5). The survey also included demographic and other questions about the respondent's project management experience. The survey instrument was pre-tested with a small number of PMs and slight modifications were made, based on their feedback, before the survey was administered.

PMs were not asked to report their own personal behavior regarding bias, because previous researchers

have noted limitations when PMs reported on their own behavior [18]. We were concerned that the subjects might have inhibitions in reporting actions that could be perceived as poor or bad behavior on their part [19]. Social desirability bias has been shown to distort data gained from self-reporting in socially or ethically sensitive surveys [6,30]. Indirect questioning (asking respondents what other people think about sensitive issues) is frequently used to overcome such effects [10,14].

A convenience sample of experienced software PMs was constructed from two sources: the first set consisted of Project Management Institute (PMI) members who belonged to the Information Systems Special Interest Group (ISSIG) and had expressed interest in participating in research on software project management. The second set consisted of software PMs working in a southeastern U.S. metropolitan area who had also agreed to participate in the study. Other researchers interested in software project management issues have gathered survey data from similar subject groups (e.g., [13]).

Survey forms were emailed and 65 practicing software PMs responded. Nine surveys were removed from the sample due to the respondent's lack of any significant software project management experience, leaving 56 responses for subsequent analysis. Roughly equal parts of the usable responses were obtained from each of the two sources. After comparing the responses obtained from the two sub-groups for any significant differences, a decision was made to pool the samples for subsequent analysis. Table 1 provides a characterization of the overall subject pool. Subjects indicated that they had served as a PM on an average of more than 17 software projects.

Table 1
Characterization of subject pool

Variable | Mean (X̄) | S.D.
Years of experience in software development projects | 3.21 (where 3 = 11–15 years and 4 = 16–20 years) | 1.64
Number of software development projects participated in | 4.38 (where 4 = 16–20 projects and 5 = 21–25 projects) | 2.24

4. Results

4.1. Reasons for bias

In all of the 56 usable surveys, subjects gave one or more reasons why bias was applied. The responses were subjected to content analysis, using an open-coding approach (i.e., the researchers did not use predetermined codes but allowed the data to suggest categories [29]). This produced ten unique codes that captured a majority of the reasons that the subjects gave for applying optimistic bias.

Using the same open-coding procedure, nine unique codes were created to capture the majority of the reasons that subjects gave for applying pessimistic bias. Then all of the qualitative responses were coded. There were some responses that truly were unique and did not fit neatly into any of the established codes: they were coded as "other" and not subjected to further analysis. The remaining coded responses were further analyzed to determine which reasons were given most frequently for optimistic and pessimistic biasing. Tables 2 and 3 show the most frequently mentioned reasons (expressed as a percent of total coded responses).

Interestingly, the most frequently cited reason for applying optimistic bias was reluctance to transmit bad news—the mum effect. The other top reasons given for optimistic biasing involved impressing management and overoptimism on the part of the PM.

The most frequently cited reason for applying pessimistic bias was that the PM wanted to create slack for contingencies that could arise later. Other frequently cited reasons had to do with the PM's wish to look like a hero, lack of confidence in the project team or in his or her own abilities, and the need to manage expectations. The goal of obtaining additional resources for the project was mentioned repeatedly as a reason for engaging in pessimistic biasing. The ability to shift blame was also mentioned.

Table 2
Reasons for optimistic biasing

Reason for optimistic biasing | Percent
Project manager believes that senior management "shoots the messenger"—fear of giving bad news | 22
Project manager wants to make him or herself look good | 22
Project manager thinks s/he can fix whatever is wrong or that it will all work out in the end | 17
Project manager does not want to look bad | 9
Project manager does not want to let the client down | 8
Project manager succumbs to management pressure | 8
Project manager does not want to "face the music" | 4
Project manager's desire for continued funding | 4
Project manager wants to keep up the project team's morale | 4
Project manager believes that things will work out | 3

Table 3
Reasons for pessimistic biasing

Reason for pessimistic biasing | Percent
Project manager wants to create slack for contingencies that may arise later | 17
Project manager hopes to become a hero later (who turned around a project) | 15
Project manager has low confidence in project team and/or his/her own ability | 14
Project manager wants to set management's expectations low | 13
Project manager hopes to obtain additional resources for the project (or hold on to existing resources) | 13
Project manager wants the ability to shift blame when things go wrong | 9
Project managers are pessimistic by nature | 9
Project manager prefers to err on the side of caution if the project is believed to be risky | 8
Project manager wants to escalate issues and to send a wake-up call to senior management | 4

4.2. Bias probabilities for low- and high-risk software projects

Estimates by software PMs of the probability of applying bias versus not applying bias in high- and low-risk projects are shown in Table 4. The results suggested that PMs were likely to apply bias approximately 60% of the time. A powerful distribution-free test, the Kolmogorov–Smirnov (K–S) test, was used to determine whether the differences observed were significant. Regardless of the risk level of the project, the results showed that PMs were more likely to apply bias to a project than not. Their estimates of the probability of applying bias did not significantly change for low-risk versus high-risk projects.

Table 4
Probability estimates of bias application by risk level

Project type | Probability of no bias | Probability of bias | Ho: no difference between probability of no bias and bias
High-risk project | 0.39 | 0.61 | Reject Ho (p = .004); implying that most PMs apply bias on high-risk projects
Low-risk project | 0.44 | 0.56 | Reject Ho (p = .080); implying that most PMs apply bias on low-risk projects

The next step was to identify which type of bias was more often applied: optimistic or pessimistic. Table 5 shows the estimates of software PMs for high- and low-risk projects; apparently, regardless of the risk level of the project, when PMs apply bias they are more likely to apply optimistic than pessimistic bias. Specifically, the data suggested that PMs were more than twice as likely to apply optimistic than pessimistic bias. Also, their estimates of the probability of applying each type of bias did not significantly change from low- to high-risk projects.

Table 5
Probability estimates of bias type by risk level

Project type | Probability of optimistic bias | Probability of pessimistic bias | Ho: no difference between probability of optimistic and pessimistic bias
High-risk project | 0.42 | 0.19 | Reject Ho (p = .0000); implying that when PMs apply bias on high-risk projects, it is most often optimistic
Low-risk project | 0.41 | 0.15 | Reject Ho (p = .0000); implying that when PMs apply bias on low-risk projects, it is most often optimistic

Although these results were insightful, they presented only part of the picture. The next step was to look at the strength of the bias that was applied when PMs chose to bias their report. Some status reports may be very biased, while others may only be slightly biased.

4.3. Expected level of bias for low- and high-risk software projects

In order to estimate the level of expected bias, we created a probability-weighted metric using the probability of each type of bias and the bias level estimated for each type of bias. This measure was created for each reporting subject, defining optimistic bias as positive and pessimistic bias as negative, as follows:

B_i = B_op,i · [ p(B_op,i) / ( p(B_op,i) + p(B_ps,i) ) ] − B_ps,i · [ p(B_ps,i) / ( p(B_op,i) + p(B_ps,i) ) ]    (1)

where B_i has an optimistic component (optimistic bias level times the probability of applying optimistic bias) and a pessimistic component (pessimistic bias level times the probability of applying pessimistic bias). Each subject reported the optimistic and pessimistic bias levels using a Likert-type scale.
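As an illustration of Eq. (1), the following Python sketch computes the probability-weighted bias expectation for a single hypothetical respondent; the input values below are invented for the example and are not taken from the survey data.

```python
def bias_expectation(p_opt: float, p_pes: float, level_opt: float, level_pes: float) -> float:
    """Probability-weighted bias expectation B_i of Eq. (1).

    Optimistic bias is signed positive and pessimistic bias negative, so a
    positive result means the respondent's expected bias is optimistic.
    """
    total = p_opt + p_pes
    if total == 0:
        return 0.0  # respondent reports that bias is never applied
    return level_opt * (p_opt / total) - level_pes * (p_pes / total)

# Hypothetical respondent: optimistic bias 42% of the time at Likert level 2,
# pessimistic bias 19% of the time at Likert level 1.
print(round(bias_expectation(p_opt=0.42, p_pes=0.19, level_opt=2, level_pes=1), 2))  # -> 1.07
```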

The sample mean and variance of the bias expectation measure are shown in Table 6. The means and medians showed that when bias was applied, the bias expectation was optimistic, but the standard deviation indicated that some PMs applied pessimistic bias. The standard deviation on high-risk projects was greater, indicating that there was more variance in the expected bias for high-risk than for low-risk projects. In terms of central tendency, the median level of bias for high-risk projects was higher than for low-risk projects, as expected. However, the mean bias expectation on low-risk projects appeared higher than on high-risk projects. The K–S test showed that we could not reject the null hypothesis of no difference (p = 0.89). Therefore, we inferred that PMs applied approximately the same level of bias on both high-risk and low-risk projects, and that the bias in expectation was optimistic.

Table 6
Bias expectation

Project risk type | Mean (X̄) | S.D. | Median
High-risk | 0.96 | 1.84 | 1.33
Low-risk | 1.21 | 1.56 | 1.00

4.4. The impact of reporting bias and reporting error on reported status

The purpose here was two-fold, to: (1) explain how the previously posited information theory model was extended to include both pessimistic and optimistic bias, and (2) show how the quantitative pessimistic and optimistic bias data collected by the survey instrument were used to address Research Question 4.

In order to determine how reporting bias combined with reporting error to affect status reports, we used the two-stage information-theoretic status model as our basis of investigation. Three elements were needed to derive reported status using this model: true status, status error, and reporting bias. From information theory, the model can be expressed in matrix form as:

R = T · E · B    (2)

where R represents the probabilities that a project is reported as one of the three possible reporting outcomes (Green, Yellow, or Red); T represents the probabilities that the project is truly in the Green, Yellow, or Red state; E represents the probabilities of error; and B represents the probabilities of applied bias. As the alphabet of this system has three "letters" (G, Y, and R), the T and R vectors are 1-by-3 while the E and B matrices are 3-by-3.

4.4.1. True status
The true status matrix was formulated by a panel of software risk experts and was previously described in the literature. For a high-risk project, the true status was estimated to be:

T_H = | 0.22  0.34  0.44 |    (3)

This shows the probabilities, left to right, of a high-risk project being truly in the Green, Yellow, and Red state, just after the design phase and at the start of the development phase.

4.4.2. Status error
Status error was also estimated by the same panel of experts. For the high-risk project the estimated status errors are:

      | 0.61  0.22  0.17 |
E_H = | 0.27  0.54  0.19 |    (4)
      | 0.14  0.38  0.48 |

In this error matrix, the first row, left to right, shows the probability of the PM perceiving Green, Yellow, or Red if the project is truly Green. The second row shows the same probabilities if the project is truly Yellow, and the third row shows the probabilities if the project is truly Red. The elements of the matrix are thus conditional probabilities, as the chance of error is dependent upon the true status value (Green, Yellow, or Red). The diagonal of this matrix shows the probabilities that Green, Yellow, and Red projects are perceived without error.

4.4.3. Applied bias
Bias levels range from completely pessimistic bias to completely optimistic bias. The full set of matrices used for each level of bias in the survey instrument is shown in Appendix B. The assignment of minus (−) values for pessimistic bias and positive (+) values for optimistic bias is arbitrary, and is used to create the bias range: [−5, −4, ..., 0, ..., +4, +5]. The 11 bias matrices representing each bias value provide a continuum ranging from completely pessimistic to completely optimistic behavior. The rationale for the different matrices is straightforward: probability gradations in the range [0, 1] are required to represent the different linearly increasing levels of bias in the survey instrument (1, slight; 2, modest; 3, moderate; 4, heavy; 5, complete). This naturally leads to the use of corresponding probability values (0.2, 0.4, 0.6, 0.8, 1.0) for the same element in the different bias matrices.
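As a small illustration of this 0.2-per-level gradation (our own sketch, not code from the study), the function below reproduces the perceived-Yellow row of the Appendix B bias channels for any signed bias level; the off-diagonal choices in the perceived-Green and perceived-Red rows are not fully determined by this rule and are taken directly from Appendix B instead.

```python
def perceived_yellow_row(level: int):
    """Bias-channel row for a perceived-Yellow project at a signed bias level.

    Positive levels are optimistic, negative pessimistic (range -5..+5).
    The "report as perceived" probability drops by 0.2 per level, and the
    displaced mass shifts to Green (optimistic) or Red (pessimistic); this
    matches the perceived-Yellow rows of the Appendix B matrices.
    """
    shift = round(0.2 * abs(level), 2)
    stay = round(1.0 - shift, 2)
    if level >= 0:
        return (shift, stay, 0.0)  # (P[report Green], P[report Yellow], P[report Red])
    return (0.0, stay, shift)

for k in range(-5, 6):
    print(k, perceived_yellow_row(k))
```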

Note that the following rules were used to develop the matrices:

1. The probability values in each row add up to unity, thereby covering all possibilities.
2. Optimistic bias never reports a status worse than perceived, and pessimistic bias never reports a status better than perceived, explaining the 0's in the reported G, Y, and R columns.
3. Complete optimistic bias always reports Green, no bias always reports as perceived, and complete pessimistic bias always reports Red. This explains the 1's in the reported columns.
4. Moving from matrix to matrix there are distinct gradations, in 0.2 increments, for the "Perceived Green–Reported Green", "Perceived Yellow–Reported Yellow", and "Perceived Red–Reported Red" matrix elements.

These rules account for all but 16 of the 99 matrix elements in the 11 different bias matrices. While other values could be used for the remaining 16 elements, because of these interval and reporting-constraint rules, any other possible sets of values cannot be far from the values shown in Appendix B. Therefore, plausible probabilities for these remaining matrix elements were selected and, when these 16 elements were perturbed, there was little change in the results, indicating robustness in the model.

4.4.4. Information theory model results overlaid by expected bias distribution

The algebraic representation of the model (Eq. (2)) was executed for each integer bias level. For example, for a high-risk project where the bias matrix for slight optimistic bias (1 on the Likert scale) is used as shown in Appendix B, the reported status probabilities at that level of bias are:

R_1OPT = T · E · B_1OPT

                              | 0.61  0.22  0.17 |   | 1.00  0.00  0.00 |
       = | 0.22  0.34  0.44 | · | 0.27  0.54  0.19 | · | 0.20  0.80  0.00 | = | 0.38  0.37  0.25 |    (5)
                              | 0.14  0.38  0.48 |   | 0.05  0.15  0.80 |

The true and error matrices are those estimated by the software risk experts for a high-risk project. The reading of the result in Eq. (5) is: the true status probabilities of the project (Green, Yellow, Red) are on the left, while the reported status probabilities (which have been subjected to both error and bias) are on the right. Note that the true probability of Green is 0.22 while the reported probability is 0.38, etc.
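The matrix product in Eq. (5) can be checked with a few lines of numpy; the values below are transcribed from Eqs. (3) and (4) and from the slight-optimistic-bias channel in Appendix B, and the variable names are ours rather than the authors'.

```python
import numpy as np

# True status and status-error estimates for a high-risk project (Eqs. (3) and (4)).
T_high = np.array([0.22, 0.34, 0.44])            # P(truly Green, Yellow, Red)
E_high = np.array([[0.61, 0.22, 0.17],
                   [0.27, 0.54, 0.19],
                   [0.14, 0.38, 0.48]])          # rows: true G/Y/R -> perceived G/Y/R

# Bias channel for slight optimistic bias (+1), taken from Appendix B.
B_1opt = np.array([[1.00, 0.00, 0.00],
                   [0.20, 0.80, 0.00],
                   [0.05, 0.15, 0.80]])          # rows: perceived G/Y/R -> reported G/Y/R

perceived = T_high @ E_high                      # P(perceived Green, Yellow, Red)
reported = perceived @ B_1opt                    # P(reported Green, Yellow, Red), Eq. (5)
print(np.round(reported, 2))                     # -> [0.38 0.37 0.25]
```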

[Fig. 2. Probabilistic perceived and reported status for the high-risk project.]

The probabilistic model was executed in order to determine the probability of each reported status at each integer level of bias in the range [−5, +5], for both high-risk and low-risk projects. We focused here on high-risk projects, as such projects typically involve the greatest financial risk, are of longer duration, and are the subject of most software project catastrophes.

The results for high-risk projects are shown graphically in Fig. 2. The horizontal lines represent the true status probabilities for Red, Yellow, and Green, respectively. The curved lines with symbols represent the probabilistic reported status. At bias level 0, the symbols represent perceived status probabilities with no bias applied (i.e., reported status is subject only to error), while the symbols at non-zero bias levels represent reported status probabilities at various levels of either optimistic or pessimistic bias. At completely optimistic bias (+5), projects are always reported Green, while at completely pessimistic bias (−5), projects are always reported Red.

Table 7
Reporting discrepancies at various bias levels for the high-risk project

Bias level | Bias type | Observation
−3 | Pessimistic | About 67% of projects are reported Red when 44% truly are
−2 | Pessimistic | PMs report fairly accurately at this bias level—bias offsets error to some extent; however, Red is somewhat over-reported and Green is somewhat under-reported
−1 | Pessimistic | PMs report fairly accurately at this bias level—bias almost exactly offsets error
0 | None | PMs are most likely to report projects as Yellow, when they are really most likely Red
+1 | Optimistic | PMs are about as likely to report projects Green or Yellow, but least likely to report projects Red. The true incidence of Red projects is almost two times greater than the reported frequency
+2 | Optimistic | PMs are more than twice as likely to report projects Green as they are Red, when the opposite is true
+3 | Optimistic | PMs are about five times as likely to report projects Green as they are Red
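For readers who want to reproduce the basis for Table 7, the following sketch (our own illustration, with the bias channels for levels −3 to +3 transcribed from Appendix B) sweeps the model of Eq. (2) across bias levels and prints the reported-status probabilities for the high-risk project.

```python
import numpy as np

T_high = np.array([0.22, 0.34, 0.44])            # true P(Green, Yellow, Red), Eq. (3)
E_high = np.array([[0.61, 0.22, 0.17],
                   [0.27, 0.54, 0.19],
                   [0.14, 0.38, 0.48]])          # status-error channel, Eq. (4)

# Bias channels for levels -3..+3, transcribed from Appendix B
# (rows: perceived G/Y/R -> reported G/Y/R).
bias_channels = {
    -3: [[0.40, 0.20, 0.40], [0.00, 0.40, 0.60], [0.00, 0.00, 1.00]],
    -2: [[0.60, 0.20, 0.20], [0.00, 0.60, 0.40], [0.00, 0.00, 1.00]],
    -1: [[0.80, 0.15, 0.05], [0.00, 0.80, 0.20], [0.00, 0.00, 1.00]],
     0: [[1.00, 0.00, 0.00], [0.00, 1.00, 0.00], [0.00, 0.00, 1.00]],
     1: [[1.00, 0.00, 0.00], [0.20, 0.80, 0.00], [0.05, 0.15, 0.80]],
     2: [[1.00, 0.00, 0.00], [0.40, 0.60, 0.00], [0.20, 0.20, 0.60]],
     3: [[1.00, 0.00, 0.00], [0.60, 0.40, 0.00], [0.40, 0.20, 0.40]],
}

for level, B in sorted(bias_channels.items()):
    reported = T_high @ E_high @ np.array(B)     # R = T * E * B, Eq. (2)
    g, y, r = np.round(reported, 2)
    print(f"bias {level:+d}: P(report Green)={g}, Yellow={y}, Red={r}")
```

For example, at bias level −3 the sweep yields about a 0.67 probability of a Red report against a true Red probability of 0.44, consistent with the first row of Table 7.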

In order to gauge the range of bias that one might expect to find in a high-risk project, and the implications for project status reporting, we combined the results of this probabilistic model with the previously computed bias expectation statistic. A box plot of the bias expectation was overlaid on the information theory results at the bottom of Fig. 2. In this box plot, the centerline is the median and the divisions are quartiles; however, the box plot comparison does not include the 39% of cases for which no bias is applied.

We summarized our observations in Table 7. It shows what happens as we move from moderate pessimistic to moderate optimistic bias (−3 to +3). For optimistic bias, the combination of error and bias resulted in very inaccurate reporting, even with small amounts of bias. A contrary result is seen for pessimistic bias; the reporting discrepancy is relatively benign, as the combination of error and bias tend to offset one another.

We also observed that in the bias range [−2, +2], Yellow reports can be believed. In the bias range [−1, −2], all reports can be believed, while in the range [+1, +5] Red and Green reports should not be believed. Comparing the box plot to the probabilistic model, we can infer the following for high-risk project status reports:

1. About 25% of applied bias is optimistic and greater than 2, where the most prevalent report is decidedly Green, even though the true state probabilities indicate that projects are most likely to be in a Red or Yellow state.
2. About 25% of applied bias is optimistic between 1 and 2, where Green and Red reports become the opposite of reality. Green reporting is most prevalent here, even though projects are more often in the Red state. Here, the reported incidence of Yellow projects is close to the true state probability.
3. About 20% of applied bias is optimistic between 0 and 1, where Red starts to become the least frequently reported status. Since the bias here is neutral to slightly positive, reporting error is the principal driver of project status distortion.
4. About 10–15% of applied bias is pessimistic in the range [−1, −2], unwittingly providing fairly accurate reports, because PM status errors offset their bias.

Therefore, only about 10–15% of biased PM reports appear to be accurate and reliable. These results address Research Question 4.

5. Summary, limitations, and implications

5.1. Summary

While previous research has suggested that PMs may intentionally bias their status reports, we attempted to ascertain the reasons why they might behave in this manner. Two other key contributions that distinguish our work in this area are: (1) the consideration of both pessimistic and optimistic bias, and (2) the computation of an expected level of bias based on empirical data obtained from PMs.

Our data suggested that PMs are likely to apply bias in both low- and high-risk projects, and that when PMs do bias, they are twice as likely to bias optimistically. While the expected level of bias is slight and tends to be more optimistic than pessimistic, it is important to recognize that the expected level of bias represents the central tendency, and that the actual distribution of bias observed was broad, meaning that individual PMs may have applied bias that was much more optimistic or pessimistic than the overall expected level of bias we calculated. Thus, it is critical for the senior executive to consider the personality and behavior of individual PMs before coming to any conclusion about the accuracy of their status reports.

Our results show that even at low levels of optimistic bias, it combines with status errors to make reports decidedly overoptimistic. Even in cases where no bias is applied, status error results in skewed status reports. Optimistic bias leads to status reports that are very different from reality, while pessimistically biased status reports tend to be accurate because bias offsets error. When PMs apply bias, only 10–15% of high-risk

software project management status reports seem to be accurate, produced only when pessimistic PM bias offsets PM status error.

5.2. Limitations

Since we asked PMs to estimate the percentage of PMs that apply pessimistic, optimistic, and no bias, one limitation deals with the extent to which PMs can be accurate in making such estimates. PMs are, however, able to assess the behavior of other PMs by virtue of having worked with them on past projects and having observed their behavior. Thus, while the bias estimates may be imperfect, we believe that they are more accurate than what we would have obtained by asking PMs to report on their own application of bias. The effects of indirect questioning have been shown to be less severe than those of self-reporting, but they may still exist.

Although the bias matrices used in the model construction appear reasonable and produce internally consistent results, no model will exactly represent behavior. This limitation is recognized in any model-building research. Additionally, the reported status probabilities combined two independent empirical studies. However, even with this limitation, the results showed that an information theory method holds promise for deriving important insights into status reporting.

Our results may also only be valid at one point in the project lifecycle (the end of the design phase), as the experts and PMs were given this point in the lifecycle for which to answer questions about the probabilities.

This research utilized a convenience sample for its subject source. The subjects were all experienced PMs, but they were not randomly selected. As a result, the findings may not be representative of the entire population of software PMs.

5.3. Implications

The most frequently cited reason for optimistic biasing was reluctance to transmit bad news. Perhaps methods could be developed by upper management to reward accurate reporting rather than penalize bearers of bad news. Although the reluctance to transmit bad news has been examined in previous research, methods to overcome it and to deal with the problem of biased reports have not.

Our research has demonstrated that the amount of inaccuracy in a status report is sensitive to the amount of optimistic bias of the PM. Unfortunately, it appears that if a PM applies bias, it is twice as likely to be optimistic as pessimistic. Furthermore, the applied bias is often strongly optimistic, resulting in status reports that can be quite distorted and misleading.

6. Conclusions

We have shown that while both optimistic and pessimistic biasing occurs, optimistic biasing tends to outweigh pessimistic biasing. The impact of optimistic biasing appears to be very deleterious to reporting accuracy, and senior management should take all reasonable steps to counteract it. Attacking the primary reasons for biasing requires an introspective examination to determine if the organizational culture or the behaviors of specific executives are contributing to the problem. Our data suggest that PMs employ optimistic biasing because of fear and a wish to impress management. Executives need to establish environments where bad news is accepted as a norm. In some organizations, employees are explicitly asked not to spread "fear, uncertainty, and doubt" on a project. When the organizational culture is "good news only" in nature, the PM has an incentive to optimize the project's status report in the hope that the situation may improve.

Deciding when and how to intervene in response to bad news is critical. By consistently overreacting to Yellow or Red reports with micromanagement, the executive may create or reinforce an environment for optimistic bias. The executive must continually stress the need for realistic reports and the importance of not being surprised after it is too late to intervene effectively.

PMs should be made aware of the impact that even a slight optimistic bias may have on the accuracy of their reported status, and encouraged to err on the side of caution or pessimism when producing a status report. Ironically, our research also suggested that the PM who is pessimistic gives the most accurate reports, but by accident. In discussing our early findings, one CIO at a Fortune 100 company interpreted our results as validation for his recruiting strategy: he tried to eliminate overly optimistic PMs, preferring to hire those who would furnish accurate reports.

Appendix A. Question format used to assess bias



Appendix B. Pessimistic and optimistic bias channels

For each bias level, the entries below give the probabilities of reporting (Green, Yellow, Red), conditional on the perceived status G, Y, or R.

Completely pessimistic (−5):  G -> (0.00, 0.00, 1.00);  Y -> (0.00, 0.00, 1.00);  R -> (0.00, 0.00, 1.00)
Heavily pessimistic (−4):     G -> (0.20, 0.05, 0.75);  Y -> (0.00, 0.20, 0.80);  R -> (0.00, 0.00, 1.00)
Moderately pessimistic (−3):  G -> (0.40, 0.20, 0.40);  Y -> (0.00, 0.40, 0.60);  R -> (0.00, 0.00, 1.00)
Modestly pessimistic (−2):    G -> (0.60, 0.20, 0.20);  Y -> (0.00, 0.60, 0.40);  R -> (0.00, 0.00, 1.00)
Slightly pessimistic (−1):    G -> (0.80, 0.15, 0.05);  Y -> (0.00, 0.80, 0.20);  R -> (0.00, 0.00, 1.00)
No bias (0):                  G -> (1.00, 0.00, 0.00);  Y -> (0.00, 1.00, 0.00);  R -> (0.00, 0.00, 1.00)
Slightly optimistic (+1):     G -> (1.00, 0.00, 0.00);  Y -> (0.20, 0.80, 0.00);  R -> (0.05, 0.15, 0.80)
Modestly optimistic (+2):     G -> (1.00, 0.00, 0.00);  Y -> (0.40, 0.60, 0.00);  R -> (0.20, 0.20, 0.60)
Moderately optimistic (+3):   G -> (1.00, 0.00, 0.00);  Y -> (0.60, 0.40, 0.00);  R -> (0.40, 0.20, 0.40)
Heavily optimistic (+4):      G -> (1.00, 0.00, 0.00);  Y -> (0.80, 0.20, 0.00);  R -> (0.75, 0.05, 0.20)
Completely optimistic (+5):   G -> (1.00, 0.00, 0.00);  Y -> (1.00, 0.00, 0.00);  R -> (1.00, 0.00, 0.00)

References

[1] T.K. Abdel-Hamid, Understanding the '90% Syndrome' in software project management: a simulation-based case study, The Journal of Systems and Software 8 (4), 1988, pp. 319–330.
[2] F.P. Brooks, No silver bullet: essence and accidents of software engineering, Computer 20 (4), 1987, pp. 10–19.
[3] F.P. Brooks, The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition, Addison-Wesley, Reading, Massachusetts, 1995.
[4] R. Buehler, D. Griffin, Planning, personality, and prediction: the role of future focus in optimistic time predictions, Organizational Behavior and Human Decision Processes 92, 2003, pp. 80–90.
[5] R. Burris, Get the green light, The Wight Line 7 (2), 1994, pp. 6–9.
[6] J. Chung, G.S. Monroe, Exploring social desirability bias, Journal of Business Ethics 44 (4), 2003, pp. 291–302.
[7] C. Coulter, Multiproject management and control, Cost Engineering 32 (10), 1990, pp. 19–24.
[8] R.X. Cringely, When disaster strikes IS, Forbes ASAP, 1994, pp. 60–64.
[9] T. DeMarco, Controlling Software Projects, Yourdon Press, New York, 1982.
[10] R.J. Fisher, Social desirability bias and the validity of indirect questioning, Journal of Consumer Research 20, 1993, pp. 303–315.
[11] W.W. Gibbs, Software's chronic crisis, Scientific American 271 (3), 1994, pp. 86–95.
[12] B. Hughes, M. Cotterall, Software Project Management, McGraw Hill International, London, 1999.
[13] J.J. Jiang, G. Klein, Risks to different aspects of system success, Information and Management 36, 1999, pp. 264–272.
[14] M.-S. Jo, Controlling social-desirability bias via method factors of direct and indirect questioning in structural equation models, Psychology & Marketing 17 (2), 2000, pp. 137–148.
[15] M. Jorgensen, D.I.K. Sjoberg, Impact of effort estimates on software project work, Information and Software Technology 43, 2001, pp. 939–948.

[16] M. Jorgensen, D.I.K. Sjoberg, The impact of customer expectation on software development effort estimates, International Journal of Project Management 22, 2004, pp. 317–325.
[17] P.J. Kirs, K. Pflughoeft, G. Kroeck, A process model cognitive biasing effects in information systems development and usage, Information & Management 38 (3), 2001, pp. 153–165.
[18] T. Moynihan, Coping with client-based 'People-Problems': the theories-of-action of experienced IS/software project managers, Information & Management 39, 2002, pp. 377–390.
[19] J.C. Nunnally, Psychometric Theory, McGraw-Hill, New York, 1978.
[20] E.C. O'Neal, D.W. Levine, J.F. Frank, Reluctance to transmit bad news when the recipient is unknown: experiments in five nations, Social Behavior and Personality 7 (1), 1979, pp. 39–47.
[21] J.S. Reel, Critical success factors in software projects, IEEE Software 16 (3), 1999, pp. 18–23.
[22] J. Ropponen, K. Lyytinen, Components of software development risk: how to address them? A project manager survey, IEEE Transactions on Software Engineering 26 (2), 2000, pp. 98–112.
[23] R. Sabherwal, M.K. Sein, G.M. Marakas, Escalating commitment to information systems projects: findings from two simulated experiments, Information & Management 40 (8), 2003, pp. 781–798.
[24] K. Sengupta, T.K. Abdel-Hamid, The impact of unreliable information on the management of software projects: a dynamic decision perspective, IEEE Transactions on Systems, Man and Cybernetics 26 (2), 1996, pp. 177–189.
[25] H.J. Smith, M. Keil, G. DePledge, Keeping mum as the project goes under: toward an explanatory model, Journal of Management Information Systems 18 (2), 2001, pp. 189–227.
[26] A. Snow, M. Keil, A framework for assessing the reliability of software project management reports, Engineering Management Journal 14 (2), 2002, pp. 20–26.
[27] A.P. Snow, M. Keil, The challenge of accurate project status reporting: a two stage model incorporating status errors and reporting bias, in: Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS-34), Kihei, Hawaii, January 3–6, 2001, pp. 1–10.
[28] A.P. Snow, M. Keil, The challenge of accurate software project status reporting: a two-stage model incorporating status errors and reporting bias, IEEE Transactions on Engineering Management 49 (4), 2002, pp. 491–504.
[29] A. Strauss, J. Corbin, Basics of Qualitative Research: Grounded Theory Procedures and Techniques, Sage Publications, Newbury Park, CA, 1990.
[30] S. Sudman, N.M. Bradburn, Response Effects in Surveys: A Review and Synthesis, Aldine, Chicago, 1974.
[31] L. Wallace, M. Keil, A. Rai, How software project risk affects project outcomes: an investigation of the dimensions of risk and an exploratory model, Decision Sciences 35 (2), 2004, pp. 289–321.
[32] L. Wallace, M. Keil, A. Rai, Understanding software project risk: a cluster analysis, Information & Management 42 (1), 2004, pp. 115–125.
[33] B. Whittaker, What went wrong? Unsuccessful information technology projects, Information Management & Computer Security 7 (1), 1999, pp. 23–29.
[34] R.W. Zmud, Management of large software development efforts, MIS Quarterly 4 (2), 1980, pp. 45–55.

Andrew P. Snow is an associate professor and director of the McClure School of Information and Telecommunication Systems at Ohio University. He received his bachelor's and master's degrees in electrical engineering from Old Dominion University, and his PhD in information science from the University of Pittsburgh. His research interests include telecommunication network resiliency and IT project management. His publications appear in journals such as IEEE Transactions on Reliability, IEEE Transactions on Engineering Management, Journal of Networks and Systems Management, Telecommunications Policy, Journal on Mobile Networks and Applications, IEEE Computer, and the International Journal of Industrial Engineering. Prior to returning to university for an academic career, he held positions as electronics engineer, member of the technical staff, manager, director, vice president, general manager, and chairman in telecommunications carrier, systems integration, and consulting firms.

Mark Keil is the department chair and board of advisors professor in computer information systems at Georgia State University. His research focuses on software project management, with particular emphasis on understanding and preventing software project escalation. His research is also aimed at providing better tools for assessing software project risk and removing barriers to software use. His research has been published in MIS Quarterly, Sloan Management Review, Communications of the ACM, Journal of Management Information Systems, Information & Management, IEEE Transactions on Engineering Management, Decision Support Systems, and other journals. He has served as an associate editor for the MIS Quarterly and as co-editor of The DATA BASE for Advances in Information Systems. He currently serves on the editorial boards of the Journal of Management Information Systems, Decision Sciences, and IEEE Transactions on Engineering Management.

Linda Wallace is an associate professor in the Department of Accounting and Information Systems at Virginia Polytechnic Institute and State University. She obtained her PhD in computer information systems from Georgia State University in 1999. Her research interests include software project risk, information security, knowledge communities, and agile software development. Her research has been accepted for publication in Decision Sciences, Communications of the ACM, Information & Management, IEEE Security & Privacy, Decision Support Systems, and Journal of Systems and Software.
