
Measuring the Success of Requirements Engineering Processes

Khaled El Emam*
Nazim H. Madhavji

*School of Computer Science, McGill University
Centre de Recherche Informatique de Montreal (CRIM)

Abstract

Central to understanding and improving requirements engineering processes is the ability to measure requirements engineering success. This paper describes a research study whose objective was to develop an instrument to measure the success of requirements engineering processes. The instrument developed consists of 32 indicators that cover the two most important dimensions of requirements engineering success. These two dimensions were identified during the study to be: quality of requirements engineering products and quality of requirements engineering service. Evidence is presented demonstrating that the instrument has desirable psychometric properties, such as high reliability and demonstrated content and construct validity.

1 Introduction

Existing claims [6][10] and empirical evidence [3][5][22] support the notion that an inadequately performed requirements engineering (henceforth RE) process is positively associated with software system failure. There would therefore be an economic as well as a software quality payoff in improving RE practices. A prerequisite to improving RE practices, however, is a fundamental understanding of RE processes and of the factors that lead to their success or cause their failure.

An understanding of the RE process can be expressed in the form of a theory, the purpose of which is to specify the determinants of RE success. Considering that RE success is a central construct in any such theory, and that sound theories with empirical support ought to provide a basis for improving RE practice, it is of considerable interest and utility to develop an appropriate instrument to measure RE success.

The objective of the research reported in this paper was to develop an instrument for measuring RE success. RE success is defined as the extent to which the outcomes of the RE phase serve the needs of, and provide a basis for ensuring the success of, all subsequent activities, individually and in aggregate, related to the software system throughout the software system's lifetime. These activities include: design, coding, testing, putting into operation, and post-deployment evolution. The domain of analysis of this research is business information systems that are fully customized for individual user organizations.

Briefly, the main result of this research is a subjective instrument with 32 indicators. The indicators cover the two most important dimensions of RE success, which were found to be the quality of RE service and the quality of RE products. It is further shown that the instrument has desirable psychometric characteristics, including high reliability and demonstrated content and construct validity.

The significance of this result is that it constitutes, as far as we know, the first comprehensive effort to develop an RE success instrument. In terms of application to practice, the instrument can be used to gauge the success of RE processes in an organization for assessment and comparison purposes. In terms of application to research, the instrument can be utilized for the measurement of RE success in survey and experimental research.

The next section of this paper presents a summary review of existing indicators of RE success. This review provides the theoretical foundations of the instrument. Following that, section 3 describes an extensive empirical investigation of the RE success

* This research has been supported, in part, by the Macroscope Project and NSERC Canada.

0-8186-7017-7/95 $04.00 © 1995 IEEE
concept and of indicators for its assessment. Section 4 constitutes a detailed account of the results, and section 5 concludes the paper.

2 Theoretical Foundations

Previous software engineering works that have identified indicators for the assessment of RE success were reviewed and summarized in our base report [12]. The purpose of the review was to provide a sound theoretical foundation for our instrument development efforts.

It was evident from this review that the predominant conceptualization of RE success in the software engineering literature has been the quality of RE products. Thus, a vast majority of the proposed/used indicators assess some aspect of RE product quality. Example indicators are: the extent to which every requirement stated in the requirements specification has only one interpretation, the extent to which everything the software is supposed to do is included in the specification, and the number of detected errors in a requirements document that are categorized as being inclusions of implementation facts.

Although there is a predominant focus on the quality of RE products as the primary dimension of RE success, a minority of the reviewed articles suggest that there may be other dimensions. For example, Dawson [11] writes about process effectiveness and cycle time. Cordes and Carver [7] discuss necessity, which is defined as the extent to which information that is unnecessary for solution development is included in the requirements document. This is somewhat analogous to the definition of correctness that Davis [10] presents. Zagorsky [23] notes that CASE tool implementation enhances perceived productivity and effectiveness of RE activities. All of the above references mention some aspect of the cost-effectiveness of the RE process. Also, Basili and Weiss [4] discuss the number of changes made to a requirements document, which is referred to as requirements maturity by Farbey [14]. Number of changes and requirements maturity address another aspect of cost-effectiveness, namely the amount of rework. Furthermore, customer satisfaction [11] and user satisfaction [23] have previously been considered as measures of RE success.

Therefore, it seems that RE success is not unidimensional (i.e., there are other dimensions apart from the quality of RE products). More specifically, it can be seen that there are at least two other dimensions, one concerned with user/customer satisfaction and one concerned with productivity/cost-effectiveness of the RE process.

3 Research Method

In this research, the RE process studied was the requirements engineering phase of a software system development method. The method (henceforth referred to as method X) has been developed and is marketed by company Y. Company Y is an information systems consultancy firm based in Canada with clients worldwide. The ultimate objective of the RE phase of method X is to determine the cost-effectiveness of the information system to be developed and to make a go/no-go decision based on it.

The research method followed in the study presented here draws from both the normative and the descriptive literature. The normative literature prescribes the procedures to be used in instrument development, for example [16][18][21]. The descriptive literature specifies the procedures used by particular authors for developing instruments, for example [2][9][15]. This research method consisted of two steps(1): (a) define the RE success content domain, and (b) instrument development and pretest.

3.1 Define the RE Success Content Domain

The objective of this step was to identify the dimensions of RE success and a representative set of criteria for the assessment of RE success. Three distinct investigations were conducted consecutively.

To start, 33 interviews with 30 experts were conducted. For the purposes of the results presented in this paper, the interviews focused on answering one question: "What criteria are being used to assess RE success?". Initially, an accumulated list of the indicators that were mentioned in the reviewed literature was used [12]. This list was subsequently modified, refined, and reworded as the data collection progressed.

Each expert interviewed has been involved in at least one RE phase (and some in up to 35+ different RE phases) in one or more of the following roles: project manager, lead architect, analyst, coach, and/or auditor. The interviewee characteristics (their backgrounds and location) are summarized in the "Interviews" column(2) of Figure 1. Figure 1 also

(1) All informants in all activities of the research method had a common understanding of the RE process since RE phase objectives and activities are defined in method X.

(2) It should be noted that an interviewee can be characterized as having an intersection of backgrounds. Therefore, the total does not add up to 100%. For example, some senior analysts take on the role of a project manager.

shows the informant characteristics for some of the other steps of the research method; these will be discussed below. The outcome of the first step was a set of 34 criteria for the assessment of RE success. These criteria served as inputs to the following steps.

Subsequently, we checked the completeness of the criteria, identified the dimensions of RE success, and prioritized the criteria tapping each dimension. For this, ten experts were interviewed. The characteristics of these ten experts are summarized in the column labeled "Cat. & Prio." of Figure 1.

The interviewees were initially requested to put the 34 RE success criteria into two or more categories such that the criteria in each category were most similar to each other and most dissimilar from the other categories. The criteria were randomly ordered before each interview. The interviewees were subsequently requested to provide an interpretation of each category, and to comment on its completeness and the existence of any overlapping criteria (i.e., criteria that were perceived to be exactly the same).

For the prioritization task, the interviewees were requested to rank order the criteria in each category in terms of how well they assess their interpretation of the category. For example, if the category was interpreted as "user satisfaction", then the interviewee was requested to rank order the criteria in terms of how well they assess "user satisfaction".

The results from all the ten interviews were initially cluster analyzed, and subsequently prioritized. For the cluster analysis, a distance matrix was constructed. The distance matrix was derived from an incidence matrix as follows: d_ij = 1 - s_ij, where d_ij is the distance between criteria i and j, and s_ij is the similarity between criteria i and j. Similarity is defined as the proportion of interviewees who placed criteria i and j in the same category. Miller [17] shows that this is a metric distance matrix, and hence satisfies the criteria desirable for the application of numeric cluster analysis algorithms [1]. Subsequently, multiple hierarchical agglomerative clustering algorithms [1] were used to identify the various dimensions or facets of RE success. Confidence in the emerging dimensions is increased if similar clusters are extracted using the different algorithms.

For the criteria in each of these clusters, a prioritization was performed. This prioritization was based on the rank ordering provided by the interviewees. For each criterion, the total proportion of interviewees who have ranked it higher than all the other criteria in the same cluster was used as a preference measure. This measure is a direct estimate of an ordinal scale, and hence can be used for the purposes of ranking.

A survey was then conducted to prioritize the dimensions of RE success. The dimensions were the outcome of the categorization performed during the step described above. For the survey, a questionnaire was given to 25 senior company Y consultants worldwide. A total of 18 responses were received, giving a response rate of 72%. A summary of respondent characteristics is presented in Figure 1 under the heading "Survey". The questionnaire, among other things, requested that the respondents rank order the dimensions of RE success in terms of how important each is perceived to be as an indicator of overall RE success.

The rank orderings from the survey were transformed into a preference measure using the total proportion of respondents who have ranked a dimension higher than the other dimensions. The data analysis from this survey indicated which dimension of RE success is considered the most important, and which the least important.

3.2 Instrument Development and Pretest

The starting point for instrument development was the 34 criteria that were already formulated. For this instrument, a semantic differential scale was utilized [19]. This scale consists of a concept and adjective pairs at the extremes of a 7-point scale. Each adjective pair will be referred to as an item.

Each criterion was converted to one or more concepts. Two items were developed for each concept. Each item pair was reverse ordered in the pretest instrument to alleviate the tendency of respondents to simply mark straight down a column for the items

    Charac.                 Interviews   Cat. & Prio.   Survey
    Project Mgmt.           70%          70%            78%
    Technical               43%          50%            89%
    Research & Education    33%          50%            17%
    ----------------------------------------------------------
    Canada                  91%          100%           61%
    U.S.A.                  9%           0%             16.66%
    France                  0%           0%             11.11%
    Australia               0%           0%             11.11%
    ----------------------------------------------------------
    Total                   30           10             18

Figure 1: Summary of informant characteristics (background & location) by research study activity.
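The distance-matrix construction and agglomerative grouping described in section 3.1 can be sketched as follows. Everything here is illustrative: the criterion labels, the three mock categorizations, and the 0.5 merge threshold are invented, and the toy single-linkage loop merely stands in for the multiple hierarchical algorithms of [1].

```python
from itertools import combinations

# Hypothetical data: each interviewee partitions a handful of criteria
# into categories of similar criteria (the study used 34 criteria).
categorizations = [
    [{"M1", "M8", "M20"}, {"M5", "M11"}],
    [{"M1", "M8"}, {"M20", "M5", "M11"}],
    [{"M1", "M8", "M20"}, {"M5", "M11"}],
]
criteria = ["M1", "M8", "M20", "M5", "M11"]

def similarity(a, b):
    """s_ij: proportion of interviewees placing criteria a and b together."""
    together = sum(any(a in g and b in g for g in cats)
                   for cats in categorizations)
    return together / len(categorizations)

# The distance matrix of the text: d_ij = 1 - s_ij.
distance = {frozenset(p): 1 - similarity(*p)
            for p in combinations(criteria, 2)}

def single_linkage(items, dist, threshold):
    """Toy agglomerative clustering: repeatedly merge the two closest
    clusters until the smallest inter-cluster distance exceeds threshold."""
    clusters = [{i} for i in items]
    d = lambda c1, c2: min(dist[frozenset((a, b))] for a in c1 for b in c2)
    while len(clusters) > 1:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda p: d(clusters[p[0]], clusters[p[1]]))
        if d(clusters[i], clusters[j]) > threshold:
            break
        clusters[i] |= clusters.pop(j)
    return clusters
```

With the mock data above, the loop recovers two groups, {M1, M8, M20} and {M5, M11}, mirroring how clusters of co-categorized criteria suggested the dimensions of RE success.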

covering a particular concept. Standardized instructions were also included with each instrument.

The initial instrument was administered to collect data on the success of RE processes. A total of 32 data points were collected, each data point representing a particular RE process. The characteristics of the respondents and the RE processes are summarized in Figure 2.

As part of the instrument pretest, the reliability, construct validity, and effectiveness of the instrument were evaluated. Each of these is defined below.

Reliability is defined as the extent to which an experiment, test, or any measuring procedure yields the same results on repeated trials, and is concerned with the problem of random measurement error [18]. The reliability of the instrument was evaluated using the Cronbach alpha coefficient [8].

Construct validity is an operational concept that asks whether the scales chosen are describing the true construct(s) [18]. In this study, the construct is RE success. Construct validity includes two other concepts: convergent validity and discriminant validity. Convergent validity determines whether the scales chosen are measuring one underlying construct. Discriminant validity determines whether a scale differentiates between constructs. Construct validity of the instrument was evaluated using principal components analysis and item-total correlations [18]. For the item-total correlations, each scale score was subtracted from the total to avoid a spurious part-whole correlation, and the correlation of each scale with the new score was computed.

Effectiveness refers to the extent to which a scale is measuring a construct relative to the other scales that are measuring the same construct. Attaining a reasonable level of effectiveness is important so as not to have a lengthy instrument. Multiple criteria were utilized to determine effectiveness. The purpose was to eliminate concepts from the instrument without negatively affecting its reliability and validity. The following criteria were utilized:

1. Using the priorities assigned in an earlier step: if a concept was tapping a dimension that was considered to be of low priority, then it was a strong candidate for elimination. If a concept had a low priority in tapping a particular dimension, then it was a strong candidate for elimination.

2. Using the results of reliability analysis: if removal of a concept from the instrument resulted in a large increase in the value of Cronbach alpha, then it was a strong candidate for elimination.

3. Using the results of principal components analysis: if a concept loaded relatively low or did not load on its associated factor, then it was a strong candidate for elimination.

4. Using the item-total correlations: if a concept correlated relatively low with the total, then it was a strong candidate for elimination.

The above four criteria were used in combination to determine which concepts were to be eliminated. These criteria ensure that the instrument retains a good level of reliability and validity. The results of this analysis are presented in the following section.

4 Results

The results are presented in three parts: (a) the dimensions of RE success that have been derived, (b) the priority of each criterion and dimension within the RE success content domain, and (c) the instrument and its characteristics based on the pretest study.

4.1 Dimensions of RE Success

A total of 34 criteria for assessing the success of the RE process have been identified. These are shown in Figure 3, and are numbered M1-M34.

The cluster analysis of the criteria yielded five clusters as shown in Figure 4 (the numbers in Figure 4

    Charac. Type               Attribute                       Value
    Location of                Canada                          62.5%
    Respondents                U.S.A.                          25%
                               Australia                       12.5%
    Position of Respondents    Project/Client Management       41%
    in RE Phase                Technical                       37.5%
                               Auditing                        3%
                               Coaching                        25%
    Main Business              Government                      37.5%
    of Organization            Retail, Dist. & Transp.         15.6%
                               Aerospace                       15.6%
                               Financial/Insurance             9.4%
                               Communications                  6.25%
                               Other                           15.6%
    Functional Area            Finance                         22%
    of Information System      Purchasing                      15.6%
                               Sales/Marketing                 12.5%
                               Inventory Control & Planning    12.5%
                               Transportation/Logistics        12.5%
                               Other                           25%
    Number of IS Personnel     Average                         5.6
                               Range                           2-15
    Number of User Personnel   Average                         11.7
    Involved                   Range                           0-120

Figure 2: Summary of the characteristics of the respondents and RE processes for instrument pretest.
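The pretest computations named above, Cronbach's alpha and the part-whole-corrected item-total correlation, can be sketched in a few lines of Python. The scores below are invented, and `unreverse` is a hypothetical helper reflecting the reverse-ordered item pairs of the 7-point semantic differential scale described in section 3.2.

```python
import statistics

def unreverse(score, scale_max=7):
    """Recode a reverse-ordered semantic-differential item (7-point scale)."""
    return scale_max + 1 - score

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(r) for r in zip(*items)]
    return (k / (k - 1)) * (1 - sum(statistics.pvariance(i) for i in items)
                            / statistics.pvariance(totals))

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

def corrected_item_total(items, idx):
    """Correlate item idx with the total computed WITHOUT that item,
    avoiding the spurious part-whole correlation mentioned in the text."""
    rows = list(zip(*items))
    return pearson([r[idx] for r in rows],
                   [sum(r) - r[idx] for r in rows])

# Invented scores: 4 items x 6 respondents on the 7-point scale.
scores = [
    [7, 6, 5, 2, 3, 6],
    [6, 6, 5, 1, 2, 7],
    [7, 5, 4, 2, 2, 6],
    [6, 7, 5, 1, 3, 6],
]
```

For these invented scores the alpha is about 0.98, in the same region as the 0.9711 and 0.9071 reported later in Figure 5; Nunnally [18] is the source of the >0.9 guideline cited there.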

M1  The awareness of users of the business changes required in order to implement the recommended solution.
M2  The clarity of the links between the (process[a] and data) models and the system objectives[b].
M3  The clarity of the business process in the architecture.
M4  The thoroughness with which solutions alternative to the recommended solution were explored.
M5  The cost and effort compared to other similar requirements engineering phases in the same organization or similar organizations.
M6  The amount of changes to the RE documentation.
M7  The users' reaction to the cost estimate.
M8  The willingness of the users to defend the recommended solution in front of executive management.
M9  The completeness of coverage of the cost/benefits analysis.
M10 The fraction of the cost of the requirements engineering phase compared to the (estimated) total system development cost.
M11 The amount of deliverables that were not used in formulating the recommended solution and the cost/benefits analysis.
M12 The amount of benefits that are expected to be brought to the organization by implementing the recommended solution compared to alternative solutions.
M13 The clarity of the links between the (process[a] and data) models and the key issues[c].
M14 The adequacy of the diagnosis of the existing system.
M15 The soundness of the approach(es) taken to quantify the intangible benefits.
M16 The clarity of the links between the weaknesses and strengths of the existing system and the weaknesses and strengths of the recommended solution.
M17 The extent to which the (process[a] and data) models conform to the rules of modeling.
M18 The extent to which key issues[c] have been resolved.
M19 The extent to which the users have understood what the new system will do and will not do.
M20 The extent of user consensus on the recommended solution.
M21 The extent to which top management is convinced that the expected benefits are likely to materialize.
M22 The relationship between the users and the requirements engineering staff.
M23 Whether the users have approved all the documentation.
M24 The fit between the available funding profile and the necessary funding profile to implement the recommended solution.
M25 The fit between the architecture and the way the users work.
M26 The ability of the organization to make the necessary changes to implement the recommended solution.
M27 The fit between the recommended solution and the strategic orientation of the organization.
M28 The fit between the recommended solution and the technical orientation of the organization.
M29 The willingness of the organization to make the necessary changes to implement the recommended solution.
M30 The accuracy of the cost estimates compared to the accuracy required by the organization.
M31 The degree of top management support for changes necessary to implement the recommended solution.
M32 The fit between the system architecture and the corporate architecture or information plan.
M33 The degree of match between the functionality of the 1st release of the software system and user expectations.
M34 The extent to which the presentation of the cost/benefits analysis follows the accounting procedures of the organization.

[a] A process model includes processes of the software system (functions) as well as manual work activities of the business process.
[b] These are objectives of the system, not objectives of the software development project, and they are supposed to be specific and measurable. Example objectives are "to improve the accuracy of the planning process by 10%", "to provide the ability to find a volume in the automated catalogue in less than 5 seconds, 95% of the time", and "to satisfy 95% of the requests for bibliographic information within 5 minutes".
[c] Key issues are issues that concern the system, not the project. They are critical aspects of the system that require examination; without their resolution the system cannot be completely defined and developed. When an issue can be resolved without requiring a formal examination of alternatives by the decision makers, then it is not a key issue. Example key issues are "the system must be simple and inviting to use and must enable members to find answers to their queries easily. This is to ensure member acceptance of the system. Staff assistance should be required only in exceptional cases.", and "services such as consultation, location of titles, and processing of loans and reservations must continue in the event of malfunctions or failure of the automated system. This is to ensure continuity of service".

Figure 3: Criteria for assessing RE success.

concern prioritization, and will be described below). For each cluster, its interpretation is presented. These interpretations were derived from those provided by the interviewees. Since two of the clusters directly concerned how good the outputs of the RE process are (quality of architecture and quality of cost/benefits analysis), they were grouped into one general cluster by the authors: quality of RE products. Furthermore, since two other clusters seemed to tap the quality of RE service dimension (user satisfaction and commitment, and fit of the recommended solution with the organization), they were also grouped together by the authors.

The resultant three dimensions of RE success are as follows:

1. Cost Effectiveness of the RE Process
   This dimension assesses whether a reasonable amount of resources were consumed during the RE phase.

2. Quality of RE Products
   The quality of RE products covers the quality of the major documents that are produced during the RE phase: quality of architecture and quality of cost/benefits analysis.

3. Quality of RE Service
   This dimension concerns the service provided by the RE staff to the users. The quality of this service is reflected in the extent of user satisfaction and commitment, and the extent to which the recommended solution fits the organization. It is critical to provide a service that satisfies the users and to properly manage their expectations. This is intended to ensure that they will participate in later phases and that they will use the system after it is operational. Furthermore, a software system is considered as part of a business solution. This may necessitate, for instance, new work procedures, changes in organizational structure, and/or changes to relationships with customers and suppliers. Thus, the products of the RE process constitute a recommended organizational change supported by information technology, which must fit the organization and its capacity to implement it.

4.2 Importance of Criteria and Dimensions

The criteria in each cluster are presented in Figure 4

    Cost Effectiveness of RE Process (0.438):
        M5 (1.599); M11 (0.300)
    Quality of RE Products (1.126):
        Quality of Architecture:
            M3 (2.702); M14 (2.597); M13 (2.499); M4 (1.799); M18 (1.498); M17 (0.399)
        Quality of Cost/Benefits Analysis:
            M9 (2.700); M21 : M24 (1.902); M12 (1.698); M30 (1.398); M34 (0.789); M15 (0.702)
    Quality of RE Service (1.438):
        User Satisfaction and Commitment:
            M19 (3.904); M8 (3.800); M1 (2.600); M7 (1.600); M25 : M23 (1.400); M33 (1.104)
        Fit of Recommended Solution With the Organization:
            M26 (1.600); M27 (1.500); M31 (1.200); M32 : M26 (0.600)

Figure 4: The dimensions of RE success, the priorities of the criteria tapping each dimension, and the priorities of each dimension in tapping RE success.
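The preference measures annotated in Figure 4, and the Page's L statistic [20] that section 4.2 uses to test the hypothesized ordering of the dimensions, can be sketched as follows. The four rank orderings are invented (the actual survey had 18 respondents), and significance for L is read from the tables in [20], which this sketch does not reproduce.

```python
# Invented rank orderings (1 = most important) of the three dimensions.
rankings = [
    {"service": 1, "products": 2, "cost": 3},
    {"service": 1, "products": 2, "cost": 3},
    {"products": 1, "service": 2, "cost": 3},
    {"service": 1, "cost": 2, "products": 3},
]

def preference(dim, rankings):
    """Sum, over the other dimensions, of the proportion of respondents
    ranking `dim` higher (a smaller rank number) than that dimension."""
    n = len(rankings)
    others = [d for d in rankings[0] if d != dim]
    return sum(sum(r[dim] < r[other] for r in rankings) / n
               for other in others)

def page_L(rankings, hypothesized_order):
    """Page's L: sum over positions j (1 = hypothesized most important,
    i.e. smallest ranks) of j times that dimension's rank sum.
    A large L supports the hypothesized ordering."""
    return sum(j * sum(r[dim] for r in rankings)
               for j, dim in enumerate(hypothesized_order, start=1))
```

With these mock data, preference("service", rankings) is 1.75 and preference("cost", rankings) is 0.25, the same kind of spread as the 1.438 and 0.438 reported for the real survey; the three preferences always sum to 3, one unit per pair of dimensions.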

in priority order, with the highest priority criteria at the top. A colon in Figure 4 indicates that two criteria are at the same priority. This prioritization indicates how well, compared to other criteria, a particular criterion is a measure of the cluster's interpretation. Next to each criterion is the preference measure value that was used for prioritization. It may be recalled that the total proportion of respondents who have ranked a criterion higher than the other criteria in the same cluster was used as a preference measure.

The data from the survey, which asked respondents to rank order the three dimensions of RE success, were used to determine whether there was any difference among the perceived importance of the dimensions of RE success. Figure 4 shows the preference measures for each of these dimensions. It is evident that quality of RE service is perceived to be the most important dimension of RE success (with a preference measure of 1.438), and the cost effectiveness of the RE process is perceived to be the least important dimension of RE success (with a preference measure of 0.438).

To test the null hypothesis that the three dimensions of RE success are of equal importance versus the alternative hypothesis that the three dimensions of RE success are ordered in the specific sequence described above, a nonparametric statistic is used. The particular statistic is L [20]. The test was conducted at α=0.01, and resulted in rejecting the null hypothesis, hence further supporting the ordering presented above.

4.3 RE Success Instrument

The elimination of concepts using the four criteria mentioned earlier resulted in an instrument with 16 concepts and 32 items (this instrument is available from the authors). The concepts cover the two most important dimensions of RE success: quality of RE service and quality of RE products.

Figure 5 shows the reliability estimates for each of the two dimensions of RE success. As can be seen, the reliability estimates (next to the Cronbach alpha heading) are sufficiently high to recommend usage of this instrument for practical measurement (Nunnally [18] suggests values at least greater than 0.9 for practical measurement).

The results of the principal components analysis are also presented in Figure 5 (item composites were formed by summing the values on item pairs). These results indicate that the structure of the RE success construct matches that originally expected. In particular, it was identified that concepts covering the sub-dimension "fit of the recommended solution with the organization" (M27, M26, M29, M31) and "user satisfaction and commitment" (M1, M8, M20, M22) load highly on the first factor and have low loadings on the second factor. Furthermore, concepts covering the "quality of architecture" (M16, M14, M13, M2) and "quality of cost/benefits analysis" (M9a, M9b, M12, M30) load highly on the second factor and have low loadings on the first factor. These results demonstrate good convergent and discriminant validities.

The factor loadings shown in Figure 5 are in general very high, lending reasonably strong support to the instrument's construct validity. It should be noted, however, that construct validity cannot be claimed until these same results are replicated in subsequent studies. The results presented here provide some initial evidence supporting construct validity, and hence encourage further studies.

Further evidence of construct validity was obtained from the results of the item-total correlations. Overall, 15 out of 16 correlations are above 0.4 and significant at an alpha level of 0.05.

5 Conclusions

The research study described in this paper was based on the premise that it is critical to understand

                      Quality of RE Service   Quality of RE Products
    Mean              77.6333                 79.6250
    Std. Dev.         26.6102                 16.7898
    Cronbach alpha    0.9711                  0.9071
    Std. Error        4.5237                  5.1174
    M1                0.7739
    M8                0.8079
    M20               0.8989
    M22               0.8377
    M27               0.6467                  {0.5550}
    M26               0.6974
    M29               0.8755
    M31               0.8440
    M9a                                       0.6655
    M9b                                       0.6670
    M12                                       0.3962
    M30                                       0.5097
    M2                                        0.7205
    M13                                       0.6801
    M14                                       0.5155
    M16                                       0.8003

Figure 5: Summary statistics, results of reliability analysis, and results of principal components analysis.

the RE process in order to improve it. A central contributor to this understanding is the development of an instrument for measuring RE success. An extensive research investigation described in this paper has resulted in such an instrument. This is, as far as we know, the first comprehensive research effort that has resulted in an RE success instrument.

The potential applications of these results are discussed from the research and practice perspectives. From the research perspective, the concern is primarily with advancing the state of knowledge about RE success. The primary significance of this work for the research community is therefore the existence of a general and standardized instrument for measuring RE success. It is now an easier task for researchers to conduct studies (for example, using survey or experimental empirical research methods) whose purpose is to test hypotheses (such as those reported in [13]) about the effect of existing and emerging practices and tools on RE success.

From the practice perspective, the concern is primarily with improving RE practices in order to attain greater RE success. The instrument for measuring RE success may be applied by practitioners, for example, in the evaluation of RE phase pilot projects. Thus, if an organization is adopting a new requirements engineering method, then the outcome of a pilot project can be evaluated and compared to the baseline value of RE success that is more common within the organization. This would allow management to gauge the benefits of the new method.

While the above list of the applications of this research may not be comprehensive, it is contended that they address important contemporary issues of concern to both researchers and practitioners.

As for future research, greater confidence in this instrument would be established if the evidence supporting its reliability and validity can be replicated in studies conducted by other researchers. It is expected that the instrument can be improved by further testing, for example, by determining the test-retest reliability of the instrument, and by determining its reliability and validity with different samples.

References

[1] M. Aldenderfer and R. Blashfield. "Cluster analysis". Sage Publications, 1984.
[2] J. Bailey and S. Pearson. "Development of a tool for measuring and analyzing computer user satisfaction". In Management Science, 29(5):530-545, May 1983.
[3] V. Basili and B. Perricone. "Software errors and complexity: An empirical investigation". In Communications of the ACM, 27(1):42-52, January 1984.
[4] V. Basili and D. Weiss. "Evaluation of a software requirements document by analysis of change data". In Proceedings of the Fifth International Conference on Software Engineering, pages 314-323, 1981.
[5] B. Boehm. "Software engineering economics". Prentice Hall, 1981.
[6] F. Brooks. "No silver bullet: Essence and accidents of software engineering". In Computer, 20(4):10-19, April 1987.
[7] D. Cordes and D. Carver. "Evaluation method for user requirements documents". In Information and Software Technology, 31(4):181-188, May 1989.
[8] L. Cronbach. "Coefficient alpha and the internal structure of tests". In Psychometrika, pages 297-334, September 1951.
[9] F. Davis. "Perceived usefulness, perceived ease of use, and user acceptance of information technology". In MIS Quarterly, 13(3):319-340, September 1989.
[10] A. Davis. "Software requirements: Objects, functions, and states". Prentice Hall, 1993.
[11] J. Dawson. "Toronto laboratory requirements process reference guide". Technical Report (unpublished), IBM Canada Ltd. Laboratory, 1991.
[12] K. El Emam and N. H. Madhavji. "An instrument for measuring the success of the requirements engineering process in information systems development". Technical Report SE-94.12, School of Computer Science, McGill University, 1994.
[13] K. El Emam and N. H. Madhavji. "A field study of requirements engineering practices in information systems development". In Proceedings of the Second IEEE International Symposium on Requirements Engineering, 1995.
[14] B. Farbey. "Software quality metrics: Considerations about requirements and requirements specifications". In Information and Software Technology, 32(1):60-64, January/February 1990.
[15] B. Ives, M. Olson, and J. Baroudi. "The measurement of user information satisfaction". In Communications of the ACM, 26(10):785-793, October 1983.
[16] F. Kerlinger. "Foundations of behavioral research". Holt, Rinehart, and Winston, 1986.
[17] G. Miller. "A psychological method to investigate verbal concepts". In Journal of Mathematical Psychology, 6:169-191, 1969.
[18] J. Nunnally. "Psychometric theory". McGraw Hill, 1967.
[19] C. Osgood, G. Suci, and P. Tannenbaum. "The measurement of meaning". University of Illinois Press, 1957.
[20] E. Page. "Ordered hypotheses for multiple treatments: A significance test for linear ranks". In Journal of the American Statistical Association, pages 216-230, March 1963.
[21] D. Straub. "Validating instruments in MIS research". In MIS Quarterly, pages 147-169, June 1989.
[22] M. Wu. "Selecting the right software application package". In Journal of Systems Management, pages 28-32, September 1990.
[23] C. Zagorsky. "Case study: Managing the change to CASE". In Journal of Information Systems Management, pages 24-32, Summer 1990.