
Software Requirements Engineering
Lecture 16
Writing Better Requirements
DRAFT

SESD-2213
FALL 2021
Today’s Agenda

 Anatomy of a Good / Bad User Requirement

 Standard for Writing a Requirement

 Writing Pitfalls to Avoid

 A Few Simple Tests…

 The greatest challenge to any thinker is stating the problem in a way that will allow a solution. [1]

[1] Bertrand Russell, 1872-1970


3 Anatomy of a Good User Requirement

The Online Banking System shall allow the Internet user to access her current account balance in less than 5 seconds.

 Defines the system under discussion ("The Online Banking System")
 Verb with correct identifier ("shall"; "may" for optional requirements)
 Defines a positive end result ("access her current account balance")
 Quality criteria ("in less than 5 seconds")

 Identifies the system under discussion and a desired end result within a specified, measurable time
 The challenge is to seek out the system under discussion, end result, and success measure in every requirement
4 Example of a Bad User Requirement

X  The Internet User quickly sees her current account balance on the laptop screen.

 Cannot write a requirement on the user (the subject is the user, not the system)
 No identifier for the verb (no "shall" or "may")
 Vague quality criteria ("quickly")
 What versus how (prescribes "the laptop screen")
5 Standard for Writing a Requirement

 Each requirement must first form a complete sentence


 Not a bullet list of buzzwords, list of acronyms, or sound bites on a slide

 Each requirement contains a subject and predicate


 Subject: a user type (watch out!) or the system under discussion
 Predicate: a condition, action, or intended result
 Verb in predicate: “shall” / “will” / “must” to show mandatory nature; “may” / “should” to show
optionality

 The whole requirement provides the specifics of a desired end goal or result
 Contains a success criterion or other measurable indication of the quality
6 Standard for Writing a Requirement

 Look for the following characteristics in each requirement


 Feasible (not wishful thinking)
 Needed (provides the specifics of a desired end goal or result)
 Testable (contains a success criterion/other measurable indication of quality)
 Clear, unambiguous, precise, one thought
 Prioritized
 Uniquely identified (ID)

 Note: several characteristics are mandatory (answer a need, verifiable, satisfiable) whereas others
improve communication
7 Writing Pitfalls to Avoid

 Never describe how the system is going to achieve something (over-specification), always describe
what the system is supposed to do
 Refrain from designing the system
 Danger signs: using names of components, materials, software objects, fields & records in
the user or system requirements
 Designing the system too early may increase system costs
 Do not mix different kinds of requirements (e.g., requirements for users, system, and how the
system should be designed, tested, or installed)
 Do not mix different requirements levels (e.g., the system and subsystems)
 Danger signs: high level requirements mixed in with database design, software terms, or
very technical terms
8 Writing Pitfalls to Avoid

X  The system shall use Microsoft Outlook to send an email to the customer with the purchase confirmation.

The system shall inform the customer that the purchase is confirmed.

 "What versus how" test
9 Writing Pitfalls to Avoid

 Never build in let-out or escape clauses


 Requirements with let-outs or escapes are dangerous because of problems that arise in testing
 Danger signs: if, but, when, except, unless, although
 These terms may however be useful when the description of a general case with exceptions is much
clearer and more complete than an enumeration of specific cases
 Avoid ambiguity
 Write as clearly and explicitly as possible
 Ambiguities can be caused by:
 The word "or" used to create a compound requirement
 Poor definition (giving only examples or special cases)
 The words "etc." and "and so on" (imprecise definition)
10 Writing Pitfalls to Avoid

 Do not use vague indefinable terms


 Many words used informally to indicate quality are too vague to be verified
 Danger signs: user-friendly, highly versatile, flexible, to the maximum extent,
approximately, as much as possible, minimal impact

X  The Easy Entry System shall be easy to use and require a minimum of training except for the professional mode.
11 Writing Pitfalls to Avoid

 Do not make multiple requirements


 Keep each requirement as a single sentence
 Conjunctions (words that join sentences together) are danger signs: and, or, with, also
 Do not ramble
 Long sentences with arcane language
 References to unreachable documents

X  The Easy Entry Navigator module shall consist of order entry and communications, order processing, result processing, and reporting. The Order Entry module shall be integrated with the Organization Intranet System and results are stored in the group's electronic customer record.
12 Writing Pitfalls to Avoid

 Do not speculate
 There is no room for “wish lists” – general terms about things that somebody probably wants
 Danger signs: vague subject type and generalization words such as usually, generally, often,
normally, typically
 Do not express suggestions or possibilities
 Suggestions that are not explicitly stated as requirements are invariably ignored by developers
 Danger signs: may, might, should, ought, could, perhaps, probably
 Avoid wishful thinking
 Wishful thinking means asking for the impossible (e.g., 100% reliable, safe, handle all failures,
fully upgradeable, run on all platforms)
X  The Easy Entry System may be fully adaptable to all situations and often require no reconfiguration by the user.
13 A Few Simple Tests…(1)

 “What versus how” test discussed earlier


 Example: a requirement may specify an ordinary differential equation that must be solved,
but it should not mention that a fourth-order Runge-Kutta method should be employed

 “What is ruled out” test


 Does the requirement actually make a decision? (If no alternatives are ruled out, then no
decision has really been made)
 Example: a requirement may already be covered by a more general one

Source: Spencer Smith, McMaster U.


14 A Few Simple Tests…(2)

 "Negation" test
 If the negation of a requirement represents a position that someone might argue for, then
the original requirement is likely to be meaningful

X  The software shall be reliable.
X  The car shall have an engine.

 "Find the test" test
 The requirement is problematic if no test can be found or the requirement can be tested with
a test that does not make sense
 Test: look, here it is!

Source: Spencer Smith, McMaster U.


15 Rate these Requirements

X  The Order Entry system provides for quick, user-friendly and efficient entry and processing of all orders.
X  Invoices, acknowledgments, and shipping notices shall be automatically faxed during the night, so customers can get them first thing in the morning.
X  Changing report layouts, invoices, labels, and form letters shall be accomplished.
X  The system shall be upgraded in one whack.
X  The system has a goal that as much of the IS data as possible be pulled directly from the T&M estimate.
16 Towards Good Requirements Specifications (1)

 Valid (or "correct")
 Expresses actual requirements
 Complete
 Specifies all the things the system must do (including contingencies)
 ...and all the things it must not do!
 Conceptual completeness (e.g., responses to all classes of input)
 Structural completeness (e.g., no TBDs!!!)
 Consistent
 Doesn't contradict itself (satisfiable)
 Uses all terms consistently
 Note: inconsistency can be hard to detect, especially in concurrency/timing aspects and condition logic; formal modeling can help
 Beneficial
 Has benefits that outweigh the costs of development

Source: Adapted from Blum 1992, pp. 164-165, and IEEE-STD-830-1993


17 Towards Good Requirements Specifications (2)

 Necessary
 Doesn't contain anything that isn't "required"
 Unambiguous
 Every statement can be read in exactly one way
 Clearly defines confusing terms (e.g., in a glossary)
 Verifiable
 A process exists to test satisfaction of each requirement
 "Every requirement is specified behaviorally"
 Understandable (clear)
 E.g., by non-computer specialists
 Modifiable
 Must be kept up to date!
 Uniquely identifiable
 For traceability and version control

Source: Adapted from Blum 1992, pp. 164-165, and IEEE-STD-830-1993


18 Typical Mistakes

 Noise = the presence of text that carries no information relevant to any feature of the problem
 Silence = a feature that is not covered by any text
 Over-specification = text that describes a feature of the solution, rather than the problem
 Contradiction = text that defines a single feature in a number of incompatible ways
 Ambiguity = text that can be interpreted in two or more different ways
 Forward reference = text that refers to a feature yet to be defined
 Wishful thinking = text that defines a feature that cannot possibly be validated
 Jigsaw puzzles = e.g., distributing requirements across a document and then cross-referencing
 Inconsistent terminology = inventing and then changing terminology
 Putting the onus on the development staff = making the reader work hard to decipher the intent
 Writing for the hostile reader (fewer of these exist than friendly ones)

Source: Steve Easterbrook, U. of Toronto


19 Key Questions and Characteristics
• Remember the key questions “Why?” or
“What is the purpose of this?”

• Feasible

• Needed

• Testable
Measures and Metrics
21 Non-Functional Requirements

 Non-Functional Requirements and Software Quality Attributes


 Software Quality
 Classifications of Non-Functional Requirements
 Quality Measures

 To measure is to know. If you cannot measure it, you cannot improve it. [1]

[1] Lord Kelvin (1824-1907)


22 Software Quality (1)

 Most definitions require compliance with requirements

 "Conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected of all
professionally developed software." [1]

 Implication:
 We need to be able to explicitly quantify requirements and verify that any solution meets
them
 We need measures

[1] Pressman, 1997


23 Software Quality (2)

 An interesting phenomenon:

Measurable objectives are usually achieved!

 Therefore, unless you have unrealistic values, requirements are usually met
 Important to know what measures exist!
 The chosen values, however, will have an impact on the amount of work during development
as well as the number of alternatives and architectural designs from which developers may
choose to meet the requirements
24 Quantification

 Non-functional requirements need to be measurable


 Avoid subjective characterization: good, optimal, better...

 Values are not just randomly specified


 Must have a rationale
 Stakeholder must understand trade-offs
 Important to rank and prioritize the requirements

 Precise numbers are unlikely to be known at the beginning of the requirement process
 Do not slow down your initial elicitation process
 Ensure that quality attributes are identified
 Negotiate precise values later during the process
25 Measures vs. Metrics

 We use "measure" generically, but there is actually a distinction between measures and metrics
 For example, consider reliability:
 Metric: mean time between failures
 Measure: number of failures in a period of time (an observation!)

 Note (G.v.B.): Reading the text on Wikipedia about software and performance metrics, I get the impression that "metric" and "measure" mean the same thing. To define a measure, you have to define WHAT you measure (that is, the quality), the metric units used with the measurement values, and HOW you measure it. For the reliability example above, the first line defines the quality (WHAT?) and the second line defines a measurement method (HOW?). The sketch below illustrates the distinction.
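A minimal sketch of the distinction, in Python (the failure log below is invented for illustration):

```python
# Measures are raw observations; metrics are quantities derived from them.
# The failure timestamps are invented for illustration.

failure_times_h = [120.0, 470.0, 510.0, 980.0]  # hours at which failures occurred
observation_period_h = 1000.0

# Measure: a direct observation -- the number of failures in the period.
failure_count = len(failure_times_h)

# Metric: a derived quantity -- mean time between failures (MTBF).
intervals = [t2 - t1 for t1, t2 in zip([0.0] + failure_times_h, failure_times_h)]
mtbf_h = sum(intervals) / len(intervals)

print(f"Measure: {failure_count} failures in {observation_period_h:.0f} h")
print(f"Metric:  MTBF = {mtbf_h:.1f} h")
```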
26 Some Relationships

[Diagram] A quality (the WHAT, one of a collection of qualities) is tied to a required value (with unit) through a measurement method (the HOW).

Source: D. Firesmith, http://www.jot.fm/issues/issue_2003_09/column6/


27 Performance Measures (1)

 Lots of measures
 Response time, number of events processed/denied in some interval of time, throughput,
capacity, usage ratio, jitter, loss of information, latency...
 Usually with probabilities, confidence interval

 Can be modeled and simulated (mainly at the architectural level) – performance prediction
 Queuing models (LQN), process algebras, stochastic Petri nets (see the minimal queuing sketch below)
 Arrival rates, distributions of service requests
 Sensitivity analysis, scalability analysis
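As a hedged illustration of the simplest such model, an M/M/1 queue predicts mean response time from an arrival rate and a service rate (the rates below are invented):

```python
# Minimal M/M/1 queuing sketch: predict mean response time from an
# arrival rate (lambda) and a service rate (mu). Rates are invented.

arrival_rate = 80.0   # requests per second (lambda)
service_rate = 100.0  # requests per second (mu)

utilization = arrival_rate / service_rate  # rho = lambda / mu
assert utilization < 1, "unstable queue: arrivals exceed service capacity"

mean_response_time = 1.0 / (service_rate - arrival_rate)  # W = 1 / (mu - lambda)
mean_jobs_in_system = utilization / (1.0 - utilization)   # L = rho / (1 - rho)

print(f"Utilization:         {utilization:.0%}")                  # 80%
print(f"Mean response time:  {mean_response_time * 1000:.0f} ms") # 50 ms
print(f"Mean jobs in system: {mean_jobs_in_system:.1f}")          # 4.0
```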
28 Performance Measures (2)

 Examples of performance requirements


 The system shall be able to process 100 payment transactions per second in peak load.
 In standard workload, the CPU usage shall be less than 50%, leaving 50% for background
jobs.
 Production of a simple report shall take less than 20 seconds for 95% of the cases. (A percentile check for this one is sketched below.)
 Scrolling one page up or down in a 200-page document shall take at most 1 second.
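A sketch of how such a percentile requirement could be verified against measured data (the sample durations are invented; a real test would collect them from instrumentation):

```python
import math

# Verify: "production of a simple report shall take less than 20 seconds
# for 95% of the cases". Sample durations are invented for illustration.

report_times_s = [12.1, 14.0, 9.8, 18.5, 19.9, 11.2, 16.7, 13.4, 17.2, 15.0]

def percentile(samples, p):
    """Nearest-rank percentile: smallest value covering p% of the samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

p95 = percentile(report_times_s, 95)
print(f"95th percentile: {p95:.1f} s")
assert p95 < 20.0, "requirement violated: 95% of reports must finish in < 20 s"
```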
29 Reliability Measures (1)

 Measures the degree to which the system performs as required


 Includes resistance to failure
 Ability to perform a required function under stated conditions for a specified period of time
 Very important for critical, continuous, or scientific systems

 Can be measured using


 Probability that the system will perform its required function for a specified interval under stated conditions (sketched below)
 Mean-time to failure
 Defect rate
 Degree of precision for computations
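A worked example of the first measure, under an assumed exponential failure model (the model and the values are assumptions for illustration, not something the slide prescribes): the probability of failure-free operation for an interval t, given a mean time to failure, is R(t) = exp(-t / MTTF).

```python
import math

# ASSUMED exponential failure model: R(t) = exp(-t / MTTF) is the
# probability of surviving an interval t without failure.
# MTTF and mission time are invented for illustration.

mttf_h = 1000.0    # mean time to failure, hours
mission_h = 24.0   # required failure-free interval, hours

reliability = math.exp(-mission_h / mttf_h)
print(f"P(no failure in {mission_h:.0f} h) = {reliability:.3f}")  # ~0.976
```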
30 Reliability Measures (2)

 Examples
 The precision of calculations shall be at least 1/10^6.
 The system defect rate shall be less than 1 failure per 1000 hours of operation.
 No more than 1 per 1000000 transactions shall result in a failure requiring a system restart.
31 Availability Measures (1)

 Definition: Percentage of time that the system is up and running correctly


 Can be calculated from Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR)
 MTBF: length of time between failures
 MTTR: length of time needed to resume operation after a failure
 Availability = MTBF / (MTBF + MTTR) (computed in the sketch below)
 May lead to architectural requirements
 Redundant components (increase MTBF)
 Modifiability of components (lower MTTR)
 Special types of components (e.g., self-diagnostic)
 Measurement: the mean time between failures and mean time to repair of critical components must be identified (typically measured) or estimated
 Modeling reliability and availability: e.g., Markov models
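A minimal sketch of the availability formula above (the MTBF and MTTR values are invented for illustration):

```python
# Availability = MTBF / (MTBF + MTTR). Values are invented.

mtbf_h = 2000.0   # mean time between failures, hours
mttr_h = 0.5      # mean time to repair, hours

availability = mtbf_h / (mtbf_h + mttr_h)
print(f"Availability: {availability:.4%}")   # ~99.9750%
```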
32 Availability Measures (2)
 Examples
 The system shall meet or exceed 99.99% uptime.
 The system shall not be unavailable more than 1 hour per 1000 hours of operation.
 Less than 20 seconds shall be needed to restart the system after a failure, 95% of the time. (This is an MTTR
requirement)

 Availability vs. downtime (derived in the sketch below):

  Availability   Downtime
  90%            36.5 days/year
  99%            3.65 days/year
  99.9%          8.76 hours/year
  99.99%         52 minutes/year
  99.999%        5 minutes/year
  99.9999%       31 seconds/year
33 Security Measures (1)

There are at least two measures:


1. The ability to resist unauthorized attempts at usage
2. The ability to continue providing service to legitimate users while under a denial-of-service attack (resistance to DoS attacks)
 Measurement methods:
 Success rate in authentication
 Resistance to known attacks (to be enumerated)
 Time/effort/resources needed to find a key (probability of finding the key)
 Probability/time/resources to detect an attack
 Percentage of useful services still available during an attack
 Percentage of successful attacks
 Lifespan of a password, of a session
 Encryption level
34 Security Measures (2)
 May lead to architectural requirements
 Authentication, authorization, audit
 Detection mechanisms
 Firewall, encrypted communication channels
 Can also be modeled (logic ...)

 Examples of requirements
 The application shall identify all of its client applications before allowing them to use its
capabilities.
 The application shall ensure that the name of the employee in the official human resource and
payroll databases exactly matches the name printed on the employee’s social security card.
 At least 99% of intrusions shall be detected within 10 seconds.
35 Usability Measures (1)
In general, usability concerns ease of use and ease of training for end users. The following more specific measures can be
identified:
 Learnability
 Proportion of functionalities or tasks mastered after a given training time
 Efficiency
 Acceptable response time
 Number of tasks performed or problems resolved in a given time
 Number of mouse clicks needed to get to information or functionality
 Memorability
 Number (or ratio) of learned tasks that can still be performed after not using the system for a given time
period
 Error avoidance
 Number of errors per time period and user class
 Number of calls to user support
36 Usability Measures (2)
 Error handling
 Mean time to recover from an error and be able to continue the task
 User satisfaction
 Satisfaction ratio per user class
 Usage ratio
 Examples
 Four out of five users shall be able to book a guest within 5 minutes after a 2-hour
introduction to the system.
 Novice users shall perform tasks X and Y in 15 minutes.
Experienced users shall perform tasks X and Y in 2 minutes.
 At least 80% of customers polled after a 3-month usage period shall rate their satisfaction
with the system at 7 or more on a scale of 1 to 10.
37 Maintainability Measures (1)
 Measures the ability to make changes quickly and cost-effectively
 Extension with new functionality
 Deleting unwanted capabilities
 Adaptation to new operating environments (portability)
 Restructuring (rationalizing, modularizing, optimizing, creating reusable components)
 Can be measured in terms of
 Coupling/cohesion metrics, number of anti-patterns, cyclomatic complexity
 Mean time to fix a defect, mean time to add new functionality
 Quality/quantity of documentation
 Measurement tools
 code analysis tools such as IBM Structural Analysis for Java
(http://www.alphaworks.ibm.com/tech/sa4j)
38 Maintainability Measures (2)
 Examples of requirements
 Every program module must be assessed for maintainability according to procedure xx.
70% must obtain “highly maintainable” and none “poor”.
 The cyclomatic complexity of code must not exceed 7. (A rough complexity counter is sketched below.)
No method in any object may exceed 200 lines of code.
 Installation of a new version shall leave all database contents and all personal settings
unchanged.
 The product shall provide facilities for tracing any database field to places where it is used.
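A hedged sketch of how cyclomatic complexity can be approximated for Python source (a simplified counter for illustration; real tools such as radon or lizard handle many more cases):

```python
import ast

# Rough cyclomatic complexity: 1 plus the number of decision points.
# Simplified for illustration; not a substitute for a real analysis tool.

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            print(i)
    return "ok"
"""
print(cyclomatic_complexity(sample))  # 1 + if + for + if + and = 5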
39 Testability Measures
Measures the ability to detect, isolate, and fix defects
 Time to run tests
 Time to setup testing environment (development and execution)
 Probability of visible failure in presence of a defect
 Test coverage (requirements coverage, code coverage…)
 May lead to architectural requirements
 Mechanisms for monitoring
 Access points and additional control

 Examples
 The delivered system shall include unit tests that ensure 100% branch coverage. (One way to measure this is sketched below.)
 Development must use regression tests allowing for full retesting in 12 hours.
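One hedged way to measure branch coverage for Python code is coverage.py; in practice you would run the test suite under `coverage run --branch`, but the programmatic sketch below keeps the idea self-contained (the function and its tests are invented):

```python
import coverage  # pip install coverage

cov = coverage.Coverage(branch=True)
cov.start()

def sign(x):
    if x > 0:
        return 1
    return -1

# Exercise both branches so branch coverage reaches 100% for sign().
assert sign(5) == 1
assert sign(-5) == -1

cov.stop()
total = cov.report()  # prints a per-file report, returns total coverage %
print(f"Total coverage: {total:.0f}%")
```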
40 Portability Measures
Measures the ability of the system to run under different computing environments
 Hardware, software, OS, languages, versions, or combinations of these
 Can be measured as
 Number of targeted platforms (hardware, OS…)
 Proportion of platform specific components or functionality
 Mean time to port to a different platform
 Examples
 No more than 5% of the system implementation shall be specific to the operating system.
 The mean time needed to replace the current Relational Database System with another
Relational Database System shall not exceed 2 hours. No data loss should ensue.
41 Integrability and Reusability Measures
Integrability
 Measures the ability to make separately developed components work together
 Can be expressed as
 Mean time to integrate with a new interfacing system

Reusability
 Measures the extent to which existing components can be reused in new applications
 Can be expressed as
 Percentage of reused requirements, design elements, code, tests…
 Coupling of components
 Degree of use of frameworks
42 Robustness Measures
Measures the ability to cope with the unexpected
 Percentage of failures on invalid inputs
 Degree of service degradation
 Minimum performance under extreme loads
 Active services in presence of faults
 Length of time for which system is required to manage stress conditions

 Examples
 The estimated loss of data in case of a disk crash shall be less than 0.01%.
 The system shall be able to handle up to 10000 concurrent users when satisfying all their
requirements and up to 25000 concurrent users with browsing capabilities.
43 Domain-specific Measures
The most appropriate quality measures may vary from one application domain to another, e.g.:

 Performance
 Web-based system:
Number of requests processed per second
 Video games:
Number of 3D images per second

 Accessibility
 Web-based system:
Compliance with standards for the blind
 Video games:
Compliance with age/content ratings systems (e.g., no violence)
44 Other Non-Functional Requirements
 What about NFRs such as “fun” or “cool” or “beautiful” or “exciting”?
 How can these be measured?
 The lists of existing quality attributes are interesting but they do not include all NFRs.
 It is sometimes better to let customers do their brainstorming before proposing the
conventional NFR categories.
 In any case, we must also refine those goals into measurable requirements.
