Software Metrics

Prof. Wei T. Huang
Department of Computer Science and Information Engineering
National Central University
2007

1 © WTH07

– Software Engineering
– Object-Oriented Software Engineering
– Engineering Mathematics


Note:
– This lecture note is developed for teaching purposes and is offered only to senior and graduate students at the National Central University. Any commercial use of this note is not allowed.
– Most of the material in this note, including the text, the tables, and the figures, is excerpted from the sources given in the References, especially from [Fenton97].


Objective for Software Measurement


Measurement helps people understand the world. Without measurement you cannot manage anything. There are three important activities in a software development project:
– Understanding what is happening during development and maintenance
– Controlling what is happening on the projects
– Improving the processes and products

Thus, people must control their projects and predict the product attributes, not just run them. But:
– “You cannot control what you cannot measure.” (DeMarco, 1982)
– “You can neither predict nor control what you cannot measure.” (DeMarco’s rule)


Software Measurement

Software developers get a sense of
– whether the requirements are consistent and complete
– whether the design is of high quality
– whether the code is ready to be tested

Project managers measure
– attributes of the process
– whether the product will be ready for delivery
– whether the budget will be exceeded

Customers measure
– whether the final product meets the requirements
– whether the product is of sufficient quality

Maintainers must be able
– to see what should be upgraded and improved

1. Measurement


1.1 What is Measurement? (1)

Measurement is essential to our daily life. It is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules.
– Entity: an object or an event in the real world
  • A person, a room, a journey, the testing phase of a software project
– Attribute: a feature or property of an entity
  • The area or color of a room, the cost of a journey, the elapsed time of the testing phase

1.1 What is Measurement? (2)

“What is not measurable make measurable.” (Galileo)
– In the physical sciences, medicine, economics, and even some social sciences, we are now able to measure attributes that were previously thought unmeasurable.
  • Human intelligence, air quality, and economic inflation form the basis for important decisions that affect our everyday life.
– So, in order to make software engineering as powerful as other engineering disciplines, a software engineer cannot continue to claim that important software attributes, such as dependability, quality, usability and maintainability, are unmeasurable.

There are two kinds of quantification:
– Measurement: a direct quantification
  • Examples: the height of a tree or the weight of a shipment of bricks
– Calculation: an indirect quantification

Questions: (1) How can you measure to identify the best (or good) individual soccer player? (2) How can you identify a good or bad software developer?

1.2 Measurement in Software Engineering

For most software development projects, we
– Fail to set measurement targets for our software products.
  • Gilb’s Principle of Fuzzy Targets: projects without clear goals will not achieve their goals clearly.
– Fail to understand and quantify the component costs of software projects.
– Do not quantify or predict the quality of the products we produce.
– Do without a carefully controlled study to determine if a technology is sufficient and effective.

1.3 Understand and Control a Project (1)

Information needed to understand and control a software development project:

Managers
– What does each process cost?
– How productive is the staff?
– How good is the code being developed?
– Will the user be satisfied with the product?
– How can we improve?

Engineers
– Are the requirements testable?
– Have we found all the faults?
– Have we met our product or process goals?
– What will happen in the future?

1.3 Understand and Control a Project (2)

Measurement is important for three basic activities:
– Measures help us to understand what is happening during development and maintenance – for instance, establishing baselines to set goals for future behavior.
– Measurement allows us to control what is happening on our projects – predicting what is likely to happen and making changes to processes and products in order to meet the goals.
– Measurement encourages us to improve our processes and products – for instance, increasing the number or type of design reviews.

1.4 The Scope of Software Metrics (1)

Cost and effort estimation
– Examples: COCOMO model (Boehm, 1981), SLIM model (Putnam, 1978), Albrecht’s function point model (Albrecht, 1979)

Productivity measurements and models
– (Figure omitted: a productivity model relating value – quality [reliability, defects] and quantity [size, functionality] – to cost [personnel, time, money, resources, H/W, S/W], complexity, environment constraints, and problem difficulty.)

1.4 The Scope of Software Metrics (2)

Data collection
– The collected data are distilled into simple graphs or charts.

Quality models and measures
– Product operation: usability, reliability, efficiency
– Product revision: maintainability, testability, portability

Reliability models

Performance evaluation and models

Structural and complexity metrics

Management metrics
– Using measurement-based charts or graphs to help customers and developers decide if the project is on track.

Evaluation of methods and tools
– New methods and tools that may make the organization or project more productive and the products better and cheaper.

Exercises

– What are metrics? Name the reasons why metrics are useful.
– Specifications, design, and code are entities of software products. What are the attributes of those entities? (Hint: see next chapter.)
– Personnel is one of the resources for software development. Suppose you are a manager. What do you want to measure about such an entity? (Hint: see next chapter.)

2. Classifying Measures

2.1 Classifying Software Measures (1)

Software-measurement activity begins with identifying the entities and attributes:
– Processes: collections of software-related activities
  • The duration of the process or one of its activities
  • The effort associated with the process or one of its activities
  • The number of incidents of a specified type arising during the process or one of its activities
– Products: any artifacts, deliverables or documents that result from a process activity
  • Internal product attributes: size, effort, cost, code complexity, module coupling and cohesiveness, structuredness
  • External product attributes, depending on product behavior and environment: reliability, usability, integrity, testability, reusability, portability and interoperability

2.1 Classifying Software Measures (2)

– Resources: entities required by a process activity; personnel (individual or team), materials, tools (software and hardware), and methods are candidates for measurement.
  • Cost and productivity (= amount of output / effort input)
  • Staff: experience, age, or intelligence
– Attributes
  • Internal attributes: measured by examining the product, process or resource on its own.
  • External attributes: measured with respect to how the product, process or resource relates to its environment.

2.1 Classifying Software Measures (3)

Components of software measurement
Product entities:
– Specifications: internal – size, reuse, modularity, redundancy, functionality, syntactic correctness, etc.; external – comprehensibility, maintainability, etc.
– Designs: internal – size, reuse, modularity, coupling, cohesiveness, functionality, etc.; external – quality, complexity, maintainability, etc.
– Code: internal – size, reuse, modularity, coupling, functionality, algorithmic complexity, control flow, structuredness, etc.; external – reliability, usability, maintainability, etc.
– Test data: internal – size, coverage level, etc.; external – quality, etc.

2.1 Classifying Software Measures (4)
Process entities:
– Constructing specification: internal – time, effort, number of requirements changes, etc.; external – quality, cost, stability, etc.
– Detailed design: internal – time, effort, number of specification faults found, etc.; external – cost, cost-effectiveness, etc.
– Testing: internal – time, effort, number of coding faults found, etc.; external – cost, cost-effectiveness, stability, etc.

Resource entities:
– Personnel: internal – age, price, etc.; external – productivity, experience, intelligence, etc.
– Teams: internal – size, communication level, structuredness, etc.; external – productivity, quality, etc.
– Software: internal – price, size, etc.; external – usability, reliability, etc.
– Hardware: internal – price, speed, memory size, etc.; external – reliability, etc.
– Offices: internal – size, temperature, light, etc.; external – comfort, quality, etc.

2.2 Determine What to Measure (1)

The Goal-Question-Metric (GQM) paradigm
– In order to decide what your project should measure, you may use the GQM paradigm. The following high-level goals may be identified:
– Improving productivity – Improving quality – Reducing risk


Templates for goal definition
– Purpose
• Example: To evaluate the maintenance process in order to improve it

– Perspective
• Example: Examine the cost from the viewpoint of the manager

– Environment
• Example: the maintenance staff are poorly motivated programmers who have limited access to tools


2.2 Determine What to Measure (2)

A framework of GQM
– List the major goals of the development or maintenance project. – Derive from each goal the questions that must be answered to determine if the goals are being met. – Decide what must be measured in order to be able to answer the questions adequately.


2.2 Determine What to Measure (3)

Example of deriving metrics from goals and questions


2.2 Determine What to Measure (4)

Examples of AT&T goals, questions and metrics (Barnard and Price 1994). (Table omitted.)

2.3 Applying the Framework (1)

Cost and effort estimation
– Focusing on predicting the attributes of cost or effort for the development process.

Productivity measures and models
– Measuring a resource attribute.

Data collection
– Gathering accurate and consistent measures of process and resource attributes.

Quality models and measures
– Predicting product quality; cost and productivity are dependent on the quality of the products output during the various processes.
– Measuring internal product attributes, such as complexity and structure.
– Measuring external product attributes: attributes with respect to how the product relates to its environment.

2.3 Applying the Framework (2)

Reliability models
– Successful operation during a given period of time.

Performance evaluation and models
– Measuring the efficiency of the product.

Structural and complexity metrics
– Measuring internal attributes of products, which suggest what the external measures might be.

Capability-maturity assessment
– SEI CMM

2.3 Applying the Framework (3)

Management by metrics
– Use metrics to set targets for the development projects.
– Example: US defense projects (NetFocus 1995)

Item | Target | Malpractice level
Defect removal efficiency | > 95% | < 70%
Original defect density | < 4/function point | > 7/function point (*)
Slip or cost overrun in excess of risk reserve | 0% | > 10%
Total requirements creep (function points or equivalent) | < 1%/month | > 50% average
Total program documentation | < 3 pages/function point | > 6 pages/function point
Staff turnover | 1-3%/year | > 5%/year

(*) Function points: to measure the amount of functionality in a system as described by a spec.

2.3 Applying the Framework (4)

Evaluation of methods and tools
– Use the proposed tool or method on a small project at first, and evaluate the results.
– Example: Code inspection statistics from AT&T (Barnard and Price 1994)

Metric | First sample project | Second sample project
Number of inspections in sample | 27 | 55
Total KLOC inspected | 9.3 | 22.5
Average LOC inspected (module size) | 343 | 409
Average preparation rate (LOC/hour) | 194 | 121.9
Average inspection rate (LOC/hour) | 172 | 154.8
Total faults detected (observable and non-observable)/KLOC | 106 | 89.7
Percentage of re-inspections | 11 | 0.5

2.4 Software Measurement Validation (1)

The measurement pattern. (Figure omitted.)

2.4 Software Measurement Validation (2)

Two types of measuring
– Measures or measurement systems: used to assess an existing entity by numerically characterizing one or more of its attributes.
– Prediction systems: used to predict some attribute of a future entity, involving a mathematical model with an associated prediction procedure.

Validating prediction systems
– Deterministic prediction systems: we always get the same output for a given input.
– Stochastic prediction systems: the output for a given input will vary probabilistically.
– Example: Using COCOMO (to be explained later)
  • Acceptance range within 20% of predicted effort

Validating software measures
– Measures should reflect the behavior of entities in the real world.
– Example: program length

Exercises

– You are now taking the course “Software Metrics”. What are your goals in taking this course? (Hint: use the template of goal definitions.)
– Use the GQM approach to explain the reasons you take the “Software Metrics” course.

3. Measuring Internal Product Attributes (1)

3.1 Measuring Size (1)

Simple measures of size are often rejected because they do not adequately reflect:
– Effort
– Productivity
– Cost

The important aspects (attributes) of software size:
– Length: physical size (including specifications, design and final code)
– Functionality: functions supplied by the product to the user

3.1 Measuring Size (2)

– Complexity: the underlying problem that the software is solving
  • Problem complexity: the complexity of the underlying problem
  • Algorithmic complexity: the complexity of the algorithm to solve the problem (measuring the efficiency of the software)
  • Structural complexity: the structure of the software used to implement the algorithm
  • Cognitive complexity: the effort required to understand the software
– Reuse: how much of a product was copied or modified from a previous version or an existing product (including off-the-shelf products).

3.2 Software Size (1)

Software size can be measured at several stages:
– Specification
  • a useful indicator of how large the design is likely to be
– Design
  • a predictor of code length
– Code

3.2 Software Size (2)

Traditional code measures
– Number of lines of code (LOC)
– HP definition: NCLOC (non-commented lines), also called ELOC
– Total length: LOC = NCLOC + CLOC
– The density of comments, for example: CLOC/LOC
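The decomposition LOC = NCLOC + CLOC and the comment density can be sketched directly. This is a minimal illustration, assuming '#'-style full-line comments (the slides do not fix a language) and ignoring blank lines:

```python
def size_measures(source: str):
    """Count total, non-commented, and commented lines of code.

    Blank lines are ignored; lines mixing code and a trailing comment
    count as code, matching the HP-style NCLOC idea only approximately.
    """
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    cloc = sum(1 for ln in lines if ln.startswith("#"))  # comment lines
    loc = len(lines)                                     # total length
    ncloc = loc - cloc                                   # non-commented
    density = cloc / loc if loc else 0.0                 # CLOC / LOC
    return loc, ncloc, cloc, density

sample = """
# sort two numbers
a, b = 2, 1
if a > b:
    a, b = b, a  # swap
"""
print(size_measures(sample))
```

For the four-line sample this yields LOC = 4, NCLOC = 3, CLOC = 1, and a comment density of 0.25.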

3.3 Software Science (1)

Maurice Halstead’s software science
– A program P is a collection of tokens, where
  μ1 = number of unique operators
  μ2 = number of unique operands
  N1 = total occurrences of operators
  N2 = total occurrences of operands
– The length of P: N = N1 + N2
– The vocabulary of P: μ = μ1 + μ2
– The volume of P (a suitable metric for the size of any implementation of any algorithm): V = N × log₂ μ
– The program level of P of volume V (the implementation of an algorithm): L = V*/V (L ≤ 1), where V* is the potential volume, the volume of the minimal-size implementation of P.

3.3 Software Science (2)

– The difficulty: D = 1/L
– The estimate of L: Ĺ = (2/μ1) × (μ2/N2)
– The estimated program length: Ń = μ1 × log₂ μ1 + μ2 × log₂ μ2
– The effort required to generate P: E = V/Ĺ = (μ1 N2 N log₂ μ) / (2 μ2), where the unit of measurement of E is the number of elementary mental discriminations needed to understand P.
– John Stroud claimed that the human mind is capable of making a limited number ß of elementary discriminations per second, with 5 ≤ ß ≤ 20; Halstead claimed that ß = 18. So the time required for programming is T = E / 18 seconds.

3.3 Software Science (3)

Halstead’s sample FORTRAN program:

      SUBROUTINE SORT (A, N)
      INTEGER A(100), N, I, J, SAVE, M
C     ROUTINE SORTS ARRAY A INTO DESCENDING ORDER
      IF (N.LT.2) GOTO 40
      DO 30 I = 2, N
      M = I - 1
      DO 20 J = 1, M
      IF (A(I).GT.A(J)) GOTO 10
      GOTO 20
10    SAVE = A(I)
      A(I) = A(J)
      A(J) = SAVE
20    CONTINUE
30    CONTINUE
40    CONTINUE
      END

N = N1 + N2 = 93
μ = μ1 + μ2 = 27
V = N × log₂ μ = 93 × 4.75 = 442 bits
V* = 11.6 bits (V > V*)
L = V*/V = 11.6/442 = 0.026
D = 1/L = 38.5
Ĺ = (2/μ1) × (μ2/N2) = (2/14) × (13/42) = 0.044
E = V/Ĺ = 442/0.044 = 10045
T = E/β = 10045/18 = 558 seconds ≈ 10 minutes
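The chain of formulas can be reproduced from the four token counts alone. A minimal sketch; note that the slide's E = 10045 and T = 558 s come from rounding Ĺ to 0.044 before dividing, while exact arithmetic gives a slightly smaller effort:

```python
import math

def halstead(mu1, mu2, N1, N2, beta=18):
    """Halstead's measures from token counts (logarithms base 2)."""
    N = N1 + N2                      # program length
    mu = mu1 + mu2                   # vocabulary
    V = N * math.log2(mu)            # volume in bits
    L_hat = (2 / mu1) * (mu2 / N2)   # estimated program level
    E = V / L_hat                    # effort (elementary discriminations)
    T = E / beta                     # time in seconds (Stroud number 18)
    return N, mu, V, L_hat, E, T

# Token counts for the SORT routine: mu1 = 14, mu2 = 13, N1 = 51, N2 = 42
N, mu, V, L_hat, E, T = halstead(14, 13, 51, 42)
```

With these counts the function returns N = 93, μ = 27, V ≈ 442 bits, and a programming time just over nine minutes.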

3.4 “Object” Programming Environment (1)

An example
– In the Visual Basic™ programming environment, you can create a sophisticated Windows program, complete with menus, icons, and graphics, with almost no code in the traditional sense. In this environment, the executable code to produce a scrollbar is constructed automatically: you point at a scrollbar object in the programming environment, and you need to write code only to perform the specific actions that result from, say, a click on a specific command button. In this kind of environment, a program with just five BASIC statements can easily generate an executable program of 200 Kb; thus, it is not clear how you would measure the length of the “program”. The same applies when using component-based software construction techniques.

There are two separate measurement issues:
– How do we account in our length measures for objects that are not textual?
– How do we account in our length measures for components that are constructed externally?

3.4 “Object” Programming Environment (2)

In object-oriented development, a count of objects and methods led to more accurate productivity estimates than those using lines of code (Pfleeger 1989).

“Ontological principles” (Bunge’s ontological terms) (*)
– Two objects are coupled if and only if at least one of them acts upon the other; X is said to act upon Y if the history of Y is affected by X.

(*) Ontology: the common words and concepts (the meaning) used to describe and represent an area of knowledge; an ontology is the specification of a conceptualization for an engineering product. If you look up ontology in the dictionary, you will find it explained as metaphysics: (1) a branch of philosophy that seeks to explain the nature of being and reality; (2) speculative philosophy in general (Webster’s New World Dictionary).

3.4 “Object” Programming Environment (3)

The Metrics Suite for OOD [Chidamber/Kemerer], using the notions mentioned above:
– Metric 1. Weighted Methods per Class (WMC) = ∑ ci (i = 1 to n), where we consider a class C with methods M1, …, Mn defined in the class, and c1, …, cn are the complexities of the methods. If all method complexities are considered to be unity, then WMC = n, the number of methods.
– Metric 2. Depth of Inheritance Tree (DIT): the length of the path from the node to the root of the inheritance tree.

3.4 “Object” Programming Environment (4)

– Metric 3. Number of Children (NOC): the number of immediate successors of the class.
– Metric 4. Coupling Between Object Classes (CBO): the number of other classes to which the class is coupled.
– Metric 5. Response For a Class (RFC): the number of local methods plus the number of methods called by local methods.
– Metric 6. Lack of Cohesion Metric (LCOM): the number of non-intersecting sets of local methods.
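Two of these metrics, DIT and NOC, fall straight out of the inheritance structure. A small sketch over a hypothetical class hierarchy (the class names and child-to-parent table are illustrative, not from the slides):

```python
# Hypothetical inheritance tree as child -> parent; None marks the root.
parents = {
    "Object": None,
    "Shape": "Object",
    "Circle": "Shape",
    "Square": "Shape",
    "Stream": "Object",
}

def dit(cls: str) -> int:
    """Depth of Inheritance Tree: path length from the class to the root."""
    depth = 0
    while parents[cls] is not None:
        cls = parents[cls]
        depth += 1
    return depth

def noc(cls: str) -> int:
    """Number of Children: immediate subclasses of the class."""
    return sum(1 for child, parent in parents.items() if parent == cls)
```

Here dit("Circle") is 2 (Circle → Shape → Object) and noc("Shape") is 2 (Circle and Square).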

3.5 Specification and Design (1)

A document of specification and design may consist of both text and diagrams. In well-known methods for handling such documents, such as DFDs, Z schemas or class diagrams, the diagrams have a uniform syntax, so we can define appropriate atomic objects for the different types of diagrams and symbols. For instance:
– For data flow diagrams, the atomic objects are processes, external entities, data stores, and data flows.
– For algebraic specifications, the atomic entities are sorts, functions, operations, and axioms.
– For Z schemas, the atomic entities are the various lines appearing in the specification, such as a type declaration or a predicate.

3.5 Specification and Design (2)

Example: Structured analysis components (DeMarco 1978)

View | Diagram | Atomic objects
Functional | Data flow diagram | Bubbles
Data | Data dictionary | Data elements
Data | Entity relation diagram | Objects, relations
State | State transition diagram | States, transitions

Supplement to Section 3.5 (1)

Data flow diagram: Satisfy Material Request as an example. (Figure omitted.)

Supplement to Section 3.5 (2)

Algebraic specification: Coord as an example. The general form is

  spec <Spec Name> → (Generic Parameter)
  sort <name>
  imports <List of Spec Name>

followed by operation signatures setting out the names and the types of the parameters to the operations defined over the sort, and axioms defining the operations over the sort:

  COORD
  sort Coord
  imports INTEGER, BOOLEAN

  Create (Integer, Integer) → Coord
  X (Coord) → Integer
  Y (Coord) → Integer
  Eq (Coord, Coord) → Boolean

  X (Create (x, y)) = x
  Y (Create (x, y)) = y
  Eq (Create (x1, y1), Create (x2, y2)) = ((x1 = x2) and (y1 = y2))

Supplement to Section 3.5 (3)

Z schema: a Phone DB as an example

  Δ Phone DB
  declaration:
    members, members’ : P Person
    telephones, telephones’ : Person ↔ Phone
  predicate:
    dom telephones ⊆ members
    dom telephones’ ⊆ members’
  new state (Δ state):
    telephones’ = telephones ∪ {huang |→ 0543}
  where telephones = {lee |→ 1234, wang |→ 2345, …}

3.6 Predicting Length

Length may be predicted by considering the median expansion ratio from specification or design length to code length:
  Expansion ratio (design to code) = size of code / size of design

Example: Halstead’s software science predicts LOC = N / ck, where ck is a constant depending on the programming language k; for FORTRAN, ck = 7. For the SORT example earlier, N = 93, so LOC = 93/7 ≈ 13, which is a reasonable estimate (the actual length is 16).

For the module design phase: LOC = α ∑ Si (i = 1 to m), where Si is the size of module i, m is the number of modules, and α is the design-to-code expansion ratio.
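The two predictors above are one-line formulas. A minimal sketch; the default expansion ratio α here is an assumed, locally calibrated value, not one given in the slides:

```python
def loc_from_halstead(N: int, ck: float = 7) -> float:
    """LOC = N / ck; ck is language-dependent (about 7 for FORTRAN)."""
    return N / ck

def loc_from_design(module_sizes, alpha: float = 1.6) -> float:
    """LOC = alpha * sum of design module sizes (alpha assumed)."""
    return alpha * sum(module_sizes)
```

For the SORT routine, round(loc_from_halstead(93)) gives 13, matching the estimate above.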

3.7 Reuse (1)

The reuse of software (including requirements, designs, documentation, test data, code, scripts, etc.) improves productivity and quality, allowing the developer to concentrate on new problems.

HP’s example (Lim 1994):

Organization | Quality | Productivity | Time to market
Manufacturing productivity | 51% defect reduction | 57% increase | NA
San Diego technical graphics | 24% defect reduction | 40% increase | 42% reduction

Extent of reuse (NASA/Goddard’s SE Lab.):
– Reused verbatim: the code in the unit is reused without any change
– Slightly modified: fewer than 25% of the lines of code in the unit are modified
– Extensively modified: 25% or more of the lines of code are modified
– New: none of the code comes from a previously constructed unit

3.7 Reuse (2)

Example: Software Engineering Laboratory
– 20% reused lines of code
– 30% reused lines of Ada code

Reuse at HP. (Bar chart omitted: percent reuse by program size, in thousands of non-comment source lines.)

3.7 Reuse (3)

Example: Programming Research Ltd.

Product | Reusable LOC | Total LOC | Reuse ratio (%)
QA C | 40 900 | 82 300 | 50
QA Fortran | 34 000 | 73 000 | 47
QA Manager (X) | 18 300 | 50 100 | 37
QA Manager (Motif) | 18 300 | 52 700 | 35
QA C++ | 40 900 | 82 900 | 49

3.8 Functionality (1)

Albrecht’s approach: function points are intended to measure the amount of functionality in a system as described by a specification. The following item types are used to compute an unadjusted function point count (UFC):
– External inputs: items provided by the user that describe distinct application-oriented data (such as files).
– External outputs: items provided to the user that generate distinct application-oriented data (such as reports and messages), not including inquiries.
– External inquiries: interactive inputs requiring a response.
– External files: machine-readable interfaces to other systems.
– Internal files: logical master files in the system.

3.8 Functionality (2)

Unadjusted function point (UFC) count
– FP complexity weights (wi):

Item | Low complexity | Medium complexity | High complexity
External inputs | 3 | 4 | 6
External outputs | 4 | 5 | 7
External inquiries | 3 | 4 | 6
External files | 7 | 10 | 15
Internal files | 5 | 7 | 10

3.8 Functionality (3)

Example: A simple spelling checker.
– From the DFD:
  A = # external inputs = 2
  B = # external outputs = 3
  C = # inquiries = 2
  D = # external files = 2
  E = # internal files = 1

3.8 Functionality (4)

For the example of the spelling checker, the items are identified as follows:
– 2 external inputs: document filename, personal dictionary name
– 3 external outputs: misspelled word report, # of words processed message, # of errors message
– 2 external inquiries: words processed, errors
– 2 external files: document file, personal dictionary
– 1 internal file: dictionary

3.8 Functionality (5)

The complexity ratings are simple, average, or complex. For the spelling checker example, we assume that the complexity is average; then
  UFC = 4A + 5B + 4C + 10D + 7E = 58

If the dictionary file and the misspelled word report are complex, then
  UFC = 4A + (5 × 2 + 7 × 1) + 4C + 10D + 10E = 63

The technical complexity factor, TCF, ranges from 0.65 to 1.35. With TCF = 0.93 (see next slide) for the spelling checker,
  FP = 63 × 0.93 = 59

What is the FP for? Suppose a task takes a developer an average of two person-days of effort to implement a function point. Then we may estimate the effort to complete the spelling checker as 118 person-days (59 × 2).
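The UFC arithmetic above can be sketched as a small calculator over the weight table, using the spelling-checker counts. For simplicity this sketch rates every item of a type at the same complexity level, whereas the slide's 63 comes from rating two individual items as complex:

```python
# Albrecht's FP complexity weights: item type -> (low, medium, high)
WEIGHTS = {
    "external inputs":    (3, 4, 6),
    "external outputs":   (4, 5, 7),
    "external inquiries": (3, 4, 6),
    "external files":     (7, 10, 15),
    "internal files":     (5, 7, 10),
}

def ufc(counts: dict, level: str = "medium") -> int:
    """Unadjusted function point count, all items rated at one level."""
    idx = {"low": 0, "medium": 1, "high": 2}[level]
    return sum(WEIGHTS[item][idx] * n for item, n in counts.items())

# Spelling-checker item counts from the example above
checker = {"external inputs": 2, "external outputs": 3,
           "external inquiries": 2, "external files": 2, "internal files": 1}

print(ufc(checker))                 # all-average UFC
print(round(ufc(checker) * 0.93))   # FP after applying TCF = 0.93
```

With every item rated average this gives UFC = 58, and multiplying by the TCF of 0.93 yields an adjusted count of about 54 function points.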

3.8 Functionality (6)

Converting from function points to lines of code
– Programming language statements per function point (Note: [McConnell07] Table 18.3):

Language | Minimum (−1 standard deviation) | Mode (most common value) | Maximum (+1 standard deviation)
C | 60 | 128 | 170
C# | 40 | 55 | 80
C++ | 40 | 55 | 140
Java | 40 | 55 | 80
Smalltalk | 10 | 20 | 40
Visual Basic | 15 | 32 | 41

3.8 Simplified FP Techniques

The Dutch method
– Use the Dutch method of counting function points to attain a low-cost ballpark estimate early in the project:
  Indicative function point count = (35 × internal files) + (15 × external files)
– The numbers 35 and 15 are derived through calibration; however, you can come up with your own calibrations for use in your environment.
– Example: For the spelling checker, UFC = 35 × 1 + 15 × 2 = 65, close to the count computed earlier.

Supplement to Section 3.8

Components of the technical complexity factor:
F1 Reliable back-up and recovery, F2 Data communications, F3 Distributed functions, F4 Performance, F5 Heavily used configuration, F6 Online data entry, F7 Operational ease, F8 Online update, F9 Complex interface, F10 Complex processing, F11 Reusability, F12 Installation ease, F13 Multiple sites, F14 Facilitate change

  TCF = 0.65 + 0.01 ∑ Fi (i = 1 to 14)

Note: the factor varies from 0.65 (if each Fi is set to 0) to 1.35 (if each Fi is set to 5).

For the spelling checker:
– F3, F5, F9, F11, F12, F13 are 0 (the sub-factor is irrelevant)
– F1, F2, F6, F7, F8, F14 are 3 (average)
– F4 and F10 are 5 (essential to the system being built)
So TCF = 0.65 + 0.01(18 + 10) = 0.93.

3.9 Complexity

We define:
– Complexity of a problem: the amount of resources required for an optimal solution to the problem.
– Complexity of a solution: regarded in terms of the resources needed to implement a particular solution; that is, we measure the efficiency of a solution.
  • Time complexity: where the resource is computer time
  • Space complexity: where the resource is computer memory

In order to measure and express complexity, that is, to measure algorithmic efficiency, we use big-O notation.
– Example: the problem of searching a sorted list for a single item can be shown to have complexity O(log n); that is, the fastest algorithm to solve the problem requires on the order of log n comparisons.
– Example: Binary search
  • For a list of n elements, the binary search algorithm terminates after at most about log₂ n comparisons.
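The binary-search bound can be checked empirically by counting how many times the middle element is probed. A minimal sketch (counting one probe per loop iteration):

```python
import math

def binary_search(items, target):
    """Search a sorted list; return (index or -1, number of probes)."""
    lo, hi, probes = 0, len(items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1                 # one middle-element probe per halving
        if items[mid] == target:
            return mid, probes
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

data = list(range(1000))
index, probes = binary_search(data, 999)
print(index, probes, math.ceil(math.log2(len(data))))
```

For n = 1000 the worst case stays within about ⌈log₂ 1000⌉ = 10 probes, illustrating the O(log n) behavior.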

Exercise

1. Explain very briefly the idea behind Albrecht’s function points measure.
2. List the main applications of function points.
3. Compare function points with the lines of code measure.

Note: [Fenton97] Chapter 7.

4. Measuring Internal Product Attributes (2)

4.1 Structure

The structure of requirements, design, and code may help developers understand the difficulty they sometimes have in converting one product to another, in testing a product, or in predicting external software attributes from early internal product measures.

The structure of a product plays a part not only in requiring development effort but also in how the product is maintained; it bears on external attributes such as maintainability, testability, reusability, and reliability.

Types of structural measures:
– Control-flow structure: the sequence in which instructions are executed in a program.
– Data-flow structure: the trail of a data item as it is created or handled by a program.
– Data structure: the organization of the data itself, independent of the program.

4.2 Control-Flow Structure (1)

McCabe’s cyclomatic complexity measure
– Definition: The cyclomatic number V(G) of a graph G with n vertices, e edges, and p connected components is
  V(G) = e − n + p
– Theorem: In a strongly connected graph G, the cyclomatic number is equal to the maximum number of linearly independent circuits.

(Note: Graph excerpted from McCabe 1976.)

4.2 Control-Flow Structure (2)

Properties of cyclomatic complexity:
– V(G) ≥ 1
– V(G) is the maximum number of linearly independent paths in G; it is the size of a basis set.
– Inserting or deleting functional statements in G does not affect V(G).
– G has only one path if and only if V(G) = 1.
– Inserting a new edge in G increases V(G) by unity.
– V(G) depends only on the decision structure of G.

The control graphs of the usual constructs in structured programming. (Figure omitted.)

4.2 Control-Flow Structure (3)

The cyclomatic number is a useful indicator of how difficult a program or module will be to test and maintain. When V exceeds 10 in any one module, the module may be problematic.

Example: Channel Tunnel rail system
– A module is rejected if its V exceeds 20 or if it has more than 50 statements (Bennet 1994).
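A threshold check like the Channel Tunnel rule needs only the edge list of a module's control-flow graph. A minimal sketch, using McCabe's program form V(G) = e − n + 2p (the strongly connected form quoted earlier uses e − n + p); the example CFG is an assumed illustration:

```python
def cyclomatic(edges, p: int = 1) -> int:
    """McCabe's V(G) = e - n + 2p from a control-flow graph's edge list."""
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2 * p

def acceptable(edges, statements: int) -> bool:
    """Channel Tunnel-style gate: reject if V > 20 or > 50 statements."""
    return cyclomatic(edges) <= 20 and statements <= 50

# CFG of a single if/else: entry -> test -> (then | else) -> exit
if_else = [("entry", "test"), ("test", "then"), ("test", "else"),
           ("then", "exit"), ("else", "exit")]
print(cyclomatic(if_else))
```

One decision gives V(G) = 2 (e = 5, n = 5, p = 1), matching the rule of thumb that V is the number of binary decisions plus one.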

4.3 Modularity and Information Flow Attributes (1)

Module: a contiguous sequence of program statements, bounded by boundary elements, having an aggregate identifier (Yourdon and Constantine 1979).
– A module can be any object: a program, procedure, unit, or function.

Inter-module attributes
– Example: Design charts (excerpted from [Fenton97])

4.3 Modularity and Information Flow Attributes (2)

Module call-graph: an abstract model of the design.
– Module a calls b, c
– Module b calls d
– Module c calls d, e

4.3 Modularity and Information Flow Attributes (3)

Coupling is the degree of interdependence between modules (Yourdon and Constantine 1979). Classification for coupling (Ri > Rj for i > j):
– R0: module x and module y have no communication.
– R1 Data coupling relation: x and y communicate by parameters, where each parameter is either a single data element or a homogeneous set of data items (no control element). This type of coupling is necessary for any communication between modules.
– R2 Stamp coupling relation: x and y accept the same record type as a parameter.
– R3 Control coupling relation: x passes a parameter (flag) to y with the intention of controlling its behavior.
– R4 Common coupling relation: x and y refer to the same global data.
– R5 Content coupling relation: x refers to the inside of y.

Loose coupling: i is 1 or 2. Tight coupling: i is 4 or 5.

4.3 Modularity and Information Flow Attributes (4)

Example: Coupling-model graph
– Edge label: (n, m), where n indicates the coupling relation Rn and m is the parameter (flag)
– M1 and M2 share two common record types: R2
– M1 passes to M3 a parameter that acts as a flag in M3: R1
– M2 branches into module M4 and passes two parameters that act as flags in M4: R3 and R5

Measuring coupling between x and y (Fenton and Melton 1990):
  c(x, y) = i + n/(n + 1)
where i is the number corresponding to the worst coupling relation Ri between x and y, and n is the number of interconnections between x and y.
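The Fenton-Melton measure is a one-liner; the interesting property is that the worst relation i dominates, while the interconnection count n only nudges the value toward (but never reaches) the next level. A minimal sketch; the example numbers are an assumed illustration, not a reading of the slide's graph:

```python
def coupling(i: int, n: int) -> float:
    """Fenton-Melton coupling c(x, y) = i + n / (n + 1).

    i: index of the worst coupling relation R_i between the modules
    n: number of interconnections between the modules
    """
    return i + n / (n + 1)

# Hypothetical pair: worst relation is content coupling (R5) with
# 3 interconnections in total.
print(coupling(5, 3))
```

Here c = 5.75: however many interconnections there are, a pair whose worst relation is R5 always measures in [5, 6), so pairs are first ordered by the kind of coupling and only then by its amount.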

4.4.3 Modularity and Information Flow Attributes (5)
z Cohesion (Yourdon and Constantine 1979):
– Functional: the module performs a single well-defined function.
– Sequential: the module performs more than one function, in sequence.
– Communicational: the module performs multiple functions on the same body of data.
– Procedural: the module performs more than one function, related only to a general procedure, but not on the same body of data.
– Temporal: the module performs more than one function within the same time span.
– Coincidental: the module performs more than one function, and they are unrelated.
z Cohesion ratio = # of modules having functional cohesion / total # of modules

4.4.3 Modularity and Information Flow Attributes (6)
z Information flow (Henry and Kafura's measure, 1981):
Information flow complexity(M) = length(M) × (fan-in(M) × fan-out(M))²
– Example:
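Henry and Kafura's measure is straightforward to compute once length, fan-in, and fan-out are known; a small sketch (the module figures here are invented for illustration):

```python
def hk_complexity(length, fan_in, fan_out):
    """Henry-Kafura information flow complexity:
    length(M) * (fan-in(M) * fan-out(M)) ** 2."""
    return length * (fan_in * fan_out) ** 2

# Hypothetical module: 100 statements, fan-in 3, fan-out 2
example = hk_complexity(100, 3, 2)  # 100 * (3 * 2)**2 = 3600
```

Note that a module with zero fan-in or fan-out scores zero, so the measure only ranks modules that actually participate in information flow.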

4.4.4 Data Structure
z The amount of data
– Halstead's software science: μ2 (# of distinct operands) or N2 (total number of occurrences of operands) as the data measure
– COCOMO (Constructive Cost Model): D/P = Database size in bytes or characters / Program size in DSI, where DSI is the number of delivered source instructions.
Data rating and effort multiplier:
– Low (D/P < 10): 0.94
– Nominal (10 ≤ D/P < 100): 1.00
– High (100 ≤ D/P < 1000): 1.08
– Very high (D/P ≥ 1000): 1.16
Example:
• If DATA is rated "very high", then the cost of a project is increased by 16%.
• If DATA is low, the cost is reduced to 94%.

Exercises
1. "A good design should exhibit high module cohesion and low module coupling." Briefly describe what you understand this assertion to mean.
2. McCabe's cyclomatic number is a classic example of a software metric. Which software entity and attribute do you believe it really measures?
3. The following flowgraph is a truly unstructured "spaghetti" prime. What is the essential complexity?

5. Measuring External Product Attributes

5.1 External Attributes
z External attributes
– Software quality
– Quality impacts: functionality, time, effort
z Predicting external attributes via measuring and analyzing internal attributes, because
– Internal attributes are often easier to measure than external ones.
– The internal attributes are often available for measurement early in the life cycle, whereas external attributes are measurable only when the product is complete.

5.2 Quality Model
z ISO 9126 standard
– Functionality
– Reliability
– Usability
– Efficiency
– Maintainability
– Portability

5.3 Definitions (1)
z Functionality: the functions supplied by the product to the user
z Portability (*): "A set of attributes that bear on the capability of software to be transferred from one environment to another."
Portability = 1 – ET/ER
where ET is a measure of the resources needed to move the system to the target environment, and ER is a measure of the resources needed to create the system for the resident environment.
(*) To be described in detail in section 7.1.

5.3 Definitions (2)
z Reliability (*): "A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time."
defect density = # of known defects / product size
where product size is measured in terms of LOC, and the known defects are discovered through testing, inspection, or other techniques.
(*) To be described in detail in the next section.

z Other quality measures
system spoilage = time to fix post-release defects / total system development time
z Hitachi example:

5.3 Definitions (3)
z Efficiency
– In software, the efficiency can be expressed by the response time.
– Example: Suppose we implement the Heapsort sorting algorithm in a machine environment where comparison operations are performed at the rate of 2^20 per second. If we sort a list of n = 2^25 items, we need n·log2(n) comparisons, that is, 25 × 2^25 comparisons (log2(n) = 25 in this case). The response time of the machine must be at least 800 seconds (13.3 minutes).
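The arithmetic behind the Heapsort example can be replayed directly:

```python
n = 2 ** 25                          # number of items to sort
comparisons = 25 * n                 # n * log2(n), with log2(n) = 25
rate = 2 ** 20                       # comparisons per second
response_time = comparisons / rate   # 25 * 2**5 = 800 seconds
minutes = response_time / 60         # about 13.3 minutes
```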

5.3 Definitions (4)
z Usability: The extent to which the software product is convenient and practical to use (Boehm 1978). Good usability includes:
– Well-structured manuals
– Good use of menus and graphics
– Informative error messages
– Help functions
– Consistent interfaces

5.3 Definitions (5)
z Maintainability: Many measures of maintainability are expressed in terms of MTTR (mean time to repair). Records needed to calculate this measure:
– Problem recognition time
– Administrative delay time
– Maintenance tools collection time
– Problem analysis time
– Change specification time
– Change time (including testing and review)
z The guideline: Cyclomatic number < 10.
z Thus, maintainability in a real software system is affected by a wide range of system design decisions.

z Example: A decomposition of maintainability

Exercise
z The most commonly used software quality measure in industry is the number of faults per thousand lines of product source code. Compare the usefulness of this measure for developers and users. List some possible problems with this measure.

6. Software Reliability
• Software reliability is a key concern of many users and developers of software.
• Reliability is defined in terms of failures, so it is impossible to measure before development is complete. However, by carefully collecting data on inter-failure times, several software reliability growth models may aid the estimation.

6.1 Probability Density Function (1) z Probability density function (pdf) f(t) describes the uncertainty about when the component will fail : Probability of failure between t1 and t2 = ∫ f(t) dt t1 t2 86 © WTH07 .

6.1 Probability Density Function (2)
– Example: Uniform pdf, f(t) = 1/x for t ∈ [0, x] and 0 elsewhere; failure time is bounded.
A component has a maximum life span of 10 hours; it is certain to fail within 10 hours of use. Suppose that the component is equally likely to fail during any two time periods of equal length within the 10 hours, e.g., it is just as likely to fail in the first two minutes as in the last two minutes. We can illustrate this behavior with the pdf f(t) shown in the above figure: the function is defined to be 1/10 for any t between 0 and 10, and 0 for any t > 10. We say it is uniform in the interval of time from t = 0 to t = 10. For any x, we can define the uniform pdf over the interval [0, x] to be 1/x for any t in the interval [0, x], and 0 elsewhere.

6.1 Probability Density Function (3)
– Example: Failure time occurs purely randomly, that is, failure is statistically independent of the past. The following figure shows the pdf:
f(t) = λe^(–λt)
• The component fails in a given interval [t1, t2] with
Probability of failure between time t1 and t2 = ∫[t1, t2] f(t) dt
• For example, the probability of failure between time 0 and time 2 hours is 1 – e^(–2λ): when λ = 1 this probability of failure is ≈ 0.86, and when λ = 3 it is equal to ≈ 0.998.

6.1 Probability Density Function (4)
z The distribution function, that is, the cumulative density function F(t), is the probability of failure between time 0 and t:
F(t) = ∫[0, t] f(t) dt
z The reliability function (also called the survival function):
R(t) = 1 – F(t)

6.t]. Then f(t) = 1 for each t between 0 and 1. The cumulative density function F(t) and the survival function R(t) are shown as follows: F(t) = ∫ 0 1 dt = t 0 f(t)dt = ∫ t t R(t) = 1 – F(t) 90 © WTH07 .1 Probability Density Function (5) z Example: Distribution function and reliability function for uniform density function: Consider the pdf that is uniform over the interval [0.

6. F(t) = -e-λt R(t) = e-λt 91 © WTH07 .1 Probability Density Function (6) z Example: Distribution function and reliability function for f(t) = λe –λt.

6.2 Mean Time to Failure (1)
z The mean time to failure (MTTF): the mean of the probability density function, also called the expected value E(T) of T:
E(T) = ∫ t f(t) dt
z Examples:
– For the uniform pdf 1/x over [0, 10]: the MTTF is 5 hours.
– For the exponential pdf f(t) = λe^(–λt): the MTTF is 1/λ.

6.2 Mean Time to Failure (2)
z To fix failure after each occurrence:

6.2 Mean Time to Failure (3)
z Reliability growth: the new component should fail less often than the old one, that is, the mean inter-failure time grows: f(i+1) > f(i).
z Mean time between failures (MTBF) = MTTF + MTTR, where MTTR is the mean time to repair.
z Availability: the probability that a component is operating at a given point in time:
Availability = MTTF / (MTTF + MTTR) × 100%
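A minimal sketch of the MTBF and availability formulas (the MTTF and MTTR values are invented for illustration):

```python
mttf = 500.0   # hypothetical mean time to failure, hours
mttr = 2.0     # hypothetical mean time to repair, hours

mtbf = mttf + mttr                            # mean time between failures
availability = mttf / (mttf + mttr) * 100.0   # percent of time operating
```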

Exercises
1. Why is reliability an external attribute of software?
2. List three internal software product attributes that could affect reliability.
3. Suppose you can remove 50% of all faults resident in an operational software system. What corresponding improvements would you expect in the reliability of the system?

7. Resource Measurement

7.1 Productivity (1)
z Productivity
– Productivity equation: productivity = size (lines of code) / effort (person-months)
• Difficulty of measuring effort
– Measuring productivity based on function points: # of function points implemented / person-months
• Function points, a review: external inputs, external outputs, external inquiries, external files, internal files
• The function-based measure more accurately reflects the value of output
• It can be used to assess the productivity of software development staff at any stage in the life cycle
• Measuring progress by comparing completed function points with incomplete ones

7.1 Productivity (2)
z Example: Distribution of US software project size in function points (Jones, 1991)
[Figure: % of US software projects vs. application size in function points]

7.1 Productivity (3)
z Productivity ranges and modes for selected s/w project sizes
[Table: percentage of projects at each productivity rate in FP per P-M (>100, 75-100, 50-75, 25-50, 15-25, 5-15, 1-5, <1), for projects of 100, 1000, and 10000 function points]

7.2 Method and Tool (1)
z The original COCOMO model (*) includes two cost drivers:
– Use of software tools
– Example: For a project rated "low" in use of tools, COCOMO includes an 8% increase in project effort compared to the "normal" (value 1.00) use of tools.
Note: The COCOMO model will be described later.

7. – Tool use categories (COCOMO 2.0): 101 © WTH07 .2 Method and Tool (2) – Use of modern programming practices.

Exercise
z Other than personnel, which software development resources can be assessed in terms of productivity? How would you define and measure productivity for these entities?

8. Process Prediction

Good Estimates
z The process predictions guide decision-making, from before the development begins, through the development process, during the transition of the product to the customer, and while the software is being maintained.

8.1 Making Process Predictions
z What is an estimate?
– An ideal case: the probability density is a normal distribution.
[Figure: Estimate (median) of a normal distribution over the number of months to complete the project]
– For example, the probability of completing the project in [8 months, 16 months] is 0.9, while the probability of completing it in less than 12 months is 0.5.

8.2 Models of Effort and Cost
z Regression-based models: E = (aS^b)F; without the adjustment factor, log E = log a + b log S. F is the effort adjustment factor:
– Low experience (say < 8 years): F = 1.3
– Medium experience (8 - 10 years): F = 1.0
– High experience (> 10 years): F = 0.7

8.3 COCOMO – Effort (1)
z Constructive Cost Model (Barry Boehm, 1970s):
E = aS^b F
where E is effort in person-months, S is size in thousands of delivered source instructions (KDSI), and F is an adjustment factor (= 1 in the basic model).
z There are three models:
– Basic model: when little about the project is known.
– Intermediate model: after requirements are specified.
– Advanced model: when design is complete.

8.3 COCOMO – Effort (2)
z The values of a and b depend on the development mode:
– Organic system: data processing (using databases and focusing on transactions and data retrieval), e.g., banking and accounting systems. a = 2.4, b = 1.05
– Semi-detached system: between organic and embedded systems. a = 3.0, b = 1.12
– Embedded system: real-time software, e.g., a missile guidance system. a = 3.6, b = 1.20
z Example: Telephone switching system, which will require approximately 5000 KDSI (thousands of delivered source instructions):
E = 3.6 (5000)^1.20 ≈ 100 000 P-M

8.3 COCOMO – Effort (3)
z Cost drivers for original COCOMO
– Product attributes: required software reliability, database size, product complexity
– Process attributes: use of modern programming practices, use of software tools, required development schedule

8.3 COCOMO – Effort (4)
z Resource attributes:
– Computer attributes: execution time constraints, main storage constraints, virtual machine volatility, computer turnaround time
– Personnel attributes: analyst capability, applications experience, programmer capability, language experience, virtual machine experience

8.4 COCOMO – Duration
z The duration model: D = aE^b, where D is duration in months and E is effort in person-months. The coefficients depend on the development mode:
– Organic: a = 2.5, b = 0.38
– Semi-detached: a = 2.5, b = 0.35
– Embedded: a = 2.5, b = 0.32
z Example: the development time for a 3000 person-month embedded project:
D = 2.5 (3000)^0.32 ≈ 32 months
That is, the project requires about 94 (≈ 3000/32) staff working for 32 months to complete the software.
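Both basic-COCOMO examples, the effort for the roughly 5000-KDSI embedded switching system and the duration of a 3000 person-month embedded project, can be reproduced from the mode coefficients:

```python
# (a, b) pairs from the effort and duration mode tables
EFFORT = {"organic": (2.4, 1.05), "semi-detached": (3.0, 1.12),
          "embedded": (3.6, 1.20)}
DURATION = {"organic": (2.5, 0.38), "semi-detached": (2.5, 0.35),
            "embedded": (2.5, 0.32)}

def effort(size_kdsi, mode, f=1.0):
    """Basic COCOMO effort E = a * S**b * F, in person-months."""
    a, b = EFFORT[mode]
    return a * size_kdsi ** b * f

def duration(effort_pm, mode):
    """COCOMO duration D = a * E**b, in months."""
    a, b = DURATION[mode]
    return a * effort_pm ** b

e = effort(5000, "embedded")    # roughly 100,000 person-months
d = duration(3000, "embedded")  # roughly 32 months
staff = 3000 / d                # average staffing level
```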

8.5 COCOMO II (1)
z Boehm and his colleagues have defined an updated COCOMO, called COCOMO II, because the original COCOMO is inflexible and inaccurate for new techniques, such as use of tools, application generators, object-oriented approaches, and reengineering.

8.5 COCOMO II (2)
z COCOMO II estimation process:
– Stage 1: The project usually builds prototypes to resolve high-risk issues involving user interfaces, software and system interaction, performance, or technological maturity. Little is known about the likely size of the final product under consideration, so COCOMO II estimates size in object points.
– Stage 2: A decision has been made to move forward with development, but there is not enough information to support fine-grained effort and duration estimation. COCOMO II employs function points as a size measure, since function points estimate the functionality captured in the requirements, so they offer a richer system description than object points.
– Stage 3: This stage corresponds to the original COCOMO model, in that size can be done in terms of lines of code, and many cost factors can be estimated with some degree of comfort.

8.5 COCOMO II (3)
z Comparison of original and COCOMO II models
– Size: original COCOMO uses delivered source instructions or SLOC; Stage 1 uses object points; Stage 2 uses FP and language; Stage 3 uses FP and language, or SLOC.
– Reuse: implicit in the original model and in Stage 1; Stage 2 uses % unmodified reuse and % modified reuse, as equivalent SLOC (determined by function); Stage 3 uses equivalent SLOC as a function of other variables.
– Scale: original COCOMO uses Organic: 1.05, Semi-detached: 1.12, Embedded: 1.20; Stage 1 uses 1.0; Stages 2 and 3 use 1.02 to 1.26, depending on precedent, conformity, early architecture, SEI process maturity, etc.

Cost drivers of original COCOMO vs. COCOMO II (Stage 1 has none):
– Product cost drivers: original COCOMO uses reliability, db size, product complexity; COCOMO II uses complexity, required reusability, reliability, db size, documentation needs.
– Platform cost drivers: original COCOMO uses execution time constraints, main storage constraints, computer turnaround time; COCOMO II uses execution time constraints, main storage constraints, platform difficulty.
– Personnel cost drivers: original COCOMO uses analyst capability, applications experience, programmer capability, programming language experience; COCOMO II uses personnel capability and experience: analyst capability, applications experience, programmer capability, language and tool experience, continuity.
– Project cost drivers: original COCOMO uses use of modern programming practices, use of software tools, required development schedule; COCOMO II uses use of software tools, required development schedule, development environment.

8.6 Putnam's SLIM Model (1)
z Customer perspective at the start of a project:
– A set of general functional requirements the system is supposed to perform.
– A desired schedule.
– A desired cost.
z Certain things are unknown or very "fuzzy":
– Size of the system.
– Feasible schedule.
– Minimum person power and cost consistent with a feasible schedule.

8.6 Putnam's SLIM Model (2)
z Assuming that technical feasibility has been established, the customer really wants to know:
– Product size ± a reasonable percentage variation.
– A "do-able" schedule ± a reasonable variation.
– The person power and dollar cost for development ± a reasonable variation.
– Projection of the software modification and maintenance cost during the operational life of the system.

8.6 Putnam's SLIM Model (3)
z Four parameters concern a manager:
– Effort
– Development time
– Elapsed time
– A state-of-technology parameter
These parameters provide sufficient information to assess the financial risk and investment value of a new software development project.

8.6 Putnam's SLIM Model (4)
z Rayleigh curves for the SLIM model (*)
(*) Excerpted from [Putnam78]

8.6 Putnam's SLIM Model (5)
z SLIM uses separate Rayleigh curves for
– Design and code
– Test and validation
– Maintenance
– Management

8.6 Putnam's SLIM Model (6)
z The Effort-Time Tradeoff Law:
Size = Ck K^(1/3) td^(4/3)
where Ck is the state of technology, which depends on the development environment; e.g., Ck = 10040 for an environment with on-line interactive development, structured coding, less "fuzzy" requirements, and fairly unconstrained machine access.
z Example: The SLIM software equation implies that a 10% decrease in elapsed time results in a 52% increase in total life-cycle effort.
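The 52% figure follows from the software equation: solving Size = Ck K^(1/3) td^(4/3) for K with size and technology fixed gives K proportional to td^(-4), so cutting td by 10% multiplies effort by 0.9^(-4):

```python
# From Size = Ck * K**(1/3) * td**(4/3):
# K = (Size/Ck)**3 / td**4, so effort scales as td**(-4)
effort_ratio = 0.9 ** -4                 # td reduced to 90% of original
percent_increase = (effort_ratio - 1) * 100   # about 52%
```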

8.6 Putnam's SLIM Model (7)
z To allow effort or duration estimation, Putnam introduced the manpower-acceleration equation: D0 = K / td^3, where D0 is a constant called manpower acceleration.
– Example: the manpower acceleration is 12.3 for new software with many interfaces and interactions with other systems, 15 for stand-alone systems, and 27 for re-implementations of existing systems.
– By using the two equations (the software equation and the manpower-acceleration equation), we can derive the effort:
K = (Size/Ck)^(9/7) D0^(4/7)

8.6 Putnam's SLIM Model (8)
z The software differential equation
– The rate of accomplishment is proportional to the pace of the work times the amount of work remaining to be done, that is,
dy/dt = 2at(K – y)
where 2at is the pace and (K – y) is the work remaining to be done.
– Differentiating once more with respect to time, we get the software differential equation:
d²y/dt² + (t/td²) dy/dt + y/td² = K/td² = D

8.6 Putnam's SLIM Model (9)
z The software differential equation is very useful because it can be solved step-by-step using the Runge-Kutta method. For example, for SIDPERS, a DoD system (the Army's Standard Installation-Division Personnel System), with parameters K = 700 MY, td = 3.65, D = 52.54 MY/yr:
[Table: coding rate (size/yr in 000) and cumulative code (in 000) at half-year steps from t = 0 to t = 4.0 years; the cumulative code approaches 241.3K]
z Actual size is 256K, which is pretty close to this result.

Supplement
z The overall life-cycle manpower curve can be well represented by the Norden/Rayleigh form:
dy/dt = 2Kat e^(–at²) MY/yr
where a = 1/(2td²), td is the time at which dy/dt is a maximum, and K is the area under the curve from t = 0 to infinity, representing the nominal life-cycle effort in man-years. The cumulative number of people used by the system at any time t is:
y = K(1 – e^(–at²))
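The Norden/Rayleigh curve can be checked numerically: the staffing rate peaks at t = td, and the cumulative effort expended by td is K(1 − e^(−1/2)) ≈ 0.39K, which is the development-effort fraction that appears below:

```python
import math

def staffing_rate(t, K, td):
    """Norden/Rayleigh manpower dy/dt = 2*K*a*t*exp(-a*t*t), a = 1/(2*td**2)."""
    a = 1 / (2 * td ** 2)
    return 2 * K * a * t * math.exp(-a * t * t)

def cumulative_effort(t, K, td):
    """y(t) = K * (1 - exp(-a*t*t)): effort expended by time t."""
    a = 1 / (2 * td ** 2)
    return K * (1 - math.exp(-a * t * t))

K, td = 700.0, 3.65              # SIDPERS-like parameters
dev_fraction = cumulative_effort(td, K, td) / K   # about 0.39
```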

z Neglecting the cost of computer test time, inflation, overtime, etc., the development cost is simply the average $COST/MY times the development effort. That is,
$DEV = $COST/MY × (0.3944 K) ≈ 40% $LC
where $LC is the life-cycle cost.

Exercises
1. According to the Rayleigh curve model, what is the effect of extending the delivery date by 20%?
2. Suppose that you are developing the software for a nuclear power plant control system, assuming that the estimated software size is 10,000 delivered source instructions. Select the most appropriate mode for the project, and use the COCOMO model to give a crude estimate of the total number of person-months required for the development.

9. Object-Oriented Metrics

9.1 Object-Oriented Metrics (1)
z OO metrics can provide insight into software size.
– Number of scenario scripts (NSS): The number of scripts or use cases is directly proportional to the number of classes required to meet requirements, the number of states for each class, and the number of methods and attributes.
– Number of key classes (NKC): A key class focuses directly on the business domain for the problem and will have a lower probability of being implemented via reuse. It is suggested that between 20 and 40 percent of all classes in a typical OO system are key classes. High values for NKC indicate substantial development work.
– Number of subsystems (NSUB): The NSUB provides insight into resource allocation, schedule, and overall integration effort.

9.1 Object-Oriented Metrics (2)
z Number of scenario scripts
– Scenario scripts are written in certain OO methodologies to document and leverage the expected uses of the system.
– Scenario scripts should directly relate to satisfying requirements.
– They measure the amount of work on a project, since scripts exercise major functionality of the system being built.
– An example: Inventory management system (Initiator / Action / Participant)
• User requests item info. on InventoryQueryWin
• InventoryQueryWin sends item: aNumber to Inventory
• Inventory returns anItem to InventoryQueryWin
• InventoryQueryWin requests price from Item …

9.1 Object-Oriented Metrics (3)
– The number of scenario scripts is an indication of the size of the application to be developed. Script steps should relate to the public responsibilities of the subsystems and classes to be developed.
z Number of key classes
– Key classes are central to the business domain being developed.
– The number of key classes is an indicator of the volume of work needed in order to develop an application.
– The number of key classes is an indicator of the amount of long-term reusable objects.

9.1 Object-Oriented Metrics (4)
z How to determine whether a class is key, by asking the following questions:
– Could I easily develop applications in this domain without this class?
– Would a customer consider this object important?
– Do many scenarios involve this class?
z Examples of key classes for some problem domains:
– Retail: SalesTransaction, LineItem, Currency
– Telephony: Call, Connection, Switch
– Banking: SavingAccount, Currency, Customer

9.1 Object-Oriented Metrics (5)
z Number of support classes
– Support classes include user interface classes (UI, database, communications, collection, file, string, and so on). They are not central to the business domain.
– Support classes give us a handle on estimating the size of the effort.
– Type of user interface: the number of support classes varies from one to three times the number of key classes (in experience). The variance is primarily affected by the type of UI:
• UI-intensive projects have two to three times as many support classes as key classes.
• Non-UI-intensive projects have one to two times as many support classes as key classes.
• Example: If there are 100 key classes and a GUI is used, an early estimate might be for 300 total classes in the application.

9.1 Object-Oriented Metrics (6)
z Average number of support classes per key class
– This metric can be used to help estimate the total number of classes that will result on a project.
– User interface complexity: the number of classes required to support a complex UI, such as a GUI under Presentation Manager or Windows, will be greater than for a simple interface.
– Example: If there are 100 key classes during the first weeks of analysis, and the ratio is 2.5 based on previous projects' results, the total estimated number of classes for the final project is 100 + 250 = 350 classes.

9.1 Object-Oriented Metrics (7)
z An estimating process
1. Use analysis techniques, such as parts of speech and scenario scripts, to discover a majority of the key classes in the problem domain.
2. Categorize the type of user interface found:
• No UI: 2.0
• Simple, text-based UI: 2.25
• Graphic UI: 2.5
• Complex, drag-and-drop GUI: 3.0
3. Multiply the number of key classes by the number from step 2. This is the early estimate of the total number of classes in the final system.

9.1 Object-Oriented Metrics (8)
4. Multiply the total number of classes from step 3 by a number between 15 and 20 (person-days per class), based on factors such as
• the ratio of experienced to novice OO personnel
• the number of reusable domain objects in the reuse library
This is an estimate of the amount of effort to build the system.
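The four estimation steps can be sketched as a small function (the 100 key classes and the 18 person-days per class are illustrative choices within the stated 15-20 range):

```python
UI_MULTIPLIER = {
    "none": 2.0,
    "simple text": 2.25,
    "graphical": 2.5,
    "complex drag-and-drop": 3.0,
}

def estimate(key_classes, ui_type, person_days_per_class=18):
    """Steps 2-4: scale key classes by UI category, then by
    person-days per class (suggested range 15-20)."""
    total_classes = key_classes * UI_MULTIPLIER[ui_type]    # step 3
    effort_days = total_classes * person_days_per_class     # step 4
    return total_classes, effort_days

classes, days = estimate(100, "graphical")  # 250 classes, 4500 person-days
```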

9.1 Object-Oriented Metrics (9)
z Example from NASA's Software Engineering Laboratory (NASA-SEL).
– The preliminary results of NASA-SEL research showed that OO represented "the most important methodology studied by the SEL to date":
• The amount of reuse had risen dramatically, from 20-30% before OO to 80% with OO.
• OO programs were about 75% the length (in lines of code) of comparable traditional solutions (Stark 1993).
• But it was not clear which gains were due to Ada.

9.2 Estimating Work with Use Cases (1) [Schneider01] (*)
z Example: Order Processing System use case diagram (extracted from OOSE courseware, Chapter 4).
(*) The research by Gustav Karner of Objectory AB, 1993. Geri Schneider and Jason P. Winters, Applying Use Cases: A Practical Guide, 2nd Ed., Addison-Wesley, Boston, 2001.

9.2 Estimating Work with Use Cases (2)
z Weighting actors (Actor Type | Description | Factor):
Simple | Program interface | 1
Average | Interactive, or protocol-driven, interface | 2
Complex | Graphical interface | 3
where
• A Simple Actor represents another system with a defined application programming interface.
• An Average Actor is either another system that interacts through a protocol such as TCP/IP, or a person interacting through a text-based interface such as an ASCII terminal.
• A Complex Actor is a person interacting through a graphical user interface.

9.2 Estimating Work with Use Cases (3)
z Order Processing System
− Customer – complex
− Inventory System – simple
− Accounting System – simple
− Customer Rep – complex
− Warehouse Clerk – complex
− Shipping Company – average
So,
– 2 simple × 1 = 2
– 1 average × 2 = 2
– 3 complex × 3 = 9
– The total actor weight for OPS = 13

9.2 Estimating Work with Use Cases (4)
z Weighting use cases
– Transaction-based weighting factors (Use Case Type | Description | Factor):
Simple | 3 or fewer transactions | 5
Average | 4 to 7 transactions | 10
Complex | More than 7 transactions | 15
– Analysis class-based weighting factors (Use Case Type | Description | Factor):
Simple | Fewer than 5 analysis classes | 5
Average | 5 to 10 analysis classes | 10
Complex | More than 10 analysis classes | 15

9.2 Estimating Work with Use Cases (5)
z Order Processing System
− Create Order – average
− Check Order – simple
− Cancel Order – simple
− Modify Existing Order – average
− Confirm Order – simple
− Check Customer's Credit – simple
− Check Account – average
− Fill Order – average
− Shipping Order – simple
− Send Email – simple
So,
– 6 simple × 5 = 30
– 4 average × 10 = 40
– 0 complex × 15 = 0
– Total use case weight for OPS = 30 + 40 + 0 = 70

9.2 Estimating Work with Use Cases (6)
z Unadjusted Use Case Points (UUCP): a raw number to be adjusted to reflect the project's complexity and the experience of the people on the project:
13 + 70 = 83 UUCP for the Order Processing System.

9.2 Estimating Work with Use Cases (7)
z Weighting technical factors
– Technical factors for system and weights (Factor number | Description | Weight):
T1 | Distributed system | 2
T2 | Response or throughput performance objectives | 1
T3 | End-user efficiency (online) | 1
T4 | Complex internal processing | 1
T5 | Code must be reusable | 1
T6 | Easy to install | 0.5
T7 | Easy to use | 0.5
T8 | Portable | 2
T9 | Easy to change | 1
T10 | Concurrent | 1
T11 | Includes special security features | 1
T12 | Provides direct access for third parties | 1
T13 | Special user training facilities required | 1

9.2 Estimating Work with Use Cases (8)
z Technical factors for system and weights for our OPS (Factor number & description | Value (extended)):
T1: Distributed system (yes) | 2
T2: Response & throughput (likely limited by human input) | 3
T3: Needs to be efficient (yes) | 5
T4: Easy to process (yes) | 1
T5: Code reusability (not yet, later) | 0
T6: Easy for non-technical people (yes) | 2
T7: Easy for non-technical people (yes) | 2
T8: Portability (not at this time) | 0
T9: Easy to change (yes) | 3
T10: Concurrent (multi-user only) | 5
T11: Security (simple) | 3
T12: Direct access for 3rd parties (yes) | 1
T13: Training (no) | 0

9.2 Estimating Work with Use Cases (9)
z Technical Complexity Factor (TCF)
– Rate each factor from 0 to 5. A rating of 0 means that the factor is irrelevant for this project; 5 means it is essential.
TFactor = Σ (Tlevel × Weighting Factor)
TCF = 0.6 + (0.01 × TFactor)
– Rating the factors for National Widgets (say):
TFactor = 2 + 3 + 5 + 1 + 0 + 2 + 2 + 0 + 3 + 5 + 3 + 1 + 0 = 27
TCF = 0.6 + (0.01 × 27) = 0.87

9.2 Estimating Work with Use Cases (10)
z Environment Factor: the experience level of people on the project.
– Environmental factors for team and weights (Factor number | Description | Weight):
F1 | Familiar with UP | 1.5
F2 | Application experience | 0.5
F3 | OO experience | 1
F4 | Lead analyst capability | 0.5
F5 | Motivation | 1
F6 | Stable requirements | 2
F7 | Part-time workers | -1
F8 | Difficult programming language | -1

9.2 Estimating Work with Use Cases (11)
z Environmental factors for team and weights for our OPS (Factor number & description | Value (extended)):
F1: Most of team (unfamiliar) | 1.5
F2: Most of team (no application experience) | 0.5
F3: Most of team (no OO experience) | 1
F4: Lead analyst capability (good) | 3
F5: Motivation (really eager) | 5
F6: Stable requirements (not enough) | 5
F7: Part-time workers (none) | 0
F8: Difficult programming language, Java (looking for) | -1

9.2 Estimating Work with Use Cases (12)
– Rate each factor from 0 to 5:
• For F1 through F4: 0 means no experience, 3 average, 5 expert.
• For F5: 0 means no motivation for the project, 3 average, 5 high motivation.
• For F6: 0 means extremely unstable requirements, 3 average, 5 unchanging requirements.
• For F7: 0 means no part-time technical staff, 3 average, 5 all part-time technical staff.
• For F8: 0 means easy-to-use programming language, 3 average, 5 very difficult programming language.
– Now multiply each factor's rating by its weight from the table shown in the last slide to get the F factors:
EFactor = Σ (Flevel × Weighting Factor)
EF = 1.4 + (−0.03 × EFactor)

9.2 Estimating Work with Use Cases (13)
– The rating for OPS:
EFactor = 1.5 + 0.5 + 1 + 3 + 5 + 5 + 0 − 1 = 15
EF = 1.4 + (−0.03 × 15) = 0.95
– Use case points: UCP = UUCP × TCF × EF
– The use case points for National Widgets, giving the final estimate of time to complete the project:
UCP = 83 × 0.87 × 0.95 ≈ 68.6
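The whole computation for the Order Processing System can be replayed in a few lines (actor and use-case weights, TFactor, and EFactor as in the slides):

```python
def use_case_points(actor_weights, use_case_weights, tfactor, efactor):
    """Karner's use case points: UCP = UUCP * TCF * EF."""
    uucp = sum(actor_weights) + sum(use_case_weights)
    tcf = 0.6 + 0.01 * tfactor
    ef = 1.4 - 0.03 * efactor
    return uucp * tcf * ef

# OPS: 2 simple, 1 average, 3 complex actors; 6 simple, 4 average use cases
ucp = use_case_points([1, 1, 2, 3, 3, 3], [5] * 6 + [10] * 4, 27, 15)
hours = ucp * 28   # at 28 person-hours per UCP
```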

9.2 Estimating Work with Use Cases (14)
z Project estimate
– Suppose we use a factor of 28 person-hours per UCP. Then
68.6 × 28 ≈ 1921 hours ≈ 46 weeks at 42 hours a week for one person
• Note that the factor of 28 person-hours/UCP is used because we have 2 negative factors in "Environmental Factors for Team and Weights".
• If you have 4 people in a team, that is about 11 weeks of effort plus 2 weeks for working out any team issues. You may assume that all of the team members work full-time, so there are no problems of communication or synchronization of effort.
• The values of TCF, EF, and the factor of person-hours per UCP are different for every organization and are estimated according to experience.

10. Planning a Measurement Program

10.1 A Measurement Program (1)
z What is a metrics plan?
– Why: the plan lays out the goals or objectives of the project, describing what questions need to be answered by project members and project management.
– What: what will be measured. For instance, productivity may be measured in terms of size and effort.
– Where and When: where and when in the process the measurement will be made. Some measurements are taken once, while others are made repeatedly and tracked over time.
– How and Who: the identification of the tools, techniques and staff available.

10.1 A Measurement Program (2)
z Why and what: developing Goals-Questions-Metrics (GQM):
– The GQM templates encourage managers to consider the context in which a question is being asked, as well as the viewpoint of the question.
– By deriving measurement from goals and questions, it is clear how the resulting data points will be related to one another and used in the larger context of the project.

10.1 A Measurement Program (3)
z Example: the importance of understanding the effects of tool use on productivity. Suppose that evaluating tool use is one of the major goals for a project. Several questions derive from the goals, including:
– Which tools are used?
– Who is using each tool?
– How much of the project is affected by each tool?
– How much experience do developers have with the tool?
– What is productivity with tool use? Without tool use?
– What is product quality with tool use? Without tool use?

10.1 A Measurement Program (4)
z Example: Suppose that a project involves developing a complex piece of software, including a large database of sensor data. The sensor data capture routines are being written in C, while the data storage and manipulation routines are being written in Smalltalk.
z The project manager wants to track code productivity. His/her metrics plan includes a GQM-derived question: Is the productivity for C development the same as for Smalltalk development? Productivity will be measured as size per person-day. There are three ways to count size:
– A count of lines of code.
– A count of objects and methods (operations).
– A count of function points.
z Thus, the goals tell us why we are measuring, and the questions, metrics, and models tell us what to measure.
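A sketch of how the manager's GQM-derived question might be answered once data has been collected. The slide defines productivity as size per person-day but gives no data, so the figures below are hypothetical:

```python
def productivity(size, person_days):
    """Productivity as size per person-day. `size` may be a count of LOC,
    of objects and methods, or of function points, per the options above."""
    return size / person_days

# Hypothetical figures for the two subsystems:
c_productivity = productivity(12_000, 400)          # 30 LOC/person-day (C)
smalltalk_productivity = productivity(9_000, 250)   # 36 LOC/person-day (Smalltalk)
```

Whatever size measure is chosen, it must be applied consistently across both languages for the comparison to mean anything.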

10.2 CMM (1)
z Capability Maturity Model (CMM):
– Level 1 Initial: Few processes are defined; the success of development depends on individual efforts, not on team accomplishment.
– Level 2 Repeatable: Basic project management processes track cost, schedule and functionality. There is some discipline among team members; success on earlier projects can be repeated with similar, new ones.
– Level 3 Defined: Management and engineering activities are documented, standardized and integrated; the result is a standard process for everyone in the organization. The standard process is tailored to special needs.
– Level 4 Managed: A managed process directs its efforts at product quality. The organization focuses on using quantitative information to make problems visible and to assess the effect of possible solutions.
– Level 5 Optimizing: Quantitative feedback is incorporated in the process to produce continuous process improvement. New tools and techniques are tested and monitored to see how they affect the process and products.

10.2 CMM (2)
z Key process areas in the CMM model (Paulk 1995):
– Initial: none
– Repeatable:
• Requirements management
• Software project planning
• Software project tracking and oversight
• Software subcontract management
• Software quality assurance
• Software configuration management

10.2 CMM (3)
– Defined:
• Organization process focus
• Organization process definition
• Training program
• Integrated software management
• Software product engineering
• Intergroup coordination
• Peer reviews
– Managed:
• Quantitative process management
• Software quality management
– Optimizing:
• Defect prevention
• Technology change management
• Process change management

10.2 CMMI (4)
z Capability Maturity Model Integration (CMMI).
From Mike Phillips: CMMI V1.1 Tutorial.

10.2 CMMI (5)

Level | Focus | Process Areas
5 Optimizing | Continuous process improvement | Organizational innovation and deployment; Causal analysis and resolution
4 Quantitatively managed | Quantitative management | Organizational process performance; Quantitative project management
3 Defined | Process standardization | Requirements development; Technical solution; Product integration; Verification; Validation; Organization process focus; Organization process definition; Organizational training; Integrated project management; Risk management; Integrated teaming; Integrated supplier management; Decision analysis and resolution; Organizational environment for integration
2 Managed | Basic project management | Requirements management; Project planning; Project monitoring and control; Supplier agreement management; Measurement and analysis; Process and product quality assurance; Configuration management
1 Performed | None |

10.3 Extreme Programming (XP) and CMM (1)
z XP's target is small to medium-sized teams (fewer than 10 people) building software with vague or rapidly changing requirements.
z The XP life cycle has four basic activities:
– Continual communication with the customer and within the team.
– Simplicity, achieved by a constant focus on minimalist solutions.
– Rapid feedback through mechanisms such as unit and functional tests.
– The courage to deal with problems proactively.

10.3 Extreme Programming (XP) and CMM (2)
z The XP method consists of 12 basic elements:
– Planning game: Quickly determine the next release's scope, combining business priorities and technical estimates.
– Small releases: Put a simple system into production quickly. Release new versions on a very short (say 2-week) cycle.
– Metaphor: Guide all development with a simple, shared story of how the overall system works.
– Simple design: Design as simply as possible at any given moment.
– Testing: Developers continually write unit tests that must run flawlessly, that is, "test, then code"; customers write tests to demonstrate that functions are finished.
– Refactoring: Restructure the system without changing its behavior to remove duplication, simplify, improve communication, or add flexibility.

– Pair programming: All production code is written by two programmers at one machine.
– Collective ownership: Anyone can improve any system code anywhere at any time.
– Continuous integration: Integrate and build the system many times a day (every time a task is finished). Continual regression testing when requirements change.
– 40-hour weeks: Work no more than 40 hours per week whenever possible; never work overtime two weeks in a row.
– On-site customer: Have an actual user on the team full-time to answer questions.
– Coding standards: Have rules that emphasize communication throughout the code.

z XP satisfaction of key process areas:

Level | Key process area | Satisfaction
2 | Requirements management | ++
2 | Software project planning | ++
2 | Software project tracking and oversight | ++
2 | Software subcontract management | --
2 | Software quality assurance | +
2 | Software configuration management | +
3 | Organization process focus | +
3 | Organization process definition | +
3 | Training program | --
3 | Integrated software management | --

3 | Software product engineering | ++
3 | Intergroup coordination | ++
3 | Peer reviews | ++
4 | Quantitative process management | --
4 | Software quality management | --
5 | Defect prevention | +
5 | Technology change management | --
5 | Process change management | --

Note: ++ largely addressed in XP; + partially addressed in XP; -- not addressed in XP.

z Note that CMM supports a range of implementations through 18 key process areas (KPAs) and 52 goals that comprise the requirements for a fully mature software process.
z XP is targeted toward small teams working on small to medium-sized projects. If systems grow, some XP practices become more difficult to implement.

10.4 Measurement in Practice (1)
z Motorola improvement program:
– A product is built in the shortest time and at the lowest cost if no mistake is made in the process.
– If no defect can be found anywhere in the development process, then the customer is not likely to find one either.
– The more robust the design, the lower the inherent failure rate.

10.4 Measurement in Practice (2)
z Example: the process phases and steps of Siemens Nixdorf AG:

Process phase | Process steps
Planning and high-level design | Requirements study; Solution study; Functional design; Interface design; Detailed project plan
Detailed design and implementation | Component design; Code design; Coding; Component test
Quality control | Functional test; Product test; System test; Pilot installation and test
Installation and maintenance | Customer installation

10.4 Measurement in Practice (3)
z Siemens metrics:
– Quality metrics:
• Number of defects counted during code review, quality control, pilot test, and first year of customer installation / KLOC.
• Total number of field problem reports received.
• Total number of defects received per fiscal year.
• Maintenance cost / # of defects.
– Productivity metrics:
• KLOC / development effort in staff-months.
• KLOC / development time in months.
• Total gross lines of code delivered to customers / total staff-months.
• Development cost / # of KLOC.
– Profitability metrics:
• Sales in DM / software development cost for FY and product line.
• Sales in DM / total cost for development, quality control, maintenance and marketing for FY and product line.

10.4 Measurement in Practice (4)
z Hitachi Software Engineering (HSE): 98% of the projects were completed on time, and 99% of the projects cost between 90 and 110% of the original estimate.

10.4 Measurement in Practice (5)
z HP: Effects of reuse on software quality.

10.4 Measurement in Practice (6)
z HP: Effects of reuse on software productivity.

10.5 Successful Metrics Program (1-1)
z Recommendations for a successful metrics program (Rifkin and Cox 91):

Pattern type | Recommendation
Measures | Use a rigorously defined set; Automate collection and reporting
People | Staff small; Motivate managers; Set expectations; Involve all stakeholders; Educate and train; Earn trust

10.5 Successful Metrics Program (1-2)

Pattern type | Recommendation
Program | Take an evolutionary approach; Plan to throw one away; Get the right information to the right people; Strive for an initial success; Add value; Empower developers to use measurement information
Implementation | Take a "whole process" view; Understand that adoption takes time

10.5 Successful Metrics Program (2)
z Key benefits of measurement programs:
– They support management planning by providing insight into product development and by quantifying trade-off decisions.
– They support understanding of both the development process and the development environment.
– They highlight areas of potential process improvement and characterize improvement efforts.

10.6 Lessons Learned (1)
z As software becomes more pervasive and software quality more critical, measurement programs will become more necessary.
z Rubin's report (1990): Among 300 major US IT companies (no less than 100 IT staff), sixty succeeded in implementing measurement programs. Success means:
– The measurement program results were actively used in decision making.
– The results were communicated and accepted outside of the IT department.
– The program lasted longer than two years.

10.6 Lessons Learned (2)
z Reasons for failure in the remaining 240 companies:
– Management did not clearly define the purpose of the program and later saw the measures as irrelevant.
– Program reports failed to generate management action.
– Management withdrew support for the program.
– Project staff were already burdened and saw data collection as added work.
– Systems professionals resisted the program, perceiving it as a negative commentary on their performance.

10.6 Lessons Learned (3)
z Steps to success for a measurement program (Grady and Caswell):
– Define the company and project objectives for the program.
– Assign responsibilities for each activity.
– Do research.
– Define the metrics to collect.
– Sell the initial collection of these metrics.
– Get tools for automatic data collection and analysis.
– Establish a training class in software measurement.
– Publicize success stories and encourage exchange of ideas.
– Create a metrics database.
– Establish a mechanism for changing the standard in an orderly way.

10.7 Size, Structure, and Quality (by Examples)
z Fault density by Akiyama (Fujitsu, 1971):
  d = 4.86 + 0.018 L
  where d is the number of predicted faults and L is the size in LOC. For instance, a module of 1000 lines of code will be expected to have approximately 23 faults.
z Lipow used Halstead's theory to define a relationship between fault density and size:
  d/L = A0 + A1 ln L + A2 (ln L)^2
  where d is the number of faults, L is the size in LOC, and each Ai depends on the average number of uses of operators and operands per line of code for a particular language. For instance, for Fortran A0 = 0.0047, A1 = 0.0023, and A2 = 0.000043; for assembly language the values are 0.0012, 0.0001, and 0.000002, respectively.
z Gaffney argued that the relationship between d and L was not language dependent, thus
  d = 4.2 + 0.0015 L^(4/3)
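The three models above transcribe directly into code (the function names are ours; the constants are the slide's):

```python
import math

def akiyama(loc):
    """Akiyama (1971): predicted faults d = 4.86 + 0.018 * L."""
    return 4.86 + 0.018 * loc

def lipow(loc, a0, a1, a2):
    """Lipow: fault density d/L = A0 + A1*ln(L) + A2*(ln(L))**2."""
    return a0 + a1 * math.log(loc) + a2 * math.log(loc) ** 2

def gaffney(loc):
    """Gaffney: language-independent d = 4.2 + 0.0015 * L**(4/3)."""
    return 4.2 + 0.0015 * loc ** (4 / 3)

# For a 1000-line module: Akiyama predicts ~23 faults, Gaffney ~19.2.
fortran_density = lipow(1000, 0.0047, 0.0023, 0.000043)  # faults per LOC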

10.8 Object Orientation
z Example from NASA's Software Engineering Laboratory (NASA-SEL).
– The preliminary results of NASA-SEL research showed that OO represented "the most important methodology studied by the SEL to date":
• The amount of reuse had risen dramatically, from 20-30% before OO to 80% with OO.
• OO programs were about 75% the length (in lines of code) of comparable traditional solutions (Stark 1993).
• But it was not clear which gains were due to Ada.

11. Software Estimation

Note: The materials in this chapter are excerpted from [McConnell06]. If readers are interested in software estimation, we strongly recommend they read McConnell's book to obtain more solid knowledge about it.

11. not to predict a project’s outcome.” (*) – The primary purpose of software estimation is to determine whether a project’s targets are realistic enough that the project can be controlled to meet the them. 183 © WTH07 . (*) [McConnel06] page 14.1 A Good Estimate z Definition of a good estimation. – “A good estimate is an estimate that provides a clear enough view of the project reality to allow the project leadership to make good decisions about how to control the project to hit its targets.

11.2 Estimate Influences (1)
z Project Size.
– Project size is easily the most significant determinant of effort, cost, and schedule.
• Relationship between project size and productivity (*):

Project Size (in LOC) | LOC per Staff Year (COCOMO II nominal in parentheses)
10K | 2,000-25,000 (3,200)
100K | 1,000-20,000 (2,600)
1M | 700-10,000 (2,000)
10M | 300-5,000 (1,600)

(*) [McConnell06], Table 5-1.

11. 185 © WTH07 .2 Estimate Influences (2) z Personnel Factors. – Personnel factors exert significant influence on project outcomes.000 LOC project: Best Rank Requirements analysis capability Programmer capability (general) Personnel continuity (turnover) Applications (business area) experience Language and tools experience Platform experience Team cohesion -29% -24% -19% -19% -16% -15% -14% Worst Rank 42% 34% 29% 22% 20% 19% 11% Compare with nominal Example: the project with worst requirements analysis would require 42% more effort than nominal. – According to COCOMO II: a 100.

11.2 Estimate Influences (3)
z Programming Language Factors.
– The team's experience with the specific language and tools used on the project has about a 40% impact on the overall productivity rate of the project.
– Some languages generate more functionality per line of code than others. Using Java, C#, C++, or VB would tend to be more productive than using C, Cobol, or Macro Assembly:

Language | Level Relative to C
C | 1 to 1
C# | 1 to 2.5
C++ | 1 to 2.5
Cobol | 1 to 1.5
Fortran 95 | 1 to 2
Java | 1 to 2.5
Macro Assembly | 2 to 1
Smalltalk | 1 to 6
SQL | 1 to 10
Visual Basic | 1 to 4.5

Note: If you don't have a choice about the programming language, this point is not relevant to your estimate.
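The table can be read as a rough conversion: functionality that takes N lines of C needs roughly N divided by the language's level. A sketch (ratios from the table; the 10,000-line figure is hypothetical):

```python
# Language levels relative to C, from the table above. "2 to 1" for Macro
# Assembly means two assembly lines per C line, i.e. a level of 0.5.
LEVEL_RELATIVE_TO_C = {
    "C": 1.0, "C#": 2.5, "C++": 2.5, "Cobol": 1.5, "Fortran 95": 2.0,
    "Java": 2.5, "Macro Assembly": 0.5, "Smalltalk": 6.0, "SQL": 10.0,
    "Visual Basic": 4.5,
}

def equivalent_loc(c_loc, language):
    """Approximate LOC needed in `language` for functionality taking c_loc lines of C."""
    return c_loc / LEVEL_RELATIVE_TO_C[language]

equivalent_loc(10_000, "Smalltalk")       # about 1,667 lines
equivalent_loc(10_000, "Macro Assembly")  # about 20,000 lines
```

This is why size-in-LOC estimates must name the language: the same functionality spans an order of magnitude in line count.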

11.3 Estimates, Targets, and Commitments

Estimation on software projects interacts with business targets, commitments, and control.
– Estimate: a prediction of how long a project will take and how much it will cost. – Target: a statement of a desirable business objective.
• Example: “We must limit the cost of the next release to $2 million, because that is the maximum budget we have for the release.”

– Commitment: a promise to deliver defined functionality at a specific level of quality by a promised date. – Control: typical activities are to remove non-critical requirements, to redefine requirements, and to replace less-experienced staff with more-experienced staff.


11.4 The Probability of Software Delivery

The probability of a software project delivering on or before a particular date.


11.5 Estimation Error (1)

The Cone of Uncertainty (adapted from [McConnell06], Figure 4-4)


11.5 Estimation Error (2)

Estimation error (scoping error) by software development activity, according to the figure in the last slide [Boehm00]:

Phase | Possible Error on Low Side | Possible Error on High Side | Range of High to Low Estimates
Initial Concept | 0.25x (-75%) | 4.0x (+300%) | 16x
Approved Product Definition | 0.50x (-50%) | 2.0x (+100%) | 4x
Requirements Complete | 0.67x (-33%) | 1.5x (+50%) | 2.25x
User Interface Design Complete | 0.80x (-20%) | 1.25x (+25%) | 1.6x
Detailed Design Complete (for sequential projects) | 0.90x (-10%) | 1.10x (+10%) | 1.2x
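The table's multipliers can be applied mechanically to a single-point estimate; a sketch (the 20-staff-month nominal is hypothetical):

```python
# (low multiplier, high multiplier) by phase, from the table above.
CONE_OF_UNCERTAINTY = {
    "Initial Concept": (0.25, 4.0),
    "Approved Product Definition": (0.50, 2.0),
    "Requirements Complete": (0.67, 1.5),
    "User Interface Design Complete": (0.80, 1.25),
    "Detailed Design Complete": (0.90, 1.10),
}

def estimate_range(nominal, phase):
    """Return (low, high) bounds for a nominal estimate made at `phase`."""
    low_mult, high_mult = CONE_OF_UNCERTAINTY[phase]
    return nominal * low_mult, nominal * high_mult

low, high = estimate_range(20, "Initial Concept")  # (5.0, 80.0) staff-months
ratio = high / low                                 # 16x range
```

The point of the exercise is that an early single-point number is meaningless without its range; quoting the bounds communicates the real uncertainty.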

11.5 Estimation Error (3)

Chaotic Development Processes.
– Common examples of project chaos.
• No well-investigated requirements in the first place.
• Lack of end-user involvement in requirements validation.
• Poor design.
• Poor coding practices.
• Inexperienced personnel.
• Incomplete or poor project planning.
• Prima donna team members.
• Abandoning planning under pressure.
• Gold-plating developers.
• Lack of automated source code control.


11.5 Estimation Error (4)
z Unstable Requirements.
– The challenges of unstable requirements:
• Requirements changes are often not tracked, and the project is often not re-estimated.
• If requirements cannot be stabilized, estimate variability will remain high through the end of the project.
– So, in those cases, consider development approaches that are designed to work in short iterations (agile methods), such as Scrum, Extreme Programming, etc.
• Scrum: an agile method with strong promotion of self-organizing teams, daily team measurement, and avoidance of following predefined steps. Key practices: 30-calendar-day iterations, a daily stand-up meeting with special questions, a demo to external stakeholders at the end of each iteration, and time-box development.

12. Agile Estimating (*)

In this additional chapter, we want to introduce agile estimating, because using agile methods to develop software systems has become more popular today. From my viewpoint, the agile method is something different from traditional software development methods such as the Unified Process. However, this chapter can be referred to as a supplement to this Software Metrics courseware.

(*) Excerpted from Mike Cohn, User Stories Applied: For Agile Software Development, Addison-Wesley, Boston, 2004.

12.1 Agility
z "Agility is dynamic, context-specific, aggressively change-embracing, and growth-oriented." (Goldman 1997)
z An agile process is both light and sufficient:
– Lightness: staying maneuverable.
– Sufficiency: a matter of staying in the game, i.e., delivering software.
z Agile Software Development Manifesto:
– "Individuals and interactions over processes and tools."
– "Working software over comprehensive documentation."
– "Customer collaboration over contract negotiation."
– "Responding to change over following a plan."

12.2 User Stories (1)
z User stories.
– A user story is a description of functionality that will be valuable to either a user or a purchaser of a system.
– Examples: buying books through the Internet (*):
• A user can search for books by author, title or ISBN number.
• A user can view detailed information on a book, for example, number of pages, publication date and contents.
• A user can put books into a "shopping cart" and buy them when she is done shopping.
• A user can remove books from her cart before completing an order.
• A user enters her billing address, the shipping address and credit card information.
• A user can establish an account that remembers shipping and billing information.
• A user can edit her account information (credit card, shipping address, billing address and so on).
• ...

(*) Mike Cohn, User Stories Applied: For Agile Software Development, Addison-Wesley, Boston, 2004.

12.2 User Stories (2)

Some comments on stories:
– Using story cards, each of which contains a short description of user- or customer-valued functionality. – The customer team writes the story cards. – Stories are prioritized based on their value to the organization. – Releases and iterations are planned by placing stories into iterations. – Velocity is the amount of work the developers can complete in an iteration. – The sum of the estimates of the stories placed in an iteration cannot exceed the velocity the developers forecast for that iteration. – If a story won't fit in an iteration, you can split the story into two or more smaller stories (next slide). – User stories are worth using because they emphasize verbal communication.

12.2 User Stories (3)

Disaggregating into tasks
– Stories should be small enough to serve as units of work; if not, a story may be disaggregated into tasks.
• Example: The story "A user can search for a hotel on various fields" might be turned into the following tasks:
– Code basic search screen
– Code advanced search screen
– Code results screen
– Write and tune SQL to query the database for basic search
– Write and tune SQL to query the database for advanced search
– Document new functionality in help system and user's guide


12.3 Story Points

Story points.
– A story point as an ideal day of work (that is, a day without any interruptions whatsoever). – A story point as an ideal week of work. – A story point as a measure of the complexity of the story.


12.4 Estimating (1)

Estimate as a team
– Gather together the customer and the developers who will participate in creating the estimates. – Estimate, and converge on a single estimate that can be used for the story.


12.4 Estimating (2)

Using story points
– Use the term velocity to refer to the number of story points a team completes in an iteration.
• Suppose a project comes up with a total of 300 story points. If the estimators can complete 50 story points in each iteration, they will finish the project in a total of 6 iterations. They may plan on maintaining their measured velocity of 50 based on three conditions: – Nothing unusual affected productivity this iteration. – The estimates need to have been generated in a consistent manner (using a team estimate process). – The stories selected for the first iteration must be independent.
» The sum of a number of independent samples from any distribution is approximately normally distributed.


12.4 Estimating (3)
z The ways of estimating a team's initial velocity:
– Use historical values.
– Take a guess.
– Run an initial iteration and use the velocity of that iteration.
z From story points to expected duration:
– Suppose the team sums the estimates from each card and comes up with 100 story points, which are converted into a predicted duration for the project using velocity, which represents the amount of work that gets done in an iteration.
– For example, if we estimate a project at 100 ideal days (story points) with a velocity of 25, we estimate that it will take 100/25 = 4 iterations to complete the project.

12.4 Estimating (4)
z Question: Assuming one-week iterations and a team of four developers, how many iterations will it take the team to complete a project with 27 story points if they have a velocity of 4?
– Answer: With a velocity of 4 and 27 story points in the project, it will take the team 7 iterations to finish. (Note that the number of iterations is an integer.)
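Both worked examples reduce to one formula: iterations round up to a whole number. A minimal sketch:

```python
import math

def iterations_needed(total_story_points, velocity):
    """Whole iterations needed; a partial final iteration still counts as one."""
    return math.ceil(total_story_points / velocity)

iterations_needed(100, 25)  # 4, the previous slide's example
iterations_needed(27, 4)    # 7, since 27/4 = 6.75 rounds up
```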

12.4 Estimating (5)
z Responsibilities
– Developers:
• Defining story points.
• Giving honest estimates.
• Estimating as a team.
• All two-point stories should be similar.
– Customer:
• Participating in estimation meetings.
• Playing the role of answering questions and clarifying stories.
• Don't estimate stories yourself.
z Question: If three programmers individually estimate the story at two, four, and five story points, which estimate should they use?
– Answer: They should continue discussing the story until their estimates get closer.

12.5 Measuring and Monitoring Velocity (1)
z The team completes stories during an iteration. Velocity is the sum of the story points for the stories completed in the iteration. For example:

Story | Story Points
A user can ... | 4
A user can ... | 3
A user can ... | 5
A user can ... | 3
Velocity | 15

– The team's velocity is 15.
– Note that partially completed stories cannot be included in velocity calculations.
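Measuring velocity from a story list, honoring the rule that partial work does not count; a sketch with the table's values:

```python
def velocity(stories):
    """Sum the points of finished stories only; partially done stories count 0."""
    return sum(points for points, finished in stories if finished)

# The four finished stories from the table above:
iteration = [(4, True), (3, True), (5, True), (3, True)]
velocity(iteration)  # 15

# Adding a half-finished 3-point story leaves velocity at 15:
velocity(iteration + [(3, False)])  # 15
```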

12. – Planned and actual velocity after the first three iteration – actual velocity is slightly less than planned velocity. 205 © WTH07 .5 Measuring and Monitoring Velocity (2) z Planned and Actual Velocity – To graph planned and actual velocity for each iteration is a good way to monitor whether actual velocity is deviating from planned velocity.

12.5 Measuring and Monitoring Velocity (3)
z The answer: the figure on the last page plus the cumulative story point graph.
z The cumulative story point chart (above) shows the total number of story points completed through the end of each iteration.

12.5 Measuring and Monitoring Velocity (4)
z Iteration burndown charts
– An iteration burndown chart shows the amount of work, expressed in story points, remaining at the end of each iteration.

12.5 Measuring and Monitoring Velocity (5)
z Progress and changes during four iterations (an example).
– Stories will come and go, stories will change size, and stories will change in importance. The team actually completed 45-10-18=17, so 113 story points still remain.
– To be noted: a strength of agile software development is that a project can begin without a lengthy, upfront, complete specification of the project's requirements.

12.5 Measuring and Monitoring Velocity (6)
z Burndown chart for the project in the last slide.
– From the slope of the burndown line after the 1st iteration, the project would not be finished after 3 iterations.

12. – A daily burndown chart shows the estimated number of hours left (not hours expended) in the iteration. – Example: The following chart shows a daily tracking of the hours remaining in an iteration: Reflects the amount of work remaining.5 Measuring and Monitoring Velocity (7) z Burndown charts during an iteration. 210 © WTH07 .

12.5 Measuring and Monitoring Velocity (8)
z Question: What conclusions should you draw from the following figure? Does the project look like it will finish ahead, behind, or on schedule?
z Answer: The team started out a little better than anticipated in the first iteration. They expected velocity to improve in the second and third iterations and then stabilize. After two iterations they have already achieved the velocity they expected after three iterations. At this point they are ahead of schedule, but you should be reluctant to draw too many firm conclusions after only two iterations.

12. 212 © WTH07 .5 Measuring and Monitoring Velocity (9) z Question: What is the velocity of the team that finished the iteration shown in the following table? Story Story 1 Story 2 Story 3 Story 4 Story 5 Story 6 Story 7 Velocity Story Points 4 3 5 3 2 4 2 23 Status Finished Finished Finished Half finished Finished Not started Finished z Answer: 16. Partially completed stories do not contribute to velocity.

12.5 Measuring and Monitoring Velocity (10)
z Question: Complete the following table by writing the missing values into the table.

| Iter-1 | Iter-2 | Iter-3
Story points at start of iteration | 100 | ? | ?
Completed during iteration | 35 | 40 | 36
Change in estimate | 5 | -5 | 0
Story points from new stories | 6 | 3 | 2
Story points at end of iteration | 76 | ? | ?

12.5 Measuring and Monitoring Velocity (11)
z Answer:

| Iter-1 | Iter-2 | Iter-3
Story points at start of iteration | 100 | 76 | 34
Completed during iteration | 35 | 40 | 36
Change in estimate | 5 | -5 | 0
Story points from new stories | 6 | 3 | 2
Story points at end of iteration | 76 | 34 | 0
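The table's bookkeeping is: points at end = points at start - completed + estimate changes + new stories. A sketch reproducing the answer:

```python
def end_of_iteration(start, completed, estimate_change, new_stories):
    """Story points remaining: start - completed + estimate change + new stories."""
    return start - completed + estimate_change + new_stories

iter1 = end_of_iteration(100, 35, 5, 6)     # 76
iter2 = end_of_iteration(iter1, 40, -5, 3)  # 34
iter3 = end_of_iteration(iter2, 36, 0, 2)   # 0
```

Each iteration's ending balance becomes the next iteration's starting balance, which is exactly what a release burndown chart plots.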

12.6 Release Plan
z The steps in planning a release [Cohn06]:
– Determine conditions of satisfaction: defined by a combination of schedule, scope, and resource goals.
• Date-driven: the product must be released by a certain date, for which the feature set is negotiable.
• Feature-driven: consider the completion of a set of features.
– Do in any sequence:
• Estimate the user stories: estimate each new feature that has some reasonable possibility of being selected for inclusion in the upcoming release.
• Select an iteration length: mostly two to four weeks for most agile teams.
• Estimate velocity: make an informed estimate of velocity based on past results.
• Prioritize user stories: prioritize the features the product owner wants to develop.
– Select stories and a release date: estimate the team's velocity per iteration and assume the number of iterations.
– Iterate until the conditions of satisfaction for the release can best be met.

12.7 What User Stories Are Not
z User stories are different from IEEE 830 software requirements specifications.
– Documenting a system's requirements following IEEE 830 is tedious, error-prone, and very time-consuming.
z User stories are not use cases.
– One of the most obvious differences between stories and use cases is their scope, though they are similar to each other.
– Both differ in the level of completeness.
– Use cases are often intended as permanent artifacts of a project. User stories are not.
z User stories are not scenarios.
– Use case scenarios are much more detailed than user stories.
– A scenario often describes a broader scope than does a user story.

12.8 Why User Stories?
z User stories
– emphasize verbal communication,
– are comprehensible by everyone,
– are the right size for planning,
– work for iterative development,
– encourage deferring detail,
– support opportunistic design,
– encourage participatory design,
– build up tacit knowledge, and
– are good for expressing requirements.
z Drawbacks to using user stories:
– On a large project it can be difficult to keep hundreds or thousands of stories organized.
– Communication cannot scale adequately to entirely replace written documents on large projects.

Some Comments
z Martin Fowler and Kent Beck: asking a developer for a percentage of completeness for a task generates a nearly meaningless answer.
– Developers are often 90% complete in a matter of days, 95% complete in a month, 99% complete in six months, and leave the work "99.9% complete".
z As a manager, what you can do:
– Don't ask teams for a percentage of completeness; instead, ask the teams what percentage of the features or user stories are complete.
– Feature Driven Development (FDD) (*) uses the percentage of completeness of each feature to produce summary progress reports.

(*) Refer to [Palmer02] and the Supplement to Section 12.8.

12.9 An Example: User Story for Sailing Books (*) (1)
z A user can search for books by author, title or ISBN number.
z A user can view detailed information on a book. For example, number of pages, publication date and a brief description.
z A user can put books into a "shopping cart" and buy them when she is done shopping.
z A user can remove books from her cart before completing an order.
z To buy a book the user enters her billing address, the shipping address and credit card information.
z A user can rate and review books.
z A user can establish an account that remembers shipping and billing information.
z A user can edit her account information (credit card, shipping address, billing address and so on).

(*) [Cohn06].

12.9 An Example: User Story for Sailing Books(*) (2)
z A user can put books into a "wish list" that is visible to other site visitors.
z A user can place an item from a wish list (even someone else's) into his or her shopping cart.
z A repeat customer must be able to find one book and complete an order in less than 90 seconds. (Constraint)
z A user can view a history of all of his past orders.
z A user can easily re-purchase items when viewing past orders.
z The site always tells a shopper what the last 3 items she viewed are and provides links back to them.
z A user can see what books we recommend on a variety of topics.
z A user, especially a Non-Sailing Gift Buyer, can easily find the wish lists of other users.
220 © WTH07

12.9 An Example: User Story for Sailing Books(*) (3)
z A user can choose to have items gift wrapped.
z A Report Viewer can see reports of daily purchases broken down by book category, traffic, best- and worst-selling books, and so on.
z A user must be properly authenticated before viewing reports.
z Orders made on the website have to end up in the same order database as telephone orders. (Constraint)
z An administrator can add new books to the site.
z An administrator needs to approve or reject reviews before they are available on the site.
z An administrator can delete a book.
z An administrator can edit the information about an existing book.
221 © WTH07

12.9 An Example: User Story for Sailing Books(*) (4)
z A user can check the status of her recent orders. If an order hasn't shipped, she can add or remove books, and change the shipping method, the delivery address, and the credit card.
z The system must support peak usage of up to 50 concurrent users. (Constraint)
222 © WTH07

Supplement to Section 12.8 (1)
z Feature-Driven Development – the Processes
– Develop an Overall Model → an object model + notes
– Build a Feature List → a list of features grouped into sets and subject areas
– Plan by Feature → a development plan; class owners; feature set owners
– Design by Feature → a design package (sequence diagrams; add more content to the object model)
– Build by Feature → completed client-valued function
Note: Readers who are interested in FDD may refer to [Palmer02].
223 © WTH07

Supplement to Section 12.8 (2)
z Feature
– A feature is a very specific, small, client-valued function expressed in the form: <action> the <result> <by|for|of|to> (an) <object>
• Small: 1-10 days of effort are required to complete the feature. Most are 1-3 days.
• Client valued: the feature is relevant and has meaning to the business.
– Examples:
• Calculate the total of a sale. (a calculateTotal() operation in a Sale class)
• Validate the PIN number for a bank account. (a validate() operation on a BankAccount class)
• Authorize a loan for a customer. (an authorize() operation in a Customer class)
224 © WTH07

Review Articles
R1: GQM Trees
R2: Software Cost Estimation
R3: Function Points
R4: COCOMO Model
R5: Putnam Model
R6: Software Science Measurements
225 © WTH07

R1: GQM Trees
z Example: GQM tree on software reliability (figure)
226 © WTH07

R1: GQM Trees
z Example: GQM tree on software reliability (figure, continued)
227 © WTH07

R2: Software Cost Estimation (1)
z Special attention to software cost estimation:
– There are no two identical systems or projects.
– The uncertainty about cost estimates is usually quite high. Estimation mistakes lead either to overestimation or underestimation.
– With the increased size of software projects, any estimation mistake can cost a lot in terms of resources allocated to the project.
• Uncertainty is reduced over the course of a software project.
228 © WTH07

R2: Software Cost Estimation (2)
z Methods of cost estimation (from Kitchenham 1994):
– Expert opinion. Estimation based on personal experience, exercising judgment based on previous projects.
– Analogy. Use the estimates obtained from previous projects and apply them to the current project.
– Decomposition. Breaking a product up into its smallest components, or decomposing a project into its smallest subtasks, and estimating those.
– PERT models.
Effort = (lower estimate + 4 * most likely estimate + upper estimate) / 6
– Mathematical models. The well-known models are the COCOMO effort model, Rayleigh curve models, and Albrecht's function point models.
229 © WTH07
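The PERT formula can be replayed directly; the task and its three estimates below are hypothetical values chosen only for illustration.

```python
def pert_effort(lower, most_likely, upper):
    """Three-point (PERT) estimate: a weighted average in which the
    most likely value carries four times the weight of the extremes."""
    return (lower + 4 * most_likely + upper) / 6

# Hypothetical task estimated at 4 (best case), 6 (most likely),
# and 10 (worst case) person-days:
estimate = pert_effort(4, 6, 10)   # (4 + 24 + 10) / 6 ≈ 6.33 person-days
```

Note how the weighting pulls the estimate toward the most likely value while still accounting for the pessimistic tail.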

R3: Function Points (1)
z Function Point Models – Function Points:
• External inputs are the inputs from the user that provide distinct application-oriented data. Examples of such inputs are file names and menu selections.
• External outputs are directed to the user; they come in the form of various reports and messages.
• User inquiries are interactive inputs requiring a response.
• External files deal with all machine-readable interfaces to other systems.
• Internal files are the master files in the system.
230 © WTH07

R3: Function Points (2)
z Levels of complexity:

Item             Simple  Average  Complex
External input     3       4        6
External output    4       5        7
User inquiry       3       4        6
External file      7      10       15
Internal file      5       7       10

231 © WTH07

R3: Function Points (3)
z The Unadjusted Function Count (UFC) is the weighted sum of the five item counts.
z The Technical Complexity Factor (TCF) is computed using the experimentally derived formula
TCF = 0.65 + 0.01 * (sum of the fi)
where the fi are 14 detailed factors contributing to the overall notion of complexity. Each fi ranges from 0 to 5, with 0 being irrelevant and 5 standing for essential. Thus TCF = 0.65 means all factors are rated as irrelevant, and TCF = 1.35 means all factors are essential.
z Adjusted function points (FP): FP = UFC * TCF
232 © WTH07

R3: Function Points (4)
z Example: Weather Information (figure)
233 © WTH07

R3: Function Points (5)
z Weather Information
– External inputs: none
– External outputs: 1 (update display)
– User inquiries: 1 (update request)
– External files: 2 (weather sensor, weather data)
– Internal files: 1 (logs)
UFC = 1*7 + 1*6 + 2*15 + 1*10 = 53
z If we consider the adjusted function point count FP, the range of possible values spreads from 34.45 (0.65*53) to 71.55 (1.35*53).
234 © WTH07
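The UFC and FP arithmetic for the Weather Information example can be replayed in a short sketch. The weights below are the Complex column of the complexity table, as used in this example, and the TCF takes the 0.65 + 0.01 * sum(fi) form.

```python
# Weights from the Complex column of the function-point table.
WEIGHTS = {"external input": 6, "external output": 7,
           "user inquiry": 6, "external file": 15, "internal file": 10}

def ufc(counts):
    """Unadjusted Function Count: weighted sum of the five item counts."""
    return sum(WEIGHTS[item] * n for item, n in counts.items())

def fp(ufc_value, factors):
    """Adjusted function points FP = UFC * TCF, where
    TCF = 0.65 + 0.01 * sum(fi) and each fi is rated 0..5."""
    return ufc_value * (0.65 + 0.01 * sum(factors))

weather = {"external input": 0, "external output": 1,
           "user inquiry": 1, "external file": 2, "internal file": 1}
u = ufc(weather)          # 1*7 + 1*6 + 2*15 + 1*10 = 53
low = fp(u, [0] * 14)     # all factors irrelevant: 0.65 * 53 = 34.45
high = fp(u, [5] * 14)    # all factors essential:  1.35 * 53 = 71.55
```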

R3: Function Points (6)
z A general scheme of Albrecht's function point model. (figure)
235 © WTH07

R4: COCOMO Model (1)
z The COCOMO model is the most complete and thoroughly documented model used in effort estimation. It is based on Boehm's analysis of a database of 63 software projects.
z There are 3 classes of systems:
– Embedded. This class of systems is characterized by tight constraints, a changing environment, and unfamiliar surroundings. Such as real-time software systems, e.g., avionics, aerospace, and medicine.
– Organic. This category has a stable environment, familiar surroundings, and relaxed interfaces. Such as simple business systems, data processing, small software libraries, and inventory management systems.
– Semidetached. The software systems falling under this category are a mix of those of organic and embedded nature. Such as operating systems and database management systems.
236 © WTH07

R4: COCOMO Model (2)
z The basic form of the COCOMO Model:
– Effort = a*KDLOC^b, where a and b are two parameters of the model whose specific values are selected according to the class of the software system.
• For organic systems: Effort = 2.4*KDLOC^1.05
• For semidetached systems: Effort = 3.0*KDLOC^1.12
• For embedded systems: Effort = 3.6*KDLOC^1.20
237 © WTH07

R4: COCOMO Model (3)
z Development schedule M (in months):
– For organic systems: M = 2.5*Effort^0.38
– For semidetached systems: M = 2.5*Effort^0.35
– For embedded systems: M = 2.5*Effort^0.32
z Maintenance effort: Effort_maintenance = ACT*Effort
– ACT (annual change traffic) is the fraction of the KDLOC undergoing change during the year.
238 © WTH07

R4: COCOMO Model (4)
z The Intermediate COCOMO Model: a refinement of the basic model.
z The improvement comes in the form of 15 attributes of the product. Each attribute is rated using the following six-point scale:
– VL (very low)
– LO (low)
– NM (nominal)
– HI (high)
– VH (very high)
– XH (extra high)
239 © WTH07

R4: COCOMO Model (5)
z The list of attributes:
– Product attributes
• Required reliability (RELY)
• Data bytes per DSI (DATA)
• Complexity (CPLX)
– Computer attributes
• Execution time (TIME) and memory (STOR) constraints
• Virtual machine volatility (VIRT)
• Development turnaround time (TURN)
– Personnel attributes
• Analyst capability (ACAP)
• Application experience (AEXP)
• Programmer capability (PCAP)
• Language experience (LEXP) and virtual machine experience (VEXP)
– Project attributes
• Modern development practices (MODP)
• Use of software tools (TOOL)
• Schedule effects (SCED)
240 © WTH07

R4: COCOMO Model (6)
z Intermediate COCOMO Attributes

Attr    VL    LO    NM    HI    VH    XH
RELY   0.75  0.88  1.00  1.15  1.40   -
DATA    -    0.94  1.00  1.08  1.16   -
CPLX   0.70  0.85  1.00  1.15  1.30  1.65
TIME    -     -    1.00  1.11  1.30  1.66
STOR    -     -    1.00  1.06  1.21  1.56
VIRT    -    0.87  1.00  1.15  1.30   -
TURN    -    0.87  1.00  1.07  1.15   -
ACAP   1.46  1.19  1.00  0.86  0.71   -
AEXP   1.29  1.13  1.00  0.91  0.82   -
PCAP   1.42  1.17  1.00  0.86  0.70   -
LEXP   1.14  1.07  1.00  0.95   -     -
VEXP   1.21  1.10  1.00  0.90   -     -
MODP   1.24  1.10  1.00  0.91  0.82   -
TOOL   1.24  1.10  1.00  0.91  0.83   -
SCED   1.23  1.08  1.00  1.04  1.10   -

241 © WTH07

R4: COCOMO Model (7)
z Depending upon the product, each attribute is rated, and these partial results are multiplied, giving rise to the final product multiplier (P). The effort formula is expressed as follows:
Effort = Effort_nom * P
where Effort_nom arises in the following form:
Effort_nom = 3.2*KDLOC^1.05 for organic systems
Effort_nom = 3.0*KDLOC^1.12 for semidetached systems
Effort_nom = 2.8*KDLOC^1.20 for embedded systems
z The support effort is calculated using the following formula:
Effort_maintenance = ACT*Effort_nom*P
242 © WTH07

R4: COCOMO Model (8)
z Example
– Suppose a software system with an estimated size of 300 KDLOC. The software is part of the control system of a smart vehicle initiative. The system collects the readings from various sensors, processes them, and develops a schedule of pertinent control actions. This is an embedded system.
– The basic form of the cost estimation model leads to the person-month effort
Effort = 3.6*300^1.20 = 3379 person-months
and the development time
M = 2.5*3379^0.32 = 33.66 months.
243 © WTH07
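The basic COCOMO arithmetic of this example can be replayed with a small sketch (coefficients as in the basic-model formulas):

```python
# Basic COCOMO parameters per system class:
# (a, b) for Effort = a * KDLOC**b, and c for M = 2.5 * Effort**c.
BASIC = {
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def basic_cocomo(kdloc, system_class):
    """Return (effort in person-months, schedule in months)."""
    a, b, c = BASIC[system_class]
    effort = a * kdloc ** b
    return effort, 2.5 * effort ** c

effort, months = basic_cocomo(300, "embedded")
# effort ≈ 3379 person-months, months ≈ 33.7
```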

R4: COCOMO Model (9)
z Refining by using the intermediate COCOMO model with:

RELY HI 1.15    VIRT NM 1.00    LEXP NM 1.00
DATA HI 1.08    TURN LO 0.87    VEXP NM 1.00
CPLX NM 1.00    ACAP HI 0.86    MODP NM 1.00
TIME VH 1.30    AEXP HI 0.91    TOOL LO 1.10
STOR VH 1.21    PCAP NM 1.00    SCED VH 1.10

z The scaling factor is P = 1.6095. The nominal effort is equal to 2.8*300^1.20 = 2628 person-months. The modified result is 2628*1.6095 = 4229 person-months.
244 © WTH07
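Assuming the ratings and multipliers listed in this refinement, the adjustment can be sketched as follows:

```python
import math

# Effort multipliers for the example's ratings (read from the
# intermediate COCOMO attribute table).
MULTIPLIERS = {
    "RELY": 1.15, "DATA": 1.08, "CPLX": 1.00, "TIME": 1.30, "STOR": 1.21,
    "VIRT": 1.00, "TURN": 0.87, "ACAP": 0.86, "AEXP": 0.91, "PCAP": 1.00,
    "LEXP": 1.00, "VEXP": 1.00, "MODP": 1.00, "TOOL": 1.10, "SCED": 1.10,
}

def intermediate_cocomo(kdloc, a, b, multipliers):
    """Effort = Effort_nom * P, where P is the product of the multipliers."""
    p = math.prod(multipliers.values())
    return a * kdloc ** b * p, p

effort, p = intermediate_cocomo(300, 2.8, 1.20, MULTIPLIERS)
# p ≈ 1.6095; effort ≈ 4230 person-months (the slide rounds to 4229)
```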

R5: Putnam Model (1)
z Manpower Loading (figure)
245 © WTH07

R5: Putnam Model (2)
z The basic distribution (the Rayleigh distribution):
dy/dt = 2*k*a*t*exp(-a*t^2)
where a = 1/(2*td^2) is a shape parameter for the distribution, td is the time at which the average team size is a maximum, and k is the area under the curve, which has the dimensions of effort.
z In the curve, dy/dt is a maximum when t = td. The area to the left of td is the effort (40%) for software specification and development, while the area to the right of td is the maintenance effort (60%) required after delivery of the software.
246 © WTH07
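The Rayleigh loading can be checked numerically; the values k = 100 person-years and td = 2 years below are hypothetical. Note that the cumulative effort up to td is k*(1 - exp(-1/2)), roughly 39% of k, matching the ~40% development share of the curve.

```python
import math

def manpower(t, k, t_d):
    """Rayleigh manpower loading dy/dt = 2*k*a*t*exp(-a*t**2),
    with shape parameter a = 1/(2*t_d**2)."""
    a = 1 / (2 * t_d ** 2)
    return 2 * k * a * t * math.exp(-a * t ** 2)

def cumulative_effort(t, k, t_d):
    """Integral of the loading: y(t) = k * (1 - exp(-a*t**2))."""
    a = 1 / (2 * t_d ** 2)
    return k * (1 - math.exp(-a * t ** 2))

# Hypothetical project: total effort k = 100 person-years, peak at t_d = 2 years.
peak = manpower(2.0, 100, 2.0)                 # loading is maximal at t = t_d
dev_share = cumulative_effort(2.0, 100, 2.0)   # ≈ 39.3 person-years by t_d
```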

R5: Putnam Model (3)
z Software Equation:
Ss = ck * k^x * t^y
– Ss is the software size in source code statements.
– ck is a constant of proportionality that can be correlated with the adequacy of the technical environment for the type of effort concerned, e.g., on-line interactive development, structured coding, machine access constraints, less "fuzzy" requirements, etc.
– k is the area under the Rayleigh curve, representing effort; its exponent here is x = 1/3.
– t is the time, and its exponent is y = 4/3.
z The equation is at the heart of Putnam's parametric cost estimation model.
247 © WTH07

R5: Putnam Model (4)
z By transposition of the software equation:
Effort × Time^4 = (Ss/ck)^3
Given that, for a particular task and environment, Ss and ck can be regarded as properties of that task, and therefore constants:
E * T^4 = constant
z This equation expresses the underlying relationship between effort and time-scale for software development. So small incremental or decremental changes to time will result in rather large concomitant changes in effort.
248 © WTH07

R5: Putnam Model (5)
z Example. Suppose a project with an estimate of effort of about 25 person-years over, say, 2 elapsed years of time.
– The intrinsic property of this estimate is, according to Putnam's derivation: 25 × 2^4 = constant = 400.
– If it is allowed that the duration be reduced to 18 months, the predictable effect of this is: effort × (1.5)^4 = constant = 400.
– It follows then that a new value of effort is required, and this may be computed from E = 400/(1.5)^4 ≈ 79 person-years.
z In other words, a 25% decrease in time-scale has led to an increase of 216% in the effort required.
249 © WTH07
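The trade-off arithmetic of this example can be replayed in a couple of lines:

```python
def putnam_tradeoff(effort, time, new_time):
    """Effort-time trade-off E * T**4 = constant for a fixed task:
    returns the effort implied by a new schedule."""
    return effort * time ** 4 / new_time ** 4

new_effort = putnam_tradeoff(25, 2.0, 1.5)   # 400 / 1.5**4 ≈ 79 person-years
increase = (new_effort / 25 - 1) * 100       # ≈ 216 % more effort
```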

R6: Software Science Measures (1)
z Example: Bubble Sort Code in Java

public void paint(Graphics g)
{
    print(g, "Sequence in original order", 30, 30);
    sort();
    print(g, "results", 30, 60);
}

public void sort()
{
    for (int pass = 1; pass < a.length; pass++)
        for (int i = 0; i < a.length; i++)
            if (a[i] > a[i+1])
            { hold = a[i]; a[i] = a[i+1]; a[i+1] = hold; }
}

z Operators and their occurrences:
public 2, void 2, () 2, ; 6, int 3, ++ 2, [] 3, {} 4, for 2, = 2, + 2, > 1
μ1 = 12, N1 = 31
z Operands and their occurrences:
paint 1, Graphics 1, sort 1, 30 3, 60 1, a 5, 1 4, pass 3, a.length 2, i 7, hold 2, g 2, 0 1, print 2
μ2 = 14, N2 = 35
250 © WTH07

R6: Software Science Measures (2)
z Program length: N^ = μ1 log2 μ1 + μ2 log2 μ2 = 12 log2 12 + 14 log2 14 = 96.32
z Program volume: V = N log2 μ = 456 bits
z Potential volume: V* = (2 + μ2*) log2 (2 + μ2*) = (2 + 3) log2 (2 + 3) = 11.61
z Program level: L = V*/V = 0.025 (L = 1 if V = V*; in general, V > V*)
z Difficulty: D = 1/L = 40
z Estimated level: L' = (2/μ1) × (μ2/N2) = 0.068
z Effort: E = V/L' = 6706
z Time: T = E/β = 373 sec ≈ 6 min (β ∈ [5, 20] is the Stroud number)
251 © WTH07
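Halstead's measures use base-2 logarithms; the measures that follow directly from the bubble-sort counts can be recomputed with a short sketch:

```python
import math

def halstead(mu1, mu2, n2, mu2_star):
    """Selected software-science measures; all logarithms are base 2."""
    length_hat = mu1 * math.log2(mu1) + mu2 * math.log2(mu2)  # estimated N
    v_star = (2 + mu2_star) * math.log2(2 + mu2_star)         # potential volume
    level_hat = (2 / mu1) * (mu2 / n2)                        # estimated level L'
    return length_hat, v_star, level_hat

n_hat, v_star, l_hat = halstead(mu1=12, mu2=14, n2=35, mu2_star=3)
# n_hat ≈ 96.32, v_star ≈ 11.61, l_hat ≈ 0.067
```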

References
z [Boehm00] Boehm, Barry, et al., Software Cost Estimation with COCOMO II, Prentice Hall PTR, Upper Saddle River, NJ, 2000.
z [Cohn06] Cohn, Mike, Agile Estimating and Planning, Prentice Hall Professional, Upper Saddle River, NJ, 2006.
z [Fenton97] Fenton, Norman E. and Shari Lawrence Pfleeger, Software Metrics: A Rigorous & Practical Approach, PWS, Boston, MA, 1997.
z [Lorenz94] Lorenz, Mark and Jeff Kidd, Object-Oriented Software Metrics, PTR Prentice Hall, Englewood Cliffs, NJ, 1994.
z [McConnell06] McConnell, Steve, Software Estimation, Microsoft Press, Redmond, WA, 2006.
z [Möller93] Möller, K. H. and D. J. Paulish, Software Metrics: A Practitioner's Guide to Improved Product Development, IEEE Press, London, 1993.
z [Palmer02] Palmer, Stephen and John M. Felsing, A Practical Guide to Feature-Driven Development, Prentice Hall PTR, Upper Saddle River, NJ, 2002.
z [Putnam78] Putnam, Lawrence H., "A General Empirical Solution to the Macro Software Sizing and Estimating Problem," IEEE Trans. on Software Engineering, Vol. SE-4, No. 4, 1978, pp. 345-361.
252 © WTH07

z [Schneider01] Schneider, Geri and Jason P. Winters, Applying Use Cases: A Practical Guide, 2nd Edition, Addison-Wesley, Boston, MA, 2001.
z [Papers] Papers from IEEE Transactions on Software Engineering, IEEE Software, IEEE Computer, CACM, and JOOP.
253 © WTH07
