
Software Engineering

Introduction

The economies of all developed nations are dependent on software, and more and more systems are software controlled. Software engineering is concerned with the theories, methods and tools for professional software development. Software engineering expenditure represents a significant fraction of GNP in all developed countries.

Software costs

Software costs often dominate system costs. The cost of the software on a PC is often greater than the hardware cost, and software costs more to maintain than it does to develop. For systems with a long life, maintenance costs may be several times development costs. Software engineering is concerned with cost-effective software development.

What is software?

Computer programs and their associated documentation. Software products may be developed for a particular customer or for a general market:
Generic - developed to be sold to a range of different customers.
Bespoke (custom) - developed for a single customer according to their specification.

What is software engineering?

Software engineering is an engineering discipline concerned with all aspects of software production. Software engineers should adopt a systematic and organised approach to their work and use appropriate tools and techniques depending on the problem to be solved, the development constraints and the resources available.

The IEEE has developed a more comprehensive definition:

Software Engineering: (1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).

Computer software can then be defined as the product that software engineers design and build. It includes the executable programs and documentation (both electronic and hard copy). It may also include data in the form of numbers and text, or even pictorial and multimedia formats. Engineering is the analysis, design, construction, verification and management of technical (or social) entities.

What are the attributes of good software?

The software should deliver the required functionality and performance to the user and should be maintainable, dependable, efficient, and usable.
Maintainability - software must evolve to meet changing needs.
Dependability - software must be trustworthy.
Efficiency - software should not make wasteful use of system resources.
Usability - software must be usable by the users for whom it was designed.

Software Characteristics

Software is both a product and a vehicle for developing a product. Software is engineered, not manufactured. Software does not wear out, but it does deteriorate. Most software is still custom-built.

Software Myths

Software standards provide software engineers with all the guidance they need.
People with modern computers have all the software development tools they need.
Adding people is a good way to catch up when a project is behind schedule.
Giving software projects to outside parties to develop solves software project management problems.
A general statement of objectives from the customer is all that is needed to begin a software project.
Project requirements change continually and change is easy to accommodate in the software design.
Once a program is written, the software engineer's work is finished.
There is no way to assess the quality of a piece of software until it is actually running on some machine.
The only deliverable from a successful software project is the working program.
Software engineering is all about the creation of large and unnecessary documentation, not shorter development times or reduced costs.

Software Engineering Phases

Definition phase focuses on what (information engineering, software project planning, requirements analysis).
Development phase focuses on how (software design, code generation, software testing).
Support phase focuses on change (corrective maintenance, adaptive maintenance, perfective maintenance, preventative maintenance).

Software Life Cycle Phases

1. Requirements, analysis, and design phase.
2. System design phase.
3. Program design phase.
4. Program implementation phase.
5. Unit testing phase.
6. Integration testing phase.
7. System testing phase.
8. System delivery.
9. Maintenance.

Linear Sequential Model or Waterfall Model

The waterfall model derives its name from the cascading effect from one phase to the next, as illustrated in the figure. In this model each phase has a well defined starting and ending point, with identifiable deliverables to the next phase.

The model consists of six distinct stages:

1. In the requirements analysis phase:
(a) The problem is specified along with the desired service objectives (goals).
(b) The constraints are identified.

2. In the specification phase the system specification is produced from the detailed definitions of (a) and (b) above. This document should clearly define the product function. Note that in some texts the requirements analysis and specification phases are combined and represented as a single phase.

3. In the system and software design phase, the system specifications are translated into a software representation. The software engineer at this stage is concerned with:
data structure
software architecture
algorithmic detail
interface representations

The hardware requirements are also determined at this stage, along with a picture of the overall system architecture. By the end of this stage the software engineer should be able to identify the relationship between the hardware, the software and the associated interfaces. Any faults in the specification should ideally not be passed downstream.

4. In the implementation and testing phase the designs are translated into the software domain. Detailed documentation from the design phase can significantly reduce the coding effort. Testing at this stage focuses on making sure that any errors are identified and that the software meets its required specification.

5. In the integration and system testing phase all the program units are integrated and tested to ensure that the complete system meets the software requirements. After this stage the software is delivered to the customer. [Deliverable: the software product is delivered to the client for acceptance testing.]

6. The maintenance phase is usually the longest stage of the software life cycle. In this phase the software is updated to:
meet the changing customer needs
adapt to changes in the external environment
correct errors and oversights previously undetected in the testing phases
enhance the efficiency of the software

Observe that feedback loops allow for corrections to be incorporated into the model. For example, a problem or update in the design phase requires a revisit to the specification phase. When changes are made at any phase, the relevant documentation should be updated to reflect that change.

Advantages
Testing is inherent to every phase of the waterfall model.
It is an enforced, disciplined approach.
It is documentation driven; that is, documentation is produced at every stage.

Disadvantages

The waterfall model is the oldest and the most widely used paradigm. However, projects rarely follow its sequential flow, due to the inherent problems associated with its rigid format. Namely:
It only incorporates iteration indirectly, so changes may cause considerable confusion as the project progresses.
Because the client usually has only a vague idea of exactly what is required from the software product, the waterfall model has difficulty accommodating the natural uncertainty that exists at the beginning of a project.
The customer only sees a working version of the product after it has been coded. This may result in disaster if any undetected problems survive to this stage.

Prototyping Model

Definition: A prototype is a working model that is functionally equivalent to a component of the product. In many instances the client only has a general view of what is expected from the software product. In such a scenario, where there is an absence of detailed information regarding the input to the system, the processing needs and the output requirements, the prototyping model may be employed. This model reflects an attempt to increase the flexibility of the development process by allowing the client to interact and experiment with a working representation of the product. The development process only continues once the client is satisfied with the functioning of the prototype. At that stage the developer determines the specification of the client's real needs.

The following sections examine two versions of the prototyping model:
Version I: the prototype is used as a requirements technique.
Version II: the prototype is used as the specification, or a major part thereof.

Version I

This approach, as illustrated in Fig 1.3, uses the prototype as a means of quickly determining the needs of the client; it is discarded once the specifications have been agreed on. The emphasis of the prototype is on representing those aspects of the software that will be visible to the client/user (e.g. input approaches and output formats). Thus it does not matter if the prototype hardly works. Note that if the first version of the prototype does not meet the client's needs, then it must be rapidly converted into a second version.

Version II

In this approach, as illustrated in Fig 1.4, the prototype is actually used as the specification for the design phase. The advantage of this approach is speed and accuracy, as no time is spent on drawing up written specifications. The inherent difficulties associated with that phase (i.e. incompleteness, contradictions and ambiguities) are then avoided.

Disadvantages of prototyping

1. Often clients expect that a few minor changes to the prototype will more than suffice for their needs. They fail to realise that no consideration was given to the overall quality of the software in the rush to develop the prototype.

2. The developers may lose focus on the real purpose of the prototype and compromise the quality of the product. For example, they may employ some of the inefficient algorithms or inappropriate programming languages used in developing the prototype. This is mainly due to laziness and an over-reliance on familiarity with seemingly easier methods.

3. A prototype will hardly be acceptable in court in the event that the client does not agree that the developer has discharged his/her obligations. For this reason, using the prototype as the software specification is normally reserved for software development within an organisation.

To avoid the above problems, the developer and the client should establish a protocol which indicates the deliverables to the client as well as any contractual obligations. In both versions the prototype is discarded early in the life cycle. However, one way of ensuring that the product is properly designed and implemented is to implement the prototype in a different programming language from that of the product.

The Spiral Model

The spiral model, illustrated in the following figure, combines the iterative nature of prototyping with the controlled and systematic aspects of the waterfall model, thereby providing the potential for rapid development of incremental versions of the software. In this model the software is developed in a series of incremental releases, with the early stages being either paper models or prototypes. Later iterations become increasingly more complete versions of the product.

Spiral Model

The model is divided into a number of task regions. Depending on the variant, the model may have 3 to 6 task regions (framework activities); our case considers a 6-task-region model. These regions are:
1. The customer communication task - to establish effective communication between developer and customer.
2. The planning task - to define resources, time lines and other project-related information.
3. The risk analysis task - to assess both technical and management risks.
4. The engineering task - to build one or more representations of the application.

5. The construction and release task - to construct, test, install and provide user support (e.g., documentation and training).
6. The customer evaluation task - to obtain customer feedback based on the evaluation of the software representation created during the engineering stage and implemented during the installation stage.

The evolutionary process begins at the centre position and moves in a clockwise direction. Each traversal of the spiral typically results in a deliverable. For example, the first and second spiral traversals may result in the production of a product specification and a prototype, respectively. Subsequent traversals may then produce more sophisticated versions of the software.

An important distinction between the spiral model and other software models is its explicit consideration of risk. There are no fixed phases such as specification or design phases in the model, and it encompasses other process models. For example, prototyping may be used in one spiral to resolve requirement uncertainties and hence reduce risks. This may then be followed by a conventional waterfall development. Note that each passage through the planning stage results in an adjustment to the project plan (e.g. cost and schedule are adjusted based on feedback from the customer, and the project manager may adjust the number of iterations required to complete the software).

Each of the regions is populated by a set of work tasks, called a task set, that is adapted to the characteristics of the project to be undertaken. For small projects the number of tasks and their formality is low. Conversely, for large projects the reverse is true.

Advantages of the Spiral Model

The spiral model is a realistic approach to the development of large-scale software products because the software evolves as the process progresses. In addition, the developer and the client better understand and react to risks at each evolutionary level. The model uses prototyping as a risk reduction mechanism and allows for the development of prototypes at any stage of the evolutionary development. It maintains the systematic stepwise approach of the classic life cycle model, but incorporates it into an iterative framework that more closely reflects the real world. If employed correctly, this model should reduce risks before they become problematic, as technical risks are considered at all stages.

Disadvantages of the Spiral Model
It demands considerable risk-assessment expertise.
It has not been employed as much as more proven models (e.g. the waterfall model), and hence it may prove difficult to convince the client (especially where a contract is involved) that this model is controllable and efficient. [More study needs to be done in this regard.]

Fourth Generation Language Techniques

A 4GL is a set of tools that automatically, or semi-automatically, translates human instructions in some abstract form into instructions understood by the computer. A 5GL requires the use of knowledge-based systems.

The power of a tool set refers to the ease with which tool users may develop a software system. Moving toward 4GLs and 5GLs generally means improving the power of a tool set. Certainly, for the sake of ease of development, the more powerful the better. However, this is not the only concern. In studies of the use of 4GLs in software development, the quality of the software produced by a 4GL is often lower than the quality of a similar system produced by a 3GL. The key is in the quality and maturity of the tools behind the development in either case. It is important to choose a powerful tool set that can also develop good quality software. Otherwise, the initial gains in productivity may be quickly lost as soon as performance requirements are not met or changes need to be made to the system.

Communicating with a computer in more and more abstract ways is a desirable goal, because it is the way humans will continue to use automation more and more effectively. However, when moving to this higher level of abstraction, the concerns are better discussed in terms of the capabilities of tools and tool sets, rather than in terms of language generations. When considering the capabilities of various tool sets, one of the major concerns is whether the software developed using these tool sets can be moved to different hardware and operating system platforms. It is also important to know whether a tool set itself can be moved to a new platform.

Metrics and Estimation

Process and Project Indicators

Process indicators enable software project managers to:
assess project status
track potential risks
detect problem areas early
adjust workflow or tasks
evaluate team ability to control product quality

Measurement, Measures, Metrics

Measurement is the act of obtaining a measure.
A measure provides a quantitative indication of the size of some product or process attribute.
A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.

Process Metrics

Private process metrics (e.g. defect rates by individual or module) are known only to the individual or team concerned. Public process metrics enable organizations to make strategic changes to improve the software process. Metrics should not be used to evaluate the performance of individuals. Statistical software process improvement helps an organization to discover its strengths and weaknesses.

Project Metrics

Software project metrics are used by the software team to adapt project workflow and technical activities. Project metrics are used to avoid development schedule delays, to mitigate potential risks, and to assess product quality on an ongoing basis. Every project should measure its inputs (resources), outputs (deliverables), and results (effectiveness of deliverables).

Software Measurement

Direct measures of the software engineering process include cost and effort. Direct measures of the product include lines of code (LOC), execution speed, memory size, and defects per reporting time period. Indirect measures examine the quality of the software product itself (e.g. functionality, complexity, efficiency, reliability, maintainability).

Size-Oriented Metrics

Size-oriented metrics are derived by normalizing (dividing) any direct measure (e.g. defects or human effort) associated with the product or project by LOC. Size-oriented metrics are widely used, but their validity and applicability are widely debated.

Function-Oriented Metrics

Function points are computed from direct measures of the information domain of a business software application and an assessment of its complexity. Once computed, function points are used like LOC to normalize measures for software productivity, quality, and other attributes.

Software Quality Metrics

Factors assessing software quality come from three distinct points of view (product operation, product revision, product modification). Defect removal efficiency (DRE) is a measure of the filtering ability of the quality assurance and control activities as they are applied throughout the process framework.

Software quality factors requiring measures:
correctness - defects per KLOC
maintainability - mean time to change (MTTC); spoilage = cost of change / total cost of system
integrity - threat = probability of an attack (that causes failure); security = probability that an attack is repelled; integrity = 1 - threat * (1 - security)
usability - easy to learn (time), easy to use (time), productivity increase (quantity), user attitude (questionnaire score)
defect removal efficiency - DRE = E / (E + D), where E = number of errors found before delivery and D = number of errors found after delivery (see the sketch after this list)
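The DRE and integrity formulas above can be computed directly. A minimal sketch in Python; the function names and the sample figures are illustrative, not from any standard library:

def defect_removal_efficiency(errors_before: int, errors_after: int) -> float:
    # DRE = E / (E + D): the fraction of all defects filtered out before delivery.
    return errors_before / (errors_before + errors_after)

def integrity(threat: float, security: float) -> float:
    # integrity = 1 - threat * (1 - security), per the factor list above.
    return 1 - threat * (1 - security)

# Hypothetical figures: 90 errors found before delivery, 10 found after.
print(defect_removal_efficiency(90, 10))  # 0.9
# Hypothetical figures: 30% chance of attack, 80% chance an attack is repelled.
print(integrity(0.3, 0.8))                # 0.94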

Cost Estimation

Types of Cost Models
Experiential - derived from past experience.
Static - derived using regression techniques; does not change with time.
Dynamic - derived using regression techniques; often includes the effects of time.

Estimation is influenced by uncertainty and inaccuracy. A prerequisite for estimation is an object to be estimated. The greatest danger during estimation is management's persistent demand for a (too) early estimate, to the point that estimation is mistaken for bargaining. This is a preliminary introduction to hint at the problems of estimation. Before project start, the estimation of effort, costs, dates and duration is the basis for sound planning as well as for the measurement of project success. Estimations before project start call for corresponding know-how collections from corporate planning, where project risks from market trends (and deviations from those trends) as well as technological developments and scenarios are available from the experience of past project portfolios. Estimation is a process influenced by conflicts due to resistance (an estimator may be unwilling to make a commitment, since estimates make individuals accountable), which leads to the question of the honesty of an estimation. Estimations deliver figures for planning which can be tracked continuously. That is the basis for measuring success. It is a paradox that project leaders do not measure enough when evaluating their projects.

Function Point Estimation Worksheet

                           Complexity
Description          Low        Medium      High        Total
Inputs               ___ x 3    ___ x 4     ___ x 6     ___
Outputs              ___ x 4    ___ x 5     ___ x 7     ___
Queries              ___ x 3    ___ x 4     ___ x 6     ___
Files                ___ x 7    ___ x 10    ___ x 15    ___
Program Interfaces   ___ x 5    ___ x 7     ___ x 10    ___

Total Unadjusted Function Points (TUFP): _______________

Rate each factor below from 0 to 5 (0 = no effect on processing complexity; 5 = great effect on processing complexity):

Data communications          ___
Heavily used configuration   ___
Transaction rate             ___
End-user efficiency          ___
Complex processing           ___
Installation ease            ___
Multiple sites               ___
Performance                  ___
Distributed functions        ___
On-line data entry           ___
On-line update               ___
Reusability                  ___
Operational ease             ___
Extensibility                ___

Processing Complexity (PC): ______

Adjusted Processing Complexity (PCA) = 0.65 + (0.01 * ____________)

Total Adjusted Function Points (TAFP) = TUFP * PCA:

___________ * ____________ = ___________
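The worksheet maps directly to code. A minimal sketch, assuming the weights and the fourteen 0-5 complexity ratings listed above; all names and sample counts are illustrative:

# Function point weights from the worksheet above.
WEIGHTS = {
    "inputs":     {"low": 3, "medium": 4,  "high": 6},
    "outputs":    {"low": 4, "medium": 5,  "high": 7},
    "queries":    {"low": 3, "medium": 4,  "high": 6},
    "files":      {"low": 7, "medium": 10, "high": 15},
    "interfaces": {"low": 5, "medium": 7,  "high": 10},
}

def adjusted_function_points(counts, ratings):
    # counts: e.g. {"inputs": {"low": 3, "medium": 2}, ...}
    # ratings: the 14 processing-complexity scores, each 0-5.
    tufp = sum(counts.get(kind, {}).get(level, 0) * weight
               for kind, levels in WEIGHTS.items()
               for level, weight in levels.items())
    pca = 0.65 + 0.01 * sum(ratings)   # PCA = 0.65 + (0.01 * PC)
    return tufp * pca                  # TAFP = TUFP * PCA

counts = {"inputs": {"low": 3, "medium": 2}, "outputs": {"low": 4},
          "queries": {"low": 2}, "files": {"medium": 1}}
ratings = [3, 2, 0, 4, 1, 0, 0, 5, 2, 3, 1, 2, 3, 1]
print(adjusted_function_points(counts, ratings))  # 49 TUFP * 0.92 PCA, about 45.08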

An Empirical Estimation Model for Effort

Introduction: An empirical estimation model is a formula, derived from data collected from past software projects, that uses software size to estimate effort. Size, itself, is an estimate, described as either lines of code (LOC) or function points (FP). The typical formula of an estimation model for software development effort is:

E = a + b(S)^c

where:
E represents effort, in person-months
S is the size of the software development, in LOC or FP
a, b, and c are values derived from data collected from past projects.

No size or effort estimation model is appropriate for all software development environments, development processes, or application types. Models must be customised (the parameters in the formula must be altered) so that results from the model agree with the data from the particular software development environment.

[Figure: effort E plotted against size S, curving upward.] The graph demonstrates that the amount of effort accelerates as size increases, i.e. the value of c in the formula above is greater than 1.
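Customising a, b and c to local project data is a curve-fitting exercise. A minimal sketch using SciPy; the historical data here is invented purely for illustration:

import numpy as np
from scipy.optimize import curve_fit

def effort_model(size, a, b, c):
    # E = a + b * S**c, the typical empirical effort formula above.
    return a + b * size ** c

# Hypothetical past projects: sizes in KLOC, efforts in person-months.
sizes = np.array([10.0, 25.0, 56.0, 80.0, 120.0])
efforts = np.array([40.0, 110.0, 300.0, 480.0, 800.0])

# Fit a, b, c so the model agrees with the local data.
(a, b, c), _ = curve_fit(effort_model, sizes, efforts, p0=[1.0, 3.0, 1.1])
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")           # expect c > 1 here
print("Effort for 64 KLOC:", effort_model(64.0, a, b, c))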

COCOMO:

When Barry Boehm wrote Software Engineering Economics, published in 1981, he introduced an empirical effort estimation model (COCOMO - COnstructive COst MOdel) that is still referenced by the software engineering community. The original COCOMO model was a set of models: 3 development modes (organic, semi-detached, and embedded) and 3 levels (basic, intermediate, and advanced).

COCOMO model levels:
Basic - predicted software size (lines of code) is used to estimate development effort.
Intermediate - predicted software size (lines of code), plus a set of 15 subjectively assessed 'cost drivers', is used to estimate development effort.
Advanced - on top of the intermediate model, the advanced model allows phase-based cost driver adjustments and some adjustments at the module, component, and system level.

COCOMO development modes:
Organic - relatively small, simple software projects in which small teams with good application experience work to a set of flexible requirements.
Embedded - the software project has tight software, hardware and operational constraints.
Semi-detached - an intermediate (in size and complexity) software project in which teams with mixed experience levels must meet a mix of rigid and less-than-rigid requirements.

COCOMO model: The basic COCOMO model follows the general layout of effort estimation models:

E = a(S)^b

where:
E represents effort in person-months
S is the size of the software development in KLOC
a and b are values, derived from past project data, dependent on the development mode:
Organic development mode: a = 2.4, b = 1.05
Semi-detached development mode: a = 3.0, b = 1.12
Embedded development mode: a = 3.6, b = 1.20

The intermediate and advanced COCOMO models incorporate 15 'cost drivers'. These drivers multiply the effort derived from the basic COCOMO model. The importance of each driver is assessed and the corresponding value multiplied into the COCOMO equation, which becomes:

E = a(S)^b x product(cost drivers)

As an example of how the intermediate COCOMO model works, the following is a calculation of the estimated effort for a semi-detached project of 56 KLOC. The cost driver values are selected from the table of values below (e.g., the three product cost drivers - RELY, DATA, CPLX - have values of 1.15, 1.08, and 1.15 when all product cost drivers are rated high):

Product cost drivers rated high:      1.15 x 1.08 x 1.15               = 1.43
Computer cost drivers rated nominal:                                     1.00
Personnel cost drivers rated low:     1.19 x 1.13 x 1.17 x 1.10 x 1.07 = 1.85
Project cost drivers rated high:      0.91 x 0.91 x 1.04               = 0.86

hence, in the model:

product(cost drivers) = 1.43 x 1.00 x 1.85 x 0.86 = 2.28

For a semi-detached project of 56 KLOC: a = 3.0, b = 1.12, S = 56

E = a(S)^b x product(cost drivers)
E = 3.0 x (56)^1.12 x 2.28
E = 3.0 x 90.78 x 2.28
E = 620.94 person-months

Descriptions of COCOMO Cost Drivers:

PRODUCT
  RELY - required software reliability
  DATA - database size
  CPLX - product complexity
COMPUTER
  TIME - execution time constraints
  STOR - main storage constraints
  VIRT - virtual machine volatility (degree to which the operating system changes)
  TURN - computer turnaround time
PERSONNEL
  ACAP - analyst capability
  AEXP - application experience
  PCAP - programmer capability
  VEXP - virtual machine (i.e. operating system) experience
  LEXP - programming language experience
PROJECT
  MODP - use of modern programming practices
  TOOL - use of software tools
  SCED - required development schedule

Table of values for COCOMO Cost Drivers:

COST DRIVER           V.LOW   LOW    NOMINAL  HIGH   V.HIGH  EX.HIGH
(PRODUCT)    RELY     0.75    0.88   1.00     1.15   1.40    .
             DATA     .       0.94   1.00     1.08   1.16    .
             CPLX     0.70    0.85   1.00     1.15   1.30    1.65
(COMPUTER)   TIME     .       .      1.00     1.11   1.30    1.66
             STOR     .       .      1.00     1.06   1.21    1.56
             VIRT     .       0.87   1.00     1.15   1.30    .
             TURN     .       0.87   1.00     1.07   1.15    .
(PERSONNEL)  ACAP     1.46    1.19   1.00     0.86   0.71    .
             AEXP     1.29    1.13   1.00     0.91   0.82    .
             PCAP     1.42    1.17   1.00     0.86   0.70    .
             VEXP     1.21    1.10   1.00     0.90   .       .
             LEXP     1.14    1.07   1.00     0.95   .       .
(PROJECT)    MODP     1.24    1.10   1.00     0.91   0.82    .
             TOOL     1.24    1.10   1.00     0.91   0.83    .
             SCED     1.23    1.08   1.00     1.04   1.10    .
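Putting the mode constants and the cost-driver table together, the worked example above can be reproduced in a few lines. A minimal intermediate-COCOMO sketch; the dictionary and function names are illustrative:

import math

# (a, b) constants for the three development modes.
MODES = {"organic": (2.4, 1.05), "semi-detached": (3.0, 1.12), "embedded": (3.6, 1.20)}

def intermediate_cocomo(kloc, mode, drivers):
    # E = a * S**b * product(cost drivers); S in KLOC, E in person-months.
    a, b = MODES[mode]
    return a * kloc ** b * math.prod(drivers)

# The worked example: 56 KLOC, semi-detached; product and project drivers
# rated high, computer drivers nominal, personnel drivers low (table above).
drivers = [1.15, 1.08, 1.15,              # RELY, DATA, CPLX (high)
           1.00, 1.00, 1.00, 1.00,        # TIME, STOR, VIRT, TURN (nominal)
           1.19, 1.13, 1.17, 1.10, 1.07,  # ACAP, AEXP, PCAP, VEXP, LEXP (low)
           0.91, 0.91, 1.04]              # MODP, TOOL, SCED (high)
print(intermediate_cocomo(56, "semi-detached", drivers))  # about 620 person-months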

Final Observations:

An effort estimate is only, ever, an estimate. Management should treat it with caution. To make empirical models as useful as possible, as much data as possible should be collected from projects and used to customise (refine) any model used. An ongoing data collection programme is essential if models are to be developed and refined, and if management wishes to make informed decisions. Many organisations are known to use 'estimation redundancy', i.e., to provide a check on a particular estimate they will use more than one estimating method. The result is usually a set of estimates from which an organisation will choose. The underlying assumptions play a large role in the final decision an organisation makes. The different estimating methods used should be documented, and all underlying assumptions should be recorded.

Automated Estimation Tools

Automated estimation tools allow the planner to estimate cost and effort and to perform "what if" analyses for important project variables such as delivery date or staffing. All have the same general characteristics and require:
1. A quantitative estimate of project size (e.g., LOC) or functionality (function point data)
2. Qualitative project characteristics such as complexity, required reliability, or business criticality
3. Some description of the development staff and/or development environment

From these data, the model implemented by the automated estimation tool provides estimates of the effort required to complete the project, costs, staff loading, and, in some cases, development schedule and associated risk.

BYL (Before You Leap), developed by the Gordon Group, WICOMO (Wang Institute Cost Model), developed at the Wang Institute, and DECPlan, developed by Digital Equipment Corporation, are automated estimation tools that are based on COCOMO. Each of the tools requires the user to provide preliminary LOC estimates. These estimates are categorized by programming language and type (i.e., adapted code, reused code, new code). The user also specifies values for the cost driver attributes. Each of the tools produces estimated elapsed project duration (in months), effort in staff-months, average staffing per month, average productivity in LOC/pm, and cost per month. This data can be developed for each phase in the software engineering process individually or for the entire project.

SLIM is an automated costing system based on the Rayleigh-Putnam model. SLIM applies the Putnam software model, linear programming, statistical simulation, and the program evaluation and review technique (PERT, a scheduling method) to derive software project estimates. The system enables a software planner to perform the following functions in an interactive session:

(1) calibrate the local software development environment by interpreting historical data supplied by the planner; (2) create an information model of the software to be developed by eliciting basic software characteristics, personal attributes, and environmental considerations; and (3) conduct software sizing - the approach used in SLIM is a more sophisticated, automated version of the LOC costing technique.

Once software size (i.e., LOC for each software function) has been established, SLIM computes size deviation (an indication of estimation uncertainty), a sensitivity profile that indicates potential deviation of cost and effort, and a consistency check with data collected for software systems of similar size. The planner can invoke a linear programming analysis that considers development constraints on both cost and effort and provides a month-by-month distribution of effort.

ESTIMACS is a "macro-estimation model" that uses a function point estimation method enhanced to accommodate a variety of project and personnel factors. The ESTIMACS tool contains a set of models that enable the planner to estimate:
1. system development effort,
2. staff and cost,
3. hardware configuration,
4. risk,
5. the effects of the "development portfolio."

The system development effort model combines data about the user, the developer, the project geography (i.e., the proximity of developer and customer), and the number of "major business functions" to be implemented with information domain data required for function point computation, the application complexity, performance, and reliability. ESTIMACS can develop staffing and costs using a life cycle data base to provide work distribution and deployment information. The target hardware configuration is sized (i.e., processor power and storage capacity are estimated) using answers to a series of questions that help the planner evaluate transaction volume, windows of application, and other data. The level of risk associated with the successful implementation of the proposed system is determined based on responses to a questionnaire that examines project factors such as size, structure, and technology.

SPQR/20, developed by Software Productivity Research, Inc., has the user complete a simple set of multiple choice questions that address:

project type (e.g., new program, maintenance)
project scope (e.g., prototype, reusable module)
goals (e.g., minimum duration, highest quality)
project class (e.g., personal program, product)
application type (e.g., batch, expert system)
novelty (e.g., repeat of a previous application)
office facilities (e.g., open office environment, crowded bullpen)
program requirements (e.g., clear, hazy)
design requirements (e.g., informal design with no automation)
user documentation (e.g., informal, formal)
response time
staff experience
percent source code reuse
programming language
logical complexity of algorithms
code and data complexity
project related cost data (e.g., length of work week, average salary)

In addition to the output data described for other tools, SPQR/20 estimates:

total pages of project documentation
total defect potential for the project
cumulative defect removal efficiency
total defects at delivery
number of defects per KLOC

Each of the automated estimating tools conducts a dialog with the planner, obtaining appropriate project and supporting information and producing both tabular and (in some cases) graphical output. All these tools have been implemented on personal computers or engineering workstations. Martin compared these tools by applying each to the same project. A large variation in estimated results was encountered, and the predicted values sometimes were significantly different from actual values. This reinforces the fact that the output of estimation tools should be used as one "data point" from which estimates are derived--not as the only source for an estimate.

Project Planning and Management

Project Planning

Planning Objectives
To provide a framework that allows a software manager to make an estimate of resources, cost, and schedule. Project outcomes should be bounded by 'best case' and 'worst case' scenarios. Estimates should be updated as the project progresses.

Software Scope
data to be processed or produced
control parameters
function
performance
constraints
external interfaces
reliability

Scope Definition
Determine the customer's overall goals for the proposed system and any expected benefits.
Determine the customer's perceptions concerning the nature of a good solution to the problem.
Evaluate the effectiveness of the customer meeting.

Resource Estimation
Human resources - the number of people required and the skills needed to complete the development project.
Project methods and tools.
Reusable software resources - off-the-shelf components, modifiable components, new components.
Development environment - hardware and software required during the development process.

Scheduling Principles

Compartmentalization - the product and process must be decomposed into a manageable number of activities and tasks.
Interdependency - tasks that can be completed in parallel must be separated from those that must be completed serially.
Time allocation - every task has start and completion dates that take the task interdependencies into account.
Effort validation - the project manager must ensure that on any given day there are enough staff members assigned to complete the tasks within the time estimated in the project plan.
Defined responsibilities - every scheduled task needs to be assigned to a specific team member.
Defined outcomes - every task in the schedule needs to have a defined outcome (usually a work product or deliverable).
Defined milestones - a milestone is accomplished when one or more work products from an engineering task have passed quality review.

List the Deliverables
Documents.
Demonstration of function.
Demonstration of subsystem.
Demonstration of accuracy.
Demonstration of reliability, security, or speed.

Define the Milestones
Completion of an activity or deliverable (must be measurable).
Activities must have a definite start and stop.
A milestone is a point in time, not a time period like an activity.

Work Breakdown Structure
Create the work breakdown structure.
Separate the project into phases composed of steps.
Subdivide steps into activities as needed.

Gantt Charts

Gantt charts, named after their inventor Henry Gantt, are the preferred information media of senior managers, who usually find that the information portrayed in PERT charts is overly detailed.

PERT Charts

A PERT chart is a project management tool used to schedule, organize, and coordinate tasks within a project. PERT stands for Program Evaluation Review Technique, a methodology developed by the U.S. Navy in the 1950s to manage the Polaris submarine missile program. A similar methodology, the Critical Path Method (CPM), which was developed for project management in the private sector at about the same time, has become synonymous with PERT, so that the technique is known by any variation on the names: PERT, CPM, or PERT/CPM.
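Underneath, PERT/CPM scheduling reduces to a longest-path computation over the task network: the critical path is the chain of dependent tasks whose total duration fixes the minimum project duration. A minimal sketch with an invented task network; the task names and durations are purely illustrative:

from functools import lru_cache

# task: (duration in days, prerequisite tasks)
tasks = {
    "spec":   (5,  ()),
    "design": (10, ("spec",)),
    "code":   (15, ("design",)),
    "manual": (7,  ("design",)),
    "test":   (8,  ("code", "manual")),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # A task finishes no earlier than its duration past its latest prerequisite.
    duration, preds = tasks[task]
    return duration + max((earliest_finish(p) for p in preds), default=0)

print(earliest_finish("test"))  # 38 days: spec -> design -> code -> test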

PERT Chart Example

As shown in the figure, a PERT chart presents a graphical illustration of a project as a network diagram consisting of numbered nodes (either circles or rectangles) representing events, or milestones, in the project, linked by labeled vectors (directional lines) representing tasks in the project. The direction of the arrows on the lines indicates the sequence of tasks. In the diagram, for example, the tasks between nodes 1, 2, 4, 8, and 10 must be completed in sequence. These are called dependent or serial tasks. The tasks between nodes 1 and 2, and nodes 1 and 3, are not dependent on the completion of one to start the other and can be undertaken simultaneously. These tasks are called parallel or concurrent tasks. Tasks that must be completed in sequence but that don't require resources or completion time are considered to have event dependency. These are represented by dotted lines with arrows and are called dummy activities. For example, the dashed arrow linking nodes 6 and 9 indicates that the system files must be converted before the user test can take place, but that the resources and time required to prepare for the user test (writing the user manual and user training) are on another path. Numbers on the opposite sides of the vectors indicate the time allotted for the task.

The PERT chart is sometimes preferred over the Gantt chart, another popular project management charting method, because it clearly illustrates task dependencies. On the other hand, the PERT chart can be much more difficult to interpret, especially on complex projects. Frequently, project managers use both techniques.

People and Effort
Adding people to a project after it is behind schedule often causes the schedule to slip further.
The relationship between the number of people on a project and overall productivity is not linear (e.g. 3 people do not produce 3 times the work of 1 person, if the people have to work in cooperation with one another).
The main reasons for using more than one person on a project are to get the job done more rapidly and to improve software quality.

Management Spectrum
People
Product
Process
Project

People
Recruiting
Selection
Performance management
Training
Compensation
Career development
Organization
Work design
Team/culture development

Software Team Roles

Matching People to Tasks

Software Team Organization

Democratic decentralized
rotating task coordinators
group consensus

Controlled decentralized
permanent leader
group problem solving
subgroup implementation of solutions

Controlled centralized
top-level problem solving and internal coordination managed by the team leader

Factors Affecting Team Organization
Difficulty of the problem to be solved
Size of the resulting program
Team lifetime
Degree to which the problem can be modularized
Required quality and reliability of the system to be built
Rigidity of the delivery date
Degree of communication required for the project

Communication and Coordination
Formal, impersonal approaches - documents, milestones, memos
Formal interpersonal approaches - review meetings, inspections
Informal interpersonal approaches - information meetings, problem solving
Electronic communication - e-mail, bulletin boards, video conferencing
Interpersonal network - discussion with people outside the project team

Product

Product objectives
Scope
Alternative solutions
Constraint tradeoffs

Product Dimensions
Software scope - context, information objectives, function, performance
Problem decomposition - partitioning or problem elaboration; focus is on the functionality to be delivered and the process used to deliver it

Process

Framework activities populated with:
tasks
milestones
work products
quality assurance points

Framework Activities
Customer communication
Planning
Risk analysis
Engineering
Construction and release
Customer evaluation

Process Considerations
The process model chosen must be appropriate for the customers, the developers, the characteristics of the product, and the project development environment.
Project planning begins with melding the product and the process.
Each function to be engineered must pass through the set of framework activities defined for a software organization.
Work tasks may vary, but the common process framework (CPF) is invariant (e.g. size does not matter).
The software engineer's task is to estimate the resources required to move each function through the framework activities to produce each work product.

Project

90/10 Rule - 90% of the effort on a project is spent accomplishing 10% of the work.

Planning
Monitoring
Controlling

Managing the Project
Start on the right foot.
Maintain momentum.
Track progress.
Make smart decisions.
Conduct a postmortem analysis.

W5HH Principle
Why is the system being developed?
What will be done, and when?
Who is responsible for a function?
Where are they organizationally located?
How will the job be done technically and managerially?
How much of each resource is needed?

Software Requirements Specifications

A software engineering task bridging the gap between system requirements engineering and software design. It provides the software designer with a model of:
system information
function
behavior
The model can be translated to data, architectural, and component-level designs. Expect to do a little bit of design during analysis and a little bit of analysis during design.

Analysis Objectives
Identify the customer's needs.
Evaluate the system for feasibility.
Perform economic and technical analysis.
Allocate functions to system elements.
Establish schedule and constraints.
Create system definitions.

Software Requirements Analysis Phases
Problem recognition
Evaluation and synthesis (focus is on what, not how)
Modeling
Specification
Review

Management Questions

How much effort should be put towards analysis?
Who does the analysis?
Why is it so difficult?
Bottom line - who pays for it?

Feasibility Study
Economic feasibility - cost/benefit analysis
Technical feasibility - hardware/software/people, etc.
Legal feasibility
Alternatives - there is always more than one way to do it

System Specification
Introduction.
Functional data description.
Subsystem description.
System modeling and simulation results.
Products.
Appendices.

Requirements
A requirement is a feature of the system, or a system function, used to fulfill the system purpose. Focus on the customer's needs and problem, not on solutions:
Requirements definition document (written for the customer).
Requirements specification document (written for the programmer; technical staff).

Types of Requirements
Functional requirements:
input/output
processing
error handling
Non-functional requirements:
Physical environment (equipment locations, multiple sites, etc.)
Interfaces (data medium, etc.)
User and human factors (who are the users, their skill level, etc.)
Performance (how well is the system functioning)
Documentation
Data (qualitative stuff)
Resources (funding, physical space)
Security (backup, firewall)
Quality assurance (max. down time, MTBF, etc.)

Requirement Validation
Correct?

Consistent?
Complete?
Externally complete - all desired properties are present.
Internally complete - no undefined references.
Each requirement describes something actually needed by the customer.
Verifiable (testable)?
Traceable?
