Chapter 1

An Introduction to Software Engineering

We are all aware that software has become part and parcel of our daily lives. It has become the key element in the evolution of computer-based systems and products. Today, software takes on a dual role: it is a product and, at the same time, the vehicle for delivering a product. As a product, it delivers the computing potential embodied by computer hardware. Whether it resides within a food processor, a washing machine or a cellular phone, or operates inside a computer, software is an information transformer – producing, managing, acquiring, modifying, displaying or transmitting information that can be as simple as a bit or as complex as an image. As a vehicle used to deliver a product, software acts as the basis for the control of the computer (operating systems), the communication of information (networks), and the creation and control of other programs (software tools and environments). Today, software works both explicitly and behind the scenes in virtually all aspects of our lives, making them more comfortable and efficient. For this reason, software engineering is more important than ever. Good software engineering practices must ensure that software makes a positive contribution to our lives.


At the end of this chapter you should be able to answer these questions:

• What software is, its important characteristics, its components and applications
• What is software engineering?
• Is it possible to have a generic view of software engineering?


• What is the importance of software process models?
• What are the benefits and limitations of some of the major software process models?

Software is a set of instructions or computer programs that when executed provide desired function and performance. It is both a process and a product. To gain an understanding of software, it is important to examine the characteristics of software, which differ considerably from those of hardware.

1.2.1 Software Characteristics
1. Software is developed or engineered; it is not manufactured. Unlike hardware, software is logical rather than physical. It has to be designed well before it is produced. In spite of the availability of many automated software development tools, it is the skill and creativity of the developers and proper management by the project manager that count for a good software product.

2. Software does not “wear out”. As time progresses, hardware components start deteriorating – they are subjected to environmental maladies such as dust, vibration and temperature, and at some point they tend to break down. The defective components can then be traced and replaced. Software, however, is not susceptible to such environmental effects, so it does not wear out. The software works exactly the same way even years after it was first developed, unless changes are introduced to it. Changes in the software may occur due to changes in the requirements, and these changes may introduce defects, thus deteriorating the quality of the software. So, software needs to be maintained properly.

3. Most software is custom-built, rather than being assembled from existing components. Most engineered products are first designed before they are manufactured. Designing includes identifying the various components of the product before they are actually assembled, and several people can work independently on these components, making the manufacturing process highly flexible. In software, breaking a program into modules is a difficult task, since each module is highly interlinked with other modules, and it requires a lot of skill to integrate different modules into one. Nowadays the term component is widely used in the software industry, where object-oriented systems are in use.

BSIT 44 Software Engineering


1.2.2 Software Components
The widespread use of object-oriented technology has resulted in the creation of software components. A software component is essentially a software module or a class of objects. It should be designed and implemented so that it can be reused in many different programs. The advantage of software reuse is that it allows for faster software development and higher-quality programs. Modern reusable components encapsulate both data and the operations that manipulate the data into one unit. Software components are built using a programming language.
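As an illustrative sketch (not from the original text), a reusable component can be as simple as a class that bundles its data with the operations on that data. The Stack class below is a made-up example of such a component:

```python
# A hypothetical reusable component: a Stack class that encapsulates
# both its data (the stored items) and the operations on that data.
class Stack:
    """A reusable last-in, first-out container."""

    def __init__(self):
        self._items = []          # encapsulated data

    def push(self, item):
        self._items.append(item)  # operation bundled with the data

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items


# The same component can be reused unchanged in many different programs:
s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # prints 2
```

Because the class exposes only its operations and hides its internal list, any program that needs last-in, first-out behaviour can reuse it without modification.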

1.2.3 Software Applications
Software may be applied in any situation for which a predefined set of procedures (algorithms) has been defined. System software, real-time software, business software, embedded software, personal computer software, etc., are some of the areas where potential applications exist.

Software engineering is a discipline. It uses the existing tools and methods of software development and systematizes the entire process of software development. There are a number of formal definitions for software engineering, given by experts in the field. For the purpose of understanding, we shall look at some of them.

“Software engineering is the establishment and use of sound engineering principles in order to obtain economical software that is reliable and works efficiently on real machines” – a definition by Fritz Bauer.

“Software engineering is the application of science and mathematics by which the capabilities of computer equipment are made useful to man via computer programs, procedures and associated documentation” – a definition by Barry Boehm.

The IEEE defines software engineering more comprehensively as “the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.”

To engineer software, a software development process must be defined. Software engineering can be categorized into three generic phases, regardless of application area, project size or complexity:

• The definition phase
• The development phase
• The maintenance phase


The definition phase focuses on what. During definition, the software developer attempts to identify what information is to be processed, what function and performance are desired, what system behavior can be expected, what interfaces are to be established, what design constraints exist, and what validation criteria are required to define a successful system. This phase includes three main tasks: system or information engineering, software project planning and requirements analysis.

The development phase focuses on how. During development, a software engineer attempts to define how data are to be structured, how function is to be implemented as a software architecture, how procedures are to be implemented, how the design will be translated into a programming language, how testing will be performed, and so on. This phase includes three main tasks: software design, code generation and software testing.

The maintenance phase focuses on change associated with the software. The four main kinds of change encountered during this phase are:

Correction – It is likely that customers will find errors or defects in the software in spite of the quality assurance activities. Corrective maintenance changes the software to correct defects.

Adaptation – As time progresses, the original environment (e.g., CPU, operating system, business rules, external product characteristics) for which the software was developed is likely to change. Adaptive maintenance modifies the software to accommodate changes to its external environment.

Enhancement – As the software is used, the customer will recognize the need for additional functional requirements that will benefit him. This type of maintenance extends the software beyond its original functional requirements.

Prevention – Computer software deteriorates due to changes. So, preventive maintenance, often called software reengineering, must be conducted in order to make subsequent changes to the computer software easier.

Software engineering is a discipline that integrates process, methods and tools for the development of computer software. This development strategy is often referred to as a process model or software engineering paradigm. A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required. A number of different process models for software engineering have been proposed, each



exhibiting strengths and weaknesses, but all having a series of generic phases in common. In the subsections that follow, we look at some of the important software process models.

Figure 1.1 The waterfall model: system feasibility (feasibility report) → requirements analysis & project planning (requirements document & project plan) → system design (system design document) → detailed design (detailed design document) → coding (programs) → testing and integration (test plan, test report and manuals) → installation (installation report) → operations and maintenance, with verification and validation at each phase.

1.5.1 The Waterfall Model


This is the simplest and the most widely used process model for software development. Here the phases involved in software development are organized in a linear order. In a typical waterfall model, a project begins with feasibility analysis. On successfully demonstrating the feasibility of the project, requirements analysis and project planning begin. Design starts after completing requirements analysis, and coding starts after completing the design phase. On completing the coding phase, testing is done. On successful completion of testing, the system is installed. After this, the regular operation and maintenance of the system take place. The linear ordering of these activities is shown in figure 1.1. The important characteristics of this model are:

• The process of software development consists of a linear set of distinct phases: requirements analysis, project planning, system design, detailed design, coding and unit testing, and system integration and testing
• Each phase is distinct and is mandatory for every project, irrespective of project size
• Every phase has well defined entry and exit criteria
• At every phase there is a provision for verification and validation, which helps in the correction of errors
• The strength of this model is that it allows for communication between the customer and the software developer, and specifies what will be delivered, when, and at what cost
• The weakness of this model is that it requires a complete set of user requirements before the commencement of design, which is difficult to obtain since requirements keep changing with needs and time

1.5.2 The Prototype Model
The aim of this model is to overcome a limitation of the waterfall model: instead of freezing the requirements before the design or coding phase, a throw-away prototype is built to help understand the requirements. The prototype is developed based on the currently known requirements. This model is



very attractive to customers, since they get an actual feel of the system that they intend to use. This model is also well suited for complicated and large systems for which there is no manual process or existing system that can be used to determine the requirements. The process model of this approach is shown in figure 1.2.

Figure 1.2 The prototyping model (requirements analysis, design, code, test)

The development of the prototype starts when a preliminary version of the requirements specification document has been developed. At this stage, there is a reasonable understanding of the system and its needs. After the prototype has been developed, the clients are permitted to use it. Based on the clients’ feedback, the developer incorporates the suggested changes into the system and gives the modified system back for the clients’ use. This cycle is repeated until the client has no further modifications, at which stage the final requirements specification is ready for further processes like design, coding and testing. The important characteristics of this model are:

• For prototyping to be feasible, the cost of requirements analysis must be kept low
• The development approach followed is “quick and dirty” – the focus is on quicker development rather than on quality
• Only minimal documentation is required, because it is a throw-away prototype
• This model is very useful in projects where the requirements are not properly understood in the beginning
• It is an excellent method for reducing some types of risks involved with a project



1.5.3 The Evolutionary Software Process Model
Software, like any other system, evolves over a period of time. So, software engineers need process models that can take care of software products that evolve over time. Such models are termed evolutionary models. Evolutionary models are iterative. Some examples are the incremental model, the spiral model, the concurrent development model and the component assembly model.

The Incremental Model
It is also known as the iterative enhancement model. It is an evolutionary model that combines elements of the waterfall model with the iterative philosophy of the prototype model. The incremental model is shown in figure 1.3. This model applies linear sequences staggered over time, and each linear sequence produces a deliverable “increment” of the software. The first increment is often a core product, in which the basic requirements are addressed. This core product is used by the customer. Based on the results of use and/or evaluation, a plan is developed for the next increment. The plan addresses changes to the core product to better meet the needs of the customer, along with the delivery of additional features and functionality. This process is repeated following the delivery of each increment, until the complete product is produced. This approach is advantageous for in-house software development, but a problem with it is that the iterations may never end and the user may never really get the final product.

Figure 1.3 The incremental model: after system engineering, increments 1, 2 and 3 each pass through analysis → design → code → test, delivering the 1st, 2nd and 3rd increments along calendar time.


The Spiral Model
This is a relatively new model, proposed by Barry Boehm. It incorporates elements of the prototype approach along with the classic software lifecycle. The activities in this model are organized like a spiral that has many cycles. Typically, the inner cycles represent the early phases of requirements analysis along with prototyping, and the outer spirals represent the classic software lifecycle. The spiral model is shown in figure 1.4.

The model is divided into four quadrants, each representing a major activity: planning, risk analysis, engineering and customer evaluation. The software process cycle begins with the planning activity, represented by the first (upper-left) quadrant. Each cycle begins with the identification of objectives for that cycle, and the alternatives and constraints associated with those objectives. The second quadrant is associated with the risk analysis activity. It is used to evaluate the different alternatives based on the objectives and constraints identified in the first quadrant; a risk assessment evaluates the development effort and the risk involved in that particular iteration. The third quadrant covers the engineering activity, which involves the actual development of the software and uses various development strategies to resolve the uncertainties and risks; it may include activities like simulation and benchmarking. The last quadrant is the customer evaluation phase. It involves a review of the preceding development stage, and based on the outcome of the development step, the next cycle is planned. Some important characteristics of this model are:

• It uses an iterative approach, and within each iteration it introduces a phase of risk analysis to accommodate the changing environment
• It allows the use of prototyping at any stage to further refine the requirements and thereby reduce risk
• It maintains a systematic approach, as in the waterfall model
• Although this model is quite flexible, it has some problems: risk analysis is a major phase in this model, and the assessment of risks requires considerable technical expertise. Also, since this approach is relatively new, there are not many practitioners of it, unlike other methods



Figure 1.4 The spiral model: the four quadrants are (1) determine objectives, alternatives and constraints; (2) evaluate alternatives and identify and resolve risks (risk analysis, prototypes, simulations, models, benchmarks); (3) develop and verify the next-level product (concept of operation, software requirements and validation, product design, detailed design, code, unit test, integration and test, acceptance test, service); and (4) plan the next phase (requirements plan, life cycle plan, development plan, integration and test plan), with a review after each cycle.

Software is a set of instructions or computer programs that, when executed, provide desired function and performance. It is both a process and a product. Software engineering is a discipline that integrates process, methods, and tools for the development of computer software. A number of process models for software engineering have been proposed, each with its own strengths and weaknesses. You have learnt some of the important aspects of software and methods of engineering it in this unit. You will read more about the generic phases involved in the development of software in the subsequent chapters.



I Fill in the blanks:

1. Software is a set of ————————— that when executed provide desired function and performance.
2. Software is a process and ——————————.
3. Software engineering is a ——————————.
4. The definition phase of software engineering includes tasks such as system engineering, software project planning and —————————.
5. —————————————— results in modification to the software to accommodate changes to its external environment.
6. In the waterfall model, the phases involved in the software development are organized in ———————————.
7. The development approach followed in the prototype model is ———————————.
8. The ——————————— model is also known as the iterative enhancement model.

II Answer the following questions in one or two sentences:

1. What is software? List out the important characteristics of software.
2. What is the advantage of software reusability?
3. Briefly list out the major application areas of software.
4. Give the IEEE definition of software engineering.
5. Name the three generic phases of software engineering.
6. What is a software engineering paradigm?
7. What are the limitations of the waterfall model?
8. What is a throw-away prototype?
9. What is an evolutionary software process model?
10. What are the drawbacks of the spiral model?

I Answers

1. instructions or computer programs
2. product
3. discipline
4. requirements analysis
5. adaptive maintenance
6. linear order
7. quick and dirty
8. incremental


1. Roger S. Pressman, Software Engineering – A Practitioner’s Approach, Fourth Edition, McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering, Second Edition, Narosa Publishing House.

Chapter 2

System Analysis and Requirements Specification

In the previous chapter, the stages of system development and some of the important process models were discussed. Each process model of the software development process includes a set of activities aimed at capturing requirements: understanding what the customers and users expect the system to do. Thus, understanding the system comes through what is known as requirements capturing and analysis. In this chapter, we will see some of the ways of analyzing the problem and explore the characteristics of requirements. We shall also see how to document the requirements for use by the design and test teams.


At the end of this chapter you should be able to

• Appreciate the importance of capturing requirements
• Discuss the requirements process
• Answer the question “What is a software requirements specification?”
• Document the requirements
• Understand the importance of validating requirements





According to the IEEE definition, a requirement is (1) a condition or capability needed by a user to solve a problem or achieve an objective; (2) a condition or capability that must be met or possessed by a system to satisfy a contract, standard, specification or other formally imposed document [IEE87]. The software requirements phase deals with the requirements of the proposed system and produces a document at the end of the requirements phase of the software development cycle. This document is known as the Software Requirements Specification (SRS). The SRS completely describes what the proposed software system should do, without describing how the software will do it.

2.2.1 Need for SRS
The SRS is a medium through which the client’s and users’ needs are accurately specified; it forms the basis of software development. An SRS is basically an organization’s understanding (in writing) of a customer or potential client’s system requirements and dependencies at a particular point in time, prior to any actual design or development work. It’s a two-way insurance policy that assures that both the client and the organization understand the other’s requirements from that perspective at a given point in time.

The SRS is often referred to as the “parent” document because all subsequent project management documents, such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans, are related to it. It’s important to note that an SRS contains functional and nonfunctional requirements only; it doesn’t offer design suggestions, possible solutions to technology or business issues, or any other information other than what the development team understands the customer’s system requirements to be.

The SRS is typically developed during “requirements development,” the initial product development phase in which information is gathered about what requirements are needed. This information-gathering stage can include onsite visits, questionnaires, surveys, interviews, and perhaps a return-on-investment (ROI) analysis or needs analysis of the customer or client’s current business environment. The actual specification is then written after the requirements have been gathered and analyzed.

A requirements-gathering team consisting solely of programmers, product marketers, systems analysts/architects, and a project manager runs the risk of creating a specification that is too heavily loaded with technology-focused or marketing-focused issues. The presence of a technical writer on the team helps place user or customer requirements at the core of the project, providing more of an overall balance to the design of the SRS, product, and documentation. A well-designed, well-written SRS accomplishes four major goals:

It provides feedback to the customer. An SRS is the customer’s assurance that the development organization understands the issues or problems to be solved and the software behavior necessary to address those problems. Therefore, the SRS should be written in natural language in an unambiguous manner that may also include charts, tables, data flow diagrams, decision tables, and so on.



It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component parts in an orderly fashion.

It serves as an input to the design specification. As mentioned previously, the SRS serves as the parent document for subsequent documents, such as the software design specification and statement of work. Therefore, the SRS must contain sufficient detail in the functional system requirements that a design solution can be devised.

It serves as a product validation check. The SRS also serves as the parent document for the testing and validation strategies that will be applied to the requirements for verification.

2.2.2 Requirement Process
A requirement is a feature of the system or a description of something the system is capable of doing in order to fulfill the system’s purpose. Determining requirements is an important activity in the software development life cycle, since the client or customer has some notion of the system that needs to be built, either replacing an existing system or extending it. The process of determining requirements for a system, shown in figure 2.1, has two phases:

• requirements elicitation and analysis
• requirements definition and specification

The requirements elicitation and analysis phase is an important part of the software development process. Here we work with our customers to get the requirements, by asking questions, demonstrating similar systems, or even developing prototypes of all or part of the proposed system. It has three parts:

1. problem analysis, which answers the question “have we captured all the user needs?”
2. problem description, which answers the question “are we using the right techniques or views?”
3. prototyping and testing, which answers the question “is the function feasible?”

Next is the requirements definition and specification phase. We capture the requirements in a document or database and validate them for completeness, correctness and consistency. This is done in the documentation and validation part, which answers the question “have we captured what the user expects?”



Figure 2.1 The process of determining requirements: requirements elicitation and analysis (problem analysis, problem description, prototyping and testing), followed by requirements definition and specification (documentation and validation).

The basic purpose of problem analysis is to understand the problem and its constraints. Its main aim is to understand the needs of the clients and the users – what they want from the system. Analysis leads to the actual specification. It involves interviewing the clients and the end users; existing documents, clients and end users are the main sources of information for the analysts. There are a number of approaches to problem analysis, and we will discuss a few of them: the informal approach, structured analysis, and prototyping.

Informal approach

This approach has no defined methodology. The information about the system is obtained by interaction with the client and end users, study of the existing system, etc. It uses conceptual modeling: the problem and the system model are built in the minds of the analysts and directly translated into the SRS. Once the initial draft of the SRS is ready, it may be used in further meetings.

Structured analysis

This method focuses on the functions performed in the problem domain and the data input and output by these functions. It is a top-down refinement process that helps an analyst decide what type of information to obtain at various stages of analysis. The technique mainly depends on two data representation methods: the data flow diagram (DFD) and the data dictionary.



Data Flow Diagrams

Data flow diagrams, also called data flow graphs, are commonly used during problem analysis for understanding a system. A DFD shows the flow of data through the system; it does not represent procedural information. It views a system as a function that transforms inputs into desired outputs. Any complex system can be broken down into a system with a series of simple data transformations.
The DFD is an excellent communication tool for analysts to model processes and functional requirements. It is still considered one of the best modeling techniques for eliciting and representing the processing requirements of a system. Used effectively, it is a useful and easy to understand modeling tool. It has broad application and usability across most software development projects. It is easily integrated with data modeling, workflow modeling tools, and textual specs. Together with these, it provides analysts and developers with solid models and specs. Alone, however, it has limited usability. It is simple and easy to understand by users and can be easily extended and refined with further specification into a physical version for the design and development teams. The different versions are Context Diagrams (Level 0), Partitioned Diagrams (single process only — one level), Functionally decomposed, leveled sets of Data Flow Diagrams.

Process (i.e., Activity, Function)

Depending on the level of the diagram, it may represent the whole system, as in a Context (level 0) diagram, or a business area, process (activity), function, etc. at lower levels.

Symbol: circle (Yourdon notation) or rounded rectangle (Gane & Sarson notation). In the physical model, a program label is identified at the bottom of the symbol.

External Entity (i.e., Sink, Source, Terminator)

A person or group which interacts with the system – something outside the system, e.g., Customer, Supplier, Government Agency, Accounting Department, the Human Resources System. It is not a user. External entities are usually external to the business or system but may be internal (e.g., the Marketing Department).

Symbol: rectangular box, which may be shaded.

Data Store

A repository of information. In the physical model, this represents a file, table, etc. In the logical model, a data store is an object or entity.

Symbol: two parallel lines (Yourdon notation) or an open-ended rectangle (Gane & Sarson notation).

Data Flows

The directional movement of data to and from external entities, processes and data stores. In the physical model, a flow into a data store means a write, update, delete, etc., while a flow out of a data store means a read, query, display or select type of transaction.

Symbol: solid line with arrow. Each data flow is identified with a descriptive name that represents the information (data packet) on the data flow.

An example DFD for an employee payment system is shown in figure 2.2.

Data Dictionary

The data dictionary is a repository of the various data flows defined in a DFD. It states the structure of each data flow in the DFD. The components in the structure of a data flow may also be specified in the data dictionary, as well as the structure of files. The notations used to define the data structure are:

• + (plus) represents a sequence or composition
• | (vertical bar) represents selection, meaning one OR the other
• * represents repetition, meaning one or more occurrences

Prototyping

In this method of problem analysis, a partial system is constructed, which is used by the client and the developers to gain a better understanding of the problem and the needs. There are two approaches to this method: throwaway and evolutionary prototyping.

Throwaway prototyping In this approach, the prototype is constructed with the idea that it will be discarded after the completion of analysis, and the final system will be built from scratch.

Evolutionary prototyping In this approach, the prototype is built with the idea that it will later be converted into the final system.
In order for prototyping to be feasible for requirements analysis, its cost must be kept low; the cost of developing and running a prototype can be around 10% of the total development cost of the software.

BSIT 44 Software Engineering


[Figure: a data flow diagram with processes Get employee file, Pay, Deduct taxes, and Issue paycheck; external entity Worker; data store Company Records; and data flows Employee ID, Employee record, Pay rate, Regular hours, Overtime rate, Overtime hours, Weekly pay, Overtime pay, Total pay, Tax rates, Net pay, and check.]

Figure 2.2 DFD for an employee pay system



The following figure 2.3 gives the data dictionary associated with the DFD of an employee payment system.

Weekly_timesheet = Employee_name + Employee_id + [Regular_hours + Overtime_hours]*
Pay_rate = [hourly | daily | weekly] + dollar_amount
Employee_name = Last + First + Middle_initial
Employee_id = digit + digit + digit + digit

Figure 2.3 Data Dictionary
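The data dictionary notation maps naturally onto regular expressions: sequence (+), selection (|), and repetition (*) correspond directly to regex concatenation, alternation, and quantifiers. As an illustrative sketch (the field names come from figure 2.3; the concrete patterns, such as the form of dollar_amount, are our own assumptions):

```python
import re

# Hypothetical regex equivalents of the figure 2.3 definitions.
# Sequence (+) becomes concatenation, selection (|) becomes regex
# alternation, and repetition (*) becomes a quantifier.
employee_id = re.compile(r"^\d{4}$")  # digit + digit + digit + digit
pay_rate = re.compile(r"^(hourly|daily|weekly) \d+$")  # [hourly|daily|weekly] + dollar_amount

assert employee_id.match("2047")
assert not employee_id.match("20471")     # too many digits
assert pay_rate.match("weekly 450")
assert not pay_rate.match("monthly 450")  # not a valid selection
```

Checking sample values against such patterns is one inexpensive way to keep the data dictionary and the implementation consistent.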

2.4 REQUIREMENTS SPECIFICATION

Requirements specification is an activity carried out in parallel with problem analysis, although its completion has to come after the problem analysis. Requirements specification yields an SRS as the final document. Next, we shall discuss the characteristics of an SRS, the components associated with it, and specification languages.

2.4.1 Characteristics of an SRS
Table-1 below gives the fundamental characteristics of a quality (good) SRS, which is essential for satisfying the basic goals of the system.

Table-1 SRS Quality Characteristics

Complete: The SRS defines precisely all the go-live situations that will be encountered and the system’s capability to successfully address them.

Consistent: SRS capability functions and performance levels are compatible, and the required quality features (security, reliability, etc.) do not negate those capability functions. For example, the only electric hedge trimmer that is safe is one that is stored in a box and not connected to any electrical cords or outlets.

Accurate: The SRS precisely defines the system’s capability in a real-world environment, as well as how it interfaces and interacts with it. This aspect of requirements is a significant problem area for many SRSs.

Modifiable: The logical, hierarchical structure of the SRS should facilitate any necessary modifications (grouping related issues together and separating them from unrelated issues makes the SRS easier to modify).

Ranked: Individual requirements of an SRS are hierarchically arranged according to stability, security, perceived ease/difficulty of implementation, or another parameter that helps in the design of that and subsequent documents.

Testable: An SRS must be stated in such a manner that unambiguous assessment criteria (pass/fail or some quantitative measure) can be derived from the SRS itself.

Traceable: Each requirement in an SRS must be uniquely identified to a source (use case, government requirement, industry standard, etc.).

Unambiguous: The SRS must contain requirements statements that can be interpreted in one way only. This is another area that creates significant problems for SRS development because of the use of natural language.

Valid: A valid SRS is one in which all parties and project participants can understand, analyze, accept, or approve it. This is one of the main reasons SRSs are written using natural language.

Verifiable: A verifiable SRS is consistent from one level of abstraction to another.

Most attributes of a specification are subjective, and a conclusive assessment of quality requires a technical review by domain experts. Using indicators of strength and weakness provides some evidence that preferred attributes are or are not present.

What makes an SRS “good”? How do we know when we’ve written a “quality” specification? The most obvious answer is that a quality specification is one that fully addresses all the customer requirements for a particular product or system. While many quality attributes of an SRS are subjective, we do need indicators or measures that provide a sense of how strong or weak the language is in an SRS. A “strong” SRS is one in which the requirements are tightly, unambiguously, and precisely defined in such a way that leaves no other interpretation or meaning to any individual requirement.

2.4.2 Designing an SRS
You will probably be a member of the SRS team (if not, ask to be), which means SRS development will be a collaborative effort for a particular project. In many cases, your company will have developed SRSs before, so you should have examples to use. But let’s assume you’ll be starting from scratch. Several standards organizations (including the IEEE) have identified nine topics that must be addressed when designing and writing an SRS:

1. Interfaces
2. Functional Capabilities
3. Performance Levels
4. Data Structures/Elements
5. Safety
6. Reliability
7. Security/Privacy
8. Quality
9. Constraints and Limitations


But how do these general topics translate into an SRS document? What, specifically, does an SRS document include? How is it structured? And how do you get started? An SRS document typically includes four ingredients, as given below:
1. A template
2. A method for identifying requirements and linking sources
3. Business operation rules
4. A traceability matrix

The first and biggest step in writing an SRS is to select an existing template that you can fine-tune for your organizational needs (if you don’t have one already). There is no “standard specification template” for all projects in all industries, because the individual requirements that populate an SRS are unique not only from company to company, but also from project to project within any one company. The key is to select an existing template or specification to begin with, and then adapt it to meet one’s needs.

2.4.3 Specification Languages
Requirements specification necessitates the use of some specification language, which should possess many of the desired qualities of an SRS. Unlike a formal language, natural language allows developers and designers some latitude; nevertheless, the natural language of an SRS must be exact, unambiguous, and precise, because the design specification, statement of work, and other project documents are what drive the development of the final product. Here, we will look at some of the commonly used languages for requirements specification.



Structured English

Natural languages have been widely used for specifying requirements, since they have the advantage of being easily understood by both the client and the developer. However, the use of natural language has some problems associated with it. For smaller systems, requirements may be conveyed verbally; as the system becomes more and more complex, written requirements become necessary, and written natural-language requirements tend to be imprecise and ambiguous. So analysts are making an effort to move from natural languages towards formal languages for requirements specification. To reduce these problems, the natural language is used in a structured manner. If English is used, requirements are broken into sections and paragraphs, and paragraphs further into subparagraphs. Some organizations insist on using words like “shall”, “perhaps”, and “should”, and try to restrict the use of common phrases, in order to improve precision and reduce ambiguity.

Regular Expressions

Regular expressions can be used to specify the structure of symbol strings formally. They can be considered a grammar for specifying the valid sequences in a language, and they can be processed automatically. Some basic constructs, like atoms, composition, alternation, and closure, are used in regular expressions to define many data streams.

Decision Tables

A decision table is a formal, table-based notation that can be used to check qualities like completeness and lack of ambiguity in a requirements specification. Decision tables are helpful in specifying complex decision logic.

[Figure: a sample decision table with conditions C1, C2, and C3 and their corresponding actions.]

The decision table has two parts: the top part specifies the different conditions, and the bottom part specifies the different actions.
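To make the two-part structure concrete, a decision table can be held directly as data: each column of the table becomes a mapping from a combination of condition outcomes to a list of actions. The conditions and actions below (a library loan policy) are invented for illustration and are not part of the text:

```python
from itertools import product

# Each column of the table: (member_ok, item_available, has_fines) -> actions.
rules = {
    (True,  True,  False): ["issue loan"],
    (True,  True,  True):  ["refuse loan", "request payment"],
    (True,  False, False): ["place reservation"],
    (True,  False, True):  ["refuse loan", "request payment"],
    (False, True,  False): ["refuse loan"],
    (False, True,  True):  ["refuse loan"],
    (False, False, False): ["refuse loan"],
    (False, False, True):  ["refuse loan"],
}

# Completeness check: every combination of the three conditions is covered.
# (Because dict keys are unique, no combination can map to two rules.)
assert set(rules) == set(product([True, False], repeat=3))

def decide(member_ok, item_available, has_fines):
    return rules[(member_ok, item_available, has_fines)]

assert decide(True, True, False) == ["issue loan"]
```

Because the rules are plain data, the completeness check above (every condition combination covered exactly once) is exactly the kind of quality check the text describes.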

2.4.4 Structure of an SRS


Table-2 below shows what a basic SRS outline might look like. This example is an adaptation and extension of IEEE Standard 830-1998:

Table-2 A sample of a basic SRS outline

1. Introduction
   1.1 Purpose
   1.2 Document conventions
   1.3 Intended audience
   1.4 Additional information
   1.5 Contact information/SRS team members
   1.6 References
2. Overall Description
   2.1 Product perspective
   2.2 Product functions
   2.3 User classes and characteristics
   2.4 Operating environment
   2.5 User environment
   2.6 Design/implementation constraints
   2.7 Assumptions and dependencies
3. External Interface Requirements
   3.1 User interfaces
   3.2 Hardware interfaces
   3.3 Software interfaces
   3.4 Communication protocols and interfaces
4. System Features
   4.1 System feature A
       4.1.1 Description and priority
       4.1.2 Action/result
       4.1.3 Functional requirements
   4.2 System feature B
5. Other Nonfunctional Requirements
   5.1 Performance requirements
   5.2 Safety requirements
   5.3 Security requirements
   5.4 Software quality attributes
   5.5 Project documentation
   5.6 User documentation
6. Other Requirements
   Appendix A: Terminology/Glossary/Definitions list
   Appendix B: To be determined

2.5 REQUIREMENTS VALIDATION

It is important that the final requirements be validated before the next phase of software development, design, starts. A number of methods exist for requirements validation, such as automated cross-referencing, reading, prototyping, and review. Among them, the most commonly used method is the requirements review. In this technique, a team of people, consisting of a representative of the client, the author of the requirements document, a person from the design team, etc., is involved in the review process to find errors and discuss the requirements specification of the system. Checklists are often used in such review meetings to focus the reviews and to ensure that no major source of errors is overlooked by the reviewers.

2.6 SUMMARY


Requirements collection is crucial to the development of successful information systems. To achieve a high level of IS quality, it is essential that the SRS be developed in a systematic and comprehensive way. If this is done, the system will meet the user’s needs and lead to user satisfaction. If it is not done, the software is likely not to meet the user’s requirements, even if it conforms to the specification and has few defects.

There is much more we could say about requirements and specifications. Hopefully, this information will help you get started when you are called upon, or step up, to help the development team. Writing top-quality requirements specifications begins with a complete definition of customer requirements. Coupled with natural language that incorporates strength and weakness quality indicators, not to mention the adoption of a good SRS template, technical communications professionals well-trained in requirements gathering, template design, and natural language use are in the best position to create and add value to such critical project documentation. In the next chapter we will see how to transform the requirements analyzed here into a design.

I Fill in the blanks

1. The software requirements deal with the ———————— of the proposed system.
2. The document produced at the end of the requirements phase of the software development cycle is known as ————————
3. The process of determining requirements for a system has two phases, namely requirements elicitation and analysis, and ————————
4. Structured analysis mainly depends on data flow diagrams and ————————
5. An external entity is represented using ———————— in a DFD.
6. A ———————— SRS is one in which the requirements are tightly, unambiguously, and precisely defined in such a way that leaves no other interpretation or meaning to any individual requirement.
7. ———————— are often used in review meetings to focus the reviews and to ensure that no major source of errors is overlooked by the reviewers.




II Answer the following questions in one or two sentences:

1. Give the IEEE definition of software requirements analysis.
2. What is problem analysis?
3. What is a data dictionary?
4. What is an evolutionary prototype model?
5. When do you say that an SRS is traceable?
6. List out the important characteristics of a good SRS.
7. Name the important methods of requirements validation.
8. What are specification languages? Give an example.
9. Give the outline structure of an SRS.

I Answers

1. requirements
2. Software Requirements Specification (SRS)
3. requirements definition and specification
4. data dictionary
5. rectangle
6. strong
7. Checklists

References

1. Roger S. Pressman, Software Engineering: A Practitioner’s Approach, 4th edition, McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering, 2nd edition, Narosa Publishing House.
3. Ian Sommerville, Software Engineering, 5th edition, Addison-Wesley, 1996.

Chapter 3

System Design

In the last chapter, we learned how to work with the customers to determine what they want out of the proposed system. The outcome of the requirements analysis and specification phase was a system requirements specification document. This document serves two purposes: for the customer, it captures their needs; for the designers, it states the problem in technical terms. The next step in development is to translate those desires into a solution: a design that will satisfy the customers’ needs. In this chapter we will see what to do and how to do it.


At the end of this chapter you should be able to:

• know the process of designing software
• describe the different types of design
• document the design specifications
• say what coupling is and name the different types of coupling
• say what cohesion is and mention the different types of cohesion
• appreciate the importance of various design notations
• give the main objective of transform analysis and transaction analysis
• give the necessary steps involved in transform analysis as well as transaction analysis




Software design is the activity that follows requirements analysis. This phase begins when the requirements document for the system to be developed is available. Design is an important phase in the software development life cycle: it bridges the requirements specification and the final solution that satisfies the requirements. The goal of the design process is to produce a model of the system, which can be used later to build that system. The model thus produced is called the design of the system. The design process for software has two levels:
1. system design or top-level design
2. detailed design or logic design

System design Using this, the modules needed for the system, the specifications of these modules, and the way these modules need to be connected are decided.
Detailed design Using this, the internal design of the modules, or how the specifications of the modules can be satisfied, is decided. This type of design essentially expands the system design to contain a more detailed description of the processing logic and data structures, so that the design is sufficiently complete for coding.

A design can be object-oriented or function-oriented. In function-oriented design, the design consists of module definitions, with each module supporting a functional abstraction. In object-oriented design, the modules in the design represent data abstractions.

Software design is both a process and a model. The design process is a set of iterative steps that enable the designer to describe all aspects of the software to be built. The basic design principles enable the software engineer to proceed with the design process. Here are some software design principles suggested by Davis:

• The design process should not suffer from “tunnel vision”: a good design should consider alternative approaches, based on the requirements of the problem, the resources available, and the design concepts.
• The design should be traceable to the analysis model: since the design needs to satisfy multiple requirements of the problem, it is necessary to have a means of tracking the requirements.
• The design should not reinvent the wheel: design time should be used for representing new ideas and integrating already existing design patterns, instead of going for reinvention.
• The design should “minimize the intellectual distance” between the software and the problem as it exists in the real world: the structure of the software design should reflect the structure of the problem domain.
• The design should exhibit uniformity and integration: a design is uniform if it appears that one person developed the entire thing. To achieve this, design rules, format, style, etc. have to be defined for the design team before design work begins. If the interfaces are well defined for the design components, then the design is said to be integrated.
• The design should be structured to accommodate change.
• The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered.
• Design is not coding, coding is not design: the design model has a higher level of abstraction than the source code. Major decisions are made at the design phase, and only small decisions are taken at the implementation phase.
• The design should be assessed for quality as it is being created, not after the fact: a number of design concepts and design measures are available and can be used to assess the quality of the software.
• The design should be reviewed to minimize conceptual errors: major semantic errors, like omissions, ambiguity, and inconsistency, have to be addressed by the designer before dealing with the syntax of the design model.

When the design principles described above are properly applied, the software engineer creates a design that exhibits both internal and external quality factors. External quality factors are properties of the software, such as speed, reliability, and correctness, that can be readily observed by the users. Internal quality factors are those that lead to a high-quality design. In order to achieve internal quality, the designer must understand the basic design concepts.

3.4 DESIGN CONCEPTS

The design concepts provide basic criteria for design quality. A number of fundamental design concepts have been of interest: abstraction, refinement, modularity, software architecture, control hierarchy, structured partitioning, data structure, software procedure, and information hiding.



3.4.1 Abstraction
Abstraction is a means of describing a program function at an appropriate level of detail. It deals with problems at some level of generalization, without regard to irrelevant low-level details. At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, more procedural details are given. There are three important levels of abstraction:

• Procedural abstraction: A procedural or functional abstraction is a named sequence of instructions that has a specific and limited function.
• Data abstraction: A data abstraction is a named collection of data that describes abstract data types, objects, and operations on objects, by suppressing the representation and manipulation details. Consider the example sentence, “open the door”. Here the word “open” is an example of procedural abstraction, which implies a long sequence of procedural steps: walk to the door, hold the door-knob, turn the knob, pull the door, move away from the door. The word “door” is an example of data abstraction, which has certain attributes, like dimensions, weight, and door type, that describe the door.
• Control abstraction: This implies a program control mechanism without specifying internal details; that is, stating the desired effect without stating the exact mechanism of control (e.g., co-routines, exception handling).

Example: ON interrupt DO
save STACK_A and call Exception_handler_a;
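The door example can be sketched in code; this sketch is our own illustration, not part of the original text. The class is a data abstraction (its attributes describe a door, and its state is hidden behind operations), while open() is a procedural abstraction that names a whole sequence of steps:

```python
class Door:
    """Data abstraction: a door described by its attributes, with
    representation details suppressed behind named operations."""

    def __init__(self, width_cm, weight_kg, door_type):
        self.width_cm = width_cm
        self.weight_kg = weight_kg
        self.door_type = door_type
        self._is_open = False  # internal state, not exposed directly

    def open(self):
        """Procedural abstraction: 'open' names a sequence of steps."""
        # walk to the door, hold the door-knob, turn the knob,
        # pull the door, move away from the door
        self._is_open = True

    def is_open(self):
        return self._is_open

front = Door(90, 40, "wooden")
front.open()
assert front.is_open()
```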

3.4.2 Stepwise Refinement
Stepwise refinement is a top-down approach in which a program is refined as a hierarchy of increasing levels of detail. It requires the designer to elaborate on the original statement, providing more and more detail at the end of each refinement step. The process of refinement may start during requirements analysis and conclude when the detail of the design is sufficient for conversion into code. As tasks are refined, the data associated with the tasks may have to be refined, decomposed, or structured, and processing procedures and data structures are likely to be refined in parallel.
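Stepwise refinement can be illustrated on the payroll example of chapter 2. The top-level function states the solution in broad terms, and a later refinement step elaborates each named step. The function names and the 1.5x overtime premium here are our own assumptions, not taken from the text:

```python
# Refinement step 1: the solution stated in broad terms.
def compute_net_pay(pay_rate, regular_hours, overtime_hours, tax_rate):
    total = compute_total_pay(pay_rate, regular_hours, overtime_hours)
    return deduct_taxes(total, tax_rate)

# Refinement step 2: each broad step elaborated with more detail.
def compute_total_pay(pay_rate, regular_hours, overtime_hours,
                      overtime_factor=1.5):  # assumed overtime premium
    regular_pay = pay_rate * regular_hours
    overtime_pay = pay_rate * overtime_factor * overtime_hours
    return regular_pay + overtime_pay

def deduct_taxes(total_pay, tax_rate):
    return total_pay * (1 - tax_rate)

assert compute_total_pay(10, 40, 2) == 430.0
assert compute_net_pay(10, 40, 2, 0.25) == 322.5
```

Notice that the data were refined alongside the tasks: the vague "pay" of step 1 became regular pay, overtime pay, and a tax deduction in step 2.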

3.4.3 Software Architecture


While refinement is about the level of detail, architecture is about the structure of software. It refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. The architecture of the procedural and data elements of a design represents a software solution for the real-world problem defined by the requirements analysis. A set of architectural patterns enables a software engineer to reuse design-level concepts. The properties associated with an architectural design are:

• Structural properties: These define the components of the system (such as modules and objects), and the way these components are grouped and interact with one another.
• Extra-functional properties: These describe the manner in which the design architecture achieves requirements for system characteristics like performance, capacity, reliability, security, and adaptability.
• Families of related systems: The design of similar systems usually encounters similar design patterns, so an architectural design should have the ability to reuse architectural building blocks.



There are a number of architectural design models based on the properties described above. These models use an ADL (architectural description language) for describing the system components and the interconnections among them.

• Structural models represent architecture as an organized collection of program components.
• Framework models increase the level of design abstraction by identifying repeatable architectural design frameworks, or patterns, that are encountered in similar types of applications.
• Dynamic models address the behavioral aspects of the program architecture, indicating how the system configuration may change as a function of external events.
• Process models deal with the design of the technical process that the system must accommodate.
• Functional models can be used to represent the functional hierarchy of a system.

3.4.4 Program Structure
The program structure, also called the control hierarchy, represents the hierarchy of control without regard to the sequence of processing and decisions. Program structure is usually expressed as a simple hierarchy showing the super-ordinate and subordinate relationships of modules. Notations like the Warnier-Orr notation, the Jackson notation, and tree-like diagrams are used to represent the control hierarchy. The tree-like diagram (as shown in figure 3.1) is the most common structure followed; it represents a hierarchy of modules. The terms depth, width, fan-in, and fan-out are usually used in describing and measuring the program structure. Depth refers to the number of levels of control. Width refers to the overall span of control. Fan-out is a measure of the number of modules that are directly controlled by another module. Fan-in indicates how many modules directly control a given module. The relationship between modules is either super-ordinate or subordinate: a module that controls another module is said to be super-ordinate to it, and a module controlled by another is said to be subordinate to the controller.

[Figure: a tree-like program structure rooted at module M, with subordinate modules a to h and q on lower levels; the depth of the structure, the fan-out of M, and the fan-in of q are annotated.]

Figure 3.1 Control hierarchy
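Once the control hierarchy is recorded as data, these structural measures follow directly. The module graph below is a hypothetical one (loosely echoing figure 3.1), not a transcription of it:

```python
# A hypothetical program structure: module -> modules it directly controls.
structure = {
    "M": ["a", "b", "c"],
    "a": ["d"],
    "b": ["e"],
    "c": ["f", "g"],
    "d": [], "e": ["q"], "f": ["q"], "g": ["q"], "q": [],
}

def fan_out(module):
    """Number of modules directly controlled by this module."""
    return len(structure[module])

def fan_in(module):
    """Number of modules that directly control this module."""
    return sum(module in subs for subs in structure.values())

def depth(module="M"):
    """Number of levels of control below and including this module."""
    subs = structure[module]
    return 1 + (max(depth(s) for s in subs) if subs else 0)

assert fan_out("M") == 3  # M directly controls a, b, c
assert fan_in("q") == 3   # q is controlled by e, f, g
assert depth() == 4       # e.g. M -> b -> e -> q
```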

3.4.5 Data Structure
Data structure is a representation of the logical relationships among the individual elements of data. It represents the organization, methods of access, degree of associativity, and processing alternatives for problem-related information. Classic data structures include scalar, sequential, linked-list, n-dimensional, and hierarchical structures. Data structure, along with program structure, makes up the software architecture.

3.4.6 Modularity
A module is a named entity that:
1. Contains instructions, processing logic, and data structures.
2. Can be separately compiled and stored in a library.
3. Can be included in a program.
4. Has segments that can be used by invoking a name and some parameters.
5. Can use other modules.

Modularity derives from the architecture. Modularity is a logical partitioning of the software design that allows complex software to be manageable for purposes of implementation and maintenance. The logic of partitioning may be based on related functions, implementation considerations, data links, or other criteria. Modularity does imply interface overhead related to information exchange between modules and execution of modules. There are five important criteria for defining an effective modular system that enable us to evaluate a design method:

1. Modular decomposability: If a design method provides a systematic way of decomposing a problem into sub-problems, it will reduce the complexity of the overall problem, thereby achieving an effective modular solution.
2. Modular composability: If a design method enables existing design components to be assembled into a new system, it will produce a modular solution for the problem.
3. Modular understandability: If a module can be understood as a single unit without referring to other modules, it will be easier to build and to change.
4. Modular continuity: If small changes to the system requirements result in changes to individual modules, rather than system-wide changes, the impact of change-induced side effects is minimized.



5. Modular protection: If an aberrant condition occurs within a module and its effects are constrained within that module, the impact of error-induced side effects is minimized.

3.4.7 Software Procedure
Software procedure provides a precise specification of the software processing, including sequence of events, exact decision points, repetitive operations, and data organization. Processing defined for each module must include references to all subordinate modules identified by the program structure. A procedure representation of software is layered.

3.4.8 Information Hiding
Information hiding is an adjunct of modularity. It permits modules to be designed and coded without concern for the internal details of other modules. Only the access protocols of a module need to be shared with the implementers of other modules. Information hiding simplifies testing and modification by localizing these activities to individual modules.
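A minimal sketch of information hiding (our own example, not from the text): only the access protocol of the module is shared, so the internal representation remains a private design decision that can change without affecting other modules:

```python
class Stack:
    """Only push/pop/peek form the access protocol. The internal
    representation (a Python list here) is a hidden design decision
    that could later change without touching any caller."""

    def __init__(self):
        self._items = []  # hidden: could become a linked list later

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

s = Stack()
s.push(1)
s.push(2)
assert s.peek() == 2
assert s.pop() == 2
assert s.pop() == 1
```

Because callers never touch `_items` directly, a bug in the stack's internals is localized to this one class, which is exactly the testing and modification benefit described above.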

3.5 EFFECTIVE MODULAR DESIGN

Among the several fundamental design concepts discussed earlier, modularity is the most important one, since a modular design reduces complexity, facilitates change, and results in easier implementation of programs. Some of the criteria used to guide modularization are:

• Conventional criteria: a module is a processing step in the execution sequence.
• Information hiding criterion: modules hide difficult or changeable design decisions.
• Data abstraction criterion: modules hide the representation details of major data structures behind functions that access and modify the data structure.
• Levels of abstraction: modules provide a hierarchical set of increasingly complex services.
• Coupling and cohesion: the system is structured to maximize the cohesion of module elements and minimize the coupling between modules.
• Problem modeling: the modular system structure matches the problem structure (data structures match the problem structure and visible functions manipulate the data structures, or modules form a network of communicating processes, each process corresponding to a problem entity).

3.5.1 Functional Independence
Functional independence is a direct outgrowth of concepts like modularity, information hiding, and abstraction. Functional independence is very important, since software with effective modularity (i.e., independent modules) is easier to develop and maintain. Independence is measured using two qualitative criteria, namely cohesion and coupling.

3.5.2 Cohesion
Cohesion is a measure of the relative functional strength of a module. It is an extension of the information hiding concept. A cohesive module performs a single task within a software procedure, requiring little interaction with procedures performed in other parts of a program. Cohesion may be represented at various levels, ranging from a low measure to a high measure. The strongest cohesion (7) is most desirable; the weakest cohesion (1) is least desirable.

1. Coincidental cohesion (no apparent relationship among module elements).
2. Logical cohesion (some inter-element relationships exist; e.g., several related functions, such as a math library).
3. Temporal cohesion (elements are usually bound through logic, as in 2, and are executed at one time, i.e., in the same invocation of the module; e.g., an initialization module).
4. Communicational cohesion (all elements are executed at one time and also refer to the same data; e.g., an I/O module).
5. Sequential cohesion (the output of one element is the input to the next; the module structure bears a close resemblance to the problem structure or procedure).
6. Functional cohesion (all elements relate to the performance of a single function).
7. Informational cohesion (a complex data structure with all its functions/operators; a concrete realization of data abstraction; objects).
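The contrast between the weakest and strongest levels shows up clearly in code. Both modules below are invented for illustration: the first groups unrelated elements (coincidental cohesion), while every element of the second relates to one single function (functional cohesion):

```python
# Coincidental cohesion: unrelated elements lumped into one module.
def misc_utilities(text, hours, rate):
    cleaned = text.strip().lower()  # string handling
    pay = hours * rate              # payroll arithmetic
    return cleaned, pay             # no relationship between the two

# Functional cohesion: every element serves one single function.
def weekly_pay(hours, rate, overtime_factor=1.5):
    regular = min(hours, 40) * rate
    overtime = max(hours - 40, 0) * rate * overtime_factor
    return regular + overtime

assert weekly_pay(38, 10) == 380.0
assert weekly_pay(42, 10) == 430.0
```

A change to the pay rules touches only `weekly_pay`; in the coincidentally cohesive module, the same change would force every caller of the unrelated string code to be retested as well.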

3.5.3 Coupling
Coupling is a measure of the relative interdependence among modules. The strength of coupling depends on interface complexity, and on the type of connections and communication between modules. In software design, we look for the lowest possible coupling. Simple connectivity among modules results in software that is easier to understand and less prone to error propagation through the system. Shown below are the different types of module coupling. The strongest (1) is least desirable; the weakest (5) is most desirable.

1. Content coupling (cross-modification of local data by other modules).
2. Common coupling (global data cross-coupling).
3. Control coupling (control flags, etc.; one module controls the sequencing of processing in another module).
4. Stamp coupling (selective sharing of global data items).
5. Data coupling (parameter lists are used to pass/protect data items).
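The two ends of the scale can be sketched as code (an invented example): in the common-coupled version the modules communicate through shared global data, while in the data-coupled version everything travels through parameter lists:

```python
# Common coupling (least desirable): modules share global data.
tax_rate = 0.25  # global data read by the module below

def total_pay_common(hours, rate):
    global net_pay
    net_pay = hours * rate * (1 - tax_rate)  # result written to a global

# Data coupling (most desirable): only parameter lists connect modules.
def total_pay_data(hours, rate, tax_rate):
    return hours * rate * (1 - tax_rate)

total_pay_common(40, 10)
assert net_pay == 300.0
assert total_pay_data(40, 10, 0.25) == 300.0
```

Any module anywhere can change `tax_rate` or `net_pay`, so an error in one module can propagate invisibly; the data-coupled version confines every interaction to an explicit, checkable interface.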

The document outlined in figure 3.3 below can be used as a template for a design specification.

Figure 3.3 Design specification outline I Scope A.
B. C. System objectives Major software requirements Design constraints, limitations

II Data design A. Data objects and resultant data structures
B. File and database structures 1. External file structure a. logical structure b. logical record description c. access method 2. global data 3. file and data cross reference

III Architectural design
   A. Review of data and control flow
   B. Derived program structure

Chapter 3 - System Design

IV Interface design
   A. Human-machine interface specification
   B. Human-machine interface design rules
   C. External interface design
      1. Interfaces to external data
      2. Interfaces to external systems or devices
   D. Internal interface design rules

V Procedural design
   For each module:
   A. Processing narratives
   B. Interface description
   C. Design language description
   D. Modules used
   E. Internal data structures
   F. Comments/restrictions/limitations

VI Requirements cross-reference

VII Test provisions
   1. Test guidelines
   2. Integration strategy
   3. Special considerations

VIII Special notes



IX Appendices
Section I gives the overall scope of the design effort (an overview of the system objectives, interfaces, major software functions, external databases and major constraints). Much of this information is derived from the SRS. Section II gives the data design, describing the external file structures, internal data structures and a cross reference that connects data objects to specific files. Section III gives the architectural design, indicating how the program architecture has been derived from the analysis model. Section IV describes the interface design, with emphasis on the human-machine interface specifications and design rules. Section V covers the procedural design. Here, each module is described with an English-language processing narrative, along with its interface description, internal data structures and any associated comments. Section VI contains a requirements cross-reference. Its purpose is to establish that all requirements are satisfied by the software design, and to indicate which modules are critical to the implementation of specific requirements. Section VII covers verification, which includes testing guidelines, the integration strategy and special considerations (physical constraints, high-speed constraints, memory management). Sections VIII and IX contain special notes and appendices.

Each software design method has as its goal to provide the software designer with a blueprint from which a reliable system may be built. This section covers the nature of software design in more detail: the fundamentals to which software design should adhere, design's role as a representational model, and a historical perspective on design.

Design Fundamentals

Three distinctive aspects of an information system are addressed during software design. Data design is concerned with the organization, access methods, associativity, and processing alternatives of the system's data. Architectural (preliminary) design defines the components, or modules, of the system and the relationships that exist between them. Procedural (detailed) design uses the products of the data and architectural design phases to describe the processing details of the system, i.e. module internals.



Software design methods attempt to help the designer in the following aspects:

• they assist in partitioning the software into smaller components, reducing complexity
• they help to identify and isolate data structures and functions
• they attempt to provide some measure of software quality.

3.7.1 Data Design
Data design is the first design activity conducted during software engineering. Its main task is to select logical representations of the data objects or data structures identified during the requirements definition and specification phase. In other words, it translates the data objects defined in the analysis model into data structures that reside within the software. The choice of data structure depends on the attributes of the data objects, the relationships between them, and their use within the program. The following principles should be followed in the data design approach:

1. The systematic analysis principles applied to function and behavior should also be applied to data – representations of data objects, relationships, data flows and content should be developed and reviewed, and alternative data organizations should be considered, in the same way that we derive, review and specify the functional requirements and preliminary design.
2. All data structures and the operations to be performed on each should be identified – the design of an efficient data structure must take the operations to be performed on that structure into account.
3. A data dictionary should be established and used to define both data and program design – a data dictionary represents the relationships among data objects and the constraints on the elements of a data structure.
4. Low-level data design decisions should be deferred until late in the design process – the overall data organization may be defined during requirements analysis, refined during preliminary design and specified in detail during later design.
5. The representation of a data structure should be known only to those modules that must make direct use of the data contained within it – this reflects the importance of information hiding and the concept of coupling.
6. A set of useful data structures and the operations that may be applied to them should be developed – data structures and operations are important resources for software design; data structures can be designed for reusability, and a set of data structure templates helps reduce both the specification and design effort for data.
7. A software design and programming language should support the specification and realization of abstract data types – the implementation and design of a sophisticated data structure can be made very difficult if no means of direct specification of the structure exists.

So, well-designed data can lead to better program structure and modularity, and to reduced procedural complexity.
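Principle 7 above can be illustrated with a small abstract data type in C. The stack below is a sketch (the names and the fixed capacity are illustrative assumptions): client modules touch the representation only through its operations, which is exactly the information hiding that principle 5 asks for.

```c
/* An abstract data type sketch: the stack's representation is
   manipulated only through its operations, keeping low-level
   representation decisions hidden from client modules. */
#define STACK_MAX 100

typedef struct {
    int items[STACK_MAX];
    int top;                          /* index of the next free slot */
} Stack;

void stack_init(Stack *s)        { s->top = 0; }
int  stack_empty(const Stack *s) { return s->top == 0; }

int stack_push(Stack *s, int v)       /* returns 0 on overflow */
{
    if (s->top == STACK_MAX)
        return 0;
    s->items[s->top++] = v;
    return 1;
}

int stack_pop(Stack *s, int *v)       /* returns 0 on underflow */
{
    if (s->top == 0)
        return 0;
    *v = s->items[--s->top];
    return 1;
}
```

If the array were later replaced by a linked representation, only these four operations would change; no client code would need to know.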

3.7.2 Architectural Design
The main objective of the architectural design model is to develop a modular program structure and to represent the control relationships between modules. This method uses information flow characteristics (represented as a data flow diagram) to derive the program structure. The transition from information flow to program structure is accomplished as a five-step process:

1. The type of information flow is established.
2. Flow boundaries are indicated.
3. The DFD is mapped into a program structure.
4. Control hierarchy is defined by factoring.
5. The resultant structure is refined using design measures and heuristics.

A data flow diagram is mapped into a program structure using one of the following approaches: transform mapping and/or transaction mapping.

Transform flow

In the fundamental system model (context-level DFD), information must enter and exit the software in an "external world" form. Data entered through a keyboard and information shown on a computer display are examples of external world information.



[Figure 3.4 Representation of flow of information: external data arrives as incoming flow, is converted to an internal representation, passes through the transform flow at the center, and leaves as outgoing flow in external form over time.]

The external data that enters the system must be converted into an internal form for processing. The information that enters the system along the paths that transform external data into an internal form is known as incoming flow. The transition from external to internal data form occurs at the kernel of the software. The incoming data moves through the transform center, and from there it moves out of the software along paths called outgoing flow, as shown in figure 3.4. The overall flow of data occurs in a sequential manner. When a segment of a DFD exhibits these characteristics, we say transform flow is present.
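The incoming-transform-outgoing structure maps naturally onto separately factored functions. A minimal C sketch (the digit-doubling transform and the function names are purely illustrative):

```c
/* Incoming flow: external (character) data -> internal form. */
int read_digit(char external) { return external - '0'; }

/* Transform center: operates purely on the internal form.    */
int transform(int value) { return value * 2; }

/* Outgoing flow: internal form -> external (character) form. */
char write_digit(int value) { return (char)('0' + value); }

/* The sequential transform flow, valid here for digits '0'..'4'. */
char process(char external_in)
{
    int internal = read_digit(external_in);  /* incoming flow    */
    internal = transform(internal);          /* transform center */
    return write_digit(internal);            /* outgoing flow    */
}
```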

Transaction flow

Here the information flow in a system is characterized by a single data item, called a transaction, which triggers other data flow along one of many paths, as shown in figure 3.5. Transaction flow is characterized by data moving along an incoming path that converts external world information into a transaction. The point of the information flow from which many action paths emanate is called the transaction center. When the external information reaches the transaction center, the transaction is evaluated and, based on its value, flow along one of the many action paths is initiated.
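A transaction center is often realized in code as a dispatch on the transaction's value. A hedged C sketch, with transaction codes and handlers invented purely for illustration:

```c
/* The transaction center evaluates the incoming transaction and
   initiates exactly one of several action paths. */
enum transaction { DEPOSIT, WITHDRAW, BALANCE_QUERY };

int handle_deposit(int balance, int amount)  { return balance + amount; }
int handle_withdraw(int balance, int amount) { return balance - amount; }
int handle_query(int balance)                { return balance; }

int transaction_center(enum transaction t, int balance, int amount)
{
    switch (t) {                       /* select one action path     */
    case DEPOSIT:       return handle_deposit(balance, amount);
    case WITHDRAW:      return handle_withdraw(balance, amount);
    case BALANCE_QUERY: return handle_query(balance);
    }
    return balance;                    /* unknown transaction: no-op */
}
```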



[Figure 3.5 Transaction flow: a transaction T arrives at the transaction center, from which one of several action paths is initiated.]

Transform Mapping
Transform mapping is a set of design steps that allows a DFD with transform flow characteristics to be mapped into a predefined template for program structure. This type of mapping is applied to an information flow that exhibits distinct boundaries between incoming and outgoing data. The DFD is mapped into a structure that allocates control to input, processing and output along three separately factored module hierarchies. The design steps for performing transform mapping are:

1. Review the fundamental system model – the context-level (level 0) DFD.
2. Review and refine data flow diagrams for the software – the information obtained from the analysis phase through the software requirements specification is refined to obtain more detail.
3. Determine whether the DFD has transform or transaction flow characteristics – the type of information flow can be determined from the nature of the DFD. In general, information flow within a system can always be represented as transform flow; however, when a transaction characteristic is encountered, it should be represented as transaction flow.


4. Isolate the transform center by specifying incoming and outgoing flow boundaries – incoming flow is a path along which information is converted from external form to internal form, and outgoing flow is the reverse.
5. Perform "first-level factoring" – factoring is the process of decomposing a module into main and subordinate modules.
6. Perform "second-level factoring".
7. Refine the first-iteration program structure using design heuristics for improved software quality.

Transaction Mapping
Transaction mapping is applied when a single information item causes flow to branch along one of many paths. The DFD is mapped into a structure that allocates control to a substructure that acquires and evaluates a transaction; another substructure controls all potential processing actions based on the transaction. The design steps for transaction mapping are:

1. Review the fundamental system model.
2. Review and refine data flow diagrams for the software.
3. Determine whether the DFD has transform or transaction characteristics.
4. Identify the transaction center and the flow characteristics along each action path.
5. Map the DFD to a program structure amenable to transaction processing.
6. Factor and refine the transaction structure and the structure of each action path.
7. Refine the first-iteration architecture using design heuristics for improved software quality.

3.7.3 Interface Design
Interface design focuses on three important areas: (1) the design of interfaces between software modules; (2) the design of interfaces between the software and other external entities (i.e. non-human producers and consumers of information); and (3) the design of the interface between the user and the computer. It encompasses internal and external program interfaces and the design of the user interface. The internal and external designs are guided by information obtained from the analysis phase.



Internal interface design

The internal interface design, also called inter-modular interface design, is driven by the data that flows between modules and by the characteristics of the programming language in which the software is to be built. This design must support data validation and error-handling algorithms within a module.

External interface design

The external interface design evaluates each of the external entities represented in the DFD of the analysis phase. The data and control requirements of these external entities are determined before this design begins. Here again, the design must support data validation and error-handling algorithms within a module, so that a check is made to ensure that data conform to the boundaries set during the requirements analysis phase.

User interface design

The user interface design process begins with task analysis and modeling, a design activity applied to understand the tasks and actions the user performs, using either an elaborative or an object-oriented approach.

Human-computer interface design

The overall process model for designing a user interface begins with different models of the system functions. There are four interface design models that underpin a human-computer interface (HCI).
1. Design model – created by the software engineer; it incorporates the data, architectural and procedural descriptions of the software.
2. User model – established by the software engineer; it depicts the profile of the end users of the system. An end user may be a novice, a knowledgeable intermittent user, or a knowledgeable frequent user.
3. System perception model (user's model) – the image of the system carried in the mind of the end user.
4. System image model – combines the outward look and feel of the computer-based system with the supporting information available, which describes the system's syntax and semantics.

Design issues

Four design issues need to be addressed when designing an interface.

· System response time

This is an important issue. It is the time measured from the moment the user performs some control action, such as hitting a key or clicking the mouse, until some desired output or



action is given out by the software. System response time has two aspects: its length and its variability, the deviation from the average response time.
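Response time can be instrumented directly in code. A sketch in C (assuming a CPU-bound handler, since `clock()` measures processor time rather than true wall-clock time):

```c
#include <time.h>

/* Record the interval between receiving a user action and completing
   the response.  The handler passed in is a placeholder for whatever
   processing the control action triggers. */
double timed_response(void (*respond)(void))
{
    clock_t start = clock();
    respond();                                  /* produce the response */
    clock_t finish = clock();
    return (double)(finish - start) / CLOCKS_PER_SEC;
}

void sample_response(void) { /* placeholder for real handling */ }
```

Collecting such measurements over many interactions gives both the average length and the variability that this section describes.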

· User help facilities

Help facilities are essential for any interactive, computer-based system, and on-line help is the most favoured form, since it enables a user to solve a problem without leaving the interface. Usually, there are two types of help facilities: the integrated help facility and the add-on help facility.
Integrated help facility – designed into the software from the beginning. It is often context sensitive, enabling the user to select from those topics relevant to the actions currently being performed, and it provides very quick help for the user.

Add-on help facility – added to the software after the system has been built. It is essentially an on-line user's manual with limited query capabilities.

· Error information handling

Error messages or warning messages are produced by an interactive system whenever something goes wrong. To achieve better quality software, care must be taken to reduce the likelihood of mistakes, so error handling is an important design issue that needs to be addressed. Every error message or warning produced by an interactive system should exhibit the following characteristics:
• the message should describe the problem in a form the user can understand
• the message should provide positive, constructive advice to help the user recover from the error
• the message should carry audio or visual cues
• the message wording should never blame the user
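A message built to these guidelines might be assembled as follows (a C sketch; the wording, function name and scenario are illustrative, not from the text):

```c
#include <stdio.h>

/* Formats a diagnostic that describes the problem in the user's terms,
   offers a constructive recovery, and does not blame the user. */
int format_save_error(char *msg, size_t size, const char *filename)
{
    return snprintf(msg, size,
        "The file '%s' could not be saved because the disk is full. "
        "Free some space or choose another location, then try again.",
        filename);
}
```

Contrast this with a message like "illegal write, user error 0x2F", which violates every guideline above.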

· Command labeling

Earlier, the typed command was the most common form of interaction between the user and the system. Nowadays, even though users favour window-oriented, point-and-pick interfaces, many still prefer command-oriented interaction. So, some design issues concerning command-mode interaction need to be addressed:
• Will every menu option have a corresponding command?
• What form will commands take? (e.g. a typed keyword, function keys, or a control sequence such as ^P or ^D)
• Can commands be customized?



Design evaluation

Once the user interface prototype has been created from the design, it must be evaluated to check whether it meets the user requirements. The design evaluation cycle is shown in figure 3.8.
The user interface evaluation cycle begins with the creation of a first-level prototype soon after the preliminary design is over. This prototype is evaluated by the user, and the comments about the interface are passed on to the interface designer. The designer studies the evaluation report and modifies the design, creating the next-level prototype. The evaluation process continues until no further modifications to the interface design are suggested.

Interface design guidelines

In order to have a good interface, design guidelines need to be followed for general interaction, information display and data input.
Here are some guidelines that focus on general interaction:

• Use a consistent format for menu selection, command input and data display.
• Provide the user with meaningful feedback, with audio and visual support.
• Ask the user to verify any important destructive action, e.g. if a user wants to delete a file, reconfirm with "Are you sure you want to delete?".
• Give the user functions such as UNDO or REVERSE for easy reversal of actions in interactive applications.
• Provide a context-sensitive help facility for the user.
• Do not expect the user to remember a list of names or numbers for reuse in subsequent functions, i.e. minimize the amount of information that must be memorized between actions.



[Figure 3.8 The interface design evaluation cycle: after the preliminary design, interface prototype #1 is built; the user evaluates the interface, the evaluation is studied by the designer, design modifications are made and interface prototype #n is built, repeating until the interface design is complete.]

Here are some guidelines that focus on information display:

• Display only the information relevant to the current context.
• Use graphs or charts as presentation formats instead of voluminous tables.
• Use consistent labels, standard abbreviations and appropriate colors.
• Produce meaningful error messages.
• Allow the user to maintain visual context: if graphical representations are scaled up or down, the original image should remain displayed.

Here are some guidelines that focus on data input:

• Minimize the number of input actions required of the user, e.g. use the mouse to select from predefined sets of input instead of typing at the keyboard.
• Maintain consistency between information display and data input.
• Allow the user to customize inputs.
• Deactivate commands that are inappropriate in the context of the current action.
• Provide help to assist with all input actions.

3.7.4 Procedural Design
The procedural design is carried out after the completion of the data, architectural and interface designs. It transforms the structural elements of the program architecture into procedural descriptions of the software components. The information obtained from the process specification, control specification and state transition diagram of the analysis phase serves as the basis for the procedural design. Design notation, together with structured programming concepts, enables the designer to represent procedural details in a form that can be effectively translated into code. The design notation may be graphical, tabular or textual.

Structured Programming
Structured programming is an important procedural design technique. It refers to the use of a small set of logical constructs from which any program can be formed. The constructs fundamental to structured programming are sequence, condition and repetition. Sequence implements the processing steps that are essential in the specification of any algorithm. Condition provides the facility for selective processing based on some logical occurrence. Repetition provides looping.

Graphical Design Notation
Flow charts and box diagrams are popular graphical design notations that readily depict procedural details.



Flow charts

Prior to the structured programming revolution, flowcharts were the predominant method of representing program logic. Flowcharts are limited by a physical view of the system that is often imposed before the overall logical requirements are understood. Figure 3.9 illustrates the structured constructs: sequence, selection and repetition.
[Figure 3.9 Flowchart representations of the structured constructs: sequence (first task, next task), if-then-else (a condition selecting a then part or an else part), selection (a chain of case conditions selecting case parts), and repetition in its do-while and repeat-until forms (a task guarded or followed by a loop condition).]
Box diagram

The box diagram is another graphical tool that can be used to develop a procedural design. It is also known as the N-S chart or Nassi-Shneiderman chart. The fundamental element of this tool is a box. The graphical representations of the structured constructs are shown in figure 3.10.



[Figure 3.10 Box-diagram (Nassi-Shneiderman) representations of the structured constructs: sequence (first task, next task); condition (a condition box splitting into then and else parts above the next task); repetition in do-while and repeat-until forms (the loop condition enclosing or following the loop part); and selection (a case condition splitting by value into case parts).]

Tabular Design Notation
Several tabular design tools are used in designing a procedure; among them, the decision table is the most popular:

• Decision tables
• Detailed state diagrams and tables
• Karnaugh maps

Decision tables

Decision tables provide a notation that translates actions and conditions into tabular form. The organization of a decision table is given below.

Example: a limited-entry decision table (2^N entries, where N is the number of conditions). It has a list of conditions and a list of actions. The actions are based on combinations of the conditions, and the occurrence of any action depends on the decision rules.



                         Decision rules
                R1   R2   R3   R4   R5   R6   R7   R8
   Condition 1   Y    N    Y    N    N    Y    N    Y
   Condition 2   Y    N    N    Y    N    N    Y    Y
   Condition 3   Y    N    N    N    Y    Y    Y    N
   Action 1      x
   Action 2           x
   Action 3                x    x    x    x
   Action 4                                        x

An "x" entry marks the action(s) initiated when the corresponding rule's combination of condition values holds.

Program Design Language
Program design languages (PDLs), also called pseudocode or structured English, are pidgin languages that use the vocabulary of one language (e.g. English) and the overall syntax of another (e.g. a structured programming language). They express the logic of a program in narrative form. A PDL is principally applied during the detailed design phase and is best used to describe algorithms and to specify interfaces between modules. PDLs impose a language syntax on their users, which makes it possible to use automated tools for validity checking. PDLs were first proposed as a replacement for flowcharts. A basic PDL syntax should include constructs for subprogram definition, interface description and data declaration, together with condition, repetition and I/O constructs. Let us consider an example of PDL in which a string is converted into an integer.

Example

Function   parameter is a string containing the value to be converted into
           an integer (single characters are enclosed in single quotation
           marks, e.g. 'i' is the character representation of the letter i)

Declare    value to be passed back via the function
           pointer to the string that is being converted
           sign of the value being converted

Initialise variables (return value, pointer to the value of the character
           being pointed to in the string)

Begin conversion loop
While (current string_character is NOT end_of_string)
    While (current string_character is EQUAL blank OR tab_character
           OR end_of_line_character)
        Increment pointer to skip over white space
    Endwhile
    Skip over the leading + or - sign, remembering the first one
    While (current string_character is a number)
        Convert to integer (remember the string is read left to right)
    Endwhile
Endwhile
If the number is negative, change sign
Return summed value to caller
EndFunction
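As an illustration of how such a PDL narrative translates into code, here is one possible C realization (a sketch; the PDL above does not prescribe an implementation, and the function name is an assumption):

```c
#include <ctype.h>

/* Convert the leading integer in s to an int: white space is skipped,
   a single leading sign is remembered, and digits are accumulated
   left to right, exactly as the PDL narrative describes. */
int string_to_integer(const char *s)
{
    int value = 0;
    int sign = 1;

    while (*s == ' ' || *s == '\t' || *s == '\n')   /* skip white space  */
        s++;

    if (*s == '+' || *s == '-') {                   /* remember the sign */
        if (*s == '-')
            sign = -1;
        s++;
    }

    while (isdigit((unsigned char)*s)) {            /* left to right     */
        value = value * 10 + (*s - '0');
        s++;
    }

    return sign * value;                            /* summed value      */
}
```

Note how each While/Endwhile in the PDL corresponds to one loop in the code; the PDL fixed the logic, leaving only language-level detail to the programmer.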

3.8 SUMMARY


In this chapter, we have looked at what it means to design a system. We have seen that the software design process involves four distinct but interrelated activities: data design, architectural design, interface design and procedural design. When you build a system, you should keep in mind several important characteristics such as modularity, levels of abstraction, coupling, cohesion and prototyping, and you should be able to produce a design document at the end of the design process. In the next chapter, we will see how to translate the design into an implementation.

I Fill in the blanks:

1. In object-oriented design, the modules in the design represent ______________
2. The three important levels of abstraction are procedural abstraction, data abstraction and ______________
3. Functional independence is measured using two qualitative criteria, namely ______________ and coupling.
4. The weakest coupling, which is the most desirable, is ______________
5. A data flow diagram is mapped into a program structure using transform mapping and/or ______________
6. ______________ mapping is applied to an information flow that exhibits distinct boundaries between incoming and outgoing data.
7. The constructs fundamental to structured programming are sequence, ______________ and repetition.
8. PDL stands for ______________

II Answer the following questions in one or two sentences:

1. Why is design an important phase in the software development life cycle?
2. What is a program structure?
3. Define the term modularity.
4. What is cohesion?
5. What is the main objective of data design?
6. What are the three main areas on which interface design focuses?
7. Name the four interface design models that provide a human-computer interface (HCI).
8. What is the difference between integrated help and add-on help?
9. What is structured programming?
10. Name any two graphical design notations used in procedural design.

I Answers:

1. data abstraction
2. control abstraction
3. cohesion
4. data coupling
5. transaction mapping
6. transform mapping
7. condition
8. program design language

1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 4th edition, McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering, 2nd edition, Narosa Publishing House.
3. Ian Sommerville, Software Engineering, 5th edition, Addison-Wesley, 1996.

Chapter 4

Software Coding



So far, we have understood the user's problem in the form of requirements, and the way of addressing it in the form of a design that gives a high-level solution. Now we must focus on implementing the solution as software; that is, we must write the programs that implement the design. Although there are many ways to implement a design, and many languages and tools available, this chapter does not teach you how to program; rather, it explains some of the software engineering practices that you need to follow when you write code.

In this chapter, we will look at:

• Standards of programming
• Guidelines for programming
• The importance of documentation with respect to programming
• Types of documentation: internal and external documentation
• The use of programming tools

Most of the software is developed by a team of people. In-order to generate a quality product, several types of jobs are required to be performed by the team. Even when writing the code, many people





are generally involved, and a great deal of cooperation and coordination is required. Thus, it is very important for others to understand not only what you have written, but also why you have written it and how it fits in with their work. For these reasons, one must know the organization's standards and procedures before beginning to write code. Standards and procedures can help you to:

• Organize your thoughts and avoid mistakes. Some procedures involve methods of documenting your code so that it is clear and easy to follow.
• Translate designs into code. By structuring code according to standards, it is possible to maintain the correspondence between design components and code components.

The primary goal of the coding phase is to translate the given design into source code in a given programming language, so that the code is simple and easy to understand, test and modify. Good programming involves a great deal of creativity; it is a skill that can only be acquired by practice, and it is largely independent of the target programming language. The design or requirements specification may suggest a programming language to use. No matter what language is used, each program component involves at least three important aspects: control structures, algorithms and data structures. We will look at each of these in more detail here.

Control structures

Many of the control structures for a program component are given by the architecture and design of the system, and the given design is translated into code. In some architectures, such as implicit invocation and object-oriented design, control is based on the system states and changes in variables. In procedural designs, control depends on the structure of the code itself. However, irrespective of the design type, the program structure should reflect the design's control structure.
Let us look at some of the guidelines that are applicable here:

• Write the code so that it can be read easily from the top down.
• Follow the concept of modularity, in order to hide implementation details and to make the code more understandable and easier to test and maintain.
• Use parameter names and comments to exhibit the coupling among code components.
• Instead of commenting the code with

      Re-estimate TAX



  it is better to write

      Re-estimate TAX based on values of GROSS_INCOME and DEDUCTIONS

• Restructure the code wherever possible, so that it is easier to understand.

Consider an example in which the control skips around among the program's statements, making the logic difficult to follow:

    Benefit = minimum;
    If (age < 75) Goto A;
    Benefit = maximum;
    Goto C;
    If (age < 65) Goto B;
    If (age < 55) Goto C;
A:  If (age < 65) Goto B;
    Benefit = Benefit * 1.5 + Bonus;
    Goto C;
B:  If (age < 55) Goto C;
    Benefit = Benefit * 1.5;
C:  Next statement

The same piece of program can be restructured for better understanding:

    if (age < 55)
        Benefit = minimum;
    elseif (age < 65)
        Benefit = minimum + Bonus;
    elseif (age < 75)
        Benefit = minimum * 1.5 + Bonus;
    else
        Benefit = maximum;

Algorithms

The program design often specifies a class of algorithms to be used in coding. For example, the design may tell the programmer to use a binary search technique. Although the programmer has a lot of flexibility



in converting the algorithm to code, this flexibility is constrained by the implementation language and the hardware.

Data structures

In writing programs, one should format and store data in such a manner that data management and manipulation are straightforward. The program's design may specify some of the data structures to be used in implementing functions. In general, data structures can influence the organization and flow of a program; in some cases they can even influence the choice of programming language. For example, LISP is a language designed for list processing: it contains structures that make it much easier to handle lists than other languages do. In general, the data structures must be considered very carefully when deciding on the language for implementation.
• Localizing input and output – those parts of a program that read input or generate output are highly specialized and must reflect characteristics of the underlying hardware and software. Because of this dependency, the program sections performing input and output functions are sometimes difficult to test. Therefore, it is desirable to localize these sections in components separate from the rest of the code
• Pseudo-code can be used for transforming the design to code through a chosen programming language. By adopting constructs and data representations without becoming involved immediately in the specifics of each command, one can experiment and decide which implementation is most desirable. In this way, code can be rearranged and restructured with a minimum of rewriting
• Revise the design and rewrite the code until one is completely satisfied with the result
• Reuse code components wherever possible

Documentation is an important activity in the implementation phase. The program documentation is a set of written descriptions that explain to a reader what the programs do and how they do it. There are two kinds of program documentation:
• Internal documentation
• External documentation


Chapter 4 - Software Coding

Internal documentation
Internal documentation is descriptive material written directly within the program, at a level appropriate for a programmer. It contains information directed at someone who will be reading the source code of the program: a description of the data structures, algorithms and control flow. Usually, this information is placed at the beginning of each component in a set of comments called the header comment block. The header comment block acts as an introduction to the program. It has the following information for each code component:
• what the name of the component is
• who wrote it
• where the component fits in the general system design
• when the component was written and revised
• why the component exists
• how the component uses its data structures, algorithms and controls
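Such a header block might look like the following sketch; every detail here (component name, author, dates, purpose) is hypothetical, and the comment layout is just one possible convention:

```python
# --- Header comment block (hypothetical component) -------------------
# What  : compute_net_pay - computes net pay from gross pay
# Who   : written by J. Smith; revised by R. Jones
# Where : part of the payroll subsystem in the general system design
# When  : written 2001-03-10; revised 2001-06-02
# Why   : isolates the pay calculation so tax rules change in one place
# How   : applies a flat tax rate to gross pay; no external data access
# ---------------------------------------------------------------------
def compute_net_pay(gross_pay, tax_rate):
    # Net pay is gross pay less the tax withheld at the given rate.
    return gross_pay - gross_pay * tax_rate
```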

Program comments
Comments in a program are textual statements that are meant for program readers and are not executed. Comments, if properly written and kept consistent with the code, can be invaluable during maintenance. Comments enlighten readers as they move through the program, helping them understand how the intentions described in the header are implemented in the code. Providing comments for the modules in a program is very useful, and the comments for a module are often called the prologue for the module. It is desirable that the prologue contain the following information:
• Module functionality
• Parameters and their purpose
• Assumptions about the inputs, if any
• Global variables accessed and/or modified in the module

Comments have a place even in clearly structured and well-written code. Although code clarity and structure minimize the need for other comments, additional comments are useful wherever helpful information can be added to a component. It is very important that comments be updated to reflect changes whenever the code is revised.

Meaningful variable names and statement labels
Names that reflect the use or meaning of variables and statements should be chosen in a program, since this is considered good programming style. It is bad practice to choose cryptic or totally unrelated names. Writing
Simple_Interest = (Principal * Time * Rate) / 100.0
makes more sense to the reader than writing the statement
X = (A * B * C) / 100.0


Proper formatting enhances understanding
When code is properly formatted, it can easily be read and understood. The indentation and spacing of statements are very important and can reflect the basic control structure of a program. Notice how unindented code looks:

if (xcoord < ycoord) result = -1; elseif (xcoord == ycoord) if (slope1 > slope2) result = 0; else result = 1; elseif (slope1 > slope2) result = 2; elseif (slope1 < slope2) result = 3; else result = 4;

The same code can be formatted for clarity by using proper indentation and spacing:

if (xcoord < ycoord)
    result = -1;
elseif (xcoord == ycoord)
    if (slope1 > slope2)
        result = 0;
    else
        result = 1;
elseif (slope1 > slope2)
    result = 2;
elseif (slope1 < slope2)
    result = 3;
else
    result = 4;


Documenting data
When a system handles many files of different types and purposes, along with flags and passed parameters, it is very difficult for program readers to understand the way in which data is structured and used. So, a data map is essential for documenting data.

External documentation
It is a part of the overall system documentation. It is intended to be read by people who may never look at the actual code. It explains things more broadly than the program’s comments do.
The external code documentation contains the following descriptions:
• Problem description – explains what problem is being addressed by the component, when the component is invoked and why it is needed.
• Algorithm description – addresses the choice of algorithms once the purpose of the component is clear. It explains each algorithm used by the component, including formulae, boundary conditions, etc.
• Data description – describes the flow of data at the component level. Usually, data flow diagrams along with data dictionaries are used as references.

In the implementation phase, documentation assumes a very important role, because it is in this phase that skills are passed from the development team to the users, and the users have to be trained extensively. Proper communication in the form of user documents therefore becomes essential. The nature and number of user documents usually vary across applications and organizations. The type of user documents can be decided based on the following factors:
• The nature of the software, its complexity, interfaces, etc.

• User groups, depending on the nature of usage and the exposure and training levels of users
• Volume of information presented
• Document usage mode, whether it has to be an instructional mode, a reference mode or both

Generally, the documentation set could consist of an installation manual, operations manual, procedure manual, user manual, error message manual, etc. However, the main idea here is to ensure that the users have complete documentation on how to install, use and manage the software. Any user document would generally consist of the following components:
1. Title page
2. Restrictions
3. Warranties
4. Table of contents
5. List of illustrations
6. Introduction
7. Body of document
8. Error conditions and recovery
9. Appendices
10. Bibliography
11. Glossary
12. Index

The introduction would generally give an idea of the intended users for whom the document is written, how to use the document, information on other related documents, conventions followed, etc. The body of the document gives the main contents of the document. This generally includes:
1. Scope
2. Prerequisites for usage
3. Preparatory instructions
4. Cautions and warnings
5. Description of each task, giving:

• What the user is expected to do
• What function is to be invoked
• Possible errors and how to avoid them
• Expected results
6. Information on associated tasks and limitations

Today, there are tools available that help reduce the amount of time spent on the development of programs. Let us see some of the popular ones.

Source-code tools
There are two commonly used kinds of source-code tools:
• editing tools, which relate to the editing of source code
• browsing tools, which help to view the source code
Source-code beautifiers and templates not only make a program look consistent but also standardize indentation styles, align variable declarations and format comments.

Executable-code tools
These are the tools required for working with executable code; they help in code creation, debugging and testing. Code creation involves four major tools that help the developer convert source code into executable code: the linker, code libraries, code generators and macro-preprocessors. Debugging tools help in debugging the code. Testing tools help in tracing code errors.

In this chapter, we have looked at several guidelines for implementing programs. Certain points need to be considered by the programmer while writing programs, such as the organizational standards and guidelines to be followed, the concept of code reusability, the incorporation of a system-wide error-handling strategy and proper program documentation. We also saw the importance of programming tools and addressed some of the important issues in implementing the design to produce high-quality software. In the subsequent chapters, we will discuss how to test the code and how to make the software a quality product.

BSIT 44 Software Engineering


I Fill in the blanks:
1. A program component involves three important aspects: control structures, ———————— and data structures
2. The two kinds of program documentation are internal and ————————
3. Comments in a program are ———————— that are meant for program readers and are not executed
4. ———————— help to view the source code
5. ———————— tools help in code creation, debugging and testing.

II Answer the following questions in one or two sentences:
1. What is internal documentation?
2. How is an external document described?
3. What are programming tools?
4. What are source code tools?
5. What documents constitute a document set?

I Answers
1. algorithms
2. external
3. textual statements
4. browsing tools
5. executable code

References:
1. Shari Lawrence Pfleeger, Software Engineering: Theory and Practice, 2nd ed., Pearson Education, 2001.
2. Roger S. Pressman, Software Engineering: A Practitioner’s Approach, 4th ed., McGraw-Hill, 1997.
3. Pankaj Jalote, An Integrated Approach to Software Engineering, 2nd ed., Narosa Publishing House.
4. Ian Sommerville, Software Engineering, 5th ed., Addison-Wesley, 1996.

Chapter 5

Software Testing



Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. In the process of software development, errors can occur at any stage and during any phase. Errors encountered during the earlier phases of the software development life cycle (SDLC), such as the requirements and design phases, are likely to be carried into the coding phase, in addition to the errors generated during coding itself. This is particularly true because in the earlier phases most verification techniques are manual and no executable code exists. Because code is the only product that can be executed and whose actual behavior can be observed, testing is the phase where errors remaining from all phases must be detected. Hence, testing plays a very critical role in quality assurance and in ensuring the reliability of software. In this chapter, we discuss software testing fundamentals, techniques for software test case design and different strategies for software testing.

At the end of this chapter you should be able to:
• Say what software testing is
• State testing objectives
• Understand the basic principles behind testing
• Differentiate the white-box from the black-box testing method
• Derive test cases using white-box or black-box methods



• Give the importance of different testing strategies
• Discuss unit testing, integration testing, validation testing and system testing and their importance
• Come out with a test plan
• Describe reliability as an important metric for testing software

Software testing is the process of checking the functionality and correctness of software by running it. It is usually performed for one of two reasons: (1) defect detection, and (2) reliability estimation. The problem with applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem with applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. In both cases, the mechanism used to determine whether program output is correct (known as an oracle) is often impossible to develop. Obviously, the benefit of the entire software testing process depends on many different pieces; if any of them is faulty, the entire process is compromised.

Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many different ways, and detecting all of the different failure modes is generally infeasible: it would require exhaustively testing the code on all possible inputs, which for most programs is computationally intractable. Instead, it is commonplace to attempt to exercise as many of the syntactic features of the code as possible.

Once source code has been generated, software must be tested to uncover (and correct) as many errors as possible before delivery to the customer. The goal is to design a series of test cases that have a high likelihood of finding errors. But how? That is where software testing techniques enter the picture.
These techniques provide systematic guidance for designing tests that:
(1) exercise the internal logic of software components, and
(2) exercise the input and output domains of the program to uncover errors in program function, behavior, and performance.

Techniques that try to exercise as much of the code as possible are called white-box software testing techniques. Techniques that do not consider the code’s structure when test cases are selected are called black-box techniques.



Testing objectives
The following statements serve as the objectives for testing:
1. Testing is a process of executing a program with the intent of finding errors
2. A good test case is one that has a high probability of finding an as-yet undiscovered error
3. A successful test is one that uncovers an as-yet undiscovered error.

Testing principles
Here are some basic principles applicable to software testing, which should be followed before applying methods to design test cases:
• All tests should be traceable to customer requirements
• Tests should be planned long before testing begins
• The Pareto principle applies to software testing (80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program modules)
• Testing should begin “in the small” and progress toward testing “in the large”, from module-level testing toward testing the entire system
• Exhaustive testing is not possible
• To be most effective, testing should be conducted by an independent third party

Testing is an important element of the software quality assurance activity, and it is the ultimate review of the specification, design and coding phases.

Testing:
• will find errors in the software
• will show that the software functions appear to be working according to the specifications
• indicates software reliability and software quality as a whole

But testing cannot show the absence of defects; it can only show the presence of errors.

Test Case Design
Any engineered software product can be tested in one of two ways:
1. White-box testing
2. Black-box testing


Black-box testing: It is also known as functional testing. Knowing the specified functions that the product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational. Examples: boundary value analysis, equivalence partitioning.
White-box testing: It is also known as structural testing. Knowing the internal workings of the product, tests can be conducted to ensure that all internal operations perform according to specification and that all internal components have been adequately exercised. Example: basis path testing.

5.3.1 White-Box Method
White-box testing is also known as glass-box testing. It is a test case design method that uses the control structures of the procedural design to derive test cases. Using this method, the software engineer derives test cases that:
1. Guarantee that all independent paths within a module have been exercised at least once
2. Exercise all logical decisions on both their true and false sides
3. Execute all loops at their boundaries and within their internal bounds; for example, for i = 1 to 10, the boundaries are i = 1 and i = 10, while the internal bound is i = 2, 3, 4, ..., 9
4. Exercise internal data structures to ensure their validity
White-box testing uncovers errors such as logical errors and typographical errors.
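To make guidelines 1–3 concrete, here is a minimal sketch in Python of a module together with a white-box test set that exercises both sides of its decision and runs its loop zero times, once, at its bound and past it; the function `scale_first_n` is a hypothetical example, not from the text:

```python
def scale_first_n(values, n):
    """Sum the first n values, each doubled; n is clipped to the list length."""
    if n > len(values):   # decision exercised on both its true and false sides
        n = len(values)
    total = 0
    for i in range(n):    # loop exercised at zero, one and many iterations
        total += values[i] * 2
    return total

# White-box test cases covering every path, decision outcome and loop bound:
assert scale_first_n([1, 2, 3], 0) == 0    # loop executes zero times
assert scale_first_n([1, 2, 3], 1) == 2    # loop boundary: a single pass
assert scale_first_n([1, 2, 3], 3) == 12   # loop at its upper bound
assert scale_first_n([1, 2, 3], 5) == 12   # decision true: n is clipped
```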

Basis path testing
It is a white-box testing technique. This method enables the test-case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths.




Figure 5.1 Flow graph notation for the if and while constructs


It uses flow graphs for representing the control or logical flow, as shown in Figure 5.1.

To illustrate the use of flow graphs, we consider a procedural design representation expressed as a flow chart, as shown in Figure 5.2. Here, the flow chart is used to depict the program control structure. Figure 5.3 maps the flow chart into a corresponding flow graph. Each circle, called a flow graph node, represents one or more procedural statements; a sequence of process boxes and a decision box can map into a single node. The arrows on the flow graph, called edges or links, represent flow of control. Areas bounded by nodes and edges are called regions.






Figure 5.2 Flow Chart

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
When used in the basis path method, it gives a number called the cyclomatic number:
V(G) = E - N + 2
where E is the number of edges and N the number of nodes. This number defines the number of independent paths in the basis set of a program, and it also gives the upper bound on the number of tests that must be conducted to ensure that all statements have been executed at least once.
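Both computations are simple enough to sketch directly; the function names are hypothetical, and the values E = 11, N = 9 and P = 3 are those of the Figure 5.3 example worked later in this section:

```python
def cc_from_edges(num_edges, num_nodes):
    # V(G) = E - N + 2 for a connected flow graph
    return num_edges - num_nodes + 2

def cc_from_predicates(num_predicates):
    # V(G) = P + 1, where P is the number of predicate (decision) nodes
    return num_predicates + 1

# Figure 5.3 values: E = 11, N = 9, P = 3; both formulas must agree.
assert cc_from_edges(11, 9) == 4
assert cc_from_predicates(3) == 4
```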







Figure 5.3 Flow Graph

Set of independent paths
An independent path is a path in the program that introduces at least one new condition or one new set of processing statements. For the flow graph of Figure 5.3:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
A path such as 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not an independent path, because it is a combination of the paths already listed and traverses no new edges.

Here, paths 1, 2, 3 and 4 form the basis set for the flow graph.
Cyclomatic complexity (CC) can be computed in three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of nodes.
3. Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph.

Example (for the flow graph of Figure 5.3):
1. CC = 4 (because there are 4 regions)
2. V(G) = E - N + 2 = 11 - 9 + 2 = 4
3. V(G) = P + 1 = 3 + 1 = 4

This V(G) gives the upper bound for the number of independent paths that form the basis set.

5.3.2 Black-Box Testing
Black-box testing is also known as behavioral testing or partition testing. It focuses on the functional requirements of the software: the engineer derives sets of input conditions that fully exercise all the functional requirements of a program. It is performed in the later stages of testing. It purposely disregards control structure and focuses on the information domain. It finds different classes of errors, such as:
1. Incorrect or missing functions
2. Interface errors
3. Errors in data structures or external database access
4. Performance errors
5. Initialization and termination errors

Graph-based testing method
A graph representation is used in this method. Testing begins by creating a graph of important objects and their relationships, then devising a series of tests that cover the graph, so that each object and relationship is exercised and errors are uncovered.



Figure 5.4 Graph notation: objects (nodes) connected by directed, undirected and parallel links

In order to accomplish these steps, the software engineer begins by creating a graph: a collection of nodes that represent objects, links that represent relationships between objects, node weights that describe the properties of a node, and link weights that describe some characteristic of a link. The symbolic representation of a graph is shown in Figure 5.4. Once nodes have been identified, links and link weights should be established. Each relationship is studied separately (transitive, symmetric, reflexive) so that its effect on the tests can be determined.

The graph-based testing method is used in:
• Transaction flow modeling
• Data flow modeling
• Finite state modeling
• Timing modeling, etc.

Equivalence partitioning
It is a black-box testing method. The input domain of a program is divided into classes of data from which test cases can be derived. A test case handles each class of data and uncovers a different class of errors. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for input conditions. Equivalence classes of objects are present if the set of objects is related by transitive, reflexive and symmetric relations.



An equivalence class represents a set of valid or invalid states for an input condition. An input condition is typically a specific numeric value, a range of values, a set of related values, or a Boolean condition.

Equivalence classes may be defined according to the following guidelines, based on what the input condition specifies:
• A range of values: 1 valid and 2 invalid classes
• A specific numeric value: 1 valid and 2 invalid classes
• A member of a set: 1 valid and 1 invalid class
• A Boolean value: 1 valid and 1 invalid class

Example: Automated banking application. Data is accepted in the form:
Area code: blank or a 3-digit number
Prefix: a 3-digit number not beginning with 0
Suffix: a 4-digit number
Password: a 6-character alphanumeric string
Commands: “check”, “deposit”, etc.

Input conditions for each data element:
Area code: input condition, Boolean – the area code may or may not be present; input condition, range – values defined between 100 and 999
Prefix: input condition, range – specified values > 200 with no 0 digits
Suffix: input condition, value – 4-digit length
Password: input condition, Boolean – a password may or may not be present; input condition, value – 6-character string
Command: input condition, set – containing the commands listed above.



Next, equivalence classes are derived using the guidelines for input conditions, and test cases are developed and executed for them.
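Applying the range guideline to the area-code field above might look like the following sketch; the helper name and the chosen representative values are hypothetical, while the 100–999 range follows the example:

```python
def area_code_class(code):
    """Classify an area-code input (None means the field was left blank)."""
    if code is None:
        return "valid (absent)"          # Boolean condition: may be absent
    if code < 100:
        return "invalid (below range)"   # first invalid class
    if code > 999:
        return "invalid (above range)"   # second invalid class
    return "valid (in range)"            # the single valid class

# One representative test input per equivalence class (1 valid, 2 invalid,
# plus the Boolean "absent" case):
representatives = {
    None: "valid (absent)",
    555:  "valid (in range)",
    55:   "invalid (below range)",
    5555: "invalid (above range)",
}
for code, expected in representatives.items():
    assert area_code_class(code) == expected
```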

Boundary Value Analysis (BVA)
It is a testing technique meant for testing the boundaries of the input domain for errors. It is a complement to the equivalence partitioning method. It is a test case design technique in which test cases are selected at the “edges” of a class. BVA also derives test cases from the output domain. The guidelines for BVA are shown in Table 5.1.
Table 5.1 Guidelines for BVA
1. A range of values (say, bounded by a and b): design test cases with values a and b, and with values just above and just below a and b, respectively.
2. A numeric value: use the maximum and minimum numbers, and values just above and just below the maximum and minimum.
3. Internal data structures: follow boundary testing (for example, exercise the structure at its first and last elements).
Note: guidelines 1 and 2 are applicable to output conditions as well.
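Guideline 1 can be sketched as a small helper that generates the boundary candidates for a range; the function name and the sample range are hypothetical:

```python
def bva_values(a, b):
    """Boundary value analysis candidates for a range bounded by a and b."""
    # Test at each bound, and just above and just below each bound.
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For the area-code range 100..999 from the earlier example:
assert bva_values(100, 999) == [99, 100, 101, 998, 999, 1000]
```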

A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. It provides a roadmap for the software developer, the quality assurance team and the customer.

A strategic approach to software testing
Testing is a set of activities that can be planned in advance and conducted systematically. For this reason a template, a set of steps into which specific test case design methods can be fitted, is required. So, a strategic approach can be used to provide templates for software testing. Here are some important characteristics of this approach:
• Testing begins at the module level and works “outward” toward the integration of the entire computer-based system
• Different testing techniques are appropriate at different points in time
• Testing is conducted by the developer of the software and by an independent test group
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy

Verification & validation (V & V)
V & V are software quality assurance activities.
Verification is the set of activities that ensures that the software correctly implements a specified function.
Validation is the set of activities that ensures that the software that has been built is traceable to the customer requirements.
The software engineering process may be viewed as a spiral with the following activities: system engineering, requirements analysis, design, coding.
A strategy for software testing may also be viewed as a spiral, with the following types of testing: unit testing, integration testing, validation testing, system testing.

5.4.1 Unit Testing
Unit testing concentrates on the functional verification of a module. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. It is white-box oriented, and the functional verification can be done in parallel for multiple modules.



The aspects examined in unit testing are:
1. Interface
2. Local data structures
3. Boundary conditions
4. Independent paths
5. Error-handling paths

1. First, the module interface is tested to ensure that information properly flows into and out of the program unit under test.
2. The local data structures are examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution.
3. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing.
4. All independent (basis) paths through the control structure are exercised to ensure that all statements in the module have been executed at least once.
5. Finally, all error-handling paths are tested.

So, test cases are designed to uncover errors due to:
• Erroneous computations – incorrect arithmetic precedence, mixed-mode operations, etc.
• Incorrect comparisons or improper control flow – comparison of different data types, incorrect logical operators or precedence, improper loop termination, etc.
• Boundary values just above, at and below the maxima and minima.

Each test case is coupled with a set of expected results. Unit testing is normally associated with the coding step; only after the source code has been developed, reviewed and verified does test case design begin. A module under unit test is not a standalone program, so driver and/or stub software must be developed for each unit test. These two pieces of software are overhead: they are developed but not delivered with the final software product. Figure 5.5 shows a unit testing environment.



Figure 5.5 Unit testing environment: a driver supplies test cases to the module to be tested (exercising its interface, local data structures, boundary conditions, independent paths and error-handling paths), stubs replace subordinate modules, and results are collected

In most applications, a driver is software, a “main program”, that accepts test case data, passes it to the module to be tested and prints the relevant results. A stub is a “dummy subprogram” that replaces the modules subordinate to the module under test; it does minimal data manipulation, prints verification of entry and returns. Unit testing is easiest when a module defines only one function: the number of test cases is reduced, which in turn makes error handling easier. A module with high cohesion simplifies unit testing.
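A minimal sketch of this environment in Python, with hypothetical names throughout: `apply_discount` is the module under test, `rate_stub` the dummy subordinate module, and `driver` the main program feeding test data:

```python
# Module under test: depends on a subordinate rate-lookup module.
def apply_discount(price, lookup_rate):
    # Subtract the discount obtained from the subordinate module.
    return price - price * lookup_rate()

# Stub: dummy subprogram replacing the real subordinate module.
def rate_stub():
    print("rate_stub entered")   # prints verification of entry
    return 0.1                   # minimal data manipulation: a fixed rate

# Driver: "main program" that accepts test data and prints results.
def driver():
    for price in (100.0, 250.0):
        print(price, "->", apply_discount(price, rate_stub))

driver()
```

Neither `driver` nor `rate_stub` would ship with the product; they exist only so the module can be exercised in isolation.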

5.4.2 Integration Testing
It is a systematic technique for constructing the program structure while conducting tests to find errors associated with interfacing.

The objective is to take modules that have been unit tested and build the program structure that has been dictated by the design. There are two categories of integration testing:

1. Incremental integration: the program is constructed and tested in small segments, where errors are easier to isolate and correct. Interfaces are more likely to be tested completely, and a systematic test approach may be applied.
2. Non-incremental integration: uses a “big bang” approach to construct the program. All modules are combined in advance, and the entire program is tested as a whole. Errors are numerous and difficult to isolate and correct.


Incremental integration strategies:
1. Top-down integration
2. Bottom-up integration
3. Sandwich testing

1. Top-down integration: It is an incremental approach to the construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

Depth-first integration: this integrates all modules on a major control path of the structure. The major control path can be selected arbitrarily and depends on application-specific characteristics. Depth-first integration is shown in Figure 5.6. Example: selecting the left-hand path, modules m1, m2 and m5 would be integrated first; next, either m8 or m6 would be integrated. Then the central and right-hand control paths are built.








Figure 5.6 Depth-first integration


Chapter 5 - Software Testing

Breadth-first integration: incorporates all modules directly subordinate at each level, moving across the structure horizontally. From Figure 5.6, modules m2, m3 and m4 are integrated first, then the next control level, m5, m6 and so on.

Top-down integration is performed with the following steps:
1. The main control module is used as the test driver, and stubs are substituted for all modules directly subordinate to it.
2. Depending on the integration approach (either depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with the real module.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built. This approach verifies major control or decision points early in the test process. Though the approach sounds relatively simple, in practice it has several logistical problems. The most common of these occurs when processing at low levels in the hierarchy is required to adequately test the upper levels. Because stubs replace the low-level modules at the beginning of top-down testing, no significant data can flow upward in the program structure.

The tester is left with three choices:
1. Delay many tests until stubs are replaced with actual modules – this makes it difficult to determine the cause of errors and tends to violate the highly constrained nature of the top-down approach.
2. Develop stubs that perform limited functions simulating the actual module – workable, but this leads to significant overhead as the stubs become more and more complex.
3. Integrate the software from the bottom of the hierarchy upward – bottom-up testing.

Advantages: major control functions are tested early.
Disadvantages: the need for stubs, and the overhead of building them.

2. Bottom-up Integration: the modules at the lowest levels in the program structure are constructed and tested first. Here the need for stubs is eliminated (see figure 5.7).

BSIT 44 Software Engineering


Steps required for implementing the bottom-up integration strategy:
1. Low-level modules are combined into clusters (sometimes called builds) that perform a specific software sub-function.
2. A driver (a control program for testing) is written to co-ordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

According to figure 5.7, modules are combined to form clusters 1, 2 and 3. Each cluster is tested using a driver (shown as a dashed box). Modules in clusters 1 and 2 are subordinate to ma. Drivers D1 and D2 are removed and clusters 1 and 2 are interfaced directly to ma. Similarly, driver D3 of cluster 3 is removed and cluster 3 is integrated with module mb. Finally, modules ma and mb are integrated with module mc.
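The driver-based steps can be sketched in the same toy style; the cluster, driver and module names below are invented for illustration.

```python
# Hypothetical sketch of bottom-up integration: low-level modules form a
# cluster that is first exercised through a throwaway driver, then the
# driver is removed and the superordinate module calls the cluster directly.

def parse(line):           # low-level module 1
    return [int(x) for x in line.split(",")]

def total(values):         # low-level module 2
    return sum(values)

def cluster_driver(line):  # driver: co-ordinates test input/output for the cluster
    return total(parse(line))

# The cluster is tested through the driver...
assert cluster_driver("1,2,3") == 6

# ...then the driver is removed and a superordinate module (playing the
# role of "ma" in figure 5.7) is integrated on top of the cluster.
def report(line):
    return f"sum is {total(parse(line))}"

assert report("4,5") == "sum is 9"
```

Note that nothing stands in for missing code here: every module the driver touches is real, which is exactly why bottom-up integration needs no stubs.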









Figure 5.7 Bottom-up integration



Advantages: the method is very simple; there is a substantial decrease in the number of drivers if the top two levels of the program structure are integrated top-down; and the lack of stubs makes test case design easier.
Disadvantages: the program as an entity does not exist until the last module is added.

Regression Testing: an activity that helps ensure that the changes which often take place do not introduce additional errors. Such software changes occur every time a new module is added as part of integration testing - new data paths are established, new input may occur and new control logic is invoked. Regression testing may be conducted manually, or by re-executing a subset of all test cases using automated capture-playback tools (these tools enable the software engineer to capture test cases and results for subsequent playback and comparison). It focuses on critical module functions.


The regression test suite contains three different classes of test cases:
1) A representative sample of tests that will exercise all software functions.
2) Additional tests that focus on software functions that are likely to be affected by the change.
3) Tests that focus on the software components that have been changed.
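What such a suite might look like in code can be sketched as follows; the three-way layout mirrors the classes above, while the stored function and cases are purely illustrative.

```python
# Hedged sketch of a regression test suite: each class of test cases is a
# list of (function, arguments, expected result) tuples that can be
# re-executed after every change to check that old behaviour still holds.

def add(a, b):            # the (trivial) unit under regression test
    return a + b

regression_suite = {
    "representative":     [(add, (1, 2), 3)],    # exercise all functions
    "affected_by_change": [(add, (0, 0), 0)],    # functions likely affected
    "changed_components": [(add, (-1, 1), 0)],   # components that changed
}

def run_suite(suite):
    """Re-execute every stored case and collect the failures."""
    failures = []
    for cls, cases in suite.items():
        for func, args, expected in cases:
            if func(*args) != expected:
                failures.append((cls, func.__name__, args))
    return failures

# An empty failure list means the change introduced no new errors.
assert run_suite(regression_suite) == []
```

This is essentially what capture-playback tools automate: the captured cases and expected results are stored, then replayed and compared after each change.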

So, selection of an integration strategy depends upon software characteristics and sometimes the project schedule. In general, a combined approach called sandwich testing is used, which takes on characteristics of both approaches.

3. Sandwich testing: a compromise between the two methods. It uses a top-down strategy for the upper levels of the program structure, coupled with a bottom-up strategy for the subordinate levels.

5.4.3 Validation Testing
Validation testing focuses on requirements: the validation criteria stated in the requirements are tested. This gives a final assurance that the software meets the functional, behavioral and performance requirements.

Black-box testing is exclusively used. Validation test criteria are based on the test plan and test procedure.


A test plan outlines the classes of tests to be conducted, and a test procedure defines the specific test cases that will be used to find errors. After each validation test case has been conducted, one of two possible conditions exists:
1. The function or performance characteristics conform to specification and are accepted.
2. A deviation from specifications is uncovered and a deficiency list is created.

Configuration Review: also termed an audit, it is an element of the validation process. It is done to ensure that all elements of the software configuration have been properly developed, are catalogued, and are detailed enough to support the maintenance phase of the software life cycle.

Alpha and Beta Testing: tests conducted to find errors on the customer's side.
The alpha test is conducted at the developer's site by a customer. The developer keeps track of the working of the software, records errors, and so on, so alpha tests are conducted in a controlled environment. The beta test is conducted at one or more customer sites by the end user(s) of the software. The software developer is generally not present, so beta tests are not conducted in a controlled environment. The beta test is a 'live' application of the software in a real environment. The customer records all the problems (real or imagined) that are encountered and reports them to the developer at regular intervals.

5.4.4 System Testing
This testing verifies that the system performs well with other system elements like hardware, information, databases, etc. It is actually a series of different tests whose main purpose is to fully exercise the computer-based system. A classic system testing problem is 'finger pointing': this occurs when an error is uncovered and each system element developer blames the others for the problem.

Types of system testing:
1) Recovery Testing
2) Security Testing
3) Stress Testing
4) Performance Testing


Recovery Testing: a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed (i.e. a system failure must be corrected within a specified period of time). Recovery can be automatic or manual. If automatic, re-initialization, checkpointing mechanisms, data recovery and restart are each evaluated for correctness. If recovery requires manual intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.

Security Testing: attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration (hackers may attempt to penetrate systems for sport, for revenge, for illicit personal gain, and so on). During this testing, the tester plays the role of the individual who desires to penetrate the system. The system designer therefore has to make the cost of penetration greater than the value of the information to be obtained.

Stress Testing: designed to confront programs with abnormal situations. This testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume. A variation of stress testing is called sensitivity testing.
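The idea of demanding resources in abnormal volume can be illustrated with a small sketch; the function under test and the load figures are assumed examples, not from the text.

```python
# Hypothetical stress-test sketch: exercise a unit first at its normal
# input volume, then at an abnormally large volume, and check that it
# still produces output of the expected shape.

def moving_average(values, window=3):
    """Unit under test: sliding-window average over a list of numbers."""
    if len(values) < window:
        raise ValueError("not enough data")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Normal load: a handful of samples.
assert moving_average([1, 2, 3, 4]) == [2.0, 3.0]

# Stress load: an abnormally large volume of input.
big = list(range(100_000))
out = moving_average(big)
assert len(out) == len(big) - 2   # output shape still as expected
```

A sensitivity test would instead probe narrow input ranges near the unit's boundaries (here, lists just below and at the window size) looking for erratic behaviour.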

Performance Testing: mainly used for testing real-time and embedded systems. It is designed to test the run-time performance of software within the context of an integrated system. It is often combined with stress testing.

5.4.5 Debugging
Debugging occurs as a consequence of successful testing. When a test case uncovers an error, debugging - a process that results in the removal of errors - occurs.

The debugging process begins with the execution of test cases.


There are three debugging approaches commonly used :

1. Brute force method: the most common and least efficient method; it is applied when all else fails.

2. Backtracking: fairly common, and can be used successfully in small programs. Beginning at the site where the symptom has been found, the source code is traced backward (manually) until the site of the cause is found.

3. Cause elimination: uses the concept of binary partitioning. A 'cause hypothesis' is devised, and data related to the error occurrence are used to prove or disprove the hypothesis.
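Binary partitioning can be sketched as a bisection over a list of suspect program changes (in the spirit of tools such as git bisect); the setup below is hypothetical.

```python
# Sketch of cause elimination by binary partitioning: repeatedly halve the
# suspect region until the single change that introduces the failure is
# isolated. `still_fails(prefix)` answers the "cause hypothesis" question:
# does the failure appear when only that prefix of changes is applied?

def find_faulty_change(changes, still_fails):
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if still_fails(changes[:mid + 1]):
            hi = mid           # hypothesis confirmed: fault in first half
        else:
            lo = mid + 1       # hypothesis disproved: fault in second half
    return changes[lo]

changes = ["c1", "c2", "c3", "c4", "c5"]
faulty = "c4"   # assumed for the demonstration
assert find_faulty_change(changes, lambda p: faulty in p) == "c4"
```

Each iteration eliminates half of the remaining candidate causes, so the fault is isolated in a logarithmic number of hypothesis tests rather than a linear scan.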

• Comparison of different techniques
• Levels of testing
• Test plan
• Test case specifications
• Test case execution and analysis
• Reliability models
• Source code metrics

Testing is the last phase in the life cycle of the software product before it is delivered. Software typically undergoes changes even after it has been delivered. In order to validate that a change has not affected some old functionality of the system, regression testing is done. In regression testing, old test cases are executed with the expectation that the same old results will be produced. This testing places additional requirements on the testing phase.



Testing is the costliest activity in software development, so it has to be done efficiently. Testing cannot be done all of a sudden: careful planning is required, and the plan has to be executed properly. The testing process focuses on how testing proceeds for a particular project. Structural testing (white-box testing) is best suited for testing a single module rather than an entire program, since for a whole program it is difficult to generate test cases that achieve the desired coverage. It is good for detecting logic errors, computational errors and interface errors. Functional testing (black-box testing) is best for testing the entire program or system; it is good for finding input errors and data-handling errors.

Code Reviews: a cost-effective method of detecting errors. The code is reviewed by a team of people in a formal manner, and faults are found directly. Code reviews do not require test case planning, test case generation or test case execution.

The test plan is a general document for the entire project that defines the scope, the approach to be taken, the schedule of testing, the test deliverables and the personnel responsible for the different activities of testing. This plan can be prepared before actual testing begins, in parallel with the coding and design phases. The inputs for forming the test plan are:
1. Project plan
2. Requirements document
3. System design document

A test plan contains the following:
- Test unit specification
- Features to be tested
- Approach for testing
- Test deliverables
- Schedule
- Personnel allocation



Test unit specification: an important activity of the test plan is to identify the test units. A test unit is a set of one or more modules together with associated data.
Along with identification of the test units, the different levels of testing are also specified, like unit testing, integration testing, etc. While forming test units, it is important to consider the 'testability' of a unit (i.e. whether the unit can be tested early, and whether it is possible to form meaningful test cases and execute the unit without much effort using those test cases).

Features to be tested: includes all software features and combinations of features to be tested. These features are the ones specified in the requirement or design documents, like functional, performance, design constraints and attributes.

Approach for testing: specifies the overall approach to be followed in the current project. The testing criterion for evaluating the set of test cases should be specified.

Test deliverables:
- List of test cases that were used
- Detailed results of testing
- Test summary report
- Test log
- Data about code coverage

In general, a test case specification report , test summary report and test log should be supplied as deliverables.

Schedule: specifies the amount of time and effort to be spent on the different activities of testing, and on testing the different units that have been identified.

Personnel allocation: identifies the persons responsible for performing the different activities.

The testing process usually begins with a test plan, which is the basic document guiding the entire testing of the software. It specifies the levels of testing and the units that need to be tested. For each of



the different units, the test case specifications are first given and reviewed; then, during the test case execution phase, the test cases are executed and various reports are produced for evaluating the testing. The main outputs of the execution phase are the test log, the test summary report and the error report.

An important metric used during testing is reliability. The reliability of software depends on the faults in the software. To assess the reliability of software, reliability models are used. Most of these reliability models are based on data obtained during system and acceptance testing: data about the time between failures observed during testing are used by these models to estimate the reliability of the software.

Musa Model (Musa's basic execution time model): the simplest model proposed for software reliability assessment.
It is simple to understand and the most widely used model. It is an execution-time model, and its predictive value has generally been found to be good. The model focuses on failure intensity while modeling reliability, and it assumes that the failure intensity decreases with time.

Failure intensity = number of failures / unit time

In the basic model, it is assumed that the failure intensity decreases at a constant rate with the number of failures (see figure 5.8). It is given by the equation:

λ(U) = λ0 * (1 - U / V0)





Figure 5.8 Failure intensity with execution time

where λ0 is the initial failure intensity at the start of execution, U is the expected number of failures by a given time t, and V0 is the total number of failures that would occur in infinite time.

The model has two parameters, λ0 and V0, whose values are used to predict the reliability of the given software.
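A worked example of the basic model's equation, with parameter values invented purely for illustration:

```python
# Musa's basic execution time model: λ(U) = λ0 * (1 - U / V0).
# The parameter values below (λ0 = 10 failures per unit of execution time,
# V0 = 100 total failures) are assumed for the demonstration.

def failure_intensity(lam0, u, v0):
    """Failure intensity after U failures have been experienced, given the
    initial intensity λ0 and the total expected failures V0."""
    return lam0 * (1 - u / v0)

lam0, v0 = 10.0, 100.0

assert failure_intensity(lam0, 0, v0) == 10.0    # at the start of execution
assert failure_intensity(lam0, 50, v0) == 5.0    # halfway: intensity halves
assert failure_intensity(lam0, 100, v0) == 0.0   # all expected faults exposed
```

The linear decrease is exactly the basic model's assumption: each failure experienced (and the fault behind it removed) reduces the intensity by the same constant amount, λ0 / V0.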

5.8 SUMMARY


Software testing is an important phase in the software development life cycle. It represents the ultimate review of specification, design and coding. The main objective of test case design is to derive a set of tests that can find errors in the software. This can be achieved with the help of two test case design approaches, namely white-box and black-box testing. The white-box method focuses on the program control structure, whereas the black-box method focuses on finding errors in the functional requirements. Apart from the test case design approaches, we also have the different strategies for software testing. Other concepts like the art of debugging, the test plan specification and metrics related to software testing have also been discussed in this unit. Thoroughly tested software maintains quality, and a quality product is what everyone wants and appreciates. We shall look into quality and its related concepts in the subsequent units.

I Fill in the blanks:
1. ———————— is also known as functional testing.
2. ———————— is a software measure that provides a quantitative measure of the logical complexity of a program.
3. An equivalence class of objects is present if the set of objects has relationships like ————————, reflexive and symmetric.
4. The two categories of integration testing are incremental and ————————
5. A ———————— outlines the classes of tests to be conducted.
6. ———————— is mainly used for testing real-time and embedded systems.
7. An important metric used during testing is ————————



II Answer the following questions in one or two sentences:
1. What is software testing?
2. Why is software testing very important?
3. How does white-box testing differ from black-box testing?
4. Give two examples for the white-box testing method.
5. Give two examples for the black-box testing method.
6. What is the difference between verification and validation?
7. What is system testing?
8. Name the important approaches used in program debugging.

I Answers
1. Black-box testing
2. Cyclomatic complexity
3. Transitive
4. Non-incremental
5. Test plan
6. Performance testing
7. Reliability


Chapter 6

Software Project Management

Software development is an umbrella activity with various phases involving several resources, the most important being manpower and money. Software development is a lengthy process which may take months or even years to complete. For the project to be completed successfully, the large workforce has to be properly organized so that everyone contributes efficiently and effectively towards the project. So, proper management controls are essential in controlling development and ensuring quality. Project management includes activities such as project planning, estimation, risk analysis, scheduling, software quality assurance and software configuration management. In the context of a set of resources, planning involves estimation - an attempt to determine how much money, how much effort, how many resources, and how much time it will take to build a specific software-based system or product. In this chapter, we shall see the major activities addressed by project management, such as cost estimation, project scheduling, staffing and risk management. In the subsequent units, we shall discuss software configuration management and quality assurance.


At the end of this chapter you should be able to:
• know the importance of managing people, process and project
• describe the different types of team structuring
• say what project planning is
• discuss planning with respect to resources


Chapter 6 - Software Project Management

• appreciate the importance of software project estimation
• discuss various estimation models
• differentiate different kinds of COCOMO estimation models
• use COCOMO as an estimation model
• say what risk is and how it is managed
• say what the necessity of project scheduling and tracking is


Effective software project management focuses on the three P's: people, problem and process. The people must be organized into effective teams, motivated to do high-quality work, and coordinated to achieve effective communication. The problem must be communicated from the customer to the developer, partitioned into several parts, and allocated for work by the software team. The process must be adapted to the people and the problem. A common process framework is selected, an appropriate software engineering paradigm is applied, and a set of work tasks is chosen to get the job done.

People
People are the backbone of software development. The software process or project is usually populated with several people, who can be categorized as follows:
1. Senior managers, who define the business issues that often have significant influence on the project.
2. Project managers, who plan, motivate, organize, and control the practitioners who do the software work.
3. Practitioners, who deliver the technical skills that are necessary to engineer a product.
4. Customers, who specify the requirements for the software to be developed.
5. End users, who interact with the software once it is released for production.

The software team
Since a variety of people belonging to different categories are involved in the software development process, it is very important to manage them. For efficient and easier management, the people must be organized into effective teams, motivated to do high-quality work, and coordinated to achieve effective communication. The structure of the team has a direct impact on product quality and project productivity.



Democratic decentralized
- consists of ten or fewer programmers
- the goals of the group are set by consensus
- every member participates in major decisions
- group leadership rotates among the group members
- the structure results in many communication paths between people, as shown in figure 6.1

Figure 6.1 Democratic team structure

Controlled centralized
- consists of a chief programmer, who is responsible for all the major technical decisions of the project; he also does most of the design activity and allocates the coding work among the members of the team
- under him are a backup programmer, a program librarian, and programmers
- the backup programmer helps the chief in making decisions and takes over the role of the chief in his absence



- the program librarian is responsible for maintaining the documents and other communication-related work
- this team structure exhibits considerably fewer interpersonal communication paths; the structure is shown in figure 6.2


Figure 6.2 Controlled centralized team structure

Controlled decentralized team
- combines the strengths of the democratic and chief-programmer teams
- consists of a project leader with a group of senior programmers under him, and a group of junior programmers under each senior programmer
- communication among the junior programmers within a group is ego-less, while communication between different groups passes through the senior programmers, who in turn communicate with the project leader
- this structure is best suited for very simple projects or research-type work


Project
The problem or project under consideration needs to be well thought out at the beginning. A proper understanding of the problem helps in preparing quantitative estimates and thus avoids several issues


related to it. So, the scope of the problem must be established first, since this marks the beginning of software project management activity. The software scope must be unambiguous and understandable at both the management and the technical levels. It must address the following:
- context
- information objectives
- function and performance

In order to make the problem easily manageable, it must be communicated from the customer to the developer, partitioned or decomposed into several parts, and then allocated for work by the software team. The decomposition approach can be seen from two viewpoints:
• decomposition of the problem
• decomposition of the process

Software estimation is a form of problem solving. If the problem to be solved is too complex to be considered in one piece (i.e. in terms of developing cost and effort estimates), it is decomposed into a set of smaller problems, which can be easily managed. Estimation makes use of one or both of the above approaches to decomposition.

Process
Deciding on a process model is very important for a project manager, since there are a number of process models. He must decide on the most appropriate model for the project, then define a preliminary plan based on the set of common process framework activities. Once the preliminary plan is ready, process decomposition begins; the complete plan showing the work tasks required for the process framework activities is discussed under the project scheduling activity.

Software management consists of activities like project planning, project monitoring and control, and project termination. Among these, project planning is very important since it has to be performed before the development work, and all other management phases depend heavily on it. The basic goal of software planning is to look into the future, identify the activities that need to be done to complete the project successfully, and plan the schedule and resources. Lack of proper planning leads to schedule slippage, cost over-runs, poor quality and high maintenance costs for software. Software project planning includes activities such as determining the scope of the software and estimating the resources needed to accomplish the software development effort.



6.3.1 Software Scope
This is the first activity in software project planning. The functions and performance allocated to software during system engineering are assessed to establish the project scope. The scope of the project must be very clear and understandable: the software scope describes function, performance, constraints, interfaces, and reliability. The most commonly used communication technique between the customer and the developer to arrive at the scope of the problem is to conduct a preliminary meeting or interview. The project planner then examines the statement of scope, extracts all the important software functions, and prepares the software estimates using one or more techniques such as decomposition and empirical modeling.

6.3.2 Resources – Hardware, Software and People
The second task of software planning is estimation of the resources required to accomplish the software development effort. The main resources - hardware and software tools, reusable software components, and people - are shown in figure 6.3. The hardware/software tools make up the development environment, the foundation or base of the pyramid, which provides the necessary infrastructure to support the development effort. The middle layer of the pyramid is the reusable software components layer: the software building blocks that can reduce development costs and speed up delivery. The topmost part of the pyramid is the primary resource, people.


Figure 6.3 Resources

In the early days of computing, the cost of software was just a small fraction of the overall cost of a computer-based system, so any small discrepancy in the estimation of software cost had relatively little impact. Today, software is the most expensive element of a computer-based system, and a large cost estimation error can make a lot of difference, leading to cost overruns for the developer. So, applying an appropriate software cost estimation technique is very essential.



Cost in a project is due to the requirements for software, hardware and human resources. The bulk of the cost of software development is due to the human resources needed, and most cost estimation procedures focus on this aspect. Most cost estimates are determined in terms of person-months (PM). As the cost of the project depends on the nature and characteristics of the project, at any point the accuracy of the estimate will depend on the amount of reliable information we have about the final product. When the project is being initiated, or during the feasibility study, we have only some idea of the data the system will get and produce and of the major functionality of the system; there is a great deal of uncertainty about the actual specifications of the system. As we specify the system more fully and accurately, the uncertainties are reduced and more accurate cost estimates can be made. Despite these limitations, cost estimation models have matured considerably and generally give fairly accurate estimates.

6.4.1 Estimation Models
Lines of code (LOC) and function points (FP) are the two problem-based techniques; they are the basic measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation:
1. as an estimation variable that is used to estimate the size of the software
2. as baseline metrics collected from past projects and used along with the estimation variables to develop cost and effort projections

Another common method for estimating a project is the process-based technique. Here the process is decomposed into a relatively small number of tasks and the effort required to accomplish each task is estimated. Like the problem-based techniques, process-based estimation begins with a delineation of the software functions obtained from the project scope.

6.4.2 Empirical Estimation Models
An estimation model for computer software uses empirically derived formulas to predict effort as a function of LOC or FP. A typical estimation model, derived from data collected on past projects, can be written as

E = A + B * (ev)^C

where A, B, and C are empirically derived constants, E is the effort in person-months, and ev is the estimation variable (either LOC or FP).


The COCOMO Model
The COnstructive COst MOdel (COCOMO) is the most widely used software estimation model in the world. It was developed by Barry Boehm. The COCOMO model predicts the effort and duration of a project based on inputs relating to the size of the resulting system and a number of 'cost drivers' that affect productivity. COCOMO is defined in terms of three different models: the Basic model, the Intermediate model, and the Detailed model.

Model 1: The Basic COCOMO model computes software development effort (and cost) as a function of program size expressed in estimated lines of code.
Model 2: The Intermediate COCOMO model computes software development effort as a function of program size and a set of 'cost drivers' that include subjective assessments of product, hardware, personnel, and project attributes.
Model 3: The Advanced (Detailed) COCOMO model incorporates all the characteristics of the Intermediate COCOMO with an assessment of the cost drivers' impact on each step of the software engineering process.

In the COCOMO model, one of the most important factors contributing to a project's duration and cost is the development mode. Every project is considered to be developed in one of three modes:

Organic Mode: the project is developed in a familiar, stable environment, and the product is similar to previously developed products. The product is relatively small and requires little innovation.
Semidetached Mode: the project's characteristics are intermediate between organic and embedded.
Embedded Mode: the project is characterized by tight, inflexible constraints and interface requirements. An embedded-mode project requires a great deal of innovation.

The Basic COCOMO Model
The Basic COCOMO model estimates the software development effort using only a single predictor variable (size in DSI) and three software development modes. Basic COCOMO is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is necessarily limited because it omits factors that have a significant influence on software costs. The basic equations of the COCOMO model give the effort estimate in person-months required to develop a project in terms of KLOC or KDSI, the number of delivered lines of code for the project (expressed in thousands). Equation 1 gives a measure of the effort and equation 2 gives the development time.

E = Ab * (KLOC)^Bb (person-months) ——— Equation 1

where E is the number of person-months (1 PM = 152 working hours), and Ab, Bb are constants.



KLOC = thousands of lines of code, also expressed as thousands of 'delivered source instructions' (DSI), i.e. KDSI.

D = Cb * (E)^Db ——— Equation 2

where D is the development time in chronological months, E is the number of person-months (1 PM = 152 working hours), and Cb, Db are constants.

The values of the coefficients Ab and Cb and the exponents Bb and Db are given in Table 1 below (they are consistent with the effort and schedule equations of Table 2):

Table 1 : Basic COCOMO coefficients and exponents

Development Mode    Ab     Bb     Cb     Db
Organic             2.4    1.05   2.5    0.38
Semidetached        3.0    1.12   2.5    0.35
Embedded            3.6    1.20   2.5    0.32

Table 2 : Effort Equations for the three development modes in the Basic COCOMO Model

Development Mode    Basic Effort Equation           Basic Schedule Equation
Organic             Effort = 2.4 * (KDSI)^1.05      TDEV = 2.5 * (Effort)^0.38
Semidetached        Effort = 3.0 * (KDSI)^1.12      TDEV = 2.5 * (Effort)^0.35
Embedded            Effort = 3.6 * (KDSI)^1.20      TDEV = 2.5 * (Effort)^0.32
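A quick sketch applying the Basic COCOMO equations of Table 2; the 32 KDSI project size is an assumed example.

```python
# Basic COCOMO: effort and development time for the three modes.
# Coefficients are the published Basic COCOMO values from Table 2.

MODES = {                    # (effort coeff, effort exp, sched coeff, sched exp)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kdsi, mode):
    a, b, c, d = MODES[mode]
    effort = a * kdsi ** b            # person-months (Equation 1)
    tdev = c * effort ** d            # chronological months (Equation 2)
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")   # an assumed 32 KDSI organic project
assert round(effort, 1) == 91.3              # about 91 person-months
assert round(tdev, 1) == 13.9                # about 14 months
```

Note how the schedule grows much more slowly than the effort (exponent 0.38 versus 1.05): doubling the size of a project far less than doubles its calendar duration, because the extra effort is absorbed by a larger team.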





The Intermediate COCOMO Model
The Intermediate model uses an effort adjustment factor (EAF) and slightly different coefficients in the effort equations than the Basic model. One can apply Intermediate COCOMO across the entire software product for easy, rough cost estimation during the early stages, or apply it at the software product component level for more accurate cost estimation in the more detailed stages.
There are two primary limitations, which may become significant, particularly in detailed cost estimates for large software projects:

• Its estimated distribution of effort by phase may be inaccurate.
• It can be very cumbersome to use on a product with many components.

The Intermediate model produces better results because you supply settings for 15 cost drivers that determine the effort and duration of software projects. The cost drivers include factors such as product complexity, programmer capability, and use of software tools. The effort adjustment factor is simply the product of the effort multipliers corresponding to each of the cost drivers for your project. The Intermediate model also produces better results than the Basic model because the system can be divided into components: DSI values and cost drivers can be chosen for individual components instead of for the system as a whole. COCOMO can then estimate the staffing, cost, and duration of each of the components, allowing you to experiment with different development strategies to find the plan that best suits your needs and resources. The intermediate COCOMO equation takes the form:

E = Ai * (LOC)^Bi * EAF

where E is the effort applied in person-months, LOC is the estimated number of delivered lines of code for the project, and Ai, Bi are the coefficients whose values are given in Table 3.

Table 3 : Coefficients for the Intermediate COCOMO Model

Software Project     Ai      Bi
Organic              3.2     1.05
Semidetached         3.0     1.12
Embedded             2.8     1.20


Chapter 6 - Software Project Management

Table 4 : Effort Equations for Three Development Modes in Intermediate COCOMO Model

Development Mode     Basic Effort Equation                  Basic Schedule Equation
Organic              Effort = EAF * 3.2 * (KDSI)^1.05       TDEV = 2.5 * (Effort)^0.38
Semidetached         Effort = EAF * 3.0 * (KDSI)^1.12       TDEV = 2.5 * (Effort)^0.35
Embedded             Effort = EAF * 2.8 * (KDSI)^1.20       TDEV = 2.5 * (Effort)^0.32
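The role of the EAF can be illustrated in code: the nominal effort equation is simply scaled by the product of the chosen effort multipliers. A minimal Python sketch follows; the two multiplier values shown are illustrative settings (for, say, high complexity and high analyst capability), not prescribed values.

```python
# Intermediate COCOMO sketch: nominal effort scaled by the Effort Adjustment
# Factor (EAF), the product of the chosen effort multipliers.
from math import prod

COEFFS = {"organic": (3.2, 1.05), "semidetached": (3.0, 1.12), "embedded": (2.8, 1.20)}

def intermediate_cocomo(kdsi, mode, multipliers):
    """multipliers: effort-multiplier values chosen for the 15 cost drivers
    (1.0 = nominal); the values used below are illustrative only."""
    a, b = COEFFS[mode]
    eaf = prod(multipliers)
    return eaf * a * kdsi ** b   # E = EAF * a * (KDSI)^b

# e.g. a harder product (CPLX multiplier 1.15) built by strong analysts
# (ACAP multiplier 0.86); all other drivers left at nominal (1.0)
effort = intermediate_cocomo(10, "organic", [1.15, 0.86])
print(f"{effort:.1f} person-months")
```

Note how a below-nominal multiplier (0.86) partly cancels an above-nominal one (1.15), which is exactly the effect the EAF is meant to capture.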

There are many candidate factors to consider in developing a better model for estimating the cost of a software project. Two principles reduce the large number of candidate factors to a relatively manageable number for practical cost estimation:

• General significance: this tends to eliminate factors which are significant only in a relatively small fraction of specialized situations.
• Independence: this tends to eliminate factors which are strongly correlated with product size, and to compress a number of factors which tend to be highly correlated on projects into a single factor.

The COCOMO model uses 15 cost drivers based on these principles. These cost drivers are grouped into four categories: software product attributes, computer attributes, personnel attributes, and project attributes.

Product Attributes
RELY — Required Software Reliability: the extent to which the software product must perform its intended functions satisfactorily over a period of time.
DATA — Data Base Size: the degree of the total amount of data to be assembled for the data base.
CPLX — Software Product Complexity: the level of complexity of the product to be developed.



Computer Attributes 
TIME — Execution Time Constraint: the degree of the execution time constraint imposed upon a software product.
STOR — Main Storage Constraint: the degree of main storage constraint imposed upon a software product.
VIRT — Virtual Machine Volatility: the level of volatility of the virtual machine underlying the product to be developed.
TURN — Computer Turnaround Time: the level of computer response time experienced by the project team developing the product.

Personnel Attributes 
ACAP — Analyst Capability: the level of capability of the analysts working on a software product.
AEXP — Applications Experience: the level of applications experience of the project team developing the software product.
PCAP — Programmer Capability: the level of capability of the programmers working on the software product.
VEXP — Virtual Machine Experience: the level of virtual machine experience of the project team developing the product.
LEXP — Programming Language Experience: the level of programming language experience of the project team developing the product.

Project Attributes 
MODP — Use of Modern Programming Practices: the degree to which modern programming practices (MPPs) are used in developing the software product.



TOOL — Use of Software Tools: the degree to which software tools are used in developing the software product.
SCED — Schedule Constraint: the level of schedule constraint imposed upon the project team developing the software product.

The Detailed COCOMO Model

The Detailed model differs from the Intermediate model in two aspects:

• The Detailed model uses a different Effort Multiplier for each cost driver attribute in each phase; these phase-sensitive Effort Multipliers are used to determine the amount of effort required to complete each phase.
• The Detailed model applies a three-level hierarchical decomposition of the software product whose cost is to be estimated. The lowest level, the module level, is described by the number of DSI in the module and by those cost drivers which tend to vary at the lowest level. The second level, the subsystem level, is described by the remainder of the cost drivers, which tend to vary from subsystem to subsystem but tend to be the same for all the modules within a subsystem. The top level, the system level, is used to apply major overall project relations such as the nominal effort and schedule equations, and to apply the nominal project effort and schedule breakdowns by phase.

Seven Basic Steps in Software Cost Estimation

This section provides a seven-step process for software cost estimation; this activity needs to be planned, reviewed, and followed up accordingly.

Step 1: Establish Objectives
• Key the estimating objectives to the needs for decision-making information.
• Balance the estimating accuracy objectives for the various system components of the cost estimates.
• Re-examine estimating objectives as the process proceeds, and modify them where appropriate.

Step 2: Plan for Required Data and Resources
• Prepare a project plan at an early stage; the plan includes notes on the why, what, when, who, where, how, how much, and whereas of the estimating activity.

Step 3: Pin Down Software Requirements
• It is important to have a set of software specifications that are as unambiguous as possible.

• A specification needs to be testable to the extent possible, so that one can define a clear pass/fail test for determining whether or not the developed software will satisfy the specification.

Step 4: Work Out as Much Detail as Feasible
• Carry out the estimating activities in detail so that the estimates will be more accurate, for three main reasons:
  a) the more detail we explore, the better we understand the technical aspects of the software to be developed;
  b) the more pieces of software we estimate, the more we get the law of large numbers working for us to reduce the variance of the estimate;
  c) the more we think through all the functions the software must perform, the less likely we are to miss the costs of some of the more unobtrusive components of the software.

Step 5: Use Several Independent Techniques and Sources
• None of the alternative techniques for software cost estimation is better than the others in all respects; their strengths and weaknesses are complementary.
• It is important to use a combination of techniques, in order to avoid the weaknesses of any single method and to capitalize on their joint strengths.

Step 6: Compare and Iterate Estimates
• The most valuable aspect of using several independent cost-estimation techniques is the opportunity to investigate why they give different estimates.
• Iterating the estimates after finding out why they differ may converge to a more realistic estimate.

Step 7: Follow-up
• Once a software project is started, it is essential to gather data on its actual costs and progress and compare these to the estimates.

6.5 RISK MANAGEMENT

When considering risk management and the concept of risk in the development of software, it is useful first of all to examine what risk and risk management are. They can be assessed by considering the following definitions:



Risk: “A chance or possibility of danger, loss, injury or other adverse consequences.” (a definition from the Oxford Dictionary)

Software risks, as per the quotations in [Boehm 93]:
“Failure to obtain all, or even any, of the anticipated benefits.”
“Costs of implementation vastly exceed planned levels.”
“Time for implementation that is much greater than expected.”
“Technical performance of resulting systems turns out to be significantly below estimate.”
“Incompatibility of the system with the selected hardware and software.”

Risk management: “Software risk management is an emerging discipline whose objectives are to identify, address, and eliminate software risk items before they become major sources of rework.” [IEEE 89]
There are different categories of risks that can be considered. In the first category:
• Project risks threaten the project plan.
• Technical risks threaten the quality and timeliness of the software to be built.
• Business risks threaten the viability of the software to be built.

In the next category:
• Known risks can be uncovered from careful evaluation of the project plan.
• Predictable risks are obtained from past project experience.
• Unpredictable risks cannot be known in advance and are difficult to find.

6.5.1 Risk Identification
Risk identification is a systematic attempt to specify the threats to the project plan. By identifying known and predictable risks, the project manager takes a step towards avoiding them and thereby controlling them. There are two distinct types of risks for each of the categories mentioned earlier:
• Generic risks are a potential threat to every software project.
• Product-specific risks can be identified only by those who have a clear understanding of the ins and outs of the project, such as its technology, people, and environment.



Both generic and product-specific risks need to be identified systematically, for which a risk item checklist can be used. The risk item checklist is given below:
• Product size
• Business impact
• Customer characteristics
• Process definition
• Development environment
• Technology to be built
• Staff size and experience

6.5.2 Risk Projection
Risk projection, also called risk estimation, attempts to rate each risk in two ways: the likelihood or probability that the risk is real, and the consequences of the problems associated with the risk, should it occur. There are four risk estimation activities:
1. Establish a scale that reflects the perceived likelihood of a risk.
2. Delineate the consequences of the risks.
3. Estimate the impact of the risk on the project and the product.
4. Note the overall accuracy of the risk projection so that there will be no misunderstandings.

Developing a risk table is an important activity of the project planner. The table provides a simple technique for risk projection. A sample risk table is given below; it lists each risk, its category, the probability of its occurrence, and its impact.

Risk                                       Category   Probability   Impact   RMMM
Size estimate may be significantly low     PS         60%           2
Larger number of users than planned        PS         30%           3
Less reuse than planned                    PS         70%           2
End users resist system                    BU         40%           3
Delivery deadline will be tightened        BU         50%           2
Funding will be lost                       CU         40%           1
Lack of training on tools                  DE         80%           3
Staff inexperienced                        ST         30%           2

Here the second column shows the category of risk encountered: PS is a project size risk, BU a business risk, CU a customer characteristics risk, DE a development environment risk, and ST a staffing risk. The impact value means that 1 is catastrophic, 2 is critical, 3 is marginal, and 4 is negligible. The RMMM column contains a pointer to the Risk Mitigation, Monitoring and Management plan that is developed for each risk.
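A risk table like this is easy to represent and rank programmatically. The following Python sketch (the data structure and the cut-off of three risks are illustrative, not from the text) sorts the sample risks so that the high-probability, severe-impact ones come first, which is the order a planner would address them in.

```python
# Sketch of a risk table: each entry is (risk, category, probability, impact),
# where impact is 1 = catastrophic ... 4 = negligible.
risks = [
    ("Size estimate may be significantly low", "PS", 0.60, 2),
    ("Larger number of users than planned",    "PS", 0.30, 3),
    ("Less reuse than planned",                "PS", 0.70, 2),
    ("End users resist system",                "BU", 0.40, 3),
    ("Delivery deadline will be tightened",    "BU", 0.50, 2),
    ("Funding will be lost",                   "CU", 0.40, 1),
    ("Lack of training on tools",              "DE", 0.80, 3),
    ("Staff inexperienced",                    "ST", 0.30, 2),
]

# Highest probability first; ties broken by severity (lower impact number).
ranked = sorted(risks, key=lambda r: (-r[2], r[3]))
for name, cat, prob, impact in ranked[:3]:   # risks above an illustrative cut-off
    print(f"{cat}: {name} (p={prob:.0%}, impact={impact})")
```

Sorting and applying a cut-off line like this mirrors the usual practice of concentrating the RMMM effort on the risks at the top of the table.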

6.5.3 Risk Mitigation, Monitoring and Management
To deal with risks, the project team must develop an effective strategy. Three issues help in developing such a strategy:
• Risk avoidance
• Risk monitoring
• Risk management and contingency planning

If a software team adopts a proactive approach to risk, avoidance is the best method. This is achieved by developing a plan for risk mitigation. Suppose, for example, that high staff turnover is noted as a project risk; to mitigate it, project management must develop a strategy for reducing turnover. Some steps in this direction are:
• Meet with staff members to determine the causes of turnover (e.g., poor working conditions, low pay).
• Act to mitigate those causes that are under management control before the project starts.
• Once the project starts, assume turnover will occur and develop techniques to ensure continuity when people leave.
• Organize project teams so that information about each development activity is widely dispersed.
• Define documentation standards and establish mechanisms to ensure that documents are developed in a timely manner.
• Define a backup staff member for every critical technologist.

As the project proceeds, risk monitoring activities begin. The project manager monitors factors that may provide an indication of whether the risk is becoming more or less likely. In addition to monitoring these factors, the project manager should also monitor the effectiveness of the risk mitigation steps. Risk management and contingency planning assume that mitigation efforts have failed and that the risk has become a reality.

6.5.4 The Problems of Putting Risk Management into Practice
This section aims to identify the shortcomings that still appear to remain within the discipline of risk management. If there are so many good models of risk management, then why is there still such a high failure rate of software projects? Some of the most commonly documented problems are considered below.

Risk aversion

It is very easy to acknowledge that there are large risks present when undertaking software development projects. It can therefore be tempting for software development managers to ignore risk management approaches completely, since paying attention to risk may seem to imply that failure of the project is almost a certainty.

Consider the problem of risk aversion through the following example, based on a risk management paper [Boehm, DeMarco 97]. A software project manager may avoid acknowledging risks and instead display a confident, can-do attitude. Suppose two more personnel are needed to complete final testing of a project so that it meets its deadline: this is a small risk, easily resolved by adding the required staff. In contrast, a fatal risk might be that the date set for release of the software was a hopeless and drastic underestimate from the start. The can-do attitude is only mildly harmful when the small risk is ignored, but ignoring the large risk can doom the project.

Poor infrastructure

A second major problem in software risk management is the lack of a good infrastructure in the organization to support effective risk management. A crisis may occur within a project or be precipitated by a customer, and the project manager may be diverted by it from managing the risk assessment activity. Insufficient staffing or inappropriate decisions from senior management may mean that a systematic approach to risk assessment is not in place, and design flaws may occur in the project and be missed. Another problem is that staffing resources may be diverted from elsewhere to deal with a crisis in one project, causing problems when risks are reassessed. If problems had been identified and risks managed from the start of the project, it is likely that many would have been anticipated and avoided.



Lack of systematic risk analysis

The lack of a systematic approach to risk management is widely noted as a reason for problems with it. It can almost be seen as a feature of a poor infrastructure, but it is important enough to merit mention as an issue in itself.

Work at the SEI (Software Engineering Institute) highlighted this as a major problem. The SEI has developed systematic approaches and so provides a good example of what a systematic approach should entail. The probability and impact of risks are recorded using a high, medium, or low rating against each risk. It is therefore possible for managers to highlight parts of a risk management plan, and to see which risks have been taken, by whom, whether deadlines can be met, and what action may be required. Breaking down risks in a systematic way, and assessing which risks are the important ones, remains very difficult.

6.5.5 The RMMM Plan
A risk management strategy can be included in the software project plan, or the risk management steps can be organized into a separate Risk Mitigation, Monitoring, and Management (RMMM) Plan. An outline is shown below:

I. Introduction
   1. Scope and purpose of document
   2. Overview of major risks
   3. Responsibilities
      a. Management
      b. Technical staff

II. Project Risk Table
   1. Description of all risks above cut-off
   2. Factors influencing probability and impact

III. Risk Mitigation, Monitoring, Management
   n. Risk #n
      a. Mitigation
         i. General strategy
         ii. Specific steps to mitigate the risk
      b. Monitoring
         i. Factors to be monitored
         ii. Monitoring approach
      c. Management
         i. Contingency plan
         ii. Special considerations

IV. RMMM Plan iteration schedule

V. Summary

6.6 PROJECT SCHEDULING AND TRACKING

Software project scheduling is an activity that distributes estimated effort across the planned project duration by allocating the effort to specific software engineering tasks. Once the development schedule has been established, tracking and control activity begins. Each task noted in the schedule is tracked by the project manager. If a task falls behind schedule, the manager can use an automated project scheduling tool to determine the impact of the slippage on intermediate project milestones and the overall delivery date. Resources can then be redirected, tasks reordered, or delivery commitments modified to accommodate the problem that has been uncovered. In this way, software development can be controlled; control also involves quality assurance and control, and change management and control.

6.6.1 Scheduling
Software project scheduling is an activity that involves:
1. identification of a set of tasks
2. establishing the interdependencies among the tasks
3. identification of resources, including people
4. estimation of the effort associated with each task
5. creation of a “task network”, a graphical representation of the task flow for a project
6. identification of a time-line schedule

Scheduling focuses on project control activities, which are based on the project work breakdown structure. Project control is necessary in order to monitor the progress of activities against plans, and to ensure that goals are being approached and eventually achieved.

Work breakdown structure (WBS)

[Figure 6.1: Work breakdown structure for compiler construction. The compiler project is broken down into subtasks including building the code generator, writing the manual, and integration & testing.]
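A WBS like the one in Figure 6.1 is naturally a tree, which can be represented as a nested mapping. The sketch below is illustrative (the set of subtasks mirrors the compiler example used later in this section); the leaves of the tree are the units of work that actually get scheduled and estimated.

```python
# Sketch of a work breakdown structure as a nested mapping; an empty dict
# marks a leaf, i.e. a schedulable task.  Subtask names follow the compiler
# example used in this section.
wbs = {
    "Compiler project": {
        "Design": {},
        "Build scanner": {},
        "Build parser": {},
        "Build code generator": {},
        "Write manual": {},
        "Integrate & test": {},
    }
}

def leaf_tasks(node):
    """Yield the leaves of the WBS: the units of work to schedule."""
    for name, children in node.items():
        if children:
            yield from leaf_tasks(children)   # decompose further
        else:
            yield name

print(list(leaf_tasks(wbs)))
```

Decomposition stops, as the text says, when each goal is small enough to be well understood; in this representation that is exactly when a node has no children.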

Most project control techniques are based on breaking the goal of the project into several intermediate goals, and each intermediate goal is decomposed further. This process is repeated until each goal is small enough to be well understood. Two general scheduling techniques are commonly followed:
• Gantt charts
• PERT charts

Gantt charts
• A Gantt chart is a project control technique that can be used for several purposes, such as scheduling, budgeting and resource planning.
• It is a bar chart: each bar represents an activity, drawn against a time-line.
• The length of each bar is proportional to the length of time planned for the activity.
• Gantt charts help in scheduling project activities, but not in identifying them.
• They are simple and easy to understand, and are used for small and medium-sized projects.
• They show the tasks and their durations clearly, but do not show inter-task dependencies.
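The bar-per-activity idea behind a Gantt chart can be sketched as plain text in a few lines of Python; the activities and durations below are illustrative, not from the text.

```python
# Minimal text rendering of a Gantt chart: one row per activity, the bar
# offset by its start week and one '#' per week of duration.
activities = [          # (name, start week, duration in weeks)
    ("Design",        0,  9),
    ("Build scanner", 9,  8),
    ("Build parser",  9, 20),
]
for name, start, dur in activities:
    print(f"{name:<14}{' ' * start}{'#' * dur}")
```

Even this toy rendering makes the Gantt chart's key limitation visible: the bars show when activities run and for how long, but nothing in the picture says that the scanner and parser depend on the design being finished.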



PERT charts
• PERT stands for Program Evaluation and Review Technique.
• A PERT chart is a network of boxes (or circles) and arrows.
• Each box represents an activity.
• Arrows indicate the dependencies of activities on one another: the activity at the head of an arrow cannot start until the activity at its tail is finished.
• Some boxes can be designated as milestones.
• A PERT chart can have starting and ending dates.
• PERT is suitable for larger projects, but is not as simple as Gantt charts.

Example 1: Gantt chart for a simple compiler project. Here, the white part of each bar (with no slack) indicates the duration each task is estimated to take. The gray part (slack) shows the latest time by which the task must be finished.
[Figure showing Example 1: Gantt chart for a simple compiler project, with the tasks Design, Build scanner, Build parser, Build code generation, and Integration & test drawn against a time-line from January 1994 to April 1995.]



Example 1 using a PERT chart. Here, the path design → build code generation → integration & test is the critical path for the project: any delay in any activity on this path will delay the entire project. The project manager must therefore watch the critical path closely.

[Figure showing Example 1: PERT chart for a simple compiler project. Design starts Jan 3 ’94; Build scanner, Build parser, Build code generation and Write manual start Mar 7 ’94; Integration & testing starts Nov 14 ’94; the project completes Mar 13 ’95.]
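The critical-path idea can be computed directly from a task network: a task's earliest finish is its duration plus the latest earliest finish among its predecessors, and the longest chain determines the project length. The Python sketch below is illustrative; the durations (in weeks) are invented, and chosen so that the design → code generation → integration path is the longest, as in the example.

```python
# Sketch of a critical-path computation over a small task network like the
# PERT chart above; durations in weeks are illustrative.
tasks = {                      # task: (duration, predecessors)
    "design":    (9,  []),
    "scanner":   (8,  ["design"]),
    "parser":    (20, ["design"]),
    "codegen":   (24, ["design"]),
    "manual":    (10, ["design"]),
    "integrate": (12, ["scanner", "parser", "codegen"]),
}

finish = {}
def earliest_finish(t):
    """Earliest finish = duration + latest earliest finish of predecessors."""
    if t not in finish:
        dur, preds = tasks[t]
        finish[t] = dur + max((earliest_finish(p) for p in preds), default=0)
    return finish[t]

project_end = max(earliest_finish(t) for t in tasks)
print("project length in weeks:", project_end)
```

With these numbers the project length is set by design (9) + code generation (24) + integration & testing (12): delaying any task on that chain delays delivery, while the scanner, parser and manual have slack.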

6.6.2 The Project Plan
Each step in the software engineering process should produce a work product that can be reviewed and that can serve as a basis for further work. The software project plan is produced as an outcome of the planning phase. It provides baseline cost and scheduling information, which can be used throughout the software engineering process. Its outline is given below:

I. Introduction
   A. Scope and purpose of the document
   B. Project objectives

II. Project Estimates
   A. Historical data used for estimates
   B. Estimation techniques
   C. Estimates of effort, cost, duration

III. Project Risks
   A. Risk analysis
   B. Risk management

IV. Schedule
   A. Project work breakdown structure
   B. Task network
   C. Time-line chart
   D. Resource chart

V. Project Resources
   A. Hardware and software
   B. People
   C. Special resources

VI. Staff Organization
   A. Team structure
   B. Management reporting

VII. Tracking and Control
   A. Quality assurance and control
   B. Change management and control

VIII. Appendices

6.7 SUMMARY


Software project management is an umbrella activity within software engineering. Three P’s influence it: people, process, and problem, and it is very important to manage all three. People are central to developing and maintaining software projects, so in order to increase the efficiency of work, people need to be organized into suitable teams. The project management activity encompasses risk management, estimating project costs and schedules, tracking, and control.

I. Fill in the blanks:
1. Estimation makes use of an important approach, ——————————.
2. —————————— is a project control technique that can be used for several purposes like scheduling, budgeting and resource planning.
3. COCOMO stands for ——————————.
4. The equation for the Basic COCOMO model to estimate effort is ——————————.
5. PERT stands for ——————————.
6. —————————— risks threaten the quality and timeliness of the software to be built.

II. Answer the following questions in one or two sentences:
1. Name the different kinds of software development team structure.
2. Why should a software project be planned before development?
3. What is an estimation model?
4. Name the different techniques which are based on COCOMO.
5. What are cost drivers?
6. Name the methods that are commonly used for representing scheduling techniques.
7. Give the IEEE definition of risk management.
8. Define the terms: risk mitigation, risk monitoring.
9. Is scheduling the same as project tracking?
10. How do you classify risks?



I. Answers
1. decomposition
2. Gantt chart
3. COnstructive COst MOdel
4. E = Ab * (KLOC)^Bb (person-months)
5. Program Evaluation and Review Technique
6. technical risks

References
1. Roger S. Pressman, Software Engineering: A Practitioner’s Approach, 4th edition, McGraw-Hill, 1997.
2. Pankaj Jalote, An Integrated Approach to Software Engineering, 2nd edition, Narosa Publishing House.
3. Ian Sommerville, Software Engineering, 5th edition, Addison-Wesley, 1996.

Chapter 7

Software Quality Assurance

One of the important factors driving any production discipline is quality. An important goal of software engineering is to produce high-quality software, and this goal is achieved through what is known as software quality assurance. Software Quality Assurance (SQA) is an “umbrella activity” that is applied at each step in the software development process. SQA encompasses activities such as: a quality management approach; effective software engineering techniques; Formal Technical Reviews (FTRs) applied throughout the software process; a multi-tiered testing strategy; control of software documentation and the changes made to it; a procedure to assure compliance with software development standards; and measurement and reporting mechanisms. In this chapter, we focus on concepts related to quality, the activities that need to be carried out for producing high-quality software products, and the international standards that need to be followed in software engineering.


At the end of this chapter you should be able to:
• Define terms like quality, quality control, quality assurance
• Discuss the impact of cost on quality
• State the important quality assurance activities
• Answer the question “What is the cost impact of software defects?”
• Understand the defect amplification model


• Know the importance of Formal Technical Reviews
• Write the SQA plan
• Discuss the importance of quality standards


Quality

Quality refers to a measurable characteristic or attribute of an item, such as color or length; measurable characteristics of a program include cyclomatic complexity, cohesion, and lines of code. When considering these characteristics, two kinds of quality are generally encountered: quality of design and quality of conformance.

Quality of design refers to the characteristics that designers specify for an item, such as the grade of materials and the performance specifications. Quality of conformance is the degree to which the design specifications are followed during manufacturing. In software development, quality of design encompasses the requirements, specifications, and design of the system, while quality of conformance is an issue that mainly depends on implementation.

Quality control (QC)

Quality control is an activity for controlling the quality of an item. It consists of a series of activities, such as inspections, reviews, and tests, which are carried out during the entire life cycle of the software so that each work product meets the requirements placed upon it. QC can be carried out automatically, manually, or in a combined form.

Quality assurance (QA)

Quality assurance consists of the auditing and reporting functions of management. It gives management the necessary information regarding the quality of the product.

Cost of quality

The cost of quality includes all costs incurred towards achieving quality, that is, the cost of performing quality-related activities. Quality costs may be divided into costs associated with prevention, appraisal, and failure.

Prevention costs include:
• quality planning
• formal technical reviews
• test equipment
• training


Appraisal costs include:
• in-process and interprocess inspection
• equipment calibration and maintenance
• testing

Failure costs include:
• rework
• repair
• help-line support
• warranty work

SQA is an important activity usually performed by two groups: the software engineering group and the software quality assurance group. The software engineering group performs quality assurance by applying technical methods and measures, conducting formal technical reviews, well-planned software testing, and so on. The software quality assurance group assists the software engineering group in achieving a high-quality end product. Apart from helping the software engineering group, the SQA group also performs other activities:
• prepares an SQA plan for the project
• participates in the development of the project’s software process description
• reviews software engineering activities to verify compliance with the defined software process
• audits designated software work products to verify compliance with those defined as part of the software process
• ensures that deviations in software work and work products are documented and handled according to a documented procedure
• records any non-compliance and reports it to senior management

It also coordinates the control and management of change, and helps to collect and analyze software metrics.

Reviews are conducted at various points during software development to find errors that can then be removed; software reviews thus act as a “filter” for the software engineering process. Software reviews are needed to:
1. Point out needed improvements in the product.
2. Confirm those areas of the product where improvements are not needed.
3. Make the technical work more manageable.

Types of reviews
1. Informal review: informal reviews can be unscheduled and ad hoc, wherein issues are discussed and clarifications obtained. These are sometimes called coffee-shop reviews.
2. Formal review, also called a Formal Technical Review (FTR): this is the most effective filter for the software engineering process, and an effective means of improving software quality.

Software defects

A defect is a product anomaly: a quality problem that is discovered after the software has been released to end users.

Defect amplification and removal

A defect amplification model illustrates the generation and detection of errors during the preliminary design, detail design, and coding steps of the software engineering process. A box represents a software development step. During the step, errors may be inadvertently generated. The review may fail to find newly generated errors as well as errors from previous steps, resulting in some number of errors being passed through. In some cases, errors passed on from previous steps are amplified by the current work. In Figure 7.1, the box subdivisions represent these characteristics together with the percent efficiency for detecting errors, which is a measure of the review.

[Figure 7.1: Defect amplification model. Each development step box shows the errors from the previous step, the errors passed through, the amplified errors (1 : x), the newly generated errors, and the percent efficiency for error detection; the remaining errors are passed to the next step.]
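The arithmetic behind the model in Figure 7.1 can be traced with a short sketch. The Python below is one plausible reading of the model, not a definitive formulation, and every number in it is illustrative: each step receives errors, amplifies a fraction of them 1 : x, adds newly generated errors, and a review removes a percentage before the rest flow on.

```python
# Sketch of the defect amplification model: errors in, some amplified 1:x,
# new errors added, and a review removing a fraction before the next step.
def dev_step(errors_in, amplified_frac, amplify_by, new_errors, detect_eff):
    """Return the number of errors leaving one development step."""
    amplified = errors_in * amplified_frac * amplify_by   # extra errors, 1:x
    total = errors_in + amplified + new_errors
    return total * (1 - detect_eff)   # review removes detect_eff of them

# Illustrative numbers for preliminary design -> detail design -> coding:
e = dev_step(0,  0.0, 0, 10, 0.2)   # preliminary design: 10 new, 20% found
e = dev_step(e,  0.5, 1, 25, 0.5)   # detail design: half amplified 1:1, 25 new
e = dev_step(e,  0.3, 2, 25, 0.6)   # coding: 30% amplified 1:2, 25 new
print(f"errors passed to testing: {e:.1f}")
```

Tracing the numbers shows the model's point: even modest review efficiencies early in the process sharply cut the error load that amplification would otherwise inflate by the time coding ends.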

Formal Technical Review (FTR)

The formal technical review is a software quality assurance activity performed by software engineers. Each FTR is conducted as a review meeting.

Objectives
1. To detect errors in the functions, logic, or implementation of the software.
2. To verify that the software under review meets its requirements.
3. To ensure that the software has been represented according to predefined standards.
4. To achieve software of consistent quality, delivered on time.
5. To make projects more manageable.
6. To act as a training ground for junior engineers, who can observe different approaches to software analysis, design, and implementation.
7. To promote backup and continuity.
FTRs include walkthroughs, inspections, and round-robin reviews.



The SQA plan provides a road map and layout for the software quality assurance activity. Figure 7.2 gives an outline for SQA plans as recommended by the IEEE.

I. Purpose of plan
II. References
III. Management
   1. Organization
   2. Tasks
   3. Responsibilities
IV. Documentation
   1. Purpose
   2. Required software engineering documents
   3. Other documents
V. Standards, practices and conventions
   1. Purpose
   2. Conventions
VI. Reviews and audits
   1. Purpose
   2. Review requirements
      a. software requirements review
      b. design reviews
      c. software verification and validation reviews
      d. functional audit
      e. physical audit
      f. in-process audits
      g. management reviews
VII. Test
VIII. Problem reporting and corrective actions
IX. Tools, techniques and methodologies
X. Code control
XI. Media control
XII. Supplier control
XIII. Records collection, maintenance, and retention
XIV. Training
XV. Risk management

Certain standards are to be followed for establishing quality in an organization. A system called a quality assurance system may be used for this purpose. This system is defined as the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. Examples of quality standards include those of the International Standards Organization ( ISO ) and the Capability Maturity Model ( CMM ). In order to adopt such a system, the organization has to get the company’s quality system and operations audited by third-party auditors.

7.7.1 The ISO Approach
The International Standards Organization ( ISO ) quality assurance models treat an enterprise as a network of interconnected processes. For a quality system to be ISO-compliant, these processes must address the areas identified in the standard and must be documented and practiced as prescribed. Documenting a process helps an organization to understand, control and improve it. ISO 9000 describes the elements of a quality assurance system in general. The elements include the organizational structure, procedures, processes, and resources needed to implement quality planning, quality control, quality assurance and quality improvement. The ISO 9001 standard is the quality assurance standard that applies to software engineering. The standard contains 20 requirements that must be present for an effective quality assurance system. The 20 requirements given by ISO 9001 address the following topics :
1. Management responsibility
2. Quality system
3. Contract review
4. Design control
5. Document and data control
6. Purchasing
7. Control of customer supplied product
8. Product identification and traceability
9. Process control
10. Inspection and testing
11. Control of inspection, measuring, and test equipment
12. Inspection and test status
13. Control of nonconforming product
14. Corrective and preventive action
15. Handling, storage, packaging, preservation and delivery
16. Control of quality records
17. Internal quality audits
18. Training
19. Servicing
20. Statistical techniques


In order for a software company to become ISO 9001 registered, it must set up policies and procedures to address each of the requirements listed above, and then demonstrate that these policies and procedures are being followed in practice.

7.7.2 The Capability Maturity Model
In recent years, there has been a significant emphasis on “process maturity”. The Software Engineering Institute ( SEI ) has developed a comprehensive model called the Capability Maturity Model ( CMM ). This model is used to determine an organization’s current state of process maturity, based on a set of software engineering capabilities that should be present as organizations reach different levels of process maturity. The SEI uses an assessment questionnaire and a five-point grading scheme that defines the important activities required at different levels of process maturity. The five process maturity levels are given below :

Level 1 : Initial - unpredictable and poorly controlled
Level 2 : Repeatable - can repeat previously mastered tasks
Level 3 : Defined - process characterized, fairly well understood
Level 4 : Managed - process measured and controlled
Level 5 : Optimizing - focus on process improvement

The SEI has associated key process areas ( KPAs ) with each maturity level. Each KPA is described by identifying the following characteristics :
• Goals
• Commitments
• Abilities
• Activities
• Methods for monitoring implementation
• Methods for verifying implementation

The concept of quality in the context of software needs attention. The factors that affect software quality can be measured either directly or indirectly. In order to measure the quality of software, a set of important factors proposed by McCall and others is used ; these are known as McCall’s quality factors.

McCall’s Quality factors
These factors focus on three important aspects of a software product : its operational characteristics, its ability to undergo change, and its adaptability to new environments. These aspects of software quality can be visualized along three dimensions :
• Product operations
• Product transition
• Product revision

The first factor, product operations, deals with quality factors such as correctness, reliability, efficiency, integrity and usability. The factor product transition deals with quality factors like portability, reusability and interoperability.



The factor product revision deals with maintainability, flexibility and testability. These three factors are shown in figure 7.2.

Correctness : The extent to which a program satisfies its specification and fulfills the customer’s mission objectives.
Reliability : The extent to which a program can be expected to perform its intended function with required precision.
Efficiency : The amount of computing resources and code required by a program to perform its function.
Integrity : The extent to which access to software or data by unauthorized persons can be controlled.
Usability : The effort required to learn, operate, prepare input for, and interpret output of a program.
Maintainability : The effort required to locate and fix an error in a program.
Flexibility : The effort required to modify an operational program.
Testability : The effort required to test a program to ensure that it performs its intended function.
Portability : The effort required to transfer the program from one hardware and/or software system environment to another.
Reusability : The extent to which a program can be reused in other applications.
Interoperability : The effort required to couple one system to another.



Usually, these quality factors are associated with software quality metrics like auditability, accuracy, completeness, conciseness, consistency, data commonality, error tolerance, execution efficiency, expandability, etc.

Figure 7.2 McCall’s software quality factors ( product revision : maintainability, flexibility, testability ; product transition : portability, reusability, interoperability ; product operations : correctness, reliability, usability, integrity, efficiency )
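One common way to tie the quality factors to the lower-level metrics is a weighted sum, Fq = c1·m1 + c2·m2 + … The sketch below assumes made-up metric names, weights and reviewer grades purely for illustration; it is not McCall's original weighting table:

```python
# Sketch: scoring a McCall quality factor as a weighted sum of
# lower-level metric grades (here, each metric graded 0..10 by a
# reviewer). Metric names, weights and grades are illustrative.

def quality_factor(weights, grades):
    """F = sum(c_i * m_i) over the metrics contributing to the factor."""
    return sum(weights[m] * grades[m] for m in weights)

# Hypothetical metrics contributing to a 'maintainability' score.
weights = {"conciseness": 0.3, "consistency": 0.3, "simplicity": 0.4}
grades  = {"conciseness": 7, "consistency": 8, "simplicity": 6}

print(quality_factor(weights, grades))   # 0.3*7 + 0.3*8 + 0.4*6
```

The weights let an organization emphasize the metrics that matter most for a given factor; changing a weight changes the factor score without re-grading anything.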

Software quality assurance is an umbrella activity that is applied at each step in the software process. The SQA activity includes procedures for the effective application of software methods and tools, FTRs, testing strategies, etc. Software review is an important SQA activity : it serves as a filter for the software process, removing errors while they are relatively inexpensive to find and correct. Software quality factors help to measure the quality of a software product, and software metrics provide a quantitative way to assess the quality of internal product attributes.



I Fill in the blanks :
1. ——————— is an activity that consists of auditing and reporting functions of management.
2. ——————— are a “filter” to software engineering processes.
3. FTR includes ———————, inspections, round-robin reviews.
4. The quality standard used in software engineering process is ———————.
5. CMM stands for ———————.
6. McCall’s quality factors focus on ———————, product transition and product revision.
7. ——————— defines the extent to which access to software or data by unauthorized persons can be controlled.

II Answer the following questions in one or two sentences :
1. Define the terms : quality, quality assurance and quality control.
2. What is software quality assurance ?
3. Name the different types of software reviews.
4. What is a defect amplification model ?
5. Name any two commonly used quality standards for software.
6. What is the importance of the SQA plan ?
7. What is the importance of McCall’s quality factors ?
8. Why are quality metrics very important ?

I Answers
1. Quality assurance
2. Software reviews
3. Walkthroughs
4. ISO 9001
5. Capability Maturity Model
6. Product operations
7. Integrity


1. Roger S. Pressman : Software Engineering – A Practitioner’s Approach (Fourth Edition), McGraw-Hill, 1997.
2. Pankaj Jalote : An Integrated Approach to Software Engineering (Second Edition), Narosa Publishing House.



Software Configuration Management

We know that, throughout the software development process, the software consists of a collection of items, such as programs, data and documents, that can easily change. During the development process, the requirements, design and code often get changed. This changeable nature of software, and the fact that changes often take place during software development, demand that the changes be controlled. So Software Configuration Management ( SCM ), a discipline that systematically controls the changes occurring during the software development process, is followed. The goal of SCM is to maximize productivity by minimizing mistakes. SCM is an umbrella activity that is applied throughout the software process. SCM activities are developed to identify change, control change, ensure that change is properly implemented, and report change to others who may be interested in it. In this chapter, we will discuss baselines, the SCM process, version control, change control and how they influence software quality.


At the end of this chapter, you should be able to :
• Define “software configuration management”
• Say what a baseline is
• Appreciate the importance of baselines in the software development process
• Identify the responsibilities of the SCM process
• Identify the importance of versions and the way they are controlled
• Answer “what is change control ?”
• Discuss the need for configuration audit

Chapter 8 - Software Configuration Management

Changes to software during the software process are quite common. This is because the software passes through the hands of different kinds of users, who may think differently at different times. Customers demand changes in their requirements, developers may want a change in their technical approach, managers may want to modify the project management approach, and so on. But changes left without control are very difficult to manage. So a software configuration management concept called the baseline is used. A baseline is an SCM concept that helps us to control change without seriously impeding justifiable change. As per the IEEE, a baseline is defined as : a specification or product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures. Before an SCI becomes a baseline, changes can be made quickly and informally. However, once a baseline is established, the changes that can be made become specific, and a formal procedure must be applied to evaluate and verify each change. A baseline is a milestone in the software development process that is marked by the delivery of one or more Software Configuration Items ( SCIs ), and these Software Configuration Items can be approved through Formal Technical Reviews.

Figure 8.1 Baselines

( Figure 8.1 shows each activity delivering an SCI : system engineering delivers the system specification ; requirements analysis delivers the software requirements specification ; software design delivers the design specification ; coding delivers the source code ; testing delivers the test plans/procedures/data ; release delivers the operational system. )
The progression of activities that lead to a baseline is shown in figure 8.1.



Software engineering activities produce one or more SCIs. After the SCIs are reviewed and approved, they are placed in a project database. When a member of the software engineering team wants to make a modification to a baselined SCI, it is copied from the project database into the engineer’s private workspace. This extracted SCI can be modified only by following SCM controls. Figure 8.2 shows the modification path for a baselined SCI.

( Figure 8.2 Modification paths for baselined SCIs : software engineering tasks produce SCIs, which pass through a formal technical review ; approved SCIs are placed in the project database ; SCIs are extracted under SCM controls, modified, and returned to the database as modified SCIs. )

Software Configuration Item
An SCI is information that is part of the software engineering process. It is the basic unit of modification, and it is a document that is explicitly under configuration control.



The following SCIs are the target for configuration management techniques and form a set of baselines :
1. system specification
2. software project plan
3. software requirements specification
   a. graphical analysis model
   b. process specifications
   c. prototypes
   d. mathematical specifications
4. preliminary user manual
5. design specification
   a. data design description
   b. architectural design description
   c. module design description
   d. interface design description
6. source code listing
7. test specification
   a. test plan and procedure
   b. test cases and recorded results
8. operation and installation manuals
9. executable program
   a. module executable code
   b. linked modules
10. database description
   a. schema and file structure
   b. initial content
11. as-built user manual
12. maintenance documents
   a. software problem reports
   b. maintenance change orders
   c. engineering change orders
13. standards and procedures for software engineering


SCM is an important element of SQA. Its primary responsibility is to control change. The responsibilities of SCM include :
1. identification of individual SCIs
2. version control
3. change control
4. configuration auditing
5. reporting

8.3.1 Identification of Objects in Software Configurations
The software configuration items must be named and organized using an object-oriented approach. This is required to control and manage the SCIs. Two types of objects can be identified : basic objects and aggregate objects.

Basic object – a unit of text created by a software engineer during analysis, design, coding or testing. Example : a section of an SRS, or the source code listing for a module.

Aggregate object – a collection of basic objects and other aggregate objects. Example : a design specification is an aggregate object, which is an aggregation of objects like the data design, architectural design, module design etc., which in turn contain objects of their own.

Figure 8.3 shows configuration objects, with the design specification as an aggregate object ; the data model and module N are basic objects. Each configuration object has certain distinct features : a name, a description, a list of resources and a realization.



( Figure 8.3 Configuration objects : the aggregate object design specification contains the data model, data design description, architectural design, module design and interface design ; module N contains an interface description and PDL. )

A module interconnection language helps in describing interdependencies among configuration objects and enables any version of a system to be constructed automatically. Since objects evolve throughout the software process, the changes to them need to be described. For this, a graph called an evolution graph is used. The diagrammatic representation of an evolution graph is shown in figure 8.4.

Figure 8.4 Evolution graph ( OBJ 1.0 evolves through the revisions OBJ 1.1, OBJ 1.2, OBJ 1.3 and OBJ 1.4 ; minor changes to OBJ 1.1 produce OBJ 1.1.1 and OBJ 1.1.2 ; OBJ 2.0 evolves to OBJ 2.1 )



Here, the configuration object “OBJ 1.0” undergoes revision and becomes “OBJ 1.1”. “OBJ 1.1” then undergoes minor changes, resulting in OBJ 1.1.1 and OBJ 1.1.2. A major update to OBJ 1.1 becomes OBJ 1.2, and the object similarly evolves into OBJ 1.3, OBJ 1.4 and so on.
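The numbering scheme in the evolution graph can be sketched with two small helpers. The reading below (a revision bumps the last digit, a minor change appends a new level) is an illustrative interpretation of figure 8.4, not a prescribed algorithm:

```python
# Sketch of evolution-graph numbering: a revision bumps the last
# component ("1.1" -> "1.2"), while a minor change appends a new
# level below it ("1.1" -> "1.1.1", "1.1.2"). Illustrative only.

def revise(version):
    parts = version.split(".")
    parts[-1] = str(int(parts[-1]) + 1)   # major update of this object
    return ".".join(parts)

def minor_change(version, existing):
    """Create the next minor-change object one level below `version`."""
    n = sum(1 for v in existing if v.startswith(version + ".")) + 1
    return f"{version}.{n}"

versions = ["1.0"]
versions.append(revise("1.0"))                  # "1.1"
versions.append(minor_change("1.1", versions))  # "1.1.1"
versions.append(minor_change("1.1", versions))  # "1.1.2"
versions.append(revise("1.1"))                  # "1.2"
print(versions)
```

Running this reproduces the left-hand branch of the figure: 1.0, 1.1, 1.1.1, 1.1.2, 1.2.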

8.3.2 Version Control
Version control combines procedures and tools to manage the different versions of configuration objects that are created during the software engineering process. Configuration management allows a user to specify alternative configurations of the software system through the selection of appropriate versions. Each software version has attributes associated with it ; an attribute can be a specific version number attached to each object, or a string of Boolean variables.

Version representation

There are two version representations :
1. Evolution graph
2. Object-pool representation

Evolution graph
This is shown in figure 8.5, where each node represents an aggregate object ( i.e. a complete version of the software ). Each version of the software is a collection of SCIs and may be composed of different variants. Consider an example : a version of a simple program is composed of five components. Suppose that component 4 is used only when the software is implemented for color displays, and component 5 only when the software is implemented for monochrome displays. This is shown in figure 8.5.





Figure 8.5 Components




There are two variants of this version :
1. components 1, 2, 3, 4
2. components 1, 2, 3, 5

In order to construct a variant of a given version, each component is associated with one or more attributes.
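Constructing a variant from attributed components amounts to a filter over the component pool. The component names and the "display" attribute below are assumptions made for illustration, following the color/monochrome example above:

```python
# Sketch: each component carries attributes; a variant of a version is
# built by selecting the components whose attributes match the target
# environment. Component names and attributes are illustrative.

components = {
    "comp1": {},                        # common to every variant
    "comp2": {},
    "comp3": {},
    "comp4": {"display": "color"},      # used only for color displays
    "comp5": {"display": "monochrome"}, # used only for monochrome
}

def build_variant(components, **target):
    """Select the components whose attributes all match the target."""
    return sorted(
        name for name, attrs in components.items()
        if all(target.get(k) == v for k, v in attrs.items())
    )

print(build_variant(components, display="color"))       # comp1..comp4
print(build_variant(components, display="monochrome"))  # 1, 2, 3, 5
```

Components with no attributes are selected into every variant, which matches the role of components 1 to 3 in the example.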

Object-pool representation
This describes the relationship between components, versions and variants. This model is shown in figure 8.6.



Figure 8.6 Object-pool representation

A component is composed of a collection of objects at the same version level. A variant is a different collection of objects at the same version level. A new version is defined when changes are made to one or more objects.

8.3.3 Change Control


Change control combines human procedures and automated tools to provide a mechanism for the control of change. Figure 8.7 shows the process of change control, which is described here :
• The user submits a change request for the current version.
• The developer evaluates the change request in order to assess its technical merits, side effects, overall impact on configuration objects, etc. He then prepares a report called the change report and submits it to the change control authority ( CCA ).
• The CCA is a person or group who makes the final decision, based on the change report, either to deny the change request or to grant it.
• Every time a change is approved, an engineering change order ( ECO ) is generated. The ECO describes the changes to be made, the constraints imposed, and the criteria for review and audit.
• The object to be changed is “checked-out” from the project database, it is modified, and appropriate SQA activities are applied.
• The changed object is then “checked-in” to the project database, and appropriate version control mechanisms are used to create the next version of the software.


The “check-in” and “check-out” processes implement two important elements of change control :
1. access control
2. synchronization control


( Figure 8.7 The process of change control : need for change is recognized ; change request from user ; developer evaluates ; change report is generated ; change control authority decides ; if denied, the change request is denied and the user is informed ; if approved, the request is queued for action and an ECO is generated ; individuals are assigned to configuration objects ; the SCIs are “checked-out” ; changes are made and reviewed ; the changed configuration items are “checked-in” ; a baseline for testing is established ; quality assurance and testing activities are performed ; the next version of the software is created ; the new version is distributed. )



Access control : governs which software engineers have the authority to access and modify a particular configuration object.
Synchronization control : ensures that parallel changes, made by two different people, don’t overwrite one another. For this purpose, “locks” are used in the database.
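Both controls can be sketched together: an authorization check implements access control, and a simple lock serializes check-out so parallel changes cannot overwrite each other. The class and its method names below are illustrative, not the API of any real SCM tool:

```python
# Sketch of change control: access control (who may modify an SCI)
# plus synchronization control (a lock so two engineers cannot check
# out the same object at once). Illustrative only.

class ProjectDatabase:
    def __init__(self):
        self.locks = {}        # SCI name -> engineer holding the lock
        self.authorized = {}   # SCI name -> set of allowed engineers

    def check_out(self, sci, engineer):
        # Access control: only authorized engineers may touch the SCI.
        if engineer not in self.authorized.get(sci, set()):
            raise PermissionError(f"{engineer} may not modify {sci}")
        # Synchronization control: refuse a second parallel check-out.
        if sci in self.locks:
            raise RuntimeError(f"{sci} is checked out by {self.locks[sci]}")
        self.locks[sci] = engineer      # lock the object
        return f"copy of {sci}"         # extracted into private workspace

    def check_in(self, sci, engineer):
        if self.locks.get(sci) != engineer:
            raise RuntimeError(f"{engineer} does not hold the lock on {sci}")
        del self.locks[sci]             # release the lock; SQA and version
                                        # control would run at this point

db = ProjectDatabase()
db.authorized["design_spec"] = {"alice", "bob"}
db.check_out("design_spec", "alice")    # alice now holds the lock
# db.check_out("design_spec", "bob")    # would raise: already locked
db.check_in("design_spec", "alice")
```

The lock-per-object scheme is the pessimistic style described in the text; optimistic schemes that merge parallel edits exist but are outside this sketch.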

8.3.4 Configuration Auditing
We can evaluate the changes made during change control using two methods :
1. Formal technical review ( FTR )
2. Software configuration audit

Formal technical review
It focuses on the technical correctness of the configuration object that has been modified.

Software configuration audit
It complements the FTR by assessing a configuration object for characteristics that are generally not considered during the FTR. This activity is conducted by the quality assurance group.

The audit asks and answers the following questions :
1. Have the changes specified in the ECO been made ?
2. Have any additional modifications been implemented ?
3. Has an FTR been conducted to assess technical correctness ?
4. Have software engineering standards been followed ?
5. Have all related SCIs been properly updated ?

8.3.5 Status Reporting
Status reporting is also known as configuration status reporting ( CSR ) or status accounting. The status report contains information regarding the changes that take place in the change control process. This report helps to improve communication among all the people involved in the process, and it plays an important role in the success of a large software development effort.




• The CSR is generated on a regular basis, so that it keeps management and practitioners informed about important changes.
• A CSR entry is made each time an SCI is assigned new or updated identification, a change is approved by the CCA, or a configuration audit is conducted.
• The output of the CSR may be placed in an on-line database, so that developers and maintainers can access the change information.
• CSR is an SCM task that answers questions like :

1. What happened ?
2. Who did it ?
3. When did it happen ?
4. What else will be affected ?
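A CSR entry essentially records the answers to those four questions. A minimal record type might look like this; the field names and example values are assumptions for illustration, not a prescribed format:

```python
# Sketch of one configuration status report entry, recording what
# happened, who did it, when, and which other SCIs are affected.
# Field names and example values are illustrative.
from dataclasses import dataclass, field

@dataclass
class CSREntry:
    what: str                       # e.g. the change or audit performed
    who: str                        # engineer or CCA responsible
    when: str                       # date of the event
    affects: list = field(default_factory=list)   # related SCIs impacted

entry = CSREntry(
    what="ECO applied to design specification",
    who="alice",
    when="2024-03-01",
    affects=["source code listing", "test specification"],
)
print(entry.who, len(entry.affects))
```

A list of such entries, one per identification change, approved change, or audit, is exactly what the on-line CSR database mentioned above would hold.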

Software configuration management is an umbrella activity that is applied throughout the software process. SCM identifies, controls, audits, and reports modifications that occur during the software development process and after the release of the software. SCM can be viewed as a software quality assurance activity that is applied throughout the software process. Once a configuration object has been developed and reviewed, it becomes a baseline. Changes to a baselined object result in the creation of a new version of that object, for which controls need to be applied. Change control is a procedural activity that ensures quality and consistency as changes are made to a configuration object. The software configuration audit is an SQA activity that helps to maintain quality even when changes are applied to the software. Information about the changes taking place is given out in the form of a status report.



I Fill in the blanks :
1. A baseline is a milestone in the software development process that is marked by the delivery of one or more ———————.
2. Software Configuration Items can be approved through ———————.
3. SCM is an important element of ———————.
4. ——————— will help in describing interdependencies among configuration objects and enables any version of a system to be constructed automatically.
5. The most commonly used version representations are the evolution graph and ———————.
6. ECO stands for ———————.
7. ——————— ensures that parallel changes made by two different people don’t overwrite one another.
8. ——————— contains the status information regarding changes that take place in the change control process.

II Answer the following questions in one or two sentences :
1. Give the IEEE definition of software configuration management.
2. What is a baseline ?
3. Give two examples of baselines.
4. What is an SCM process ?
5. What are aggregate objects ? Give one example.
6. What is version control ?
7. What is change control ? Why is it important ?
8. Name the important functions of the SCM process.



I Answers
1. Software Configuration Items
2. Formal Technical Reviews
3. Software Quality Assurance
4. Module interconnection language
5. Object-pool representation
6. Engineering Change Order
7. Synchronization control
8. Status reporting

1. Roger S. Pressman : Software Engineering – A Practitioner’s Approach (Fourth Edition), McGraw-Hill, 1997.
2. Pankaj Jalote : An Integrated Approach to Software Engineering (Second Edition), Narosa Publishing House.
3. Richard Fairley : Software Engineering Concepts, McGraw-Hill Inc., 1985.


allocation — The process of distributing requirements, resources, or other entities among the components of a system or program.
analysis — In system/software engineering, the process of studying a system by partitioning the system into parts ( functions or objects ) and determining how the parts relate to each other to understand the whole.
architectural design — The process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system.
audit — An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria.
code — In software engineering, computer instructions and data definitions expressed in a programming language or in a form output by an assembler, compiler, or other translator.
coding — In software engineering, the process of expressing a computer program in a programming language.
configuration auditing — In configuration management, an independent examination of the configuration status to compare with the physical configuration.
configuration control — In configuration management, an element of configuration management consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.





configuration identification — In configuration management, (1) an element of configuration management consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation ; (2) the current approved technical documentation for a configuration item as set forth in specifications, drawings, associated lists, and documents referenced therein.
configuration management (CM) — In system/software engineering, a discipline applying technical and administrative direction and surveillance to : identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
configuration status accounting — In configuration management, an element of configuration management consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes.
cost estimate — The estimated cost to perform a stipulated task or acquire an item.
customer — In system/software system engineering, an individual or organization who specifies the requirements for and formally accepts delivery of a new or modified hardware/software product and its documentation. The customer may be internal or external to the parent organization of the project and does not necessarily imply a financial transaction between customer and developer.
cyclomatic complexity — A software measure that provides a quantitative measure of the logical complexity of a program.
data design — Design of a program’s data, especially table design in database applications.
deliverable document — In software system engineering, a document that is deliverable to a customer. Examples are users’ manuals, operator’s manuals, and programmer’s maintenance manuals.
design — The process of defining the architecture, components, interfaces, and other characteristics of a system or component.
documentation — A collection of documents on a given subject.
error — The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition ; also, a human action that produces an incorrect result.
functional decomposition — In software engineering, the partitioning of higher-level system functions into smaller and smaller pieces to render them more manageable and understandable.
functional design — The process of defining the working relationships among the components of a system.



human engineering — In system/software system engineering, a multidisciplinary activity that applies knowledge derived from psychology and technology to specify and design high quality human-machine interfaces. The human engineering process encompasses the following steps : activity analysis, semantic analysis and design, syntactic and lexical design, user environment design, and prototyping.
implementation — The process of translating a design into hardware components, software components, or both.
initiation and scope — A non-specific term for work performed early in a software project that includes high-level statements of requirements and rough estimates of project cost and schedule.
inspections — A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems.
maintenance — The process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment.
metric — A quantitative measure of the degree to which a system, component, or process possesses a given attribute.
object-oriented design — A software development technique in which a system or component is expressed in terms of objects and connections between those objects.
performance — The degree to which a system or component accomplishes its designated functions within given constraints, such as speed, accuracy, or memory usage.
performance analysis techniques and tools — Techniques and tools that are used to measure and evaluate the performance of a software system.
process — A sequence of steps performed for a given purpose, for example the software development process ; an executable unit managed by an operating system scheduler ; or, as a verb, to perform operations on data.
process definition (Software Engineering Process) — Identification of a sequence of steps involving activities, constraints, and resources that are performed for a given purpose.
process management (Software Engineering Process) — The direction, control, and coordination of work performed to develop a product or perform a service.
process model (Software Engineering Process) — A development strategy, incorporated by a software engineer or team of engineers, that encompasses process, methods, and tools.
product attributes — Characteristics of a software product. Can refer either to general characteristics such as reliability, maintainability, and usability, or to specific features of a software product.



product measure – A metric (e.g., measures per person-month) that can be used to measure the characteristics of delivered documents and software.
project management – A system of procedures, practices, technologies, and know-how that provides the planning, organizing, staffing, directing, and controlling necessary to successfully manage an engineering project.
project plan – A document that describes the technical and management approach to be followed for a project.
quality assurance – A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.
quality attribute – A feature or characteristic that affects an item's quality. In a hierarchy of quality attributes, higher-level attributes may be called quality factors, and lower-level attributes may be called quality attributes.
quality management (Software Engineering Management – Planning) – That aspect of the overall management function that determines and implements the quality policy.
requirement – A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
requirements elicitation (software requirements) – The process through which the customers (buyers and/or users) and the developer (contractor) of a software system discover, review, articulate, and understand the requirements of the system.
requirements engineering – A method of obtaining a precise, formal specification from the informal and often vague requirements of a customer.
reviews – A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.
risk management (Software Engineering Management) – In system/software engineering, an "umbrella" title for the processes used to manage risk. It is an organized means of identifying and measuring risk (risk assessment) and developing, selecting, and managing options (risk analysis) for resolving (risk handling) these risks. The primary goal of risk management is to identify and respond to potential problems with sufficient lead time to avoid a crisis situation.
schedule and cost estimates (Software Engineering Management – Planning) – The management activity of determining the probable cost of an activity or product and the time to complete the activity or deliver the product.
software – Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system. Contrast with hardware.
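As a concrete illustration of the risk assessment described above, risks are often ranked by exposure (probability multiplied by impact) so the highest-exposure items receive lead time first. The following is a minimal sketch, not from the text; the risk list, probabilities, and loss scores are entirely hypothetical.

```python
# Illustrative sketch: rank hypothetical project risks by exposure
# (probability x impact), a common technique in risk assessment.

risks = [
    ("key developer leaves", 0.3, 8),        # (description, probability, loss on a 1-10 scale)
    ("requirements change late", 0.6, 5),
    ("third-party library delayed", 0.2, 4),
]

# Sort risks so that the highest-exposure items come first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, prob, loss in ranked:
    print(f"{name}: exposure = {prob * loss:.2f}")
```

With these hypothetical figures, "requirements change late" tops the list (exposure 3.00), illustrating why a moderately likely, moderately damaging risk can outrank a rarer but more severe one.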



software configuration management – In system/software engineering, a discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item; control changes to those characteristics; record and report change processing and implementation status; and verify compliance with specified requirements.
software configuration status accounting (software configuration management) – An element of configuration management consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes.
software design – In software engineering, the process of defining the software architecture (structure), components, modules, interfaces, test approach, and data for a software system to satisfy specified requirements.
software development methodology – In software engineering, an integrated set of software engineering methods, policies, procedures, rules, standards, techniques, tools, languages, and other methodologies for analyzing, designing, implementing, and testing software.
software engineering process – The total set of software engineering activities needed to transform a user's requirements into software.
software engineering project management (SEPM) – A system of procedures, practices, technologies, and know-how that provides the planning, organizing, staffing, directing, and controlling necessary to successfully manage a software engineering project.
software metric – A quantitative measure of the degree to which a system, component, or process possesses a given attribute.
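To make the software metric definition above concrete, a widely used example is defect density, the number of recorded defects per thousand lines of code (KLOC). The sketch below is illustrative only and not from the text; the function name and figures are hypothetical.

```python
# Illustrative sketch: defect density, a common software metric,
# expressed as defects per thousand lines of code (KLOC).

def defect_density(defects_found, lines_of_code):
    """Return defects per KLOC. Inputs here are hypothetical examples."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects_found / (lines_of_code / 1000.0)

# A module of 12,000 lines with 30 recorded defects:
print(defect_density(30, 12000))  # 2.5 defects per KLOC
```

A falling defect density across releases is one quantitative signal (among many) that a process or product attribute is improving.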
software quality – In software engineering, the totality of features and characteristics of a software product that affect its ability to satisfy given needs (for example, to conform to specifications).
software requirements analysis – The process of studying user needs to arrive at a definition of system, hardware, or software requirements.
software requirements specification (software requirements) – A document that specifies the requirements for a system or component.
SQA (software quality assurance) – A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements.
system integration – In system/software engineering, the activity that ensures that the various segments and elements of the total system can interface with each other and with the external environment.
task – A well-defined work assignment for one or more project members.



test coverage (software testing) – The degree to which a given test or set of tests addresses all specified requirements for a given system or component.
test coverage of code (software testing) – The amount of code actually executed during the test process.
test design (software testing) – Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests.
test documentation (software testing) – Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, and test report.
test execution (software testing) – The act of performing one or more test cases.
testing strategies (software testing) – In software engineering, one of a number of approaches used for testing software.
user interface – An interface that enables information to be passed between a human user and the hardware or software components of a computer system.
V&V (verification and validation) – The process of determining whether the requirements for a system or component are complete and correct, whether the products of each development phase fulfill the requirements or conditions imposed by the previous phase, and whether the final system or component complies with specified requirements.
validation – The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
verification – The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
version – An initial release or re-release of a computer software configuration item, associated with a complete compilation or recompilation of the computer software configuration item.
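The "test coverage of code" entry above can be illustrated with statement coverage: the fraction of executable statements exercised by a test run. The following is a minimal sketch, not from the text; the line-number sets are hypothetical stand-ins for data a coverage tool would collect.

```python
# Illustrative sketch: statement coverage as the fraction of executable
# statements that a test run actually exercised.

def statement_coverage(executed_lines, executable_lines):
    """Coverage = |executed intersect executable| / |executable|."""
    if not executable_lines:
        return 0.0
    covered = executed_lines & executable_lines
    return len(covered) / len(executable_lines)

executable = {1, 2, 3, 5, 8, 9}   # executable statements in the unit under test
executed = {1, 2, 3, 5}           # statements hit by the test cases
print(f"{statement_coverage(executed, executable):.0%}")  # 67%
```

Note the distinction the glossary draws: this measures coverage of code, whereas "test coverage" proper asks how completely the tests address the specified requirements.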

Reference Books

1. Roger S. Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill, 4th ed., 1997.
2. Edward Kit, Software Testing in the Real World, Addison-Wesley, 1st ed., 2000.
3. Ian Sommerville, Software Engineering, Addison-Wesley, 5th ed., 1996.
4. Pankaj Jalote, An Integrated Approach to Software Engineering, Narosa, 2nd ed., 1997.
5. Shari Lawrence Pfleeger, Software Engineering: Theory and Practice, Pearson Education, 2nd ed., 2001.
6. Richard Fairley, Software Engineering Concepts, McGraw-Hill, 1985.
7. Barry W. Boehm, Software Engineering Economics, Prentice-Hall, 1981.
