
BCA – VI Semester

7BCA6C3 – Software Engineering

Syllabus

Unit I

Introduction: Introduction to software engineering – some definitions – some size factors – quality and productivity factors – managerial issues

Planning a software project: Defining the problem – developing a solution strategy – planning the development process – planning an organizational structure – other planning activities

Unit II
Software Cost Estimation: software cost factors – software cost estimation
techniques – estimating software maintenance costs

Software Requirements Definition: The software requirements specification – formal specification techniques

Unit III
Software Design: Fundamental design concepts – modules and modularization criteria
– design notations – design techniques – detailed design considerations – real-time and
distributed system design – test plan – milestones, walkthroughs and inspections –
design guidelines

Software Implementation: Structured coding techniques – coding style – standards and guidelines

Unit IV
Software Testing: A Strategic approach to software testing – strategic issues – unit
testing –integration testing – validation testing – system testing – the art of debugging

Software Maintenance: Enhancing maintainability during development – managerial aspects of software engineering – configuration management – source code metrics – other maintenance tools and techniques

Unit V
Software Quality Assurance: Quality concepts – software quality assurance – software
reviews – formal technical reviews – statistical quality assurance – the SQA plan – the
ISO 9000 quality standards

UNIT – I

INTRODUCTION TO SOFTWARE ENGINEERING

Definition: Software Engineering


Software engineering is the technological and managerial discipline concerned
with systematic production and maintenance of software products that are developed
and modified on time and within cost estimates.

The primary goals of software engineering are to improve the quality of software products and to increase the productivity and job satisfaction of software engineers.

SOME DEFINITIONS

The term “programmer” is used to denote an individual who is concerned with the details of implementing, packaging, and modifying algorithms and data structures written in particular programming languages.

Software engineers are additionally concerned with issues of analysis, design, verification and testing, documentation, software maintenance, and project management.

The term “computer software” is often taken to be synonymous with “computer program” or “source code”.

Computer software includes the source code and all the associated documents
and documentation that constitute a software product. Requirements documents,
design specifications, source code, test plans, principles of operation, quality
assurance procedures, software problem reports, maintenance procedures, user’s
manuals, installation instructions, and training aids are all components of a software
product.

“Documentation” explains the characteristics of a document. Internal documentation of source code describes the characteristics of the code, and external documentation explains the characteristics of the documents associated with the code.

The term “customer” is used to denote an individual or organization that initiates procurement or modification of a software product.

Software quality is a primary concern of software engineers.

There are fundamental quality attributes that every software product should possess: usefulness, clarity, reliability, efficiency, and cost-effectiveness.

Software reliability is “the ability of a program to perform a required function under stated conditions for a stated period of time”.
SOME SIZE FACTORS

The level of effort devoted to software development and maintenance is discussed; the distribution of effort among activities is presented; and size categories for software projects are described.

PROJECT SIZE CATEGORIES

Project size is a major factor that determines the level of management control
and the types of tools and techniques required on a software project. The following
categories give some indication of software project size.

Trivial projects

A trivial software project involves one programmer working, perhaps part-time, for a few days or a few weeks, and results in a program of less than 500 statements, packaged in 10 to 20 subroutines.

Small projects

A small project employs one programmer for 1 to 6 months and results in a product containing 1000 to 2000 lines of source code, packaged in perhaps 25 to 50 routines. Small programs usually have no interactions with other programs.

Size categories for software products

Category          Number of programmers   Duration     Product size

Trivial           1                       1-4 wks.     500 source lines
Small             1                       1-6 mos.     1K-2K
Medium            2-5                     1-2 yrs.     5K-50K
Large             5-20                    2-3 yrs.     50K-100K
Very large        100-1000                4-5 yrs.     1M
Extremely large   2000-5000               5-10 yrs.    1M-10M

Examples of such programs include scientific applications and commercial applications.

Medium-size projects
A project of medium size requires two to five programmers working for 1 to 2 years and results in 10,000 to 50,000 lines of source code, packaged in 250 to 1000 routines.

Large projects

A large project requires 5 to 20 programmers for a period of 2 to 3 years and results in a system of 50,000 to 100,000 source statements, packaged in several subsystems. A large program often has significant interactions with other programs and software systems.

Examples of large programs include large compilers, small time-sharing systems, and data-base packages.

Very large projects

A very large project requires 100 to 1000 programmers for a period of 4 to 5 years and results in a software system of 1 million source instructions. A very large system generally consists of several major subsystems, each of which forms a large system.

Very large systems often involve real-time processing, telecommunications, and multitasking.

Extremely large projects

An extremely large project employs 2000 to 5000 programmers for periods of up to 10 years and results in 1 million to 10 million lines of source code. Extremely large systems consist of several very large subsystems and often involve real-time processing, telecommunications, multitasking, and distributed processing.

Examples of extremely large systems include air traffic control, ballistic missile
defense, and military command and control systems.

QUALITY AND PRODUCTIVITY FACTORS

Development and maintenance of software products are complex tasks. Those who have written only small programs for personal use or homework exercises may find it difficult to understand the importance of systematic activities.

Software quality and programmer productivity can be improved by improving the processes used to develop and maintain software products. Some factors that influence quality and productivity are listed in the following table.

Factors that influence quality and productivity

Individual ability          Problem understanding
Team communication          Stability of requirements
Product complexity          Required skills
Appropriate notations       Facilities and resources
Systematic approaches       Adequacy of training
Change control              Management skills
Level of technology         Appropriate goals
Required reliability        Rising expectations
Available time

Individual ability

Production and maintenance of software products are labor-intensive activities. Productivity and quality are thus direct functions of individual ability and effort.

Team communication

Programming has traditionally been regarded as an individual and private activity. Many programmers have low social needs and prefer to work alone (COU78). Programs are seldom perceived as public documents, and programmers seldom discuss the exact details of their work in a systematic manner.

Many of the recent innovations in software engineering, such as design reviews, structured walkthroughs, and code-reading exercises, have the goals of making software more visible and improving communication among programmers.

Product complexity

There are three generally acknowledged levels of product complexity: applications programs, utility programs, and system-level programs. Applications programs include scientific and data processing routines written in a high-level language such as FORTRAN, Pascal, or COBOL. Utility programs include compilers, assemblers, linkage editors, and loaders.

Systems programs include data communication packages, real-time process control systems, and operating system routines, which are usually written in assembly language or a high-level systems language such as PL/1 or Ada.

Appropriate notations

Good notations can clarify the relationships and interactions of interest, while poor notations complicate and interfere with good practice. Programming languages provide concise notations for the implementation phase of software development, but there are no widely accepted notations for stating functional requirements, design specifications, test plans, or performance criteria.

Systematic approaches

Various systematic approaches to software development and maintenance are discussed in this text, but because the field of software engineering is in its infancy, many of our procedures and techniques have not yet matured.

At this point in the evolution of software engineering it is often not clear which of the
various approaches to software development should be used in which situations.

Change control

In a very real sense, software adapts general-purpose computing hardware to particular applications. Sometimes software must compensate for design deficiencies in the hardware, and software is often tailored to satisfy differing requirements of different customers. The flexibility of software is a great strength and also a great source of difficulty in software engineering. Because source code is easily modified, it is not uncommon for difficult issues to be resolved in software, or for the customer to request, and the project manager to agree to, seemingly minor changes to the source code.

Requirements can also change due to poor understanding of the problem, or to external economic and political factors beyond the control of customer or developer.

Level of Technology

The level of technology utilized on a software project accounts for such factors as the programming language, the machine environment, the programming practices, and the software tools.

Level of Reliability

Every software product must possess a basic level of reliability; however, extreme reliability is gained only with great care in analysis, design, implementation, system testing, and maintenance of the software product.

Problem understanding

Failure to understand the true nature of the problem to be solved is a common and difficult issue. Several factors contribute to this lack of understanding.

Most customers (most people in general) are not trained to think in logical, algorithmic terms, and often do not understand the true nature of their needs.

Sometimes the customer is not the end user of the system, and the software
engineer has no opportunity to investigate the user’s problem.

Available time

It might appear that a software project requiring six programmer-months of effort can be completed by one programmer in 6 months or by six programmers in 1 month; in practice, software projects are sensitive not only to total effort but also to elapsed time and the number of people involved.

Required skills

The practice of software engineering requires a vast range of skills. Extracting information from customers in order to establish user needs demands good communication skills, tact, and diplomacy, as well as knowledge of the application area.

Facilities and resources

Studies of factors that motivate programmers have shown that work-related factors, such as good machine access and a quiet place to work, are more important to the typical programmer than status-related factors.

Adequacy of training

Product implementation is only one aspect of software engineering, yet this is the only phase of product development and maintenance taught in many educational institutions.

Boehm has described the skills most lacking in entry-level programmers (BOE76b). His list includes the inability to:

1. Express oneself clearly in English.
2. Develop and validate software requirements and design specifications.
3. Work within an application area.
4. Perform software maintenance.
5. Perform economic analyses.
6. Work with project management techniques.
7. Work in groups.

Management skills

Software projects are often supervised by managers who have little, if any,
knowledge of software engineering. Many of the problems in software project
management are unique; even managers experienced in management of computer
hardware projects find software project management to be difficult due to the
differences in design methods, notations, development tools, and issues of concern.

Appropriate goals

The primary goal of software engineering is development of software products that are appropriate for their intended use. Ideally, every software product should provide optimal levels of generality, efficiency, and reliability.

Rising expectations

The most pervasive problem in software engineering is that of rising expectations.


There are two interrelated aspects to rising expectations: first is the concern for how
much functionality, reliability, and performance can be provided by a given amount of
development effort; and second is the issue of fundamental limitations of software
technology.

Other factors

There are many other factors that influence programmer productivity, including familiarity with, access to, and stability of the computing system used to develop or modify the software; the memory and timing constraints for the software product; experience with the programming language; and data-base size.

MANAGERIAL ISSUES

Technical and managerial activities are equally important to the success of a software project. Managers control the resources and the environment in which technical activities occur. Managers also have ultimate responsibility for ensuring that software products are delivered on time and within cost estimates.

Some of the problems identified by respondents as important management problems were:

1. Planning for software engineering projects is generally poor.


2. Procedures and techniques for the selection of project managers are poor.
3. The accountability of many software engineering projects is poor, leaving
some question as to who is responsible for various project functions.
4. The ability to accurately estimate the resources required to accomplish a
software development project is poor.
5. Success criteria for software development projects are frequently
inappropriate. This results in software products that are unreliable, difficult to
use, and difficult to maintain.
6. Decision rules to aid in selecting the proper organizational structure are not
available.
7. Decision rules to aid in selecting the correct management techniques for
software engineering projects are not available.
8. Procedures, methods, and techniques for designing a project control system
that will enable project managers to successfully control their project are not
readily available.
9. Procedures, techniques, strategies, and aids that will provide visibility of
progress to the project manager are not available.
10. Standards and techniques for measuring the quality of performance and the quantity of production expected from programmers and data processing analysts are not available.

Some of the methods mentioned for solving these problems were (solutions are
not paired with problems):

1. Educate and train top management, project managers, and software developers.
2. Enforce the use of standards, procedures, and documentation.
3. Analyse data from prior software projects to determine effective methods.
4. Define objectives in terms of quality desired.
5. Define quality in terms of deliverables.
6. Establish success priority criteria.
7. Allow for contingencies.
8. Develop truthful, accurate cost and schedule estimates that are accepted by
management and customer, and manage to them.
9. Select project managers based on ability to manage software projects, rather
than on technical ability or availability.
10. Make specific work assignments to software developers and apply job
performance standards.

PLANNING A SOFTWARE PROJECT

Defining the problem

1. Develop a definitive statement of the problem to be solved, including a description of the present situation, problem constraints, and a statement of the goals to be achieved. The problem statement should be phrased in the customer’s terminology.
2. Justify a computerized solution strategy for the problem.
3. Identify the functions to be provided by, and the constraints on, the hardware
subsystem, the software subsystem, and the people subsystem.
4. Determine system-level goals and requirements for the development process
and the work products.
5. Establish high-level acceptance criteria for the system.

Developing a solution strategy

6. Outline several solution strategies, without regard for constraints.


7. Conduct a feasibility study for each strategy.
8. Recommend a solution strategy, indicating why other strategies were rejected.
9. Develop a list of priorities for product characteristics.

Planning the development process

10. Define a life-cycle model and an organizational structure for the project.
11. Plan the configuration management, quality assurance, and validation
activities.
12. Determine phase-dependent tools, techniques, and notations to be used.
13. Establish preliminary cost estimates for system development.
14. Establish a preliminary development schedule.
15. Develop preliminary staffing estimates.
16. Develop preliminary estimates of the computing resources required to operate
and maintain the system.
17. Prepare a glossary of terms.
18. Identify sources of information, and refer to them throughout the project plan.

The first step in planning a software project is to prepare, in the customer’s terminology, a concise statement of the problem to be solved and the constraints that exist for its solution. The definitive problem statement should include a description of the present situation and the goals to be achieved by the new system.

The second step in planning a software project is to determine the appropriateness of a computerized solution. Software characteristics to be considered include performance, accuracy, and level of reliability.

Goals and Requirements

Goals are targets for achievement, and serve to establish the framework for a
software development project.

Goals can be either qualitative or quantitative:

A qualitative process goal: the development process should enhance the professional skills of quality assurance personnel.

A quantitative process goal: the system should be delivered within 12 months.

A qualitative product goal: the system should make users’ jobs more
interesting.

A quantitative product goal: the system should reduce the cost of a transaction by 25 percent.

Planning the Development Process


Planning the software development process involves several important
considerations. The first consideration is to define a product life cycle model.

The software life cycle encompasses all activities required to define, develop, test, deliver, operate, and maintain a software product.

SDLC Overview
SDLC, the Software Development Life Cycle, is a process used by the software industry to design, develop, and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
 SDLC is the acronym of Software Development Life Cycle.
 It is also called the software development process.
 The software development life cycle (SDLC) is a framework defining tasks
performed at each step in the software development process.

A typical Software Development life cycle consists of the following stages:

 Stage 1: Planning and Requirement Analysis


 Stage 2: Defining Requirements
 Stage 3: Designing the product architecture
 Stage 4: Building or Developing the Product
 Stage 5: Testing the Product
 Stage 6: Deployment in the Market and Maintenance

SDLC Models
There are various software development life cycle models defined and designed which are followed during the software development process. These models are also referred to as "Software Development Process Models". Each process model follows a series of steps unique to its type, in order to ensure success in the process of software development.

Following are the most important and popular SDLC models followed in the industry:

 Waterfall Model
 Iterative Model
 Spiral Model
 V-Model
 Big Bang Model

Other related methodologies include the Agile Model, the RAD (Rapid Application Development) Model, and Prototyping Models.

SDLC Waterfall Model



The sequential phases in Waterfall model are:

 Requirement Gathering and analysis: All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.

 System Design: The requirement specifications from the first phase are studied in this phase and the system design is prepared. System Design helps in specifying hardware and system requirements and also helps in defining the overall system architecture.
 Implementation: With inputs from system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality which is referred
to as Unit Testing.
 Integration and Testing: All the units developed in the implementation phase
are integrated into a system after testing of each unit. Post integration the
entire system is tested for any faults and failures.
 Deployment of system: Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
 Maintenance: There are some issues which come up in the client
environment. To fix those issues patches are released. Also to enhance the
product some better versions are released. Maintenance is done to deliver
these changes in the customer environment.

All these phases are cascaded to each other, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases. The next phase is started only after the defined set of goals is achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model phases do not overlap.

SDLC Iterative Model

This model is most often used in the following scenarios:

 Requirements of the complete system are clearly defined and understood.


 Major requirements must be defined; however, some functionalities or
requested enhancements may evolve with time.
 There is a time-to-market constraint.
 A new technology is being used and is being learnt by the development team
while working on the project.
 Resources with needed skill set are not available and are planned to be used on
contract basis for specific iterations.
 There are some high risk features and goals which may change in the future.

SDLC Spiral Model


The spiral model has four phases. A software project repeatedly passes through
these phases in iterations called Spirals.

 Identification: This phase starts with gathering the business requirements in the baseline spiral. In the subsequent spirals, as the product matures, identification of system requirements, subsystem requirements, and unit requirements is also done in this phase.

This also includes understanding the system requirements by continuous communication between the customer and the system analyst. At the end of the spiral the product is deployed in the identified market.

 Design: Design phase starts with the conceptual design in the baseline spiral
and involves architectural design, logical design of modules, physical product
design and final design in the subsequent spirals.
 Construct or Build: Construct phase refers to production of the actual
software product at every spiral. In the baseline spiral when the product is just
thought of and the design is being developed a POC (Proof of Concept) is
developed in this phase to get customer feedback.

Then in the subsequent spirals with higher clarity on requirements and design
details a working model of the software called build is produced with a
version number. These builds are sent to customer for feedback.
 Evaluation and Risk Analysis: Risk Analysis includes identifying,
estimating, and monitoring technical feasibility and management risks, such as
schedule slippage and cost overrun. After testing the build, at the end of first
iteration, the customer evaluates the software and provides feedback.

V Model
The V-model is an SDLC model where execution of processes happens in a sequential manner in a V-shape. It is also known as the Verification and Validation model.

The V-Model is an extension of the waterfall model and is based on the association of a testing phase with each corresponding development stage. This means that for every single phase in the development cycle there is a directly associated testing phase. This is a highly disciplined model, and the next phase starts only after completion of the previous phase.


SDLC Big Bang Model


The Big Bang model is an SDLC model in which no specific process is followed. Development simply starts with the required money and effort as input, and the output is the software developed, which may or may not be as per customer requirements.

The Big Bang Model is an SDLC model where no formal development process is followed and very little planning is required. Even the customer is not sure about what exactly he wants, and the requirements are implemented on the fly without much analysis.

Usually this model is followed for small projects where the development
teams are very small.

Agile Model
The agile thought process started early in software development and became popular over time due to its flexibility and adaptability.
The most popular agile methods include Rational Unified Process (1994),
Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software
Development, Feature Driven Development, and Dynamic Systems Development
Method (DSDM) (1995). These are now collectively referred to as agile
methodologies, after the Agile Manifesto was published in 2001.

Following are the Agile Manifesto principles

 Individuals and interactions: In agile development, self-organization and motivation are important, as are interactions like co-location and pair programming.
 Working software: Demo working software is considered the best means of communication with the customer to understand their requirements, instead of just depending on documentation.
 Customer collaboration: As the requirements cannot be gathered completely at the beginning of the project due to various factors, continuous customer interaction is very important to get proper product requirements.
 Responding to change: Agile development is focused on quick responses to change and continuous development.

RAD Model
Following are the typical scenarios where RAD can be used:

 RAD should be used only when a system can be modularized to be delivered in an incremental manner.

 It should be used if there is high availability of designers for modeling.

 It should be used only if the budget permits the use of automated code generating tools.

 The RAD SDLC model should be chosen only if domain experts are available with relevant business knowledge.

 It should be used where the requirements change during the course of the project and working prototypes are to be presented to the customer in small iterations of 2-3 months.

Software Prototyping
Software prototyping refers to building software application prototypes which display the functionality of the product under development but may not actually hold the exact logic of the original software.

Software prototyping is becoming very popular as a software development model, as it enables developers to understand customer requirements at an early stage of development. It helps get valuable feedback from the customer and helps software designers and developers understand what exactly is expected from the product under development.
Following is the stepwise approach to design a software prototype:

 Basic Requirement Identification: This step involves understanding the very basic product requirements, especially in terms of the user interface. The more intricate details of the internal design and external aspects like performance and security can be ignored at this stage.
 Developing the initial Prototype: The initial Prototype is developed in this
stage, where the very basic requirements are showcased and user interfaces are
provided. These features may not exactly work in the same manner internally
in the actual software developed and the workarounds are used to give the
same look and feel to the customer in the prototype developed.
 Review of the Prototype: The prototype developed is then presented to the
customer and the other important stakeholders in the project. The feedback is
collected in an organized manner and used for further enhancements in the
product under development.
 Revise and enhance the Prototype: The feedback and the review comments are discussed during this stage, and some negotiations happen with the customer based on factors like time and budget constraints and the technical feasibility of the actual implementation. The changes accepted are again incorporated in the new prototype developed, and the cycle repeats until customer expectations are met.

A Software Requirements Review is held to ensure the adequacy of the System Definition, the Project Plan, the Software Requirements Specification, the Software Verification Plan, and the preliminary User’s Manual.

The Cost Model

Another view of the software life cycle can be obtained by considering the
cost of performing the various activities in a software project. The cost of conducting
a software project is the sum of the costs incurred in conducting each phase of the
project.

PLANNING AN ORGANIZATIONAL STRUCTURE

During the lifetime of a software product, various tasks must be performed. The tasks include planning, product development, services, publications, quality assurance, support, and maintenance.

The planning task identifies external customers and internal product needs,
conducts feasibility studies, and monitors progress from beginning to end of the
product life cycle.

Programming Team Structure

Every programming team must have an internal structure. The best team
structure for any particular project depends on the nature of the project and the
product, and on the characteristics of the individual team members.
Basic team structures include the democratic team, in which all team members participate in all decisions; the chief programmer team, in which a chief programmer is assisted and supported by other team members; and the hierarchical team, which combines aspects of the democratic team and the chief programmer team. Variations on these basic forms are also possible.

Democratic team (or egoless team): Goals are set and decisions are made by group consensus.

Chief programmer team: This team is highly structured; the chief programmer makes all major decisions.

Hierarchical team: The project leader assigns tasks, attends reviews and walkthroughs, detects problem areas, balances the workload, and participates in technical activities.

Management by objectives (MBO)

An effective technique for assigning work activities is the development of written job descriptions for each project member. This reduces misunderstandings and false assumptions on the part of both manager and employee.

Management by objectives (MBO) is a related technique that is increasingly popular in software organizations. Using MBO, employees set their own goals and objectives with the help of their supervisor, participate in the setting of their supervisor’s goals, and are evaluated on meeting concrete, written objectives.

Management by objectives can be applied at all levels of an organization, and can include both product-oriented objectives, such as completion of design or testing, and self-improvement objectives, such as skills to acquire and preparations for advancement.

OTHER PLANNING ACTIVITIES

Other planning activities include planning the configuration management and quality assurance functions, planning for independent validation and verification, and planning phase-dependent tools and techniques.

Planning for Configuration Management and Quality Assurance

Configuration management is concerned with controlling changes in work products, accounting for the status of work products, and maintaining the program support library, which is the central repository for all project information.
Planning for Independent Verification and Validation

An independent organization may provide verification and validation of work products. Verification ensures that various work products are complete and consistent with respect to other work products and customer needs.

Validation is concerned with assessing the quality of a software system in its actual
operating environment. Validation typically involves planning and execution of test
cases.

Planning Phase-Dependent Tools and Techniques

Automated tools, specialized notations, and modern techniques are often used
to develop software requirement specifications, architectural and detailed designs, and
the source code.

Other Planning Activities

Other planning activities include preparing preliminary cost estimates for product development, establishing a preliminary development schedule, establishing preliminary staffing levels, and developing preliminary estimates of the computing resources and personnel required to operate and maintain the system.

UNIT – II

SOFTWARE COST ESTIMATION

Estimating the cost of a software product is one of the most difficult and error-prone tasks in software engineering. It is difficult to make an accurate cost estimate
during the planning phase of software development, because of the large number of
unknown factors at that time, yet contracting practices often require a firm cost
commitment as part of the feasibility study.

SOFTWARE COST FACTORS

Major factors that influence software cost

Programmer ability
Product complexity
Product size
Available time
Required reliability
Level of technology

There are many factors that influence the cost of a software product. The
effects of most of these factors, and hence the cost of a development or maintenance
effort, are difficult to estimate.

Programmer Ability

An early experiment set out to determine the relative influence of batch and time-shared access on programmer productivity. Twelve experienced programmers were each given two programming problems to solve, some using batch facilities and some using time-sharing. The resulting differences in individual performance among the programmers were much greater than could be attributed to the relatively small effect of batch or time-shared machine access.

Product Complexity

There are three generally acknowledged categories of software products: applications programs, which include data processing and scientific programs; utility programs, such as compilers, linkage editors, and inventory systems; and system-level programs, such as data-base management systems, operating systems, and real-time systems. Applications programs are developed in the environment provided by a language compiler such as FORTRAN or Pascal.

Boehm uses three levels of product complexity and provides equations to predict total programmer-months of effort, PM, in terms of the number of thousands of delivered source instructions, KDSI, in the product.

Programmer cost for a software project can be obtained by multiplying the
effort in programmer months by the cost per programmer-month. The equations were
derived by examining historical data from a large number of actual projects.

These levels roughly correspond to applications programs, utility programs, and systems programs:

Applications programs:  PM = 2.4 * (KDSI)**1.05
Utility programs:       PM = 3.0 * (KDSI)**1.12
Systems programs:       PM = 3.6 * (KDSI)**1.20

The development time for a program, as given by Boehm, is:

Applications programs:  TDEV = 2.5 * (PM)**0.38
Utility programs:       TDEV = 2.5 * (PM)**0.35
Systems programs:       TDEV = 2.5 * (PM)**0.32
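
As an illustrative sketch (not part of the original text), the following Python snippet applies the nominal equations above to a hypothetical 32 KDSI utility program; the coefficients are exactly those quoted above, while the product size is an assumed example value.

```python
# Nominal COCOMO effort and schedule, using the coefficients quoted above.
# KDSI = thousands of delivered source instructions (a hypothetical value here).

COEFFICIENTS = {
    "application": (2.4, 1.05, 0.38),  # (a, b, schedule exponent)
    "utility":     (3.0, 1.12, 0.35),
    "system":      (3.6, 1.20, 0.32),
}

def nominal_estimate(kdsi: float, category: str) -> tuple[float, float]:
    """Return (programmer-months, development time in months)."""
    a, b, c = COEFFICIENTS[category]
    pm = a * kdsi ** b        # PM = a * (KDSI)**b
    tdev = 2.5 * pm ** c      # TDEV = 2.5 * (PM)**c
    return pm, tdev

if __name__ == "__main__":
    pm, tdev = nominal_estimate(32, "utility")  # hypothetical 32 KDSI utility program
    print(f"Effort: {pm:.1f} programmer-months, schedule: {tdev:.1f} months")
```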

Product Size

A large software product is obviously more expensive to develop than a small one.

Available Time

Total project effort is sensitive to the calendar time available for project completion. Several investigators have studied the question of optimal development time, and most of them agree that software projects require more total effort if development time is compressed or expanded from the optimal time.

Project effort is inversely proportional to the fourth power of development time: E = k/(Td**4).
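
A hedged illustration of what this relation implies (the constant k is arbitrary and cancels out): compressing a nominal 12-month schedule to 10 months roughly doubles the total effort under this model.

```python
# E = k / Td**4 : effort rises steeply as the development time Td is compressed.
def effort(td_months: float, k: float = 1.0) -> float:
    return k / td_months ** 4

baseline = effort(12.0)      # nominal 12-month schedule (arbitrary k)
compressed = effort(10.0)    # the same project squeezed into 10 months
print(f"Effort ratio: {compressed / baseline:.2f}x")  # about 2.07x
```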

Required Level of Reliability

Software reliability can be defined as the probability that a program will perform a required function under stated conditions for a stated period of time. Reliability can be expressed in terms of accuracy, robustness, completeness, and consistency of the source code (these terms are defined in Table 2.2). Reliability characteristics can be built into a software product, but there is a cost associated with the increased level of analysis, design, implementation, and verification and validation effort that must be exerted to ensure high reliability.

Level of Technology

The level of technology in a software development project is reflected by the programming language, the abstract machine (hardware plus software), the programming practices, and the software tools used. It is well known that the number of source instructions written per day is largely independent of the language used, and that program statements written in high-level languages such as FORTRAN and Pascal typically expand into several machine-level statements.

Modern programming languages such as Ada provide additional features to improve programmer productivity and software reliability. These features include strong type-checking, data abstraction, separate compilation, exception handling, interrupt handling, and concurrency mechanisms.

Modern programming practices include the use of systematic analysis and design techniques, structured design notations, walkthroughs and inspections, structured coding, systematic testing, and a program development library.

SOFTWARE COST ESTIMATION TECHNIQUES

Software cost estimates are based on past performance. Historical data are
used to identify cost factors and determine the relative importance of various factors
within the environment of that organization.

Cost estimates can be made either top-down or bottom-up. Top-down estimation first focuses on system-level costs, such as the computing resources and personnel required to develop the system, as well as the costs of configuration management, quality assurance, system integration, training, and publications. Personnel costs are estimated by examining the cost of similar past projects.

Bottom-up cost estimation first estimates the cost to develop each module or
subsystem.

Expert Judgment

The most widely used cost estimation technique is expert judgment, which is
an inherently top-down estimation technique. Expert judgment relies on the
experience, background, and business sense of one or more key people in the
organization.

An expert might arrive at a cost estimate in the following manner:

The system to be developed is a process control system similar to one that was developed last year in 10 months at a cost of $1 million. We did not get rich on the project, but we did make a respectable profit; therefore, no adjustment to the baseline project is required. The new system has similar control functions, but has 25 percent more activities to control; thus, we will increase our time and cost estimates by 25 percent. The previous system was the first of its type that we developed; however, we will use the same computer and external sensing/controlling devices, and many of the same people are available to develop the new system, so we can reduce our estimate by 20 percent. Furthermore, we can reuse much of the low-level code from the previous product, which reduces the time and cost estimates by 25 percent. The net effect of these considerations is a time and cost reduction of 20 percent, which results in an estimate of $800,000 and 8 months development time. We know that the customer has budgeted $1 million and 1 year delivery time for the system. Therefore, we add a small margin of safety and bid the system at $850,000 and 9 months development time.
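
A minimal sketch of the arithmetic in the narrative above, assuming the percentage adjustments are combined additively against the baseline figures (which is how the stated 20 percent net reduction arises):

```python
# Additive adjustments to the $1,000,000 / 10-month baseline project.
baseline_cost, baseline_months = 1_000_000, 10

adjustments = [
    +0.25,  # 25% more activities to control
    -0.20,  # same hardware, devices, and experienced staff
    -0.25,  # reuse of low-level code from the previous product
]
net = sum(adjustments)                          # -0.20, i.e. a 20% net reduction

estimate_cost = baseline_cost * (1 + net)       # 800000.0
estimate_months = baseline_months * (1 + net)   # 8.0
print(estimate_cost, estimate_months)
```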

The biggest advantage of expert judgment, namely, experience, can also be a liability.

The major disadvantage of group estimation is the effect that interpersonal group dynamics may have on individuals in the group. Group members may be less than candid due to political considerations, the presence of authority figures in the group, or the dominance of an overly assertive group member. The Delphi technique can be used to overcome these disadvantages.

Delphi Cost Estimation


The Delphi technique was developed at the Rand Corporation in 1948 to gain
expert consensus without introducing the adverse side effects of group meetings
(HEL66). The Delphi technique can be adapted to software cost estimation in the
following manner.

1. A coordinator provides each estimator with the System Definition document and a form for recording a cost estimate.
2. Estimators study the definition and complete their estimates anonymously.
They may ask questions of the coordinator, but they do not discuss their
estimates with one another.
3. The coordinator prepares and distributes a summary of the estimators’
responses, and includes any unusual rationales noted by the estimators.
4. Estimators complete another estimate, again anonymously, using the results
from the previous estimate. Estimators whose estimates differ sharply from the
group may be asked, anonymously, to provide justification for their estimates.
5. The process is iterated for as many rounds as required. No group discussion is
allowed during the entire process.

The following approach is a variation on the standard Delphi technique that increases
communication while preserving anonymity:

1. The coordinator provides each estimator with a System Definition and an estimation form.
2. The estimators study the definition, and the coordinator calls a group meeting
so that estimators can discuss estimation issues with the coordinator and one
another.
3. Estimators complete their estimates anonymously.

4. The coordinator prepares a summary of the estimates, but does not record any
rationales.
5. The coordinator calls a group meeting to focus on issues where the estimates
vary widely.
6. Estimators complete another estimate, again anonymously. The process is
iterated for as many rounds as necessary.

It is possible that several rounds of estimates will not lead to a consensus estimate.
In this case, the coordinator must discuss the issues involved with each estimator to
determine the reasons for the differences. The coordinator may have to gather
additional information and present it to the estimators in order to resolve the
differences in viewpoint.

Work Breakdown Structures

The work breakdown structure method is a bottom-up estimation tool. A work breakdown structure (WBS) is a hierarchical chart that accounts for the individual parts of a system. A WBS chart can indicate either product hierarchy or process hierarchy.

Product hierarchy identifies the product components and indicates the manner
in which the components are interconnected. A WBS chart of process hierarchy
identifies the work activities and the relationships among those activities. Typical
product and process WBS charts are illustrated in Figures 3.5a and b. Using the WBS
technique, costs are estimated by assigning costs to each individual component in the
chart and summing the costs.
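
The following sketch (with hypothetical component names and figures) illustrates the bottom-up summation a WBS supports: costs are attached to the leaf components and rolled up through the hierarchy.

```python
# A WBS as a nested dictionary: interior nodes are sub-dictionaries,
# leaves carry an estimated cost. The total cost is the sum over all leaves.
wbs = {
    "product": {
        "user interface": 40_000,
        "database subsystem": {"schema design": 15_000, "query module": 25_000},
        "report generator": 30_000,
    }
}

def total_cost(node) -> int:
    if isinstance(node, dict):
        return sum(total_cost(child) for child in node.values())
    return node  # a leaf: its estimated cost

print(total_cost(wbs))  # 110000
```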

The primary advantages of the WBS technique are in identifying and accounting for various process and product factors, and in making explicit exactly which costs are included in the estimate.

Expert judgment, group consensus, and work breakdown structures are the
most widely used cost estimation techniques. Many organizations use all three
approaches and iterate on the estimates until differences have been resolved.

Algorithmic Cost Models

Algorithmic cost estimators compute the estimated cost of a software system as the sum of the costs of the modules and subsystems that comprise the system. Algorithmic models are thus bottom-up estimators.

The Constructive Cost Model (COCOMO) is an algorithmic cost model described by Boehm (BOE81). COCOMO is briefly summarized here.

When using COCOMO, the equations are used to provide nominal estimates of programmer-months and development schedule for a program unit, based on the estimated number of Delivered Source Instructions (DSI) in the unit. Effort multipliers are then used to adjust the estimate for product attributes, computer attributes, personnel attributes, and project attributes.

The COCOMO equations incorporate a number of assumptions. For example, the nominal organic mode (applications programs) equations apply in the following types of situations:

Small to medium-size projects (2K to 32K DSI)
Familiar applications area
Stable, well-understood virtual machine
In-house development effort

Effort multipliers are used to modify these assumptions.

The following activities are covered by the estimates:

Covers design through acceptance testing
Includes cost of documentation and reviews
Includes cost of project manager and program librarian

The effort estimators exclude planning and analysis costs, installation and
training costs, and the cost of secretaries, janitors, and computer operators. The DSI
estimate includes job control statements and source statements, but excludes
comments and unmodified utility routines. A DSI is considered to be one line or card
image, and a programmer-month consists of 152 programmer-hours.

Other assumptions concerning the nature of software projects estimated by COCOMO include the following:

 Careful definition and validation of requirements is performed by a small number of


capable people.
 The requirements remain stable throughout the project.
 Careful definition and validation of the architectural design is performed by a small number of capable people.
 Detailed design, coding, and unit testing are performed in parallel by groups of
programmers working in teams.
 Integration testing is based on early test planning.
 Interface errors are mostly found by unit testing and by inspections and walk-
throughs before integration testing.
 Documentation is performed incrementally as part of the development process.

ESTIMATING SOFTWARE MAINTENANCE COSTS

 Software maintenance typically requires 40 to 60 percent, and in some cases as much as 90 percent, of the total life-cycle effort devoted to a software product. Maintenance activities include adding enhancements to the product, adapting the product to new processing environments, and correcting problems.

 A widely used rule of thumb for the distribution of maintenance activities is
60 percent for enhancements, 20 percent for adaptation, and 20 percent for
error correction.

SOFTWARE REQUIREMENTS DEFINITION

The Software Requirements Specification is a technical specification of requirements for the software product. The goal of software requirements definition is to completely and consistently specify the technical requirements for the software product in a concise and unambiguous manner, using formal notations as appropriate.

THE SOFTWARE REQUIREMENTS SPECIFICATION

The format of a requirements specification document is presented in Table 4.1. As illustrated there, Sections 1 and 2 of the requirements document present an overview of product features and summarize the processing environments for development, operation, and maintenance of the product.

Section 3 specifies the externally observable characteristics of the software product. Items in Section 3 include user displays and report formats, a summary of user commands and report options, data flow diagrams, and a data dictionary. Specifications for the user interface displays and reports are refinements of information contained in the System Definition and the preliminary User’s Manual.

Data flow diagrams

Data flow diagrams specify data sources and data sinks, data stores,
transformations to be performed on the data, and the flow of data between sources,
sinks, transformations, and stores.

Data flow diagrams can be depicted informally, as illustrated in Figure 4.1, or by using special notation, as illustrated in Figure 4.3, where data sources and data sinks are depicted by shaded rectangles, transformations by ordinary rectangles, and data stores by open-ended rectangles. The arcs in a data flow diagram specify data flow; they are labelled with the names of data items whose characteristics are specified in the data dictionary.

Like flowcharts, data flow diagrams can be used at any level of detail. They
can be hierarchically decomposed by specifying the inner workings of the functional
nodes using additional data flow diagrams. Unlike flowcharts, data flow diagrams are
not concerned with decision structure or algorithmic details.
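
As an informal sketch (with hypothetical node names), a data flow diagram can be recorded as a set of typed nodes plus labelled data flows; this is merely one convenient machine-readable form, not a prescribed notation.

```python
# A data flow diagram as typed nodes plus labelled data flows (arcs).
nodes = {
    "Customer":       "source",
    "Validate order": "transformation",
    "Orders file":    "data store",
    "Shipping":       "sink",
}

# (from, to, data item) - each data item should also appear in the data dictionary.
flows = [
    ("Customer", "Validate order", "order"),
    ("Validate order", "Orders file", "valid order"),
    ("Orders file", "Shipping", "picking list"),
]

for src, dst, item in flows:
    print(f"{src} --[{item}]--> {dst}")
```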

Table: Format of a software requirements specification

Section 1: Product Overview and Summary

Section 2: Development, Operating, and Maintenance Environments

Section 3: External Interfaces and Data Flow

Section 4: Functional Requirements

Section 5: Performance Requirements

Section 6: Exception Handling

Section 7: Early Subsets and Implementation Priorities

Section 8: Foreseeable Modifications and Enhancements

Section 9: Acceptance Criteria

Section 10: Design Hints and Guidelines

Section 11: Cross-Reference Index

Section 12: Glossary of Terms

A data dictionary entry is illustrated in the following table. Entries in a data dictionary include the name of the data item, and attributes such as the data flow diagrams where it is used, its purpose, where it is derived from, its subitems, and any notes that may be appropriate. Each named data item on each data flow diagram should appear in the data dictionary.

A Data dictionary entry

NAME:          Create

WHERE USED:    SDLP

PURPOSE:       Create passes a user-created design entity to the SLP
               processor for verification of syntax.

DERIVED FROM:  User Interface Processor

SUBITEMS:      Name
               Uses
               Procedures
               References

NOTES:         Create contains one complete user-created design entity.

Section 4 of a Software Requirements Specification specifies the functional requirements for the software product.

Performance characteristics such as response time for various activities, processing time for various processes, throughput, primary and secondary memory constraints, required telecommunication bandwidth, and special items such as extraordinary security constraints or unusual reliability requirements are specified in Section 5 of the requirements document.

Exception handling, including the actions to be taken and the messages to be displayed in response to undesired situations or events, is described in Section 6 of the Software Requirements Specification.

Section 7 of the Software Requirements Specification specifies early subsets and implementation priorities for the system under development.

Foreseeable modifications and enhancements that may be incorporated into the product following initial product release are specified in Section 8 of the product requirements.

The software product acceptance criteria are specified in Section 9 of the requirements document.

Section 10 of the Software Requirements Specification contains design hints and guidelines.

Section 11 relates product requirements to the sources of information used in deriving the requirements.

Section 12 of the Software Requirements Specification provides definitions of terms that may be unfamiliar to the customer and the product developers.

Desirable properties

There are a number of desirable properties that a Software Requirements Specification should possess. In particular, a requirements document should be:

Correct
Complete
Consistent
Unambiguous
Functional
Verifiable
Traceable
Easily changed

FORMAL SPECIFICATION TECHNIQUES

Specifying the functional characteristics of a software product is one of the most important activities to be accomplished during requirements analysis. Formal notations have the advantage of being concise and unambiguous, they support formal reasoning about the functional specifications, and they provide a basis for verification of the resulting software product.

Both relational and state-oriented notations are used to specify the functional
characteristics of software. Relational notations are based on the concepts of entities
and attributes.

Relational notations include implicit equations, recurrence relations, algebraic axioms, and regular expressions. State-oriented notations include decision tables, event tables, transition tables, finite-state mechanisms, and Petri nets.

RELATIONAL NOTATIONS

Implicit equations

Implicit equations state the properties of a solution without stating a solution


method. One might, for example, specify matrix inversion as

MxM’ = I ± E

In the above Equation, matrix inversion is specified such that the matrix
product of the original matrix M and the inverse of M, M’, yields the identity matrix I
plus or minus the error matrix E, where E specifies allowable computational errors.
Complete specification of matrix inversion must include items such as matrix size,
type of data elements, and degree of sparseness. Given a complete functional
specification for matrix inversion, design involves specifying a data structure, an
algorithm for computing the inverse, and a packaging technique for the inversion
module.
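
A small sketch of how such an implicit specification can be checked rather than solved: given a candidate inverse, verify that M x M' differs from the identity by no more than the allowed error (here modelled as a simple tolerance; the matrices are hypothetical examples).

```python
# Check the implicit specification M x M' = I +/- E for a candidate inverse,
# without prescribing how that inverse was computed.
def satisfies_inverse_spec(m, m_inv, tolerance=1e-9):
    n = len(m)
    for i in range(n):
        for j in range(n):
            prod = sum(m[i][k] * m_inv[k][j] for k in range(n))
            expected = 1.0 if i == j else 0.0
            if abs(prod - expected) > tolerance:
                return False
    return True

M = [[4.0, 7.0], [2.0, 6.0]]
M_inv = [[0.6, -0.7], [-0.2, 0.4]]        # candidate inverse of M
print(satisfies_inverse_spec(M, M_inv))   # True
```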

Algebraic axioms

Mathematical systems are defined by axioms. The axioms specify fundamental properties of a system and provide a basis for deriving additional properties that are implied by the axioms. These additional properties are called theorems.
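
As an illustration (a standard textbook example, not taken from this text), the abstract data type "stack" can be specified by algebraic axioms such as the following, where NEW creates an empty stack:

```latex
% Algebraic axioms for a stack abstract data type:
\begin{align*}
\mathrm{EMPTY}(\mathrm{NEW})        &= \mathrm{true}  \\
\mathrm{EMPTY}(\mathrm{PUSH}(s, x)) &= \mathrm{false} \\
\mathrm{TOP}(\mathrm{PUSH}(s, x))   &= x              \\
\mathrm{POP}(\mathrm{PUSH}(s, x))   &= s
\end{align*}
```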

Regular expressions

Regular expressions can be used to specify the syntactic structure of symbol strings. Because many software products involve processing of symbol strings, regular expressions provide a powerful and widely used notation in software engineering.
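
For example, the following sketch uses Python's re module to state (and check) the syntactic structure of a simple identifier, namely a letter followed by zero or more letters or digits; the rule itself is an illustrative assumption, chosen only to show the notation in use.

```python
import re

# Regular expression specifying an identifier: a letter followed by
# zero or more letters or digits.
IDENTIFIER = re.compile(r"^[A-Za-z][A-Za-z0-9]*$")

print(bool(IDENTIFIER.match("count1")))   # True
print(bool(IDENTIFIER.match("1count")))   # False: must start with a letter
```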

STATE-ORIENTED NOTATIONS

Decision tables

Decision tables provide a mechanism for recording complex decision logic. Decision tables are widely used in data processing applications and have an extensively developed literature (POO74). As illustrated in Table 4.3, a decision table is segmented into four quadrants: condition stub, condition entry, action stub, and action entry.
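
A small sketch of the idea, using hypothetical conditions and actions: each rule pairs a combination of condition entries with the actions to be performed (None plays the role of a "don't care" entry).

```python
# A decision table as data: each rule maps a tuple of condition entries
# (True / False / None, where None means "don't care") to a list of actions.
rules = [
    # (credit_ok, amount_within_limit) -> actions
    ((True,  True),  ["approve order"]),
    ((True,  False), ["refer to supervisor"]),
    ((False, None),  ["reject order"]),
]

def decide(credit_ok: bool, amount_within_limit: bool) -> list[str]:
    for (c1, c2), actions in rules:
        if (c1 is None or c1 == credit_ok) and (c2 is None or c2 == amount_within_limit):
            return actions
    return []

print(decide(False, True))   # ['reject order']
```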

Event tables

Event tables specify actions to be taken when events occur under different sets of
conditions. A two-dimensional event table relates actions to two variables; f(M,E)=A,
where M denotes the current set of operating conditions, E is the event of interest, and
A is the action to be taken.

Table: A two-dimensional event table

                              Event
Mode          E13        E37        E45         ...
Start-up      A16        --         A14; A32
Steady        X          A6, A6     --
Shut-down     ...        ...        ...
Alarm         ...        ...        ...
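
A sketch of the f(M, E) = A idea using the entries above: the current mode and the event of interest together select the action(s) to be taken (the symbolic names are copied from the table; unlisted combinations simply yield no action here).

```python
# Event table as a mapping from (mode, event) to actions, mirroring the table above.
event_table = {
    ("Start-up", "E13"): ["A16"],
    ("Start-up", "E45"): ["A14", "A32"],
    ("Steady",   "E37"): ["A6", "A6"],   # entries as listed in the table above
}

def actions_for(mode: str, event: str) -> list[str]:
    return event_table.get((mode, event), [])   # empty list: no action defined

print(actions_for("Start-up", "E45"))   # ['A14', 'A32']
```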

Transition Tables

Transition tables are used to specify changes in the state of a system as a function of driving forces. The state of a system summarizes the status of all entities in the system at a particular time.

A simple transition table

                     Current input
Current state        a          b
S0                   S0         S1
S1                   S1         S0

(Table entries give the next state.)
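
The table above describes a simple two-state machine; the following sketch drives it with an input string to show how the next state is determined at each step.

```python
# Transition table from above: next_state[current_state][input_symbol]
next_state = {
    "S0": {"a": "S0", "b": "S1"},
    "S1": {"a": "S1", "b": "S0"},
}

def run(start: str, inputs: str) -> str:
    state = start
    for symbol in inputs:
        state = next_state[state][symbol]
    return state

print(run("S0", "abba"))   # S0  (each 'b' flips the state, 'a' leaves it unchanged)
```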

UNIT – III

SOFTWARE DESIGN


In software design, there are three distinct types of activities: external design, architectural design, and detailed design. Architectural and detailed design are collectively referred to as internal design.

External design of software involves conceiving, planning out, and specifying


the externally observable characteristics of a software product.

External design begins during the analysis phase and continues into the design
phase.

Internal design involves conceiving, planning out, and specifying the internal
structure and processing details of the software product.

FUNDAMENTAL DESIGN CONCEPTS

Every intellectual discipline is characterized by fundamental concepts and


specific techniques. Techniques are the manifestations of the concepts as they apply
to particular situations. Techniques come and go with changes in technology,
intellectual fads, economic conditions, and social concerns.

Fundamental concepts of software design include abstraction, structure, information
hiding, modularity, concurrency, verification, and design aesthetics.

Abstraction

Abstraction is the intellectual tool that allows us to deal with concepts apart
from particular instances of those concepts. During requirements definition and
design, abstraction permits separation of the conceptual aspects of a system from the
(yet to be specified) implementation details.

Three widely used abstraction mechanisms in software design are functional


abstraction, data abstraction, and control abstraction. These mechanisms allow us to
control the complexity of the design process by systematically proceeding from the
abstract to the concrete.

Functional abstraction involves the use of parameterized subprograms. The ability


to parameterize a subprogram and to bind different parameter values on different
invocations of the subprogram is a powerful abstraction mechanism.

Data abstraction involves specifying a data type or a data object by specifying legal
operations on objects; representation and manipulation details are suppressed.

Control abstraction is used to state a desired effect without stating the exact
mechanism of control.

Information Hiding

Information hiding is a fundamental design concept for software.

When a software system is designed using the information hiding approach,


each module in the system hides the internal details of its processing activities and
modules communicate only through well-defined interfaces.

In addition to hiding of difficult and changeable design decisions, other candidates


for information hiding include:

1. A data structure, its internal linkage, and the implementation details of the
procedures that manipulate it (this is the principle of data abstraction)
2. The format of control blocks such as those for queues in an operating system
(a “control-block” module)
3. Character codes, ordering of character sets, and other implementation details
4. Shifting, masking, and other machine dependent details.

Structure

Structure is a fundamental characteristic of computer software. The use of


structuring permits decomposition of a large system into smaller, more manageable
units with well-defined relationships to the other units in the system.

The most general form of system structure is the network. A computing


network can be represented as a directed graph, consisting of nodes and arcs. The
nodes can represent processing elements that transform data and the arcs can be used
to represent data links between nodes. Alternatively, the nodes can represent data
stores and the arcs data transformation.

(Figure: a system structure represented as a directed graph of nodes and arcs; the diagram is not reproduced in these notes.)

Modularity
Modular systems consist of well-defined, manageable units with well-defined
interfaces among the units. Desirable properties of a modular system include:

1. Each processing abstraction is a well-defined subsystem that is potentially


useful in other applications.
2. Each function in each abstraction has a single, well defined purpose.
3. Each function manipulates no more than one major data structure.
4. Functions share global data selectively. It is easy to identify all routines that
share a major data structure.
5. Functions that manipulate instances of abstract data types are encapsulated
with the data structure being manipulated.

Modularity enhances design clarity, which in turn eases implementation,


debugging, testing, documenting and maintenance of the software product.

Concurrency

Software systems can be categorized as sequential or concurrent. In a


sequential system, only one portion of the system is active at any given time.
Concurrent systems have independent processes that can be activated simultaneously
if multiple processors are available. On a single processor, concurrent processes can
be interleaved in execution time. This permits implementation of time-shared, multi-
programmed, and real-time systems.

Verification

Verification is a fundamental concept in software design. Design is the bridge


between customer requirements and an implementation that satisfies those
requirements. A design is verifiable if it can be demonstrated that the design will
result in an implementation that satisfies the customer’s requirements. This is
typically done in two steps: (1) verification that the software requirements definition
satisfies the customer’s needs (verification of the requirements); and (2) verification
that the design satisfies the requirements definition (verification of the design).

Aesthetics

Aesthetic considerations are fundamental to design, whether in art or


technology. Simplicity, elegance, and clarity of purpose distinguish products of
outstanding quality.

MODULES AND MODULARIZATION CRITERIA

Architectural design has the goal of producing well-structured, modular


software systems.

Software module to be a named entity having the following characteristics.

1. Modules contain instructions, processing logic, and data structures.


2. Modules can be separately compiled and stored in a library.
3. Modules can be included in a program.
4. Module segments can be used by invoking a name and some parameters.
5. Modules can use other modules.

Coupling and Cohesion


A fundamental goal of software design is to structure the software product so
that the number and complexity of interconnections between modules is minimized.
An appealing set of heuristics for achieving this goal involves the concepts of
coupling and cohesion.

Coupling between modules can be ranked on a scale of strongest (least desirable)


to weakest (most desirable) as follows:

1. Content coupling
2. Common coupling
3. Control coupling
4. Stamp coupling
5. Data coupling

Content coupling

Content coupling occurs when one module modifies local data values or
instructions in another module. Content coupling can occur in assembly language
programs.

Common coupling

Common coupling modules are bound together by global data structures.

Control coupling

Control coupling involves passing control flags (as parameters or globals)


between modules so that one module controls the sequence of processing steps in
another module.

Stamp coupling

Stamp coupling is similar to common coupling, except that global data items
are shared selectively among routines that require the data.

Data coupling

Data coupling involves the use of parameter lists to pass data items between
routines. The most desirable form of coupling between modules is a
combination of stamp coupling and data coupling.
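
An illustrative contrast (the order data and routine names are invented for illustration, not taken from the source text) between common coupling and data coupling, written in Python:

# Common coupling: both routines are bound to a shared global data structure.
order = {"quantity": 3, "unit_price": 9.50}

def total_with_common_coupling():
    return order["quantity"] * order["unit_price"]

# Data coupling: the data items are passed explicitly through a parameter list,
# the weakest (and most desirable) form of coupling.
def total_with_data_coupling(quantity, unit_price):
    return quantity * unit_price

print(total_with_common_coupling())          # expected output: 28.5
print(total_with_data_coupling(3, 9.50))     # expected output: 28.5

The data-coupled routine can be reused and tested in isolation because its interface names every data item it depends on.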

Cohesion

The internal cohesion of a module is measured in terms of the strength of binding


of elements within the module. Cohesion of elements occurs on the scale of weakest
(least desirable) to strongest (most desirable) in the following order:

1. Coincidental cohesion
2. Logical cohesion
3. Temporal cohesion
4. Communication cohesion
5. Sequential cohesion
6. Functional cohesion
7. Informational cohesion

Coincidental cohesion

Coincidental cohesion occurs when the elements within a module have no


apparent relationship to one another.

Logical cohesion

Logical cohesion implies some relationship among the elements of the
module; for example, a module that performs all input and output operations.

Temporal cohesion

Temporally cohesive modules exhibit many of the same disadvantages as logically
bound modules.

A typical example of temporal cohesion is a module that performs program


initialization.

Communication cohesion

Communicational cohesion occurs when the elements of a module refer to the same set of input and/or output data.

Sequential cohesion

Sequential cohesion of elements occurs when the output of one element is the
input for the next element.

Functional cohesion

Functional cohesion is a strong, and hence desirable, type of binding of


elements in a module because all elements are related to the performance of a single
function. Examples of functionally bound modules are “Compute square Root,”
“Obtain Random Number,” and “Write Record to Output File”.

Informational cohesion

Informational cohesion of elements in a module occurs when the module


contains a complex data structure and several routines to manipulate the data structure.

DESIGN NOTATIONS

In software design, as in mathematics, the representation schemes used are of


fundamental importance. Good notation can clarify the interrelationships and
interactions of interest, while poor notation can complicate and interfere with good
design practice. At least three levels of design specifications exist: external design
specifications, which describe the external characteristics of a software system;
architectural design specifications, which describe the structure of the system; and
detailed design specifications, which describe control flow, data representation, and
other algorithmic details within the modules.

Data Flow Diagrams

Data flow diagrams (“bubble charts”) are directed graphs in which the nodes
specify processing activities and the arcs specify data items transmitted between
processing nodes.

Data flow diagrams can be expressed using informal notation, as illustrated in


the following Figure, or special symbols can be used to denote processing nodes, data
sources, data sinks, and data stores.

Structure Charts

Structure charts are used during architectural design to document hierarchical


structure, parameters, and interconnections in a system. A structure chart differs from
a flowchart in two ways: a structure chart has no decision boxes, and the sequential
ordering of tasks inherent in a flowchart can be suppressed in a structure chart.

HIPO Diagrams

HIPO diagrams (Hierarchy plus Input-Process-Output) were developed at IBM as


design representation schemes for top-down software development, and as external
documentation aids for released products.

A set of HIPO diagrams contains a visual table of contents, a set of overview


diagrams, and a set of detail diagrams. The visual table of contents is a directory to
the set of diagrams in the package; it consists of a tree-structured (or graph structured)
directory, a summary of the contents of each overview diagram, and a legend of
symbol definitions.

Procedure Templates

The format of a procedure interface specification is illustrated in Figure 5.10.


In the early stages of architectural design, only the information in level 1 need be
supplied. As design progresses, the information on levels 2,3, and 4 can be included in
successive steps.
It is recommended that only the information on level 1 of the following template be
provided during initial architectural design, because detailed specification of side
effects, exception handling, processing algorithms, and concrete data representations
will sidetrack the designer into inappropriate levels of detail too soon.

PROCEDURE NAME:

PART OF: (subsystem name & number)

CALLED BY: LEVEL 1

PURPOSE:

DESIGNER/DATE(s):

PARAMETERS: (Names, modes, attributes, purposes)

INPUT ASSERTION: (preconditions) LEVEL 2

OUTPUT ASSERTION: (postconditions)

GLOBALS: (names, modes, attributes, purpose, shared with)

SIDE EFFECTS:

LOCAL DATA STRUCTURES: (names, attributes, purposes)

EXCEPTIONS: (conditions, responses) LEVEL 3

TIMING CONSTRAINTS:

OTHER LIMITATIONS:

PROCEDURE BODY: (Pseudocode, structured English, LEVEL 4


structured flowchart, decision table)

Pseudocode
Pseudocode notation can be used in both the architectural and detailed design
phases. Like flowcharts, pseudocode can be used at any desired level of abstraction.
Using pseudocode, the designer describes system characteristics using short, concise,
English language phrases that are structured by key words such as If-Then-Else,
While-Do, and End.

Pseudocode can replace flowcharts and reduce the amount of external


documentation required to describe a system.
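
A short illustrative fragment (an assumed example, not taken from the source text) shows the flavor of the notation:

    Initialize the record counter to zero
    While more records remain in the input file Do
        Read the next record
        If the record is valid Then
            Update the summary totals
        Else
            Write the record to the exception report
        End If
        Increment the record counter
    End While
    Print the summary totals and the record counter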

Structured Flowcharts
Flowchart

Flowcharts are the traditional means for specifying and documenting


algorithmic details in a software system. Flowcharts incorporate rectangular boxes for
actions, diamond shaped boxes for decisions, directed arcs for specifying
interconnections between boxes, and a variety of specially shaped symbols to denote
input, output, data stores, etc.

Structured flowcharts

Structured flowcharts differ from traditional flowcharts in that structured


flowcharts are restricted to compositions of certain basic forms. This makes the
resulting flowchart the graphical equivalent of a structured pseudocode description. A
typical set of basic forms and the pseudocode equivalents are illustrated in Figure.

STRUCTURED ENGLISH

Structured English can be used to provide a step-by-step specification for an


algorithm. Like pseudocode, structured English can be used at any desired level of
detail. Structured English is often used to specify cookbook recipes:

1. Preheat oven to 350 degrees F.


2. Mix eggs, milk, and vanilla.
3. Add flour and baking soda.
4. Pour into a greased baking dish.
5. Cook until done.

Decision Tables

Decision tables can be used to specify complex decision logic in a high-level


software specification.

They are also useful for specifying algorithmic logic during detailed design.
At this level of usage, decision tables can be specified and translated into source code
logic.
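
A minimal sketch (the discount rules are invented for illustration) of a small decision table translated directly into source-code logic:

def discount(is_member, order_total):
    # Condition stub: is_member, order_total >= 100
    # Action stub: discount of 15%, 10%, 5%, or none
    if is_member and order_total >= 100:      # rule 1
        return 0.15
    if is_member:                             # rule 2
        return 0.10
    if order_total >= 100:                    # rule 3
        return 0.05
    return 0.0                                # rule 4

print(discount(True, 120), discount(False, 80))   # expected output: 0.15 0.0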

DESIGN TECHNIQUES

The design process involves developing a conceptual view of the system,


establishing system structure, identifying data streams and data stores, decomposing
high-level functions into subfunctions, establishing relationships and interconnections
among components, developing concrete data representations, and specifying
algorithmic details.

Design techniques are typically based on the “top-down” and/or “bottom-up”


design strategies.

A predominately top-down strategy is most successful when a well-defined


environment exists for software development.

Stepwise Refinement

Stepwise refinement is a top-down technique for decomposing a system from


high-level specifications into more elementary levels. Stepwise refinement is also
known as “stepwise program development” and “successive refinement”.

Stepwise refinement involves the following activities:

1. Decomposing design decisions to elementary levels.


2. Isolating design aspects that are not truly interdependent.
3. Postponing decisions concerning representation details as long as possible.
4. Carefully demonstrating that each successive step in the refinement process is
a faithful expansion of previous steps.

Stepwise refinement begins with the specifications derived during requirements


analysis and external design. The problem is first decomposed into a few major
processing steps that will demonstrably solve the problem.

The major benefits of stepwise refinement as a design technique are:

1. Top-down decomposition
2. Incremental addition of detail
3. Postponement of detail decisions
4. Continual verification of consistency (formally or informally)

Levels of Abstraction

Levels of abstraction was originally described by Dijkstra as a bottom-up


design technique in which an operating system was designed as a layering of
hierarchical levels starting at level 0 (processor allocation, real-time clock interrupts)
and building up to the level of processing independent user programs.

Each level of abstraction performs a set of services for the functions on the
next higher level of abstraction.

Each level of abstraction has exclusive use of certain resources (I/O devices,
data structures) that other levels are not permitted to access.

Structured Design

Structured design was developed by Constantine as a top-down technique for


architectural design of software systems (STE74). The basic approach in structured
design is systematic conversion of data flow diagrams into structure charts.

The first step in structured design is review and refinement of the data flow
diagram(s) developed during requirements definition and external design. The second
step is to determine whether the system is transform-centered or transaction-driven,
and to derive a high-level structure chart based on this determination. In a transform-
centered system, the data flow diagram contains input, processing, and output segments,
which become the Input, Processing, and Output subsystems in the structure chart.
Boundaries between the three major subsystems in a
transform-centered system are identified by determining the point of most abstract
input data and the point of most abstract output data on the data flow diagram.

The third step in structured design is decomposition of each subsystem using


guidelines such as coupling, cohesion, information hiding, levels of abstraction, data
abstraction.

DETAILED DESIGN CONSIDERATIONS

Detailed design is concerned with specifying algorithmic details, concrete data


representations, interconnections among functions and data structures, and packaging
of the software product. Detailed design is strongly influenced by the implementation
language, but it is not the same as implementation; detailed design is more concerned
with semantic issues and less concerned with syntactic details than is implementation.

Detailed design separates the activity of low-level design from


implementation, just as the analysis and design activities isolate consideration of what
is desired from the structure that will achieve the desired result.

REAL-TIME AND DISTRIBUTED SYSTEM DESIGN

Many of the popular design “methodologies” were developed as design


techniques for applications programs, operating systems, and utility programs.

These methods support concepts such as hierarchical decomposition,


modularity, information hiding, and data abstraction. These design concepts are
important for real-time and distributed systems.

Major issues to be addressed in designing a distributed system include


specifying the topology of the communication network, establishing rules for
accessing the shared communication channel, allocating application processing
functions to processing nodes in the network, and establishing rules for process
communication and synchronization.

Real-time systems must provide specified amounts of computation within


fixed time intervals. Real-time systems typically sense and control external devices,
respond to external events, and share processing time between multiple tasks.

A real-time network for process control may consist of several minicomputers


and microcomputers connected to one or more large processors.

Process control systems often utilize communication networks having fixed


static topology and known capacity requirements.

Several notations have been developed to support analysis and design of real-
time and distributed systems. They include RSL, SDLP, NPN, communication ports.

TEST PLANS

The test plan specifies the objectives of testing (e.g., to achieve error-free
operation under stated conditions for a stated period of time), the test completion
criteria (to achieve a specified rate of error exposure, to achieve a specified percent of
logical path coverage), the system integration plan (strategy, schedule, responsible
individuals), methods to be used on particular modules (walkthroughs, inspections,
static analysis, dynamic tests, formal verification), and the particular test cases to be
used.

There are four types of tests that a software product must satisfy: functional
tests, performance tests, stress tests, and structural tests. Functional tests and
performance tests are based on the requirements specifications; they are designed to
demonstrate that the system satisfies its requirements. Therefore, the test plan can be
only as good as the requirements, which in turn must be phrased in quantified, testable
terms.

Functional test cases specify typical operating conditions, typical input


values, and typical expected results.

Performance test should be designed to verify response time (under various


loads), execution time, throughput, primary and secondary memory utilization, and
traffic rates on data channels and communication links.

Stress tests are designed to overload a system in various ways, such as


attempting to sign on more than the maximum allowed number of terminals,
processing more than the allowed number of identifiers or static levels, or
disconnecting a communication link.

Structural tests are concerned with examining the internal processing logic of
a software system.

MILESTONES, WALKTHROUGHS, AND INSPECTIONS

The two major milestones during software design are the Preliminary Design
Review (PDR) and the Critical Design Review (CDR). The PDR is typically held near
the end of architectural design and prior to detailed design. CDR occurs at the end of
detailed design and prior to implementation.

The major goal of a PDR is to demonstrate that the externally observable


characteristics and architectural structure of the product will satisfy the customer’s
requirements. Functional characteristics, performance attributes, external interfaces,
user dialogues, report formats, exception conditions and exception handling, product
subsets, and future enhancements to the product should all be reviewed during the
PDR.

The CDR is held at the end of detailed design and prior to implementation.
Among other things, CDR provides a final management decision point to build or
cancel the system.

WALKTHROUGHS AND INSPECTIONS

A structured walkthrough is an in-depth, technical review of some aspect of a


software system. Walkthroughs can be used at any time, during any phase of a
software project.

A walkthrough team consists of four to six people. The person whose material
is being reviewed is responsible for providing copies of the review material to
members of the walkthrough team in advance of the walkthrough session, and team
members are responsible for reviewing the material prior to the session. During the
walkthrough the reviewee “walks through” the material while the reviewers look for
errors, request clarifications, and explore problem areas in the material under review.

The focus of a walkthrough is on detection of errors and not on corrective


actions. A designated secretary for the session records action items to be pursued by
the reviewee following the review session.

The reviewee is responsible for follow-up and for informing the reviewers of
corrective actions taken. More detailed information concerning the format and
conduct of walkthrough session is provided in Chapter 8.

Design inspections are conducted by teams of trained inspectors who work


from checklists of items to examine.

DESIGN GUIDELINES

Design is a creative process that can be guided and directed, but it can never be
reduced to an algorithmic procedure. The design of a real-time system is a much
different activity than the design of a data processing application. The following
guidelines do not constitute a “design methodology”, but rather provide some
guidance for organizing the activities of software design.

1. Review the requirements specification. In particular, study the desired


functional characteristics and performance attributes of the system.
2. Review and expand the external interfaces, user dialogues, and report formats
developed during requirements analysis.
3. Review and refine the data flow diagrams developed during requirements
analysis and external design. Identify internal data stores and elaborate the
processing functions.
4. Identify functional abstractions and data abstractions. Record them using a
design notation.
5. Define the visible interfaces for each functional abstraction and each data
abstraction. Record them using a design notation.
6. Define the modularization criteria to be used in establishing system structure.
7. Apply the techniques of your particular design method to establish system
structure. For example, derive the input and output data structures and convert
the data structures into a processing structure, as in Jackson Structured
Programming; or convert the data flow diagrams into structure charts, as in
structured design; or derive a system structure that maximizes information
hiding, or a structure that enhances the overlay capabilities or real-time
response rate, etc.
8. Iterate steps 4 through 7, and, as necessary, steps 1 through 7 until a suitable
structure is achieved.
9. Verify that the resulting system structure satisfies the requirements.
10. Develop interface specifications for the procedures in each module.
11. Conduct the Preliminary Design Review. The level of detail should be
include levels 1 and 2 of the procedure templates in Figure 5.10.
12. Develop concrete data representations for the data stores and data abstractions.
13. Expand the procedure templates of Figure 5.10 to include the information in
level 3.
14. Specify algorithmic details for the body of each procedure in the system, using
successive refinement, HIPO diagrams, pseudocode, structured English,
and/or structured flowcharts. Develop concrete data representations and
interconnections between data structures and algorithms.
15. Conduct the Critical Design Review
16. Redesign as necessary.

SOFTWARE IMPLEMENTATION

The implementation phase of software development is concerned with


translating design specifications into source code. The primary goal of
implementation is to write source code and internal documentation so that
conformance of the code to its specifications can be easily verified, and so that
debugging, testing, and modification are eased.

Source-code clarity is enhanced by structured coding techniques, by good


coding style, by appropriate supporting documents, by good internal comments, and
by the features provided in modern programming languages.

Production of high-quality software requires that the programming team have


a designated leader, a well-defined organizational structure, and a thorough
understanding of the duties and responsibilities of each team member.

The implementation team should be provided with a well-defined set of


software requirements, an architectural design specification, and a detailed design
description. Also, each team member must understand the objectives of
implementation.

Weinberg gave five programmers five different implementation goals for the
same program: minimize the memory required, maximize output readability,

maximize source text readability, minimize the number of source statements, and
minimize development time.

STRUCTURED CODING TECHNIQUES

The goal of structured coding is to linearize control flow through a computer


program so that the execution sequence follows the sequence in which the code is
written.

This enhances readability of code, which eases understanding, debugging,


testing, documentation, and modification of programs. It also facilitates formal
verification of programs. Linear flow of control can be achieved by restricting the set
of allowed program constructs to single entry, single exit formats; however, strict
adherence to nested, single entry, single exit constructs leads to questions concerning
efficiency, questions about “reasonable” violations of single entry, single exit, and
questions about the proper role of the goto statement in structured coding.

Single Entry, Single Exit Constructs

Every algorithm (every Turing machine program) can be written using


sequencing, selection, and iteration.

Any single entry, single exit program segment that has all statements on some
path from the entry to the exit can be specified using only sequencing,
selection, and iteration.

A sufficient set of single entry, single exit constructs for specifying control flow in
algorithms is

Sequencing: S1;S2;S3;
Selection: if B then S1 else S2;
Iteration: while B do S;

Efficiency Considerations

A recurring criticism of strict adherence to single entry, single exit constructs


is that they result in inefficient use of memory space and execution time. The need for
auxiliary variables, repeated segments of code, and excessive subprogram calls are
often cited.

The following example illustrates the need for auxiliary variables in a search
loop on a linked list:
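
The original figure is not reproduced in these notes; the following sketch (with an assumed node structure) conveys the idea: a strict single-entry, single-exit loop needs an auxiliary variable found, because the loop body is not allowed to exit from the middle when a match is located.

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def search(head, key):
    found = False          # auxiliary variable forced by single entry, single exit
    node = head
    while node is not None and not found:
        if node.value == key:
            found = True
        else:
            node = node.next
    return node            # the matching node, or None if the key is absent

items = Node(3, Node(7, Node(11)))
print(search(items, 7).value)   # expected output: 7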

Violations of Single Entry, Single Exit


The goal of structured coding is to improve the clarity and readability of
source programs. Clarity is greatly enhanced by use of single entry, single exit
constructs. There are, however, commonly occurring situations which warrant
violation of strict adherence to single entry, single exit and yet are consistent with the
goal of improved clarity. Three commonly occurring situations are multiple loop
exits, error handling, and data encapsulation.

Data Encapsulation

Data encapsulation involves packaging of a data structure and its access


routines in a single module. The data structure is manipulated only by the access
routines, and other routines use the data structure by calling the appropriate access
routine. A data structure is thus defined by the operations that can be performed on it.
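
An illustrative sketch (an assumed example, not from the source text) of a data structure packaged with its access routines; callers use only enqueue, dequeue, and is_empty and never touch the underlying representation:

class Queue:
    def __init__(self):
        self._items = []               # representation detail, hidden from callers

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from an empty queue")
        return self._items.pop(0)

    def is_empty(self):
        return len(self._items) == 0

jobs = Queue()
jobs.enqueue("job-1")
jobs.enqueue("job-2")
print(jobs.dequeue())   # expected output: job-1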

The Goto Statement

The goto statement provides unconditional transfer of control and thus allows
violation of the single entry, single exit condition of structured coding. We have
presented several examples where goto statements were used to achieve a desired
result in a manner that violates single entry, single exit and yet maintained locality
and linearity of control flow.

The goto statement also can be used to simulate single entry, single exit
constructs in primitive programming languages.

EX: The construct

    If B then S1 else S2;

can be simulated in a primitive language as

       IF (.NOT.(B)) GO TO 10
       S1
       GO TO 20
    10 S2
    20 CONTINUE
Recursion
A recursive subprogram is one that calls itself, either directly or indirectly;
recursion is a powerful and elegant programming technique.

The goals of clarity and efficiency are thus served by appropriate use of
recursion.

Recursive algorithms arise naturally in conjunction with recursive data


structures (e.g., linked lists, trees).

For example, an algorithm for in-order traversal of binary trees can be


expressed recursively.
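
A minimal sketch (the tree representation is assumed) of the recursive in-order traversal mentioned above:

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order(node, visit):
    # Visit the left subtree, then the node itself, then the right subtree.
    if node is not None:
        in_order(node.left, visit)
        visit(node.value)
        in_order(node.right, visit)

tree = TreeNode(2, TreeNode(1), TreeNode(3))
in_order(tree, print)   # expected output: 1, 2, 3 (one value per line)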

CODING STYLE

Style is the consistent pattern of choices made among alternative ways of


achieving a desired effect.

In computer programming, coding style is manifest in the patterns used by a


programmer to express a desired action or outcome. Programmers who work together
soon come to recognize the coding styles of their colleagues.

The guidelines are examined in the following discussion.

 Use a few standard control constructs


 Use gotos in a disciplined manner
 Introduce user-defined data types to model entities in the problem
domain
 Hide data structures behind access functions. Isolate machine
dependencies in a few routines
 Provide standard documentation prologues for each subprogram and/or
compilation unit
 Carefully examine routines having fewer than 5 or more than 25
statements
 Use indentation, parentheses, blank spaces, blank lines, and borders
around blocks of comments to enhance readability
 Don’t be too clever
 Avoid null then statements
 Avoid then…if statements
 Don’t nest too deeply
 Avoid obscure side effects
 Don’t suboptimize
 Carefully examine routines having more than five formal parameters
 Don’t use an identifier for multiple purposes

STANDARDS AND GUIDELINES

Coding standards are specifications for a preferred coding style. Given a


choice of ways to achieve an effect, a preferred way is specified. Coding standards are
often viewed by programmers as mechanisms to constrain and devalue the
programmer’s creative problem solving skills.

It is desirable that all programmers on a software project adopt similar coding


styles so that code of uniform quality is produced.

A programming standard might specify items such as

1. Goto statements will not be used.


2. The nesting depth of program constructs will not exceed five levels.
3. Subroutine length will not exceed 30 lines.

A guideline rephrases these specifications in the following manner:


1. The use of goto statements should be avoided in normal circumstances.
2. The nesting depth of program constructs should be five or less in normal
circumstances.
3. The number of executable statements in a subprogram should not exceed 30 in
normal circumstances.
4. Departure from normal circumstances requires approval by the project leader.

DOCUMENTATION GUIDELINES

Computer software includes the source code for a system and all the
supporting documents generated during analysis, design, implementation, testing, and
maintenance of the system. Internal documentation includes standard prologues for
compilation units and subprograms, the self documenting aspects of the source code,
and the internal comments embedded in the source code.

Supporting Documents

Requirements specifications, design documents, test plans, user’s manuals,


installation instructions, and maintenance reports are examples of supporting
documents. These documents are the products that result from systematic
development and maintenance of computer software.

A systematic approach to software development assures that supporting


documents evolve in an orderly manner, and that the documents are available when
needed.

Supporting documents of substandard quality, that are not available when


needed, are a strong indication of problems with the processes being used to develop
and maintain software.

Program Unit Notebooks

A program unit is a unit of source code that is developed and/or maintained by


one person; that person is responsible for the unit. In a well-designed system a
program unit is a subprogram or group of subprograms that provide well-defined
function or form a well defined subsystem. A program unit is also small enough, and
modular enough, that it can be thoroughly tested in isolation by the programmer who
develops or modifies it. Program unit notebooks are used by individual programmers
to organize their work activities, and to maintain the documentation for their program
units.

UNIT NAME: ______________________________  PROGRAMMER: ______________________
ROUTINES INCLUDED: ___________________________________________________________

SECTION   CONTENTS           DUE DATE   COMPLETED DATE   REVIEWER/DATE
   1      RQMTS.
   2      ARCH. DESIGN
   3      DETAIL DESIGN
   4      TEST PLAN
   5      SOURCE CODE
   6      TEST RESULTS
   7      CHANGE REQUESTS
   8      NOTES

RELEASE APPROVAL: __________________________________  DATE: __________________

Internal Documentation

Internal documentation consists of a standard prologue for each program unit


and compilation unit, the self-documenting aspects of the source code, and the
internal comments embedded in the executable portion of the code.

Table: Typical format of subprogram and compilation unit prologues

Name of author:
Date of compilation:
Function(s) performed:
Algorithms used:
Author/date/purpose of modifications:
Parameters and modes:
Input assertion:
Output assertion:
Global variables:
Side effects:
Major data structures:
Calling routines:
Called routines:
Timing constraints:
Exception handling:
Assumptions:

Unit IV
Software Testing
Definition - Software Testing :

 It is the process of Creating, Implementing and Evaluating tests.


 Testing measures software quality
 Testing can find faults. When they are removed, software quality is improved.
 Testing is executing a program with the intent of finding errors, faults, and
failures.
 IEEE Terminology : An examination of the behavior of the program by
executing on sample data sets.
 Testing is a process of executing a program with the intent of finding an error .
 A good test case is one that has a high probability of finding an as yet
undiscovered error.
 Testing is a process of exercising or evaluating a system component by
manual or automated means to verify that it satisfies specified requirements.

Why testing is important?

 A China Airlines Airbus A300 crashed due to a software bug on April
26, 1994, killing 264 people.
 Software bugs can potentially cause monetary and human loss; history is full
of such examples.

 In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to


software bug and delivered lethal radiation doses to patients, leaving 3 people
dead and critically injuring 3 others
 In April of 1999, a software bug caused the failure of a $1.2 billion military
satellite launch, the costliest accident in history
 In May of 1996, a software bug caused the bank accounts of 823 customers of
a major U.S. bank to be credited with 920 million US dollars
 As you see, testing is important because software bugs could be expensive
or even dangerous
 As Paul Ehrlich puts it - "To err is human, but to really foul things up you need
a computer."

What is the objective of software testing?

Software testing has many objectives.


 Software testing helps to make sure that the software meets all the
requirements it was supposed to meet.
 It will bring out all the errors, if any, while using the software.
 Software testing helps to establish confidence that the software being tested
works as intended.
 Software testing helps to give a quality certification that the software can be
used by the client immediately.
 It ensures quality of the product.
A Strategic approach to software testing

 Many software errors are eliminated before testing begins by conducting effective
technical reviews
 Testing begins at the component level and works outward toward the integration
of the entire computer-based system.
 Different testing techniques are appropriate at different points in time.
 The developer of the software conducts testing and may be assisted by
independent test groups for large projects.
 Testing and debugging are different activities.
 Debugging must be accommodated in any testing strategy.

Verification and Validation

 Make a distinction between verification (are we building the product right?) and
validation (are we building the right product?)
 Software testing is only one element of Software Quality Assurance (SQA)
 Quality must be built in to the development process, you can’t use testing to add
quality after the fact

Organizing for Software Testing

 The role of the Independent Test Group (ITG) is to remove the conflict of interest
inherent when the builder is testing his or her own product.
 Misconceptions regarding the use of independent testing teams
o The developer should do no testing at all
o Software is tossed “over the wall” to people to test it mercilessly
o Testers are not involved with the project until it is time for it to be tested
 The developer and ITG must work together throughout the software project to
ensure that thorough tests will be conducted

Software Testing Strategy

Types of Testing

• Black Box Testing


• White Box Testing
• Grey Box Testing

Black Box Testing


This testing is termed Black Box Testing since the tester views the program
as a black box; that is, the tester is completely unconcerned about the internal behavior
and structure of the program.

White Box Testing

The tester views the internal behavior and structure of the program. The
testing strategy permits one to examine the internal structure of the program.

Grey Box Testing

Combination of Black box & White box.

Critical or Complex Modules can be tested using White box testing while the
rest of the application is tested using Black Box Testing.

Levels of Testing

• Unit Testing
• Integration Testing
• System Testing or FURPS Testing
• Acceptance Testing
• Regression Testing

Unit Testing

 Lowest Level of Testing


 Individual unit of the software are tested in isolation from other parts of a
program
 It is a testing activity that has to be performed by the development team during
coding / after coding is completed (a sample unit test is sketched below).
 What can be a UNIT?
• Screen components.
• Screen / Program.
• Back-end related to a Screen.
• Back-end and the Screen.
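
A minimal sketch (the function under test and the test cases are invented for illustration) of a unit test written with Python's built-in unittest framework:

import unittest

def apply_discount(price, percent):
    # The "unit" under test: apply a percentage discount to a price.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()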

Integration Testing

• Intermediate level of testing


• Progressively unit tested software components are integrated and tested
until the software works as a whole
• Test that evaluate the interaction and consistency of interacting
components

Types of Integration Testing

 Big Bang Testing


 Bottom-Up Testing
 Top-Down Testing

Big Bang Testing

• A type of Integration test in which the software component of an


application are combined all at once into an overall system.
• According to this approach, every module is first unit tested in isolation
from other module. After each module is tested, all of the modules are
integrated together at once.

Bottom-Up Testing

• Begins construction and testing with atomic modules (i.e.) modules at lowest
level in program structure
• The terminal module is tested in isolation first then the next set of higher level
modules are tested with the previously tested lower module.

Top-Down Testing

• Program merged and tested from the top to the bottom


• Modules are integrated by moving downward through the control hierarchy
with the main control module.

System Testing

• Testing conducted on a complete, integrated system to evaluate the system’s


compliance with its specified requirements
• In which the complete software build is made and tested to show that all
requirements are met.
• Here all possible testings w.r.t
• Functionality, Usability, Installation, Performance,
Security, etc… are expected to be carried out.

Activities of The Testing Team

 Functionality Testing.
 Usability Testing.
 Reliability Testing.
 Performance Testing.
 Scalability Testing.
Functionality Testing

This testing is done to ensure that all the functionalities defined in the
requirements are being implemented correctly.

Usability Testing

The catch phrase “User Friendliness” can be achieved through Usability


Testing.

• Ease Of Operability
• Communicativeness
• This test is done keeping in mind the kind of end users who are going to use the
product.
Reliability Testing

• These Tests are carried out to assess the system’s capability to handle various
scenarios like failure of a hard drive on the web, database server or
communication link failure.
• Software reliability is defined in statistical terms as “the probability of failure-
free operation of a computer program in a specified environment for a
specified time”.
Performance Testing

This test is done to ensure that the software / product works the way it is
supposed to at various loads (load testing), stress (stress testing), and volumes
(volume testing).

Volume Testing

 The purpose is to find weaknesses in the system with respect to its


handling of large amounts of data during short time periods.
Stress Testing

 The purpose is to show that the system has the capacity to handle large
numbers of processing transactions during peak periods.
Performance Testing

• Can be accomplished in parallel with Volume and Stress testing


because we want to assess performance under all conditions.
• System performance is generally assessed in terms of response times
and throughput rates under differing processing and configuration
conditions.
Scalability Testing

These tests ensure the degree to which the application/ system loads /
processes can be distributed across additional servers and clients.

Acceptance Testing

• Acceptance Testing is defined as the process of formal testing conducted to


determine whether or not the system satisfies its acceptance criteria and to
enable the customer to determine whether or not to accept the system
• The purpose of this testing is to ensure that Customer’s requirements
objectives are met and that all the components are correctly included in the
customer package.
Regression Testing

Regression testing is the process of testing changes to computer programs to


make sure that the older programming still works with the new changes.
Strategic issues

 Specify product requirements in a quantifiable manner before testing starts.


 Specify testing objectives explicitly.
 Identify categories of users for the software and develop a profile for each.
 Develop a test plan that emphasizes rapid cycle testing.
 Build robust software that is designed to test itself.
 Use effective formal reviews as a filter prior to testing.
 Conduct formal technical reviews to assess the test strategy and test cases.
 Develop a continuous improvement approach for the testing process.

Validation testing

The process of evaluating software during the development process or at the


end of the development process to determine whether it satisfies specified business
requirements.
Validation Testing ensures that the product actually meets the client's needs. It
can also be defined as demonstrating that the product fulfills its intended use when
deployed in an appropriate environment.
It answers to the question, Are we building the right product?

Validation Testing - Workflow:

Validation testing can be best demonstrated using V-Model. The


Software/product under test is evaluated during this type of testing.

Activities:

 Unit Testing
 Integration Testing
 System Testing
 User Acceptance Testing

Verification and Validation

• Software testing is part of a broader group of activities called verification and


validation that are involved in software quality assurance
• Verification (Are the algorithms coded correctly?)
– The set of activities that ensure that software correctly implements a
specific function or algorithm
• Validation (Does it meet user requirements?)
– The set of activities that ensure that the software that has been built is
traceable to customer requirements

Alpha and Beta Testing

• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that
cannot be controlled by the developer
– The end-user records all problems that are encountered and reports
these to the developers at regular intervals
After beta testing is complete, software engineers make software modifications and
prepare for release of the software product to the entire customer base.

The art of debugging

What is Debugging ?

Debugging happens as a result of testing. When a test case uncovers an error,


debugging is the process that causes the removal of that error.

The Debugging Process

Debugging is not testing, but always happens as a response of testing. The


debugging process will have one of two outcomes:

(1) The cause will be found, corrected and removed,

or

(2) the cause will not be found.

Why is debugging difficult?

 The symptom and the cause are geographically remote.

 The symptom may disappear when another error is corrected.
 The symptom may actually be the result of nonerrors (e.g., round-off
inaccuracies).
 The symptom may be caused by a human error that is not easy to find.
 The symptom may be intermittent.
 The symptom might be due to the causes that are distributed across various
tasks on diverse processes.

Debugging Strategies

• Objective of debugging is to find and correct the cause of a software error


• Bugs are found by a combination of systematic evaluation, intuition, and luck
• Debugging methods and tools are not a substitute for careful evaluation based
on a complete design model and clear source code

• There are three main debugging strategies


– Brute force
– Backtracking
– Cause elimination

Brute Force

• Most commonly used and least efficient method


• Used when all else fails
• Involves the use of memory dumps, run-time traces, and output statements
• Leads many times to wasted effort and time

Backtracking

• Can be used successfully in small programs


• The method starts at the location where a symptom has been uncovered
• The source code is then traced backward (manually) until the location of the
cause is found
• In large programs, the number of potential backward paths may become
unmanageably large
Cause Elimination

The third approach to debugging, cause elimination, is manifested by


induction or deduction and introduces the concept of binary partitioning. This
approach is also called induction and deduction. Data related to the error occurrence
are organized to isolate potential causes. A "cause hypothesis" is devised and the data
are used to prove or disprove the hypothesis. Alternatively, a list of all possible causes
is developed and tests are conducted to eliminate each. If initial tests indicate that a
particular cause hypothesis shows promise, the data are refined in an attempt to isolate
the bug.
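
An illustrative sketch (the failing-input scenario is assumed, not taken from the source text) of cause elimination by binary partitioning: the data associated with the failure is repeatedly split in half to isolate the smallest portion that still reproduces it.

def isolate_failure(records, fails):
    # 'fails' is a predicate that reruns the failing test on a subset of the data.
    # Assumes at least one record in 'records' makes the test fail.
    while len(records) > 1:
        middle = len(records) // 2
        first_half = records[:middle]
        records = first_half if fails(first_half) else records[middle:]
    return records[0]

data = [10, 20, -3, 40, 50]
print(isolate_failure(data, lambda chunk: any(x < 0 for x in chunk)))   # expected output: -3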
Software maintenance
Software maintenance is a widely accepted part of the SDLC nowadays. It covers
all the modifications and updates made after the delivery of the software product. There
are a number of reasons why modifications are required; some of them are briefly
mentioned below:

 Market Conditions - Policies that change over time, such as taxation rules
and newly introduced constraints on how to maintain bookkeeping, may
trigger the need for modification.
 Client Requirements - Over time, the customer may ask for new features or
functions in the software.
 Host Modifications - If any of the hardware and/or platform (such as the
operating system) of the target host changes, software changes are needed
to remain compatible.
 Organization Changes - If there is any business-level change at the client
end, such as reduction in organization strength, acquisition of another
company, or the organization venturing into a new business, the need to
modify the original software may arise.

Types of maintenance

Over a software product's lifetime, the type of maintenance may vary based on its
nature. It may be just a routine maintenance task, such as fixing a bug discovered by a
user, or it may be a large event in itself, depending on the size and nature of the
maintenance. Following are some types of maintenance based on their characteristics:

 Corrective Maintenance - This includes modifications and updates made in
order to correct or fix problems that are either discovered by users or
concluded from user error reports.
 Adaptive Maintenance - This includes modifications and updates applied to
keep the software product up to date and tuned to the ever-changing world
of technology and business environment.
 Perfective Maintenance - This includes modifications and updates made in
order to keep the software usable over a long period of time. It includes new
features and new user requirements for refining the software and improving
its reliability and performance.
 Preventive Maintenance - This includes modifications and updates made to
prevent future problems in the software. It aims to attend to problems that are
not significant at the moment but may cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software
maintenance found that the cost of maintenance can be as high as 67% of the cost of
the entire software process cycle.

Configuration management
Software configuration management (SCM) is a software engineering
discipline consisting of standard processes and techniques often used by organizations
to manage the changes introduced to its software products. SCM helps in identifying
individual elements and configurations, tracking changes, and version selection,
control, and baselining.

SCM is also known as software control management. SCM aims to control changes
introduced to large complex software systems through reliable version selection and
version control.

SCM defines a mechanism to deal with different technical difficulties of a project


plan. In a software organization, effective implementation of software configuration
management can improve productivity by increased coordination among the
programmers in a team. SCM helps to eliminate the confusion often caused by
miscommunication among team members. The SCM system controls the basic
components such as software objects, program code, test data, test output, design
documents, and user manuals.

The SCM system has the following advantages:

 Reduced redundant work.


 Effective management of simultaneous updates.
 Avoids configuration-related problems.
 Facilitates team coordination.
 Helps in building management; managing tools used in builds.
 Defect tracking: It ensures that every defect has traceability back to its source.

Unit – V

Software Quality Assurance

Software Quality
Software quality is defined as conformance to explicitly stated functional and
performance requirements, documented development standards, and implicit
characteristics.

Software is said to be of good quality if it

• Meets the customer’s requirement


• Is completed in the estimated Time.
• Is completed within the estimated cost.

Quality Concepts

Quality management is also called software quality assurance (SQA). It:

• Serves as an umbrella activity that is applied throughout the software process


• Involves doing the software development correctly versus doing it over again
• Reduces the amount of rework, which results in lower costs and improved
time to market

1. Quality
2. Quality Control
3. Quality assurance
4. Cost of quality

Two kinds of quality may be encountered:

 Quality of design
 Quality of conformance

Quality of Design

Quality of design refers to the characteristics that designers specify for an item.
The grade of materials, tolerances, and performance specifications all contribute to
quality of design.

Quality of Conformance

Quality of Conformance is the degree to which the design specifications are


followed during manufacturing.

Quality Control (QC)

QC is the series of inspections, reviews, and tests used throughout the


development cycle to ensure that each work product meets the requirements placed
upon it.

QC includes a feedback loop to the process that created the work product.

Quality Assurance (QA)

• Consists of a set of auditing and reporting functions that assess the


effectiveness and completeness of quality control activities.
• Provides management personnel with data that provides insight into the
quality of the products.
• Alerts management personnel to quality problems so that they can apply the
necessary resources to resolve quality issues.
Cost of Quality

Includes all costs incurred in the pursuit of quality or in performing quality-


related activities.

Involves various kinds of quality costs.

• Prevention costs
Quality planning, formal technical reviews, test equipment, training

• Appraisal costs
Inspections, equipment calibration and maintenance, testing

• Failure costs – subdivided into internal failure costs and external failure costs
Internal failure costs

• Incurred when an error is detected in a product prior to


shipment.
• Include rework, repair, and failure mode analysis.
External failure costs

•Involves defects found after the product has been shipped


•Include complaint resolution, product return and replacement,
help line support, and warranty work.
Software Quality Assurance

Conformance to explicitly stated functional and performance requirements,


explicitly documented development standards, and implicit characteristics that are
expected of all professionally developed software.

The above definition emphasizes three points

(1) Software requirements are the foundation from which quality is


measured; lack of conformance to requirements is lack of quality.

(2) Specified standards define a set of development criteria that guide the
manner in which software is engineered; if the criteria are not
followed, lack of quality will almost surely result.

(3) A set of implicit requirements often goes unmentioned; if software


fails to meet implicit requirements, software quality is suspect.

(i) SQA Activities

• Prepares an SQA plan for a project.
• Participates in the development of the project's software process description.
• Reviews software engineering activities to verify compliance with the defined software process.
• Audits designated software work products to verify compliance with those defined as part of the software process.
• Ensures that deviations in software work and work products are documented and handled according to a documented procedure.
• Records any noncompliance and reports to senior management.
• Coordinates the control and management of change.
• Helps to collect and analyze software metrics.

(ii) Software Reviews

Software reviews are a “filter” for the software engineering process. That is, reviews are applied at various points during the software development process and serve to uncover errors that can then be removed.

Software reviews serve to “purify” the software analysis, design, coding, and testing activities.

• Catch large classes of errors that escape the originator but are more readily seen by other practitioners
• Include the formal technical review (also called a walkthrough or inspection)
– Acts as the most effective SQA filter
– Conducted by software engineers for software engineers
– Effectively uncovers errors and improves software quality
– Has been shown to be up to 75% effective in uncovering design flaws
(which constitute 50-65% of all errors in software)
• Require the software engineers to expend time and effort, and the organization
to cover the costs.

Formal Technical Review (FTR)

A formal technical review is an SQA activity that is performed by software engineers.

Objectives of FTR are

 To uncover errors in function, logic, or implementation for any representation of the software.
 To verify that the software under review meets its requirements.
 To ensure that the software has been represented according to predefined
standards.
 To achieve software that is developed in a uniform manner.
 To make projects more manageable.

In addition, the FTR serves as a training ground for junior software engineers to
observe different approaches to software analysis, design, and construction.

Promotes backup and continuity because a number of people become familiar with other parts of the software.

May sometimes be a sample-driven review.

 Project managers must quantify those work products that are the primary
targets for formal technical reviews.
 The sample of products that are reviewed must be representative of the
products as a whole.
(i) Review Meeting

• Has the following constraints
– Between three and five people should be involved
– Advance preparation (i.e., reading) should occur for each participant but should require no more than two hours apiece and involve only a small subset of components
– The duration of the meeting should be less than two hours
• Focuses on a specific work product (a software requirements specification, a
detailed design, a source code listing)
• Activities before the meeting
– The producer informs the project manager that a work product is
complete and ready for review
– The project manager contacts a review leader, who evaluates the
product for readiness, generates copies of product materials, and
distributes them to the reviewers for advance preparation
– Each reviewer spends one to two hours reviewing the product and
making notes before the actual review meeting
– The review leader establishes an agenda for the review meeting and
schedules the time and location
• Activities during the meeting
– The meeting is attended by the review leader, all reviewers, and the
producer
– One of the reviewers also serves as the recorder for all issues and
decisions concerning the product
– After a brief introduction by the review leader, the producer proceeds
to "walk through" the work product while reviewers ask questions and
raise issues
– The recorder notes any valid problems or errors that are discovered; no
time or effort is spent in this meeting to solve any of these problems or
errors
• Activities at the conclusion of the meeting
– All attendees must decide whether to
• Accept the product without further modification
• Reject the product due to severe errors (After these errors are
corrected, another review will then occur)
• Accept the product provisionally (Minor errors need to be
corrected but no additional review is required)
– All attendees then complete a sign-off in which they indicate that they
took part in the review and that they concur with the findings
• Activities following the meeting
– The recorder produces a list of review issues that
• Identifies problem areas within the product
• Serves as an action item checklist to guide the producer in
making corrections
– The recorder includes the list in an FTR summary report
• This one- to two-page report describes what was reviewed, who reviewed it, and what the findings and conclusions were
– The review leader follows up on the findings to ensure that the producer makes the requested corrections
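Purely as an illustration (the record layout and names below are assumptions, not a prescribed format), the outcome of a review meeting, namely the decision, the reviewers, and the issues list that guides the producer, could be captured in a record like the following Python sketch.

from dataclasses import dataclass, field
from typing import List

# The three possible review outcomes described above
ACCEPT = "accept"
ACCEPT_PROVISIONALLY = "accept provisionally"
REJECT = "reject"

@dataclass
class FTRSummaryReport:
    # Sketch of the one- to two-page FTR summary report; fields are illustrative.
    work_product: str
    reviewers: List[str]
    decision: str
    issues: List[str] = field(default_factory=list)  # action-item checklist for the producer

report = FTRSummaryReport(
    work_product="Detailed design, payroll module",
    reviewers=["review leader", "reviewer A", "reviewer B"],
    decision=ACCEPT_PROVISIONALLY,
    issues=["Clarify error handling in section 3.1",
            "Add missing interface description for module I/O"],
)
print(f"{report.work_product}: {report.decision}, {len(report.issues)} issues recorded")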

(ii) FTR Guidelines

1) Review the product, not the producer.


2) Set an agenda and maintain it.
3) Limit debate and rebuttal; conduct in-depth discussions off-line.
4) Enunciate problem areas, but don't attempt to solve the problem noted.
5) Take written notes; utilize a wall board to capture comments.
6) Limit the number of participants and insist upon advance preparation.
7) Develop a checklist for each product in order to structure and focus the
review.
8) Allocate resources and schedule time for FTRs.
9) Conduct meaningful training for all reviewers.
10) Review your earlier reviews to improve the overall review process.

Statistical Software Quality Assurance

Statistical quality assurance implies the following steps.

1) Collect and categorize information (i.e., causes) about software defects that
occur.
2) Attempt to trace each defect to its underlying cause (e.g., nonconformance to
specifications, design error, violation of standards, poor communication with
the customer).
3) Using the Pareto principle (80% of defects can be traced to 20% of all causes),
isolate the 20%.
Although hundreds of errors may be uncovered, all of them can be traced to one of the following causes:

• Incomplete or erroneous specifications.


• Misinterpretation of customer communication.
• Intentional deviation from specifications.
• Violation of programming standards.
• Errors in data representation.
• Inconsistent component interface.
• Errors in design logic.
• Incomplete or erroneous testing.
• Inaccurate or incomplete documentation.
• Errors in programming language translation of design.
• Ambiguous or inconsistent human/computer interface.
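A minimal sketch of the Pareto step described above (collect defect counts per cause, then isolate the vital few causes that account for roughly 80% of defects) is shown below; the counts are invented purely for illustration.

# Hypothetical defect counts per underlying cause (illustrative data only)
defects_by_cause = {
    "Incomplete or erroneous specifications": 68,
    "Misinterpretation of customer communication": 42,
    "Errors in design logic": 25,
    "Violation of programming standards": 12,
    "Incomplete or erroneous testing": 8,
    "Inaccurate or incomplete documentation": 5,
}

total = sum(defects_by_cause.values())
running = 0
vital_few = []
# Rank causes by defect count and keep those covering about 80% of all defects
for cause, count in sorted(defects_by_cause.items(), key=lambda kv: kv[1], reverse=True):
    running += count
    vital_few.append((cause, count, round(100 * running / total, 1)))
    if running / total >= 0.8:
        break

for cause, count, cumulative in vital_few:
    print(f"{cause}: {count} defects ({cumulative}% cumulative)")

In this invented data set, the top three causes account for over 80% of the defects, so corrective effort would be concentrated on them first.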

Six Sigma
 Popularized by Motorola in the 1980s
 Is the most widely used strategy for statistical quality assurance
 Uses data and statistical analysis to measure and improve a company's
operational performance
 Identifies and eliminates defects in manufacturing and service-related
processes
 The name "Six Sigma" refers to six standard deviations, which corresponds to 3.4 defects per million opportunities
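The 3.4-defects-per-million figure can be related to ordinary defect counts through the defects-per-million-opportunities (DPMO) calculation; the defect and opportunity counts below are assumed values used only to illustrate the arithmetic.

# Defects per million opportunities (DPMO) for a hypothetical process
defects = 17
units_produced = 5000
opportunities_per_unit = 10   # assumed number of ways each unit could be defective

dpmo = defects / (units_produced * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:.0f}")   # 340 for these assumed numbers

# A six sigma process shows roughly 3.4 defects per million opportunities,
# so this hypothetical process is about 100 times worse than six sigma.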

SQA Plan

The SQA plan provides a road map for instituting software quality assurance in an organization.

It is developed by the SQA group to serve as a template for SQA activities that are instituted for each software project in an organization.

• Structured as follows:
– The purpose and scope of the plan
– A description of all software engineering work products that fall within
the purview of SQA
– All applicable standards and practices that are applied during the
software process
– SQA actions and tasks (including reviews and audits) and their
placement throughout the software process
– The tools and methods that support SQA actions and tasks
– Methods for assembling, safeguarding, and maintaining all SQA-
related records
– Organizational roles and responsibilities relative to product quality
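As a simple aid to memory (an assumption of this rewrite, not a required format), the plan structure listed above can be kept as a checklist; the Python sketch below only echoes the section names.

# Skeleton of the SQA plan sections listed above (the list form is only illustrative)
sqa_plan_outline = [
    "Purpose and scope of the plan",
    "Software engineering work products under SQA purview",
    "Applicable standards and practices",
    "SQA actions and tasks (reviews, audits) and their placement in the process",
    "Tools and methods supporting SQA actions and tasks",
    "Methods for assembling, safeguarding, and maintaining SQA records",
    "Organizational roles and responsibilities relative to product quality",
]

for section in sqa_plan_outline:
    print("-", section)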

The ISO 9000 quality standards

ISO 9000 is a set of international standards on quality management and quality assurance, developed to help companies effectively document the quality system elements to be implemented to maintain an efficient quality system. They are not specific to any one industry and can be applied to organizations of any size.

ISO 9000 can help a company satisfy its customers, meet regulatory
requirements, and achieve continual improvement.

ISO 9000 Series standards

The ISO 9000 family contains these standards:

 ISO 9001:2015: Quality management systems – Requirements
 ISO 9000:2015: Quality management systems – Fundamentals and vocabulary
 ISO 9004:2009: Quality management systems – Managing for the sustained success of an organization (continuous improvement)
 ISO 19011:2011: Guidelines for auditing management systems

ISO 9000 certification

Individuals and organizations cannot be certified to ISO 9000. ISO 9001 is the only standard within the ISO 9000 family to which organizations can certify.

ISO 9000 principles of quality management

The ISO 9000:2015 and ISO 9001:2015 standards are based on seven quality
management principles that senior management can apply for organizational
improvement:

1. Customer focus
o Understand the needs of existing and future customers
o Align organizational objectives with customer needs and expectations
o Meet customer requirements
o Measure customer satisfaction
o Manage customer relationships
o Aim to exceed customer expectations


2. Leadership
o Establish a vision and direction for the organization
o Set challenging goals
o Model organizational values
o Establish trust
o Equip and empower employees
o Recognize employee contributions


3. Engagement of people
o Ensure that people’s abilities are used and valued
o Make people accountable
o Enable participation in continual improvement
o Evaluate individual performance
o Enable learning and knowledge sharing
o Enable open discussion of problems and constraints


4. Process approach
o Manage activities as processes
o Measure the capability of activities
o Identify linkages between activities
o Prioritize improvement opportunities
o Deploy resources effectively


5. Improvement
o Improve organizational performance and capabilities
o Align improvement activities
o Empower people to make improvements
o Measure improvement consistently
o Celebrate improvements


6. Evidence-based decision making


o Ensure the accessibility of accurate and reliable data
o Use appropriate methods to analyze data
o Make decisions based on analysis
o Balance data analysis with practical experience


7. Relationship management
o Identify and select suppliers to manage costs, optimize resources, and
create value
o Establish relationships considering both the short and long term
o Share expertise, resources, information, and plans with partners
o Collaborate on improvement and development activities
o Recognize supplier successes


SOFTWARE ENGINEERING

University Questions

Sub.Code: 7BCA6C3

PART A

1. What is meant by software quality?


2. Define “Trivial projects”.
3. What are the major factors that influence software cost?
4. Write the format of a software requirements specification.
5. What is meant by coupling and cohesion?
6. What is meant by single entry and single exit constructs?
7. What is meant by unit testing?
8. What is meant by configuration management?
9. Define “SQA – plan”.
10. Define “Quality Assurance”.
11. What are the factors that influence quality and productivity?
12. What are the factors to consider in setting project goals?
13. Which principle is used in Delphi cost estimation technique?
14. What are the uses of decision tables?
15. What is meant by Milestone?
16. What are the types of couplings?
17. What is meant by Acceptance testing?
18. State the “Halstead’s effort equation”.
19. What is meant by Formal technical reviews?
20. Write the necessity of quality assurance.
21. What are the factors of medium size projects?
22. What are the factors to consider in project planning?
23. Draw the product work breakdown structure.
24. What is meant by relational notations?
25. Write the fundamental design concept of software.
26. What is meant by test plan?
27. What is meant by Integration testing?
28. What is meant by system testing?
29. What is meant by statistical quality assurance?
30. What are the factors that determine the quality of software?
31. What is meant by Milestone and deliverables?
32. What are the differences between software engineering and other engineering
branches?
33. What are the scale factors used in the COCOMO 2 exponent computation cost
estimation technique?
34. Write the domain requirements in a software development process.
35. What are the real time executives?
36. Define modularization technique.
37. Define configuration management.
38. What is meant by validation and verification?
39. What are the differences between product standard and process standard?
40. What is meant by quality planning?

PART B

1. Explain the productivity factors.


2. Explain the large projects.
3. Explain about staffing level estimation.
4. Explain about state oriented notations.
5. Describe abstraction in detail.
6. Describe the violation of single entry, single exit.
7. Explain the validation testing.
8. Explain strategic issues of software testing.
9. Explain about the formal technical reviews.
10. Describe the quality aspects in detail.
11. Explain the size categories for software products.
12. Explain the phased life-cycle model.
13. Explain the Expert judgment cost estimation technique.
14. Write the format of a software requirements specification.
15. Explain the Abstraction mechanism.
16. Explain the Single entry, Single exit constructs.
17. Explain the management aspect of Software engineering.
18. Explain about the Configuration Management.
19. What are the duties performed by the quality assurance personnel?
20. How is the quality of a software product determined?
21. Explain the distribution of effort.
22. Explain the Hierarchical team structure in detail.
23. Explain the recurrence relations of formal specification techniques.
24. Explain about modularity of software design.
25. Explain about the coding style in detail.
26. Describe about Walk throughs.
27. Explain the managerial aspects of software engineering.
28. Explain the need for quality assurance.
29. Explain about the formal technical reviews.
30. Explain the essential attributes of a good software.
31. Explain about the software requirements specification.
32. What are the cost estimation techniques present in software engineering?
Define all.
33. Explain about the formal specification techniques.
34. Explain the distributed system design.
35. Explain about coding style.
36. Explain the art of debugging.
37. Describe about the quality concepts.
38. Explain about the software review.

PART C

1. Explain the size factors of a software.


2. Explain the software requirements specification in detail.
3. Explain the distributed system design.
4. Explain the Integration Testing in detail.
5. Explain the ISO-9000 quality standards in detail.
6. Explain the problems identified by respondents in Managerial Issues.
7. Describe the Algorithmic cost models.
8. Describe the software design guidelines.
9. Explain about the unit testing and validation testing in detail.
10. Describe about software reviews.
11. Explain the total effort devoted to software.
12. Explain the algorithm cost models.
13. Explain the Milestones and Inspections.
14. Explain the source code metrics.
15. Describe about Waterfall model.
16. Describe about software cost factors.
17. Explain about Walkthrough and Inspections.
18. Explain the strategic issues of software testing.
19. Explain the Formal Technical Review.

