Module – 01
Introduction
1. Motivation:
This chapter is the foundation of the basic software process. The motivation for this course is that many problems can plague a software project; software planning involves estimating how much time, effort, money, and resources will be required to build a specific software system.
2. Learning Objective:
This module explains that everyone involved in the software process—managers, software engineers, and customers—participates in the development of a software project.
3. Prerequisite:
SOOAD, WTL
4. Syllabus:
5. Learning Outcomes:
The student will learn the steps that help a software team understand the various process models.
The student will get an idea of how to identify a problem, assess its probability of occurrence, estimate its impact, and establish a contingency plan should the problem actually occur.
The student will learn the key elements of good software project management.
6. Weightage:
06 hours
7. Abbreviations
8. Key Definitions:
Software process and project metrics: Quantitative measures that enable software engineers to gain insight into the efficacy of the software process and the projects that are conducted using the process as a framework.
Software estimation: Software planning involves estimating how much time, effort,
money, and resources will be required to build a specific software system.
9. Theory:
Overview
The roadmap to building high-quality software products is the software process.
Software processes are adapted to meet the needs of software engineers and
managers as they undertake the development of a software product.
A software process provides a framework for managing activities that can very
easily get out of control.
The best indicators of how well a software process has worked are the quality,
timeliness, and long-term viability of the resulting software product.
This view opposed the uniqueness and "magic" of programming, in an effort to move the development of software from "magic" (which only a select few can do) to "art" (which the talented can do) to "science" (which supposedly anyone can do!). There have been numerous definitions given for software engineering (including that above and below).
• NASA
• Star Wars Defense Initiative
• Social Security Administration
• Financial transaction systems
• Changes in the ratio of hardware to software costs
The software development cycle involves the activities in the production of a software system. Generally, the software development cycle can be divided into the following phases, each of which is heavily affected by the selected software paradigm:
Design
Implementation
Integration
Maintenance
Many people (and not a few professors) believe that prescriptive models are “old school”—
ponderous, bureaucratic document-producing machines.
Figure: A generic process framework — Communication (project initiation, requirements gathering), Planning (estimating, scheduling, tracking), Modeling (analysis, design), Construction (code, test), and Deployment (delivery, support, feedback).
Many people dismiss the waterfall model as obsolete, and it certainly does have problems. But this model can still be used in some situations.
Among the problems that are sometimes encountered when the waterfall model is applied are:
Real projects rarely follow the sequential flow that the model proposes. Change can cause confusion as the project proceeds.
It is difficult for the customer to state all the requirements explicitly, yet the waterfall model demands this.
The customer must have patience. A working version of the program will not be available until late in the project time-span.
Advantages
Disadvantages
1. Projects rarely follow the sequential flow, as we cannot go back to introduce changes.
2. Small changes or errors that arise in the completed software may cause a lot of problems.
3. It often happens that the client is not very clear about what he exactly wants from the software. Any changes that he mentions in between may cause a lot of confusion.
4. The greatest disadvantage of the waterfall model is that until the final stage of the development cycle is complete, a working model of the software does not lie in the hands of the client.
The process models in this category tend to be among the most widely used (and effective) in
the industry.
The incremental model combines elements of the waterfall model applied in an iterative fashion.
The model applies linear sequences in a staggered fashion as calendar time progresses.
Each linear sequence produces deliverable “increments” of the software. (For example, a word processor delivers basic file management and editing in the first increment; more sophisticated editing and document production capabilities in the second increment; and spelling and grammar checking in the third increment.)
When the incremental model is used, the first increment is often a core product. The core product is used by the customer.
As a result of use and / or evaluation, a plan is developed for the next increment.
The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality.
The process is repeated following the delivery of each increment, until the complete product is
produced.
If the customer demands delivery by a date that is impossible to meet, suggest delivering one or
more increments by that date and the rest of the Software later.
Rapid Application Development (RAD) is an incremental software process model that emphasizes
a short development cycle.
If requirements are well understood and project scope is constrained, the RAD process enables a
development team to create a fully functional system within a short period of time.
Drawbacks
1. For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD teams.
2. If developers and customers are not committed to the rapid-fire activities necessary to complete the system in a much abbreviated time frame, RAD projects will fail.
3. If a system cannot properly be modularized, building the components necessary for
RAD will be problematic.
Figure: The RAD model — multiple teams (Team #1 … Team #n) each perform Modeling (business modeling, data modeling, process modeling) and Construction (component reuse, automatic code generation, testing) in parallel, followed by integration, delivery and feedback during Deployment, within a 60–90 day cycle.
Software evolves over a period of time; business and product requirements often change
as development proceeds, making a straight-line path to an end product unrealistic.
Software Engineering needs a process model that has been explicitly designed to
accommodate a product that evolves over time.
Evolutionary process models are iterative. They produce increasingly more complete
versions of the Software with each iteration.
1.2.3.1 Prototyping
Customers often define a set of general objectives for software, but do not identify detailed input, processing, or output requirements.
The prototyping paradigm assists the software engineer and the customer to better understand what is to be built when requirements are fuzzy.
Figure: The prototyping paradigm — Communication, Quick plan, Modeling (quick design), Construction of prototype, and Deployment (delivery and feedback).
The prototyping paradigm begins with communication where requirements and goals of
Software are defined.
Prototyping iteration is planned quickly and modeling in the form of quick design
occurs.
The quick design focuses on a representation of those aspects of the software that will be visible to the customer (the “human interface”).
Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while
enabling the developer to better understand what needs to be done.
The prototype can serve as the “first system”. Both customers and developers like the
prototyping paradigm as users get a feel for the actual system, and developers get to
build Software immediately. Yet, prototyping can be problematic.
The key is to define the rules of the game at the beginning. The customer and the
developer must both agree that the prototype is built to serve as a mechanism for
defining requirements.
The spiral model is an evolutionary Software process model that couples the iterative
nature of prototyping with the controlled and systematic aspects of the waterfall model.
During later iterations, increasingly more complete versions of the engineered system
are produced.
A spiral model is divided into a set of framework activities defined by the software engineering team.
As this evolutionary process begins, the Software team performs activities that are
implied by a circuit around the spiral in a clockwise direction, beginning at the center.
The first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral might be used to develop a prototype
and then progressively more sophisticated versions of the Software.
Each pass through the planning region results in adjustments to the project plan. Cost
and schedule are adjusted based on feedback derived from the customer after delivery.
Unlike other process models that end when Software is delivered, the spiral model can
be adapted to apply throughout the life of the Software.
Figure: A typical spiral model — starting at the center, each circuit passes through communication, planning (estimation, scheduling, risk analysis), modeling (analysis, design), construction (code, test), and deployment (delivery, feedback).
31.1% of projects will be canceled before they ever get completed. 52.7% of
projects will cost 189% of their original estimates.
– The Standish Group
Plus project complexity is increasing
– Demand for quicker delivery of useful systems
– Increasingly vague, volatile requirements
– Greater uncertainty/risk from limited knowledge of:
• Underlying technologies
• Off-the-shelf (OTS) components used
1) the customer did not give all of the expected requirements and
4. Enterprise metrics - longer term metrics that measure the effectiveness of the
enterprise in undertaking IR&D and developing new products
- Breakeven time
- Percent of revenue from products developed in last 4 years
- Proposal win %
- Development cycle time trend (normalized to program complexity)
10. References:
1. Software is a product and can be manufactured using the same technologies used
for other engineering artifacts.
A) True
B) False
2. Software deteriorates rather than wears out because…
A) Software suffers from exposure to hostile environments
B) Defects are more likely to arise after software has been used often
C) Multiple change requests introduce errors in component interactions
D) Software spare parts become harder to order
3. Agility is nothing more than the ability of a project team to respond rapidly to
change.
A) True
B) False
4. Which of the items listed below is not one of the software engineering layers?
A) Process
B) Manufacturing
C) Methods
D) Tools
5. Software engineering umbrella activities are only applied during the initial phases
of software development projects.
A) True
B) False
6. Process models are described as agile because they
A) eliminate the need for cumbersome documentation
B) emphasize maneuverability and adaptability
C) do not waste development time on planning activities
D) make extensive use of prototype creation
7. Which of these terms are level names in the Capability Maturity Model?
A) Performed
B) Repeated
C) Reused
D) Optimized
E) both a and d
8. Software processes can be constructed out of pre-existing software patterns to best
meet the needs of a software project.
A) True
B) False
9. Which of these are standards for assessing software processes?
A) SEI
B) SPICE
C) ISO 19002
D) ISO 9001
E) Both b and d
10. The best software process model is one that has been created by the people who will
actually be doing the work.
A) True
B) False
(Ans: 1-B, 2-C, 3-B, 4-B, 5-B, 6-B, 7-E, 8-A, 9-E, 10-A)
13. Questions
1. Compare Waterfall model and Spiral model of Software development. [May 2010]
[10 marks]
Ans:
Waterfall
– Testing at end
– Changes cannot be incorporated in between
– Cascade effect
– Document driven
– Suited to well-understood products, as requirements cannot change
– Linear sequential model or classical life cycle model
Spiral
– Testing is done at every step
– Iterations
– Not completely document driven
– The requirements are not defined in detail up front
2. Explain the Open Source software life cycle model. [May 2010] [10 marks]
Ans:
The Open Source Definition is used by the Open Source Initiative to determine whether or not a software license can be considered open source. Its criteria include:
i. Free redistribution
ii. Source code
iii. Derived works
iv. Integrity of the author’s source code
v. No discrimination against fields of endeavor
vi. Distribution of license
vii. License must not be specific to a product
viii. License must not restrict other software
ix. License must be technology-neutral
4. What is an Agile Process? Explain any one Agile Process model with its advantages
and disadvantages. [Nov. 2010] [10 marks]
Ans:
Agile software development refers to a group of software development methodologies
based on iterative development, where requirements and solutions evolve through
collaboration between self-organizing, cross-functional teams.
The different Agile Process models are:
Extreme Programming:
XP Practices
Extreme Programmers work together in pairs and as a group, with simple design
and obsessively tested code, improving the design continually to keep it always
just right for the current needs.
The Extreme Programming team keeps the system integrated and running all the
time. The programmers write all production code in pairs, and all work together
all the time. They code in a consistent style so that everyone can understand and
improve all the code as needed.
The Extreme Programming team shares a common and simple picture of what
the system looks like. Everyone works at a pace that can be sustained
indefinitely.
Ans: The development models are the various processes or methodologies that are
being selected for the development of the project depending on the project’s aims and
goals. There are many development life cycle models that have been developed in order
to achieve different required objectives. The models specify the various stages of the
process and the order in which they are carried out.
The selection of the model has a very high impact on the testing that is carried out. It defines the what, where and when of our planned testing, influences regression testing, and largely determines which test techniques to use.
There are various Software development models or methodologies. They are as follows:
Waterfall model
V model
Incremental model
RAD model
Agile model
Iterative model
Spiral model
Choosing the right model for developing a software product or application is very important, because the development and testing processes are carried out based on that model. Companies select the development model that best suits their application or product. These days the Agile methodology is the most widely used model in the market, while the Waterfall model is a very old one. In the Waterfall model, testing starts only after development is completed; because of this, many defects and failures are reported only at the end, and the cost of fixing these issues is high. Hence, these days people prefer the Agile model, in which every sprint ends with a demonstrable feature for the customer, so the customer can see whether the features satisfy their needs.
The V-model (the Verification and Validation model) is also used by many companies for their products. In the V-model the developer's life cycle and the tester's life cycle are mapped to each other, so testing is done side by side with development.
Likewise, the Incremental, RAD, Iterative and Spiral models are also used, based on the requirements of the customer and the needs of the product.
Module – 02
1. Motivation:
Poorly managed software production leads to wastage of money and time.
2. Learning Objective:
This module explains how people involved in software estimation and planning will be able to come out with a cost-effective solution to develop any kind of software.
1. Students will be able to understand software estimation: empirical estimation models and cost/effort estimation models.
2. Students will be able to understand planning: the work breakdown structure and Gantt charts. They will be able to discuss cost and how to manage schedule slippage.
3. Objective:
To get knowledge about the activities involved in project management.
4. Prerequisite:
Phases of software life cycle.
5. Learning Outcomes:
The student will learn steps that help a software team to understand Software
Estimation using empirical estimation models.
The student will learn steps of Planning- work breakdown structure.
The student will understand Gantt charts and be able to discuss cost and schedule slippage.
6. Syllabus:
Module Content (Duration / Self-Study Time as per the course plan):
• Product metrics
• Project Planning
• Project Scheduling
• Project Tracking
Weightage:
04 hours
7. Abbreviations:
(1) PERT - Program Evaluation Review Technique
(2) LOC – Lines Of Code
(3) FP - Function Point
(4) COCOMO - Constructive Cost Model
(5) WBS – Work Breakdown Structure
8. Key Definitions:
1. Planning: Project planning is part of project management, which relates to the
use of schedules such as Gantt charts to plan and subsequently report progress
within the project environment.
2. Estimation: The ability to accurately estimate the time and/or cost taken for a
project to come in to its successful conclusion is a serious problem for software
engineers. The use of a repeatable, clearly defined and well understood software
development process has, in recent years, shown itself to be the most effective
method of gaining useful historical data that can be used for statistical
estimation. In particular, the act of sampling more frequently, coupled with the
loosening of constraints between parts of a project, has allowed more accurate
estimation and more rapid development times.
3. Software metric: Software metric is a measure of some property of a piece of
software or its specifications. Since quantitative measurements are essential in all
sciences, there is a continuous effort by computer science practitioners and
theoreticians to bring similar approaches to software development. The goal is
obtaining objective, reproducible and quantifiable measurements, which may
have numerous valuable applications in schedule and budget planning, cost
estimation, quality assurance testing, software debugging, software performance
optimization, and optimal personnel task assignments.
4. LOC (Lines Of Code): Lines of code is a software metric used to measure the size of a software program by counting the number of lines in the text of the program's source code. SLOC is typically used to predict the amount of effort that will be required to develop a program.
9. Theory
2.1 Software estimation – Empirical estimation models – Cost/Effort
estimation
Software project management begins with a set of activities that are collectively
called project planning. Before the project can begin, the manager and the software
team must estimate the work to be done, the resources that will be required, and the
time that will elapse from start to finish. Whenever estimates are made, the future is looked into and some degree of uncertainty is accepted as a matter of course.
Metrics are quantitative measures used to assess the software process and product. The software metrics domain is partitioned into process, project and product metrics. The product
metrics are private to an individual and are often combined to develop project metrics
that are public to a software team. Project metrics are then consolidated to create
process metrics that are public to the software organization as a whole. An organization
combines metrics that come from different individuals or projects by normalizing the
measures. By doing so, it is possible to create software metrics that enable comparison
to broader organizational averages.
In this section product metric is concentrated. Product metrics help software
engineers gain insight into the design and construction of the software they build.
Product metrics focus on specific attributes of software engineering work products and
are collected as technical tasks (analysis, design, coding, and testing) are being
conducted. Software engineers use product metrics to help them build higher-quality software.
The product Metrics Landscape:
Although a wide variety of metrics taxonomies have been proposed, the following outline addresses the most important metrics areas:
i. Metrics for the analysis model. These metrics address various aspects of the analysis model and include:
Functionality delivered—provides an indirect measure of the functionality that is packaged within the software.
System size-measures of the overall size of the system defined in terms of
information available as part of the analysis model.
Specification quality-provides an indication of the specificity and completeness of
a requirements specification.
ii. Metrics for the design model. These metrics quantify design attributes in a manner
that allows a software engineer to assess design quality. Metrics include:
Architectural metrics-provide an indication of the quality of the architectural
design.
Component-level metrics-measure the complexity of software components and
other characteristics that have a bearing on quality.
Interface design metrics-focus primarily on usability.
Specialized OO design metrics-measure characteristics of classes and their
communication and collaboration characteristics.
iii. Metrics for source code. These metrics measure the source code and can be used to
assess its complexity, maintainability, and testability, among other characteristics:
Halstead metrics-controversial but nonetheless fascinating, these metrics provide
unique measures of a computer program.
Complexity metrics-measure the logical complexity of source code (can also be
considered to be component-level design metrics).
iv. Metrics for testing. These metrics assist in the design of effective test cases and
evaluate the efficacy of testing:
Statement and branch coverage metrics-lead to the design of test cases that provide
program coverage.
Defect-related metrics-focus on bugs found, rather than on the tests themselves.
Testing effectiveness-provide a real-time indication of the effectiveness of tests that
have been conducted.
In-process metrics-process related metrics that can be determined as testing is
conducted.
In many cases, metrics for one model may be used in later software engineering
activities. For example, design metrics may be used to estimate the effort required to
generate source code. In addition, design metrics may be used in test planning and test
case design.
The degree of structural uncertainty also has an effect on estimation risk. In this
context, structure refers to the degree to which requirements have been solidified, the
ease with which functions can be compartmentalized, and the hierarchical nature of the
information that must be processed.
The availability of historical information has a strong influence on estimation
risk. By looking back, we can emulate things that worked and improve areas where
problems arose. When comprehensive software metrics are available for past projects,
estimates can be made with greater assurance, schedules can be established to avoid
past difficulties, and overall risk is reduced.
LOC (Lines Of Code):
Measurements in the physical world can be categorized in two ways: direct
measures (e.g., the length of a bolt) and indirect measures (e.g., the "quality" of bolts
produced, measured by counting rejects). Software metrics can be categorized similarly
(direct and indirect measures).
Direct measures of the software engineering process include cost and effort
applied. Direct measures of the product include lines of code (LOC) produced,
execution speed, memory size, and defects reported over some set period of time. The
cost and effort required to build software, the number of lines of code produced, and
other direct measures are relatively easy to collect, as long as specific conventions for
measurement are established in advance.
LOC comes under the category of size-oriented metrics. Size-oriented software
metrics are derived by normalizing quality and/or productivity measures by
considering the size of the software that has been produced. If a software organization
maintains simple records, a table of size-oriented measures, such as the one shown in
the Figure below, can be created. The table lists each software development project that
has been completed over the past few years and corresponding measures for that
project.
In order to develop metrics that can be assimilated with similar metrics from
other projects, we choose lines of code as our normalization value. From the
rudimentary data contained in the table, a set of simple size-oriented metrics can be
developed for each project:
• Errors per KLOC (thousand lines of code).
• Defects per KLOC.
• $ per LOC.
• Page of documentation per KLOC.
In addition, other interesting metrics can be computed:
• Errors per person-month.
• LOC per person-month.
• $ per page of documentation.
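As a concrete illustration of how such size-oriented metrics can be computed, the following Python sketch uses hypothetical project figures; they are not taken from the table referenced above:

# A minimal sketch of deriving size-oriented metrics from raw project records.
# All figures below are hypothetical and only illustrate the arithmetic.
project = {
    "kloc": 12.1,           # thousands of lines of code delivered
    "effort_pm": 24,        # person-months of effort
    "cost_dollars": 168000,
    "doc_pages": 365,
    "errors": 134,          # errors found before release
    "defects": 29,          # defects reported after release
}

errors_per_kloc = project["errors"] / project["kloc"]
defects_per_kloc = project["defects"] / project["kloc"]
cost_per_loc = project["cost_dollars"] / (project["kloc"] * 1000)
pages_per_kloc = project["doc_pages"] / project["kloc"]
loc_per_pm = (project["kloc"] * 1000) / project["effort_pm"]

print(f"Errors/KLOC: {errors_per_kloc:.1f}")
print(f"Defects/KLOC: {defects_per_kloc:.1f}")
print(f"$/LOC: {cost_per_loc:.2f}")
print(f"Doc pages/KLOC: {pages_per_kloc:.1f}")
print(f"LOC/person-month: {loc_per_pm:.0f}")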
Size-oriented metrics are not universally accepted as the best way to measure the
process of software development. Proponents of the LOC measure claim that LOC is an
“artifact” of all software development projects that can be easily counted. On the other
hand, opponents argue that LOC measures are programming language dependent, that
they penalize well-designed but shorter programs, and that they cannot easily accommodate nonprocedural languages.
LOC and FP (Function Point) estimation are distinct estimation techniques. Yet
both have a number of characteristics in common. The project planner begins with a
bounded statement of software scope and from this statement attempts to decompose
software into problem functions that can each be estimated individually. LOC or FP (the
estimation variable) is then estimated for each function. Alternatively, the planner may
choose another component for sizing such as classes or objects, changes, or business
processes affected.
Baseline productivity metrics (e.g., LOC/person-month or FP/person-month)
are then applied to the appropriate estimation variable, and cost or effort for the
function is derived. Function estimates are combined to produce an overall estimate for
the entire project.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning. When LOC is used as the estimation
variable, decomposition is absolutely essential and is often taken to considerable levels of detail. For FP estimates, decomposition works differently. Rather than focusing on function, each of the information domain characteristics—inputs, outputs, data files, inquiries, and external interfaces—is estimated, along with the 14 complexity adjustment values (discussed below). The resultant estimates can then be used to derive an FP value that can be tied to past data and used to generate an estimate.
Regardless of the estimation variable that is used, the project planner begins by
estimating a range of values for each function or information domain value. Using
historical data or (when all else fails) intuition, the planner estimates an optimistic, most
likely, and pessimistic size value for each function or count for each information
domain value. An implicit indication of the degree of uncertainty is provided when a
range of values is specified.
A three-point or expected value can then be computed. The expected value for the
estimation variable (size), S, can be computed as a weighted average of the optimistic
(sopt), most likely (sm), and pessimistic (spess) estimates. For example,
S = (sopt + 4sm + spess)/6 -------------------------------- (1)
gives heaviest credence to the “most likely” estimate and follows a beta probability
distribution. We assume that there is a very small probability the actual size result will
fall outside the optimistic or pessimistic values.
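A minimal sketch of equation (1); the optimistic, most likely and pessimistic values below are illustrative assumptions (chosen so the result matches the 6,800 LOC figure quoted later for the 3D geometric analysis function):

def expected_size(s_opt: float, s_m: float, s_pess: float) -> float:
    """Three-point (beta-distribution weighted) expected size, per equation (1)."""
    return (s_opt + 4 * s_m + s_pess) / 6

# Hypothetical optimistic, most likely and pessimistic LOC estimates for one function.
print(expected_size(4600, 6900, 8600))   # -> 6800.0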
Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied. One can’t be sure that the estimates
are correct. Any estimation technique, no matter how sophisticated, must be cross-
checked with another approach. Even then, common sense and experience must prevail.
An example of LOC-Based estimation:
As an example of LOC and FP problem-based estimation techniques, a software
package to be developed for a computer-aided design application for mechanical
components is considered. A review of the System Specification indicates that the
software is to execute on an engineering workstation and must interface with various
computer graphics peripherals including a mouse, digitizer, high resolution color
display and laser printer.
Using the System Specification as a guide, a preliminary statement of software scope can
be developed:
“The CAD software will accept two- and three-dimensional geometric data from an
engineer. The engineer will interact and control the CAD system through a user
interface that will exhibit characteristics of good human/machine interface design. All
geometric data and other supporting information will be maintained in a CAD
database. Design analysis modules will be developed to produce the required output,
which will be displayed on a variety of graphics devices. The software will be designed
to control and interact with peripheral devices that include a mouse, digitizer, laser
printer, and plotter.”
This statement of scope is preliminary—it is not bounded. Every sentence would
have to be expanded to provide concrete detail and quantitative bounding. For
example, before estimation can begin the planner must determine what “characteristics
of good human/machine interface design” means or what the size and sophistication of
the “CAD database” are to be.
For our purposes, it is assumed that further refinement has occurred and that the
following major software functions are identified:
Applying equation (1) mentioned above, the expected value for the 3D geometric
analysis function is 6800 LOC. Other estimates are derived in a similar fashion. By
summing vertically in the estimated LOC column, an estimate of 33,200 lines of code is
established for the CAD system.
A review of historical data indicates that the organizational average productivity
for systems of this type is 620 LOC/pm. Based on a burdened labor rate of $8000 per
month, the cost per line of code is approximately $13. Based on the LOC estimate and
the historical productivity data, the total estimated project cost is $431,000 and the
estimated effort is 54 person-months (the estimates are rounded-off here).
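The arithmetic of this example can be reproduced in a few lines; only the totals quoted in the text are used, since the per-function table is not reproduced here:

# Sketch of LOC-based cost/effort estimation using the figures quoted in the text.
estimated_loc = 33200          # summed expected LOC for all CAD functions
productivity_loc_pm = 620      # organizational average, LOC per person-month
burdened_rate = 8000           # dollars per person-month

cost_per_loc = burdened_rate / productivity_loc_pm   # ~ $12.9/LOC (the text rounds to $13)
total_cost = estimated_loc * cost_per_loc             # ~ $430,000 (the text quotes $431,000 after rounding)
effort_pm = estimated_loc / productivity_loc_pm       # ~ 53.5, rounded to 54 person-months

print(round(cost_per_loc, 2), round(total_cost), round(effort_pm))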
Function Point (FP):
As mentioned earlier, software metrics can be categorized as direct measures and
indirect measures. As discussed before direct measures of the product include LOC.
Number of user inputs. Each user input that provides distinct application oriented data
to the software is counted. Inputs should be distinguished from inquiries, which are
counted separately.
Number of user outputs. Each user output that provides application oriented
information to the user is counted. In this context output refers to reports, screens, error
messages, etc. Individual data items within a report are not counted separately.
Number of user inquiries. An inquiry is defined as an on-line input that results in the
generation of some immediate software response in the form of an on-line output. Each
distinct inquiry is counted.
Number of files. Each logical master file (i.e., a logical grouping of data that may be
one part of a large database or a separate file) is counted.
Number of external interfaces. All machine readable interfaces (e.g., data files on
storage media) that are used to transmit information to another system are counted.
Once these data have been collected, a complexity value is associated with each
count. Organizations that use function point methods develop criteria for determining
whether a particular entry is simple, average, or complex. Nonetheless, the
determination of complexity is somewhat subjective.
To compute function points (FP), the following relationship is used:
FP = count total × [0.65 + 0.01 ×Σ(Fi)] ------------------------------- (2)
where count total is the sum of all FP entries obtained from the figure for computing
function points shown above.
The Fi (i = 1 to 14) are "complexity adjustment values" based on responses to the
following questions:
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple
screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
Each of these questions is answered using a scale that ranges from 0 (not
important or applicable) to 5 (absolutely essential). The constant values in equation (2)
and the weighting factors that are applied to information domain counts are
determined empirically.
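A minimal sketch of equation (2). The simple/average/complex weights follow the commonly published function-point weighting table, and the sample counts are illustrative assumptions; neither is given in this text:

# Sketch of the FP computation in equation (2). Weights and counts are illustrative.
WEIGHTS = {
    "inputs":     {"simple": 3, "average": 4, "complex": 6},
    "outputs":    {"simple": 4, "average": 5, "complex": 7},
    "inquiries":  {"simple": 3, "average": 4, "complex": 6},
    "files":      {"simple": 7, "average": 10, "complex": 15},
    "interfaces": {"simple": 5, "average": 7, "complex": 10},
}

def function_points(counts, complexity, f_values):
    """counts: {domain: n}; complexity: {domain: 'simple'|'average'|'complex'};
    f_values: the fourteen complexity adjustment answers, each on a 0..5 scale."""
    count_total = sum(n * WEIGHTS[d][complexity[d]] for d, n in counts.items())
    return count_total * (0.65 + 0.01 * sum(f_values))

counts = {"inputs": 24, "outputs": 16, "inquiries": 22, "files": 4, "interfaces": 2}
complexity = {d: "average" for d in counts}
f_values = [3] * 14    # every adjustment question answered "average importance"
print(round(function_points(counts, complexity, f_values)))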
Once function points have been calculated, they are used in a manner analogous
to LOC as a way to normalize measures for software productivity, quality, and other
attributes:
• Errors per FP.
• Defects per FP.
• $ per FP.
• Pages of documentation per FP.
• FP per person-month.
An example of FP-based estimation:
The organizational average productivity for systems of this type is 6.5 FP/pm. Based on
a burdened labor rate of $8000 per month, the cost per FP is approximately $1230. Based
on the FP estimate and the historical productivity data, the total estimated project cost
is $461,000 and the estimated effort is 58 person-months.
COCOMO Model:
Like function points (described earlier), the object point is an indirect software
measure that is computed using counts of the number of (1) screens (at the user
interface), (2) reports, and (3) components likely to be required to build the application.
Each object instance (e.g., a screen or report) is classified into one of three complexity
levels (i.e., simple, medium, or difficult) using criteria suggested by Boehm. In essence,
complexity is a function of the number and source of the client and server data tables
that are required to generate the screen or report and the number of views or sections
presented as part of the screen or report.
Once the productivity rate has been determined, an estimate of project effort can be
derived as:
estimated effort = NOP/PROD
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and
adjustment procedures are required.
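A hedged sketch of the object-point computation described above. The complexity weights, the reuse adjustment (NOP = object points × (100 − %reuse)/100), and the productivity figure follow commonly published COCOMO II application-composition tables and are assumptions, not values from this text:

# Sketch of the COCOMO II application-composition (object point) estimate.
OBJECT_WEIGHTS = {
    "screen":    {"simple": 1, "medium": 2, "difficult": 3},
    "report":    {"simple": 2, "medium": 5, "difficult": 8},
    "component": {"simple": 10, "medium": 10, "difficult": 10},   # 3GL components weigh 10
}

def estimated_effort(instances, reuse_percent, prod):
    """instances: list of (kind, complexity); reuse_percent: expected % reuse;
    prod: productivity rate in new object points per person-month."""
    object_points = sum(OBJECT_WEIGHTS[kind][cplx] for kind, cplx in instances)
    nop = object_points * (100 - reuse_percent) / 100    # new object points
    return nop / prod                                     # person-months

app = [("screen", "simple")] * 6 + [("report", "medium")] * 3 + [("component", "difficult")]
print(round(estimated_effort(app, reuse_percent=20, prod=13), 1))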
Project management involves the planning, monitoring, and control of the people,
process, and events that occur as software evolves from a preliminary concept to an
operational implementation.
2.2.1 Planning
Software project management begins with a set of activities that are collectively called project planning. Before the project can begin, the manager and the software team must estimate the work to be done, the resources that will be required, and the time that will elapse from start to finish.
Project planning focuses on the definition phase- the definition of the problem,
planning a solution and estimating resources-and results in the following work
products:
• The problem statement is a short document describing the problem the system
should address, the target environment, client deliverables, and acceptance
criteria. The problem statement is an initial description and is the seed for the
Project Agreement, formalizing the common understanding of the project by the
client and management, and for the Requirements Analysis Document (RAD), a
precise description of the system under development.
• The top-level design represents the initial decomposition of the system into
subsystems. It is used to assign subsystems to individual teams. The top-level
design will be the seed for the System Design Document (SDD), a precise
description of the software architecture.
• The Software Project Management Plan (SPMP) describes all the managerial
aspects of the project, in particular the work breakdown structure, the schedule,
organization, work packages, and budget.
Moving the design of the software architecture to the beginning is called architecture-
centric project management. However, this is a deviation from long-standing software
project management practices. In university environments and in small company
projects, the software project manager and software architect roles are often assumed by
the same person. In large projects the roles should be assumed by different people. The
software architect and project manager should work closely together as a decision-
making team, dividing responsibilities between management and technical decisions.
Developing the problem statement
The problem statement is developed by the project manager and the client as a
mutual understanding of the problem to be addressed by the system. The problem
statement describes the current situation, the functionality it should support, and the
environment in which the system will be deployed. It also defines the deliverables
expected by the client, together with delivery dates and a set of acceptance criteria. The
problem statement may also specify constraints on the development environment, such
as the programming language to be used. The problem statement is not a precise or
complete specification of the system. Instead, it is a high-level summary of two project
documents yet to be developed, the RAD and the SPMP. The following figure shows an
outline for a problem statement:
The first section describes the problem domain. Section 2 of the problem statement
provides example scenarios describing the interaction between the users and the system
for the essential functionality. Scenarios are used for describing both the current
situation and the future situation. Section 3 summarizes the functional requirements of
the system. Section 4 summarizes the nonfunctional requirements including constraints
placed by the client, such as the choice of the programming language, component reuse,
or selection of a separate testing team. Section 5 describes the deployment environment,
including platform requirements, a description of the physical environment, users, and
so on. Section 6 lists client deliverables and their associated deadlines.
Defining the top-level design
The top-level design describes the software architecture of the system. The
subsystem decomposition should be high level, focusing on functionality. It should be
kept constant until the analysis phase produces a stable system model. The subsystem
decomposition can and should be revisited during subsystem design, and new
subsystems are usually created during the system design phase. In this case the
organization might have to be changed to reflect the new design. The software architect
identifies the major subsystems and their services, but does not yet define their
interfaces at this point. The subsystem decomposition is refined and modified later
during system design.
Identifying the work breakdown structure
There are several different approaches to developing the work breakdown
structure (WBS) for a project. The most commonly used approach is functional
decomposition based on the software process. For example, the work breakdown
structure shown in the below table is based on a functional breakdown of the work to
be performed.
A functional decomposition based on the functions of the product itself can also
be used to define the tasks. This is often used in high-risk projects where the system
functionality is released incrementally in a set of prototypes.
An object-oriented work breakdown structure is also possible. For example, if a
product consists of five components, an object-oriented WBS would include five tasks
associated with the construction of each of these components. Specifying the
development of these subsystems of the software architecture as tasks comes quite
naturally in object-oriented software projects.
Another decomposition is based on geographical areas. If the project has
developer teams in different geographical regions, the tasks might be identified for each
of these regions. Another WBS is based on organizational units. These can be the
organizational units of the company-for example, marketing, operations, and sales-or
they can be based on the people in the project.
Table 2.3: Example of functional work breakdown structure for a simple ATM
No matter what approach is used, the ultimate goal is to decompose the total work into tasks that are small enough to be doable, whose duration can easily be estimated, and that can be assigned to a single person.
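As a small illustration of this decomposition goal, a work breakdown structure can be captured as a nested structure of tasks with estimated durations and owners; all names and figures below are hypothetical:

# A hypothetical functional WBS fragment, captured as nested (name, body) tuples.
# Leaf tasks carry an estimated duration in person-days and a single owner.
wbs = ("ATM software", [
    ("Requirements", [
        ("Interview bank staff", {"days": 3, "owner": "analyst"}),
        ("Write problem statement", {"days": 2, "owner": "analyst"}),
    ]),
    ("Design", [
        ("Top-level design", {"days": 4, "owner": "architect"}),
        ("Database subsystem design", {"days": 5, "owner": "developer"}),
    ]),
])

def total_days(node):
    """Sum estimated durations over all leaf tasks of a WBS node."""
    name, body = node
    if isinstance(body, dict):        # leaf task
        return body["days"]
    return sum(total_days(child) for child in body)

print(total_days(wbs))   # -> 14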
Creating the initial schedule
After the temporal dependencies are established in the task model, a schedule is
created. This requires the estimation of durations for each of the tasks. Estimated times
can of course be based on prior experience of the manager or the estimator, but they
often do not give the needed precision. It is impossible to come up with a precise model
for the complete project. One solution is to start with an initial schedule representing
deadlines mutually agreed by the client and the project manager. These dates are
generous enough that, even in the event of change, they can still be met.
The initial version of the SPMP should be viewed as a proposed version to be
reviewed by the software architect and the subsystem teams before it can be
considered binding for all project participants. The individual teams should start
working on a revision of the initial version of the subsystem decomposition right after
the initial team meeting, focusing on their subsystem and work breakdown structure for
the development of the subsystem.
This planning activity has to be done by the team in parallel with ongoing
requirements analysis and system design activities. The newly found tasks should be
presented during a formal planning review scheduled during or near the end of the
analysis phase. The result of the planning review should be revised work breakdown
structure and software architecture. This revision, In turn, is the basis for the second
version of the SPMP, which should be baselined and used as the basis for the project
agreement with the client. Scheduling requires a continuous trade-off between
resources, time, and implemented functionality and should be done regularly by the
project manager.
2.2.2 Scheduling:
A schedule is the mapping of tasks onto time: each task is assigned start and end
times. This allows us to plan the deadlines for individual deliverables. The two most
often used diagrammatic notations for schedules are PERT (Program Evaluation
Review Technique) and Gantt charts. A Gantt chart is a compact way to present the
schedule of a software project along the time axis. A Gantt chart is a bar graph on which
the horizontal axis represents time and the vertical axis lists the different tasks to be
done. Tasks are represented as bars whose length corresponds to the planned duration
of the task. A schedule for the database subsystem example is represented as a Gantt chart in the following figure; before that, the tasks for the database subsystem are given in a table.
Table 2.4: Examples of tasks for the realization of the database subsystem
Figure 2.6: An example of schedule for the database subsystem (Gantt chart)
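Since the Gantt chart figure itself is not reproduced here, the following sketch (with hypothetical tasks, start days and durations) shows the idea: tasks on the vertical axis, time on the horizontal axis, and a bar whose length corresponds to the planned duration:

# A minimal text-mode Gantt chart: one row per task, one column per day.
# Task names, start days and durations are hypothetical.
tasks = [
    ("Define DB schema",       0, 4),   # (name, start day, duration in days)
    ("Implement DB access",    4, 6),
    ("Unit-test DB subsystem", 8, 4),
]

horizon = max(start + dur for _, start, dur in tasks)
for name, start, dur in tasks:
    bar = " " * start + "#" * dur + " " * (horizon - start - dur)
    print(f"{name:<24}|{bar}|")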
2.2.3 Tracking
Consider that one wishes to document when he/she expects each task to be
completed. In this example, this has been done by filling in the week number in the
planned completion column. In order to track progress against this plan, one should
have an idea of what percentage complete he/she expects to be at the end of each week.
A cumulative total of the planned value is kept in the cumulative planned value
column. This column can be used to determine what percentage of the project will be
complete by the end of week 3, for example.
It can be seen that the sum of all of the planned values for tasks planned to be
complete before the end of week 3 is approx. 48.81% (because the tasks are in order, the
cumulative planned value for the last week 3 entry gives us this number - if the tasks
are listed unordered by completion date, the sum must be calculated by adding the
planned values of all tasks due to be completed before the date in question).
Consider that the end of the second week is being reached on this project and
one need to know the progress toward completion on time. Also consider that
everything has not been done in the order in which it was planned, so this may not be
easy to guess at. In this case the concept of an earned value can be used. Earned value is
assigned only to completed tasks. If a task is completed with a planned value of p%,
then the earned value for that task is p%. Incomplete tasks have no earned value.
In the example below (shown by the following figure), compiled at the end of the
second week, it is observed that the group has completed the following tasks with their
associated earned value:
– Planning – 7.81%
– Scope – 2.64%
– Product Features – 2.64%
– User Profile – 1.98%
– Assumptions – 2.64%
– UI requirements – 5.21%
By summing the earned values for the completed tasks it is determined that at the end
of week 2, the cumulative earned value (the percent complete) is
7.81+2.64+2.64+1.98+2.64+5.21 ≈ 22.91%. Now, knowing the earned value, the project group can determine whether they are ahead of or behind schedule.
Looking at the cumulative planned value in the table, it is observed that to be on schedule at the end of week 2, the group should have been 33.19% complete. By the above calculation they are actually only 22.91% complete; hence the project is falling behind schedule. Knowing this allows the project team to take action. Such action may involve adjusting work practices, negotiating with the client for an extension to the deadline, and so on.
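The schedule-tracking arithmetic above can be captured in a few lines of Python. The earned values are the ones quoted in the text; the cumulative planned value of 33.19% is taken from the (not reproduced) table:

# Earned-value check at the end of week 2, using the figures quoted in the text.
earned = {                        # completed tasks and their planned values (%)
    "Planning": 7.81,
    "Scope": 2.64,
    "Product Features": 2.64,
    "User Profile": 1.98,
    "Assumptions": 2.64,
    "UI requirements": 5.21,
}
cumulative_earned_value = sum(earned.values())    # ~22.9% complete (the text rounds to 22.91%)
cumulative_planned_value = 33.19                  # % planned to be complete by end of week 2

schedule_variance = cumulative_earned_value - cumulative_planned_value
status = "behind" if schedule_variance < 0 else "ahead of or on"
print(f"{cumulative_earned_value:.2f}% complete vs {cumulative_planned_value:.2f}% planned -> {status} schedule")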
1. Software project estimation techniques can be broadly classified under which of the
following:
a. automated processes
b. decomposition techniques
c. empirical models
d. regression models
e. both b and c
2. The size estimate for the software product to be built must be based on a direct
measure like LOC
a. True
b. False
3. Problem-based estimation is based on problem decomposition which focuses on
a. information domain values
b. project schedule
c. software functions
d. process activities
e. both a and c
4. LOC-based estimation techniques require problem decomposition based on
a. information domain values
b. project schedule
c. software functions
d. process activities
5. FP-based estimation techniques require problem decomposition based on
a. information domain values
b. project schedule
c. software functions
d. process activities
6. Process-based estimation techniques require problem decomposition based on
a. information domain values
b. project schedule
c. software functions
d. process activities
e. both c and d
7. Unlike a LOC or function point each person’s “use-case” is exactly the same size
a. True
b. False
8. When agreement between estimates is poor the cause may often be traced to
‘inadequately defined project scope’ or ‘inappropriate productivity data’
a. True
b. False
12. Questions:
This is the top-level model. Basic COCOMO is applicable to the large majority of software projects. The model estimates cost using one of three different development modes: organic, semidetached, and embedded.
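The three modes can be illustrated with the standard Basic COCOMO effort equation E = a·(KLOC)^b; the coefficients below are Boehm's published values and the 32 KLOC size is hypothetical, neither being given in this text:

# Basic COCOMO effort estimate (person-months) for the three development modes.
MODES = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def basic_cocomo_effort(kloc: float, mode: str) -> float:
    a, b = MODES[mode]
    return a * kloc ** b

for mode in MODES:
    print(f"{mode:<13} {basic_cocomo_effort(32, mode):.1f} person-months")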
3. Explain how project scheduling and tracking is done for a software development
project.
Ans:
A schedule is the mapping of tasks onto time: each task is assigned start and end times.
This allows us to plan the deadlines for individual deliverables. The two most often
used diagrammatic notations for schedules are PERT (Program Evaluation Review
Technique) and Gantt charts.
3 Risk Management
1. Motivation:
The following are the motivational factors in risk management
3. Objective:
Risks are potential problems that might affect the successful completion of a
software project. Risks involve uncertainty and potential losses. Risk analysis and
management are intended to help a software team understand and manage
uncertainty during the development process. The important thing is to remember
that things can go wrong and to make plans to minimize their impact when they
do. The work product is called a Risk Mitigation, Monitoring, and Management Plan
(RMMM).
3. Prerequisite:
Knowledge of planning and estimation of software.
4. Syllabus:
5. Learning Outcomes:
Risks involve uncertainty and potential losses. Risk analysis and management
are intended to help a software team understand and manage uncertainty
during the development process. The important thing is to remember that
things can go wrong and to make plans to minimize their impact when
they do. The work product is called a Risk Mitigation, Monitoring and
Management Plan (RMMM).
6. Weightage: Marks
7. Abbreviations:
8. Key Definitions:
Risk: Risk is an undesired event or circumstance that occurs while a project is underway.
Uncertainty—the risk may or may not happen; that is, there are no 100% probable risks.
Loss—if the risk becomes a reality, unwanted consequences or losses will occur.
9. Theory
3.1 Risk management:
• It is necessary for the project manager to anticipate and identify different risks
that a project may be susceptible to.
Risk Management aims at reducing the impact of all kinds of risk that may affect a project by identifying, analyzing and managing them. Risk analysis and
management are a series of steps that help a software team to understand and
manage uncertainty.
Types of risks
Project risks are those which threaten the project plan. If project risks become real, it is likely that the project schedule will slip and that costs will increase.
Technical risks are the risks that threaten the quality and timeliness of the
software to be produced. If a technical risk becomes a reality, implementation
may become difficult or impossible.
Business risks threaten the viability of the software to be built and often
jeopardize the project or the product.
1. Risk identification
Product size—risks associated with the overall size of the software to be built
or modified.
Business impact—risks associated with constraints imposed by management or
the marketplace.
Staff size and experience—risks associated with the overall technical and
project experience of the software engineers who will do the work.
Performance risk—the degree of uncertainty that the product will meet its
requirements and be fit for its intended use.
Cost risk—the degree of uncertainty that the project budget will be maintained.
Support risk—the degree of uncertainty that the resultant software will be easy to
correct, adapt, and enhance.
The impact of each risk driver on the risk component is divided into one of four
impact categories—negligible, marginal, critical, or catastrophic.
2. Risk Projection
Impact Assessment:
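Risk projection (impact assessment) rates each risk by the probability that it is real and by the consequences of the problems associated with it. As an illustration, a risk table can be built as in the sketch below; the risks and figures are hypothetical, and the risk exposure measure RE = probability × cost is a commonly used projection measure rather than one defined in this text:

# A hypothetical risk table: each risk carries a probability (0..1), an impact
# category, and an estimated cost if it occurs. Risk exposure RE = P * C.
risks = [
    {"name": "Key staff leave mid-project",        "probability": 0.30, "impact": "critical", "cost": 45000},
    {"name": "Reusable components are defect-prone", "probability": 0.60, "impact": "marginal", "cost": 15000},
    {"name": "Customer changes core requirements",  "probability": 0.70, "impact": "critical", "cost": 60000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["cost"]

# Rank risks so mitigation effort goes to the highest exposures first.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["name"]:<42} P={r["probability"]:.2f} impact={r["impact"]:<10} RE=${r["exposure"]:,.0f}')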
The RMMM plan consists of risk avoidance, risk monitoring, risk management and
contingency planning
(2) to ensure that risk aversion steps defined for the risk are being properly applied;
and
(3) to collect information that can be used for future risk analysis.
04 Software Configuration Management
_______________________________
1. Motivation:
This chapter builds on the foundation of basic software engineering; the motivation of this module is to understand how the software engineering process is involved with SCM.
2. Learning Objective:
This module explains how people involved in Software Configuration Management will be able to understand the factors involved in configuration management.
1. Students will be able to understand the Software configuration management
process, Identification of objects in software configuration.
2. Students will be able to understand Managing and controlling changes,
Managing and controlling versions.
3. Students will be able to understand the configuration audit, status reporting,
SCM standards and SCM issues.
3. Objective:
This module explains set of activities designed to control change by identifying the
work products that are likely to change, establishing relationships among them,
defining mechanisms for managing different versions of these work products,
controlling the changes imposed, and auditing and reporting on the changes made.
4. Prerequisite:
Workflow of different phases of software life cycle
5. Syllabus:
5. Learning Outcomes:
Students will be able to perform software configuration and also be able to identify the objects in software configuration.
Students will be able to manage and control changes during the configuration process.
Students will be able to understand configuration audits and prepare status reports.
Students will be able to understand software engineering process involved with
SCM.
6. Weightage: 10 marks
7. Abbreviation
8. Key Definition:
Software Configuration Objects: To control and manage configuration items, each must
be named and managed using an object-oriented approach. Basic objects are created by
software engineers during analysis, design, coding, or testing.
Version Control: Version control is to identify and manage project elements as they
change over time.
Change Control: Change control refers to the policy, rules, procedures, information,
activities, roles, authorization levels and states relating to creation, updates, approvals,
tracking and archiving of items involved with the implementation of a change process.
9. Theory
General Points
Every software engineer has to be concerned with how changes made to work
products are tracked and propagated throughout a project.
Support is a set of software engineering activities that occur after software has
been delivered to the customer and put into operation.
The output of the software process is information that may be divided into three
broad categories:
Computer programs (both source level and executable forms);
Documents that describe the computer programs (targeted at both technical
practitioners and users),
Data (contained within the program or external to it).
The items that comprise all information produced as part of the software process
are collectively called a software configuration.
As the software process progresses, the number of software configuration items
(SCIs) grows rapidly.
A System Specification spawns a Software Project Plan and Software
Requirements Specification (as well as hardware related documents).
These in turn spawn other documents to create a hierarchy of information. If
each SCI simply spawned other SCIs, little confusion would result.
Unfortunately, another variable enters the process—change.
In the extreme, an SCI could be considered to be a single section of a large
specification or one test case in a large suite of tests.
More realistically, an SCI is a document, an entire suite of test cases, or a named
program component (e.g., a C++ function or an Ada package).
In addition to the SCIs that are derived from software work products, many
software engineering organizations also place software tools under configuration
control.
That is, specific versions of editors, compilers, and other CASE tools are "frozen"
as part of the software configuration.
Because these tools were used to produce documentation, source code, and data,
they must be available when changes to the software configuration are to be
made.
Although problems are rare, it is possible that a new version of a tool (e.g., a
compiler) might produce different results than the original version.
For this reason, tools, like the software that they help to produce, can be
baselined as part of a comprehensive configuration management process.
In reality, SCIs are organized to form configuration objects that may be cataloged
in the project database with a single name.
Baselines
After SCIs are reviewed and approved, they are placed in a project database (also
called a project library or software repository), where they become baselines.
A baselined SCI can be copied out of the project database for modification.
However, this extracted SCI can be modified only if SCM controls (discussed
later in this chapter) are followed.
The arrows in Figure 5.2 illustrate the modification path for a baselined SCI.
To control and manage configuration items, each must be named and managed
using an object-oriented approach.
Basic objects are created by software engineers during analysis, design, coding,
or testing.
Aggregate objects are collections of basic objects and other aggregate objects; for
example, a Design Specification is an aggregate object.
Each configuration object has a set of distinct features that identify it: a unique name,
a description, and a list of resources.
2. Version Control
One or more attributes are assigned for each variant. For example, a color attribute
could be used to define which entity should be included when color displays are to
be supported.
Another way to conceptualize the relationship between entities, variants and
versions (revisions) is to represent them as an object pool. Referring to Figure 5.4,
the relationship between configuration objects and entities, variants and versions
can be represented in a three-dimensional space.
An entity is composed of a collection of objects at the same revision level. A
variant is a different collection of objects at the same revision level and therefore
coexists in parallel with other variants.
A new version is defined when major changes are made to one or more objects.
A number of different automated approaches to version control have been
proposed over the past decade. The primary difference in approaches is the
sophistication of the attributes that are used to construct specific versions and
variants of a system and the mechanics of the process for construction.
3. Change Control
Change request is submitted and evaluated to assess technical merit and impact
on the other configuration objects and budget
Change report contains the results of the evaluation
Change control authority (CCA) makes the final decision on the status and
priority of the change based on the change report
Engineering change order (ECO) is generated for each change approved
(describes change, lists the constraints, and criteria for review and audit)
Object to be changed is checked-out of the project database subject to access
control parameters for the object
Modified object is subjected to appropriate SQA and testing procedures
Modified object is checked-in to the project database and version control
mechanisms are used to create the next version of the software
Synchronization control is used to ensure that parallel changes made by different
people don’t overwrite one another
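The sequence of steps above can be pictured as a small state machine. The following Java sketch is illustrative only (the state and method names are assumptions, not taken from any specific tool); it simply enforces that an object is checked out before modification and passes SQA before check-in.

// Hypothetical change-control states for a configuration object.
enum ChangeState { BASELINED, CHECKED_OUT, UNDER_SQA_REVIEW, CHECKED_IN }

class ChangeControlledObject {
    private ChangeState state = ChangeState.BASELINED;

    // Check-out is allowed only from the baselined state, modelling the
    // access-control step described above.
    void checkOut() {
        if (state != ChangeState.BASELINED)
            throw new IllegalStateException("object is already checked out");
        state = ChangeState.CHECKED_OUT;
    }

    // The modified object must pass SQA and testing before check-in.
    void submitForReview() {
        if (state != ChangeState.CHECKED_OUT)
            throw new IllegalStateException("nothing has been checked out");
        state = ChangeState.UNDER_SQA_REVIEW;
    }

    // Check-in creates the next version under version control.
    void checkIn() {
        if (state != ChangeState.UNDER_SQA_REVIEW)
            throw new IllegalStateException("SQA review has not been completed");
        state = ChangeState.CHECKED_IN;
    }
}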
4. Configuration Audit
Identification, version control, and change control help the software developer to
maintain order in what would otherwise be a chaotic and fluid situation.
To ensure that the change has been properly implemented, two activities are conducted:
o formal technical reviews, and
o the software configuration audit.
The reviewers assess the SCI to determine consistency with other SCIs,
omissions, or potential side effects.
A formal technical review should be conducted for all but the most trivial
changes.
A software configuration audit complements the formal technical review by
assessing a configuration object for characteristics that are generally not
considered during review.
The audit asks and answers the following questions:
Has the change specified in the ECO (Engineering change order) been
made? Have any additional modifications been incorporated?
Has the software process been followed and have software engineering
standards been properly applied?
Has the change been "highlighted" in the SCI? Have the change date and
change author been specified? Do the attributes of the configuration object
reflect the change?
Have SCM procedures for noting the change, recording it, and reporting it
been followed?
In some cases, the audit questions are asked as part of a formal technical review.
However, when SCM is a formal activity, the SCM audit is conducted separately by
the quality assurance group.
5. Status Reporting
Each time an SCI is assigned new or updated identification, a configuration status reporting (CSR) entry is made.
Each time a configuration audit is conducted, the results are reported as part of the
CSR task.
Output from CSR may be placed in an on-line database, so that software developers
or maintainers can access change information by keyword category.
In addition, a CSR report is generated on a regular basis and is intended to keep
management and practitioners apprised of important changes.
Configuration status reporting plays a vital role in the success of a large software
development project.
When many people are involved, it is likely that "the left hand not knowing what the
right hand is doing" syndrome will occur.
Two developers may attempt to modify the same SCI with different and conflicting
intents.
A software engineering team may spend months of effort building software to an
obsolete hardware specification.
Change control combines human procedures with automated tools to achieve effective
control of change.
10. References:
a) True b) False
a) design specification
b) marketing data
c) organizational structure description
d) test plans
e) both b and c
5. Modern software engineering practice suggests that a software team maintain
SCIs in a project database or repository
a) True b) False
6. A data repository meta model is used to determine how
a) information is stored in the repository
b) data integrity can be maintained
c) the existing model can be extended
d) All of the above
7. Many data repository requirements are the same as those for a typical
database application.
a) True b) False
a) True b) False
9. Which of the following tasks is not part of software configuration
management?
a) change control
b) reporting
c) statistical quality control
d) version control
10. A basic configuration object is a __________ created by a software engineer
during some phase of the software development process.
a) program data structure
b) a software component
c) unit of information
d) all of the above
11. A new __________ is defined when major changes have been made to one or
more Configuration objects
a) Entity
b) Item
c) Variant
d) Version
12. Change control is not necessary if a development group is making use of an
automated project database tool
a) True b) False
13. When software configuration management is a formal activity, the software
configuration audit is conducted by the
a) development team
b) quality assurance group
c) senior managers
d) testing specialists
14. The primary purpose of configuration status reporting is to
a) allow revision of project schedules and cost estimates by project
managers
b) evaluate the performance of software developers and organizations
c) make sure that change information is communicated to all affected
parties
d) none of the above
15. Configuration issues that need to be considered when developing Web Apps
include:
a) Content
b) Cost
c) People
d) Politics
e) a, b, and c
(Ans : 1-E,2-B,3-B,4-E,5-A,6-B,7-B,8-A,9-C,10-D,11-D,12-B,13-B,14-C,15-E)
05 Software Design Specification
1. Motivation:
2. Learning Objective:
This module explains that people involved in the software design process will be able
to understand the design process during the development of the software.
1. Students will be able to understand Software Design – Abstraction, Modularity.
2. Students will be able to understand the Software Architecture – Effective
modular design, Cohesion and Coupling, Example of code for cohesion and
coupling.
3. Students will be able to understand User Interface Design – Human Factors,
Interface standards, Design Issues – User Interface Design.
4. Objective:
This module explains the Software Design Process during the development of the
software.
5. Learning Outcomes:
This module will help students to learn the Software Design Process during the
development of the software along with the concepts of Abstraction, Modularity.
6. Prerequisite:
Workflow of different phases of software life cycle
7. Syllabus:
Design Workflow
The design workflow covers the activities involved in the design phase of the software
life cycle. System design concepts, architectural styles, and design patterns are explained below.
At the highest level of abstraction, a solution is stated in broad terms using the language
of the problem environment. At lower levels of abstraction, a more detailed description
of the solution is provided.
Modularity
Coupling:
Interface design
Using information developed during interface analysis, define interface objects
and actions (operations).
Define events (user actions) that will cause the state of the user interface to
change. Model this behavior.
Depict each interface state as it will actually look to the end user.
Indicate how the user interprets the state of the system from information
provided through the interface.
Design Issues
Response time. System response time is the primary complaint for many
interactive applications. System response time is measured from the point at
which the user performs some control action (e.g., hits the return key or clicks a
mouse) until the software responds with the desired output or action.
Help facilities. Almost every user of an interactive, computer-based system
requires help now and then. In some cases, a simple question addressed to a
knowledgeable colleague can do the trick. In others, detailed research in a
multivolume set of “user manuals” may be the only option. In most cases,
however, modern software provides online help facilities that enable a user to
get a question answered or resolve a problem without leaving the interface.
Error handling. Poorly designed error messages and warnings impart useless or
misleading information and serve only to increase user frustration. There are few
computer users who have not encountered an unhelpful error message of this kind.
Menu and command labeling. The typed command was once the most common
mode of interaction between user and system software and was commonly used
for applications of every type. Today, the use of window-oriented, point-and-pick
interfaces has reduced reliance on typed commands, but some power users
continue to prefer a command-oriented mode of interaction.
Application accessibility. As computing applications become ubiquitous,
software engineers must ensure that interface design encompasses mechanisms
that enable easy access for those with special needs. Accessibility for users (and
software engineers) who may be physically challenged is an imperative for
ethical, legal, and business reasons.
Internationalization. Software engineers and their managers invariably
underestimate the effort and skills required to create user interfaces that
accommodate the needs of different locales and languages.
Types of Coupling:
1. Data coupling
2. Stamp coupling
3. Control coupling
4. Hybrid coupling
5. Common coupling
6. Content coupling
Data coupling:
Modules communicate by passing only the data items that are actually needed, usually
as simple parameters.
Drawbacks: (i) Too many parameters make the interface difficult to understand and
can lead to errors.
(ii) Tramp data - data ‘traveling’ across modules before being used
Stamp coupling:
An entire data structure (record) is passed between modules even though only some of
its fields are actually used.
Control coupling:
A module controls the logic of another module through the parameter. The
controlling module needs to know how the other module works.
Hybrid coupling:
Common coupling:
Two or more modules share access to the same global data; a change to that shared
data affects all of them.
Content coupling:
A module refers to the inside of another module. It can also branch into another
module and refer to data within another module. By doing so, a module can change
the internal workings of another module.
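To make the difference between two of the coupling types listed above concrete, the following hypothetical Java fragment contrasts data coupling with control coupling; the class and method names are invented for illustration.

class ReportPrinter {
    // Data coupling: the caller passes only the data the method needs.
    void printTotal(double total) {
        System.out.println("Total: " + total);
    }

    // Control coupling: the caller passes a flag that steers the callee's logic,
    // so the caller must know how the callee works internally.
    void print(double total, boolean asSummary) {
        if (asSummary) {
            System.out.println("Total: " + total);
        } else {
            System.out.println("Detailed report, total = " + total);
        }
    }
}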
Cohesion:
Strong cohesion implies that all parts of a subsystem should have a close logical
relationship with each other. That means, in case some kind of change is required in the
software, all the related pieces are found at one place.
A class will be cohesive if most of the methods defined in a class use most of the
data members most of the time. If we find different subsets of data within the same
class being manipulated by separate groups of functions then the class is not cohesive
and should be broken down as shown below.
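The figure referred to above is not reproduced here. As a rough substitute, the following hypothetical Java sketch shows a class with low cohesion, whose two unrelated groups of data and methods are better separated into two cohesive classes.

// Low cohesion: employee data and report formatting live in one class,
// and each method touches only one subset of the data.
class EmployeeManager {
    private String name;
    private double monthlySalary;
    private String reportHeader = "";
    private String reportFooter = "";

    double computeAnnualSalary() { return monthlySalary * 12; }
    String formatReport(String body) { return reportHeader + body + reportFooter; }
}

// Higher cohesion: each class manipulates only its own data.
class Employee {
    private String name;
    private double monthlySalary;
    double computeAnnualSalary() { return monthlySalary * 12; }
}

class ReportFormatter {
    private String header = "";
    private String footer = "";
    String format(String body) { return header + body + footer; }
}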
Types of cohesion:
1. Functional cohesion
2. Sequential cohesion
3. Communicational cohesion
4. Procedural cohesion
5. Temporal cohesion
6. Logical cohesion
7. Coincidental cohesion
Functional cohesion:
All elements contribute to the execution of one and only one problem-related task. It
has a focused - strong, single-minded purpose. Unrelated activities are not done by the
elements.
Sequential cohesion:
Elements are involved in activities such that output data from one activity becomes
input data to the next. Usually this cohesion has good coupling and is easily
maintained. It is not so readily reusable, however, because the grouped activities will
not in general be useful together elsewhere.
Communicational cohesion:
Elements contribute to activities that use the same input or output data. This
cohesion is not flexible, for example, if we need to focus on some activities and not the
others. It is possible that links cause activities to affect each other. So it is better to split
the elements into functional cohesive ones.
Procedural cohesion:
Elements are related only by sequence, otherwise the activities are unrelated.
This is similar to sequential cohesion, except for the fact that elements are unrelated.
Commonly found at the top of hierarchy, such as the main program module.
Temporal cohesion:
Elements are involved in activities that are related in time. Commonly found in
initialization and termination modules. Elements are basically unrelated, so the module
will be difficult to reuse. A good practice is to initialize as late as possible and terminate
as early as possible.
Logical cohesion:
Elements contribute to activities of the same general category (type). For example, a
report module, display module or I/O module. The elements usually have control
coupling, since one of the activities will be selected.
Coincidental cohesion:
Elements contribute to activities that have no meaningful relationship to one another;
the parts of the module are grouped together essentially by chance.
Human Interface
Human characteristics of importance in design are perception, memory, visual acuity,
foveal and peripheral vision, sensory storage, information processing, learning, skill,
and individual differences.
• Memory: Memory is not the most stable of human attributes, as anyone who has
forgotten why they walked into a room, or forgotten a very important birthday,
can attest.
Short-term, or working, memory.
Long-term memory
Mighty memory
Sensory Storage
Mental Models: As a result of our experiences and culture, we develop mental models
of things and people we interact with
Movement Control: Once data has been perceived and an appropriate action decided
upon, a response must be made;
Learning: Learning, as has been said, is the process of encoding in long-term memory
information that is contained in short-term memory.
Skill: The goal of human performance is to perform skillfully. To do so requires linking
inputs and responses into a sequence of action.
Individual Differences: In reality, there is no average user. A complicating but very
advantageous human characteristic is that we all differ: in looks, feelings, motor
abilities, intellectual abilities, learning abilities and speed, and so on.
Questions:
1. What is Modularity?
2. What is Abstraction?
3. Explain different types of Coupling.
4. What is Cohesion? Explain different types of Cohesion
06 Software Quality Assurance
6.1 Motivation:
This chapter is part of the foundation of basic software engineering; the motivation of
this module is to understand how the software quality assurance process is involved
with software development.
6.3 Objective:
The main objective is to introduce students to the product that is to be engineered and
the process that provides a framework for the engineering technology.
1. To provide knowledge of software engineering discipline.
2. To analyze risk in software design and quality.
3. To introduce the concept of advance software methodology.
6.4 Prerequisite:
Workflow of different phases of software life cycle
6.6 Syllabus:
Software Quality
1. Software requirements are the foundation from which quality is measured. Lack of
conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in
which software is engineered. If the criteria are not followed, lack of quality will
almost surely result.
3. There is a set of implicit requirements that often goes unmentioned (e.g., the desire
for ease of use). If software conforms to its explicit requirements but fails to meet
implicit requirements, software quality is suspect.
Software quality is a complex mix of factors that will vary across different
applications and the customers who request them.
McCall’s Quality Factors:
The factors that affect software quality can be categorized in two broad groups:
(1) factors that can be directly measured (e.g., defects per function-point) and (2) factors
that can be measured only indirectly (e.g., usability or maintainability). In each case
measurement must occur. The software (documents, programs, data) must be compared
to some datum and an indication of quality arrived at.
McCall, Richards, and Walters propose a useful categorization of factors that
affect software quality. These software quality factors, shown in the figure below, focus
on three important aspects of a software product: its operational characteristics, its
ability to undergo change, and its adaptability to new environments.
Referring to the factors noted in the figure below, McCall and his colleagues
provide the following descriptions:
Correctness. The extent to which a program satisfies its specification and fulfills the
customer's mission objectives.
Reliability. The extent to which a program can be expected to perform its intended
function with required precision.
Usability. Effort required to learn, operate, prepare input, and interpret output of a
program.
Testability. Effort required to test a program to ensure that it performs its intended
function.
Portability. Effort required to transfer the program from one hardware and/or software
system environment to another.
Reusability. Extent to which a program [or parts of a program] can be reused in other
applications—related to the packaging and scope of the functions that the program
performs.
Quality Standards
ISO 9000 is intended to help organizations maintain consistent quality levels
throughout the development and manufacturing process. It describes quality assurance
elements in generic terms that can be applied to any business regardless of the products
or services offered.
The ISO 9000 standards have been adopted by many countries including all
members of the European Community, Canada, Mexico, the United States, Australia,
New Zealand, and the Pacific Rim. Countries in Latin and South America have also
shown interest in the standards.
After adopting the standards, a country typically permits only ISO registered
companies to supply goods and services to government agencies and public utilities.
Telecommunication equipment and medical devices are examples of product categories
that must be supplied by ISO registered companies. In turn, manufacturers of these
products often require their suppliers to become registered. Private companies such as
automobile and computer manufacturers frequently require their suppliers to be ISO
registered as well.
ISO 9000 describes the elements of a quality assurance system in general terms.
These elements include the organizational structure, procedures, processes, and
resources needed to implement quality planning, quality control, quality assurance, and
quality improvement. However, ISO 9000 does not describe how an organization
should implement these quality system elements. Consequently, the challenge lies in
designing and implementing a quality assurance system that meets the standard and
fits the company’s products, services, and culture.
ISO 9001 is the quality assurance standard that applies to software engineering.
The standard contains 20 requirements that must be present for an effective quality
assurance system. Because the ISO 9001 standard is applicable to all engineering
disciplines, a special set of ISO guidelines (ISO 9000-3) has been developed to help
interpret the standard for use in the software process.
The ISO 9126 standard was developed in an attempt to identify the key quality
attributes for computer software. The standard identifies six key quality attributes:
Functionality. The degree to which the software satisfies stated needs as indicated by the
following subattributes: suitability, accuracy, interoperability, compliance, and security.
Reliability. The amount of time that the software is available for use as indicated by the
following sub-attributes: maturity, fault tolerance, recoverability.
Usability. The degree to which the software is easy to use as indicated by the following
subattributes: understandability, learnability, operability.
Efficiency. The degree to which the software makes optimal use of system resources as
indicated by the following sub-attributes: time behavior, resource behavior.
Maintainability. The ease with which repair may be made to the software as indicated by
the following subattributes: analyzability, changeability, stability, testability.
Portability. The ease with which the software can be transposed from one environment
to another as indicated by the following subattributes: adaptability, installability,
conformance, replaceability.
Like McCall’s quality factors, the ISO 9126 factors do not necessarily lend
themselves to direct measurement. However, they do provide a worthwhile basis for
indirect measures and an excellent checklist for assessing the quality of a system.
Quality Metrics
The overriding goal of software engineering is to produce a high-quality system,
application, or product. To achieve this goal, software engineers must apply effective
methods coupled with modern tools within the context of a mature software process. In
addition, a good software engineer (and good software engineering managers) must
measure if high quality is to be realized.
The quality of a system, application, or product is only as good as the
requirements that describe the problem, the design that models the solution, the code
that leads to an executable program, and the tests that exercise the software to uncover
errors. A good software engineer uses measurement to assess the quality of the analysis
and design models, the source code, and the test cases that have been created as the
software is engineered. To accomplish this real-time quality assessment, the engineer
must use technical measures to evaluate quality in objective, rather than subjective
ways.
The project manager must also evaluate quality as the project progresses. Private
metrics collected by individual software engineers are assimilated to provide project
level results. Although many quality measures can be collected, the primary thrust at
the project level is to measure errors and defects. Metrics derived from these measures
provide an indication of the effectiveness of individual and group software quality
assurance and control activities.
Metrics such as work product (e.g., requirements or design) errors per function
point, errors uncovered per review hour, and errors uncovered per testing hour provide
insight into the efficacy of each of the activities implied by the metric. Error data can
also be used to compute the defect removal efficiency (DRE) for each process framework
activity.
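Although the formula is not given here in the text, defect removal efficiency is commonly computed as

DRE = E / (E + D)

where E is the number of errors found before delivery of the work product and D is the number of defects found after delivery. For example, if reviews and testing of an increment uncover 45 errors and 5 defects later surface in the field, DRE = 45 / (45 + 5) = 0.90.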
Measuring quality:
Different testing strategies have been discussed in the previous section. The
following paragraphs explain Software Quality Assurance.
Even the most jaded software developers will agree that high-quality software is
an important goal. Many definitions have been proposed in the literature. One such
definition is as follows: “Conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit
characteristics that are expected of all professionally developed software.” The
definition serves to emphasize the following three important points:
• Software requirements are the foundation from which quality is measured. Lack
of conformance to requirements is lack of quality.
• Specified standards define a set of development criteria that guide the manner in
which software is engineered. If the criteria are not followed, lack of quality will
almost surely result.
• A set of implicit requirements often goes unmentioned (e.g., the desire for ease of
use and good maintainability). If software conforms to its explicit requirements
but fails to meet implicit requirements, software quality is suspect.
SQA activities:
Software engineers address quality (and perform quality assurance and quality
control activities) by applying solid technical methods and measures, conducting
formal technical reviews, and performing well-planned software testing.
The charter of the SQA group is to assist the software team in achieving a high
quality end product. The Software Engineering Institute recommends a set of SQA
activities that address quality assurance planning, oversight, record keeping, analysis,
and reporting. These activities are performed (or facilitated) by an independent SQA
group that:
Prepares an SQA plan for a project. The plan is developed during project planning and is
reviewed by all interested parties. Quality assurance activities performed by the
software engineering team and the SQA group are governed by the plan. The plan
identifies:
• evaluations to be performed
• audits and reviews to be performed
• standards that are applicable to the project
• procedures for error reporting and tracking
• documents to be produced by the SQA group
• amount of feedback provided to the software project team
Participates in the development of the project’s software process description. The software team
selects a process for the work to be performed. The SQA group reviews the process
description for compliance with organizational policy, internal software standards,
externally imposed standards (e.g., ISO-9001), and other parts of the software project
plan.
Reviews software engineering activities to verify compliance with the defined software process.
The SQA group identifies, documents, and tracks deviations from the process and
verifies that corrections have been made.
Audits designated software work products to verify compliance with those defined as part of the
software process. The SQA group reviews selected work products; identifies, documents,
and tracks deviations; verifies that corrections have been made; and periodically reports
the results of its work to the project manager.
Ensures that deviations in software work and work products are documented and handled
according to a documented procedure. Deviations may be encountered in the project plan,
process description, applicable standards, or technical work products.
Records any noncompliance and reports to senior management. Noncompliance items are
tracked until they are resolved.
In addition to these activities, the SQA group coordinates the control and
management of change and helps to collect and analyze software metrics.
Quality Control (QC) is a department function which compares the product against the
applicable standards and takes action when non-conformance is detected, for example
through testing.
An application that meets its requirements totally can be said to exhibit quality.
Quality is not based on a subjective assessment but rather on a clearly demonstrable,
and measurable, basis.
- Quality Assurance (QA), on the other hand, is a review with a goal of improving the
process as well as the deliverable. QA is often an external process. QA is an effective
approach to producing a high quality product.
One aspect is the process of objectively reviewing project deliverables and the
processes that produce them (including testing), to identify defects, and then making
recommendations for improvement based on the reviews. The end result is the
assurance that the system and application is of high quality, and that the process is
working. The achievement of quality goals is well within reach when organizational
strategies are used in the testing process. From the client's perspective, an application's
quality is high if it meets their expectations.
1. What are five of the most important attributes of software quality? Explain them.
(May 2010) (10 marks)
7 Software Testing
1. Motivation:
• Object design, which comes before implementation, leads to system degradation
if it is not well understood and not well done.
• The average software product released on the market is not error free.
2. Objective:
To get knowledge about the activities in the different testing strategies.
3. Learning Objective:
This module explains the meaning of Software Testing, with the help of concepts that
define the Testing process.
Students will be able to understand the importance of software testing during the testing
of the software.
1. Students will be able to understand Software Testing Concepts and the various
Software standards.
2. Students will be able to understand the Testing Process and basic concepts and
terminology, Verification & Validation, White Box Testing - Path Testing,
Control Structures Testing, Def-Use (data flow) Testing.
3. Students will be able to understand Black Box Testing, OO testing methods.
4. Students will be able to understand Software Maintenance and Reverse Engineering.
4. Prerequisite:
Activities involved in requirement, analysis and design phases of software life cycle
5. Learning Outcomes:
Students will be able to test software with the help of various techniques
like Black Box Testing, OO testing methods, White Box Testing -
Path Testing, Control Structures Testing, Def-Use (data flow) testing.
6. Syllabus:
Module Content Duration Self Study Time
• Various testing strategies
• Types of White box and black box testing
• Object oriented testing
• Maintenance types
• Maintenance log and defect reports
• Software reengineering
6. Weightage: Marks
7. Abbreviations:
8. Key Definitions:
1. Testing: Software testing is an investigation conducted to provide stakeholders
with information about the quality of the product or service under test. Software
testing also provides an objective, independent view of the software to allow the
business to appreciate and understand the risks of software implementation.
2. Black box testing: It is a software testing technique whereby the internal
workings of the item being tested are not known by the tester. For example, in a
black box test on a software design, the tester only knows the inputs and what
the expected outcomes should be and not how the program arrives at those
outputs. The tester does not ever examine the programming code and does not
need any further knowledge of the program other than its specifications.
3. White box testing: It is a software testing technique whereby explicit knowledge
of the internal workings of the item being tested is used to select the test data.
Unlike black box testing, white box testing uses specific knowledge of
programming code to examine outputs. The test is accurate only if the tester
knows what the program is supposed to do. He or she can then see if the
program diverges from its intended goal. White box testing does not account for
errors caused by omission, and all visible code must also be readable.
4. Reverse engineering: Software reverse engineering is done to retrieve the source
code of a program because the source code was lost, to study how the program
performs certain operations, to improve the performance of a program, to fix a
bug (correct an error in the program when the source code is not available), to
identify malicious content in a program such as a virus or to adapt a program
written for use with one microprocessor for use with another.
5. Forward engineering: Forward engineering is defined as the normal execution of
the software life cycle, i.e. in the forward direction. Therefore it is the traditional
process of moving from high-level abstractions and logical, implementation-
independent designs to the physical implementation of a system.
6. Defect report: It is a document reporting on any flaw in a component or system
that can cause the component or system to fail to perform its required function.
7. Maintenance log: It means a log in which unserviceabilities, rectifications and
daily inspections are recorded.
8. Unit testing: Unit testing is a software development process in which the
smallest testable parts of an application, called units, are individually and
independently scrutinized for proper operation. Unit testing is often automated
but it can also be done manually. This testing mode is a component of Extreme
Programming (XP), a pragmatic method of software development that takes a
meticulous approach to building a product by means of continual testing and
revision.
9. Theory
9.1 Testing
Software testing is an investigation conducted to provide stakeholders with
information about the quality of the product or service under test. Software testing also
provides an objective, independent view of the software to allow the business to
appreciate and understand the risks of software implementation.
Walkthroughs:
A walkthrough team should consist of four to six individuals. The team should
include at least one representative drawing up the specifications (in case of specification
walkthrough), the manager responsible for the specifications, a client representative, a
representative of the team that will perform the next phase of the development (for
specification walkthrough it is a representative from design team), and a representative
of the Software quality assurance group. The walkthrough should be chaired by the
SQA representative. The members of the walkthrough team, as far as possible, should
be experienced senior technical staff members because they tend to find the important
faults. The material for the walkthrough must be distributed to the participants well in
advance to allow for careful preparation. Each reviewer should study the material and
develop two lists: a list of items the reviewer does not understand and a list of items the
reviewer believes are incorrect.
The person leading the walkthrough guides the other members of the
walkthrough team through the document to uncover any faults. It is not the task of the
team to correct faults, merely to record them for later correction. There are two ways of
conducting a walkthrough. The first is participant driven. Participants present their lists
of unclear items and items they think are incorrect. The representative of the
specifications team must respond to each query, clarifying what is unclear to the
reviewer and either agreeing that indeed there is a fault or explaining why the reviewer
is mistaken. The second way of conducting a review is document driven. A person
responsible for the document, either individually or as part of a team, walks the
participants through that document, with the reviewers interrupting either with their
prepared comments or with comments triggered by the presentation. The second
approach is likely to be more thorough, leading to the detection of more faults.
The primary role of the walkthrough leader is to elicit questions and facilitate
discussion. A walkthrough is an iterative process; it is not supposed to be one-sided
instruction by the presenter. It also is essential that the walkthrough not be used as a
means of evaluating the participants, because the walkthrough degenerates into a point-
scoring session and does not detect faults. Walkthrough includes all kinds like:
specification walkthrough, design walkthrough, plan walkthrough and code
walkthrough.
Inspections:
Inspections were first proposed by Fagan for testing designs and code. An
inspection goes far beyond a walkthrough and has five formal steps. First, an overview
of the document to be inspected (specification, design, code, or plan) is given by one of
the individuals responsible for producing that document. At the end of the overview
session, the document is distributed to the participants. In the second step, preparation,
the participants try to understand the document in detail. Lists of fault types found in
recent inspections, with the fault types ranked by frequency, are excellent aids. The
third step is the inspection. To begin, one participant walks through the document with
the inspection team, ensuring that every item is covered and that every branch is taken
at least once. Then fault finding commences. Within one day the leader of the inspection
team (the moderator) must produce a written report of the inspection to ensure
meticulous follow-through. The fourth stage is the rework, in which the individual
responsible for that document resolves all faults and problems noted in the written
report. The final stage is the follow-up. The moderator must ensure that every single
issue raised has been resolved satisfactorily, by either fixing the document or clarifying
items incorrectly flagged as faults. All fixes must be checked to ensure that no new
faults have been introduced. If more than 5 percent of the material inspected has been
reworked, then the team must reconvene for a 100 percent reinspection.
The inspection should be conducted by a team of four. For example, in the case
of a design inspection, the team will consist of a moderator, designer, implementer, and
tester. The moderator is both manager and leader of the inspection team. There must be
a representative of the team responsible for the current phase as well as a representative
of the team responsible for the next phase. The tester should be a programmer
responsible for setting up test cases; preferably, the tester should be a member of the
SQA group.
The IEEE standard recommends a team of between three and six participants. Special
roles are played by the moderator; the reader, who leads the team through the design (or
code etc.); and the recorder, who is responsible for producing a written report of the
detected faults.
Unit Testing:
Unit testing focuses on the building blocks of the software system, that is, objects
and subsystems. The specific candidates for unit testing are chosen from the object
model and the system decomposition. In principle, all the objects developed during the
development process should be tested, which is often not feasible because of time and
budget constraints. The minimal set of objects to be tested should be the participating
objects in the use cases. Subsystems should be tested after each of the objects and classes
within that subsystem has been tested individually. Unit testing focuses verification
effort on the smallest unit of software design—the software component or module.
The unit test is white-box oriented. In unit testing the following are tested:
1. The module interface is tested to ensure that information properly flows into and
out of the program unit under test.
2. The local data structure is examined to ensure that data stored temporarily
maintains its integrity.
3. Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
4. All independent paths through the control structure are exercised to ensure that
all statements in a module have been executed at least once.
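As a minimal, hypothetical illustration of the points above (using JUnit-style assertions; the Accumulator class is invented for the example), a unit test exercises one module in isolation through its interface and checks a boundary condition.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical unit under test.
class Accumulator {
    private int total;
    void add(int value) { total += value; }
    int total() { return total; }
}

public class AccumulatorTest {
    @Test
    public void totalStartsAtZero() {          // boundary condition: nothing added yet
        assertEquals(0, new Accumulator().total());
    }

    @Test
    public void addAccumulatesValues() {       // exercises the module interface
        Accumulator a = new Accumulator();
        a.add(2);
        a.add(3);
        assertEquals(5, a.total());
    }
}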
i. Equivalence testing
This black box testing technique minimizes the number of test cases. The possible
inputs are partitioned into equivalence classes, and a test case is selected for each class.
The assumption of equivalence testing is that systems usually behave in similar ways
for all members of a class. Only one member of an equivalence class needs to be tested
in order to test the behavior associated with that class. Equivalence testing consists of two
steps: identification of the equivalence classes and selection of the test inputs. The
following criteria are used in determining the equivalence classes.
For each equivalence class, at least two pieces of data are selected: a typical input, which
exercises the common case, and an invalid input, which exercises the exception
handling capabilities of the component. After all equivalence classes have been
identified, a test input for each class has to be identified that covers the equivalence
class. If not all the elements of the equivalence class are covered by the test input, the
equivalence class must be split into smaller equivalence classes, and test inputs must be
identified for each of the new classes.
A method that returns the number of days in a month, given the month and year,
is considered as an example here. The month and year are specified as integers. By
convention 1 represents the month of January, 2 the month of February, and so on. The
range of valid inputs for the year is 0 to maxInt.
Figure 4.16: Interface for a method computing the number of days in a given month.
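The original figure is not reproduced here; a minimal sketch of such an interface, consistent with the description above, might look as follows (the interface name and the use of IllegalArgumentException are assumptions).

// Hypothetical interface for the example: returns the number of days in the
// given month (1 = January ... 12 = December) of the given year.
// Months outside 1..12 and years outside 0..Integer.MAX_VALUE are rejected.
interface MonthCalendar {
    int getNumDaysInMonth(int month, int year) throws IllegalArgumentException;
}

Typical equivalence classes for this interface would be the 31-day months, the 30-day months, February (leap and non-leap years), and the invalid inputs (a month outside 1 to 12, a negative year), with at least one test input selected from each class.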
ii. Boundary testing
This special case of equivalence testing focuses on the conditions at the boundary
of the equivalence classes. Rather than selecting any element in the equivalence class,
boundary testing requires that the elements be selected from the “edges” of the
equivalence class. In the example discussed in equivalence testing, the month of
February presents several boundary cases. In general, years that are multiples of 4 are
leap years. Years that are multiples of 100, however, are not leap years, unless they are
also multiple of 400. For example, 2000 was a leap year, whereas 1900 was not. Both
year 1900 and 2000 are good boundary cases that should be tested. Other boundary
cases include the months 0 and 13. A disadvantage of equivalence class and boundary
testing is that these techniques do not explore combinations of test input data. This
problem is addressed by cause-effect testing, which establishes logical relationships
between inputs and outputs or inputs and transformations. The inputs are called causes;
the outputs or transformations are called effects.
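A few boundary-value tests for the same example might be written as follows (JUnit-style, against the MonthCalendar interface sketched earlier; the SimpleMonthCalendar implementation is hypothetical, and the expected values follow the leap-year rules stated above).

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical implementation used only to make the tests runnable.
class SimpleMonthCalendar implements MonthCalendar {
    public int getNumDaysInMonth(int month, int year) {
        if (month < 1 || month > 12 || year < 0)
            throw new IllegalArgumentException("invalid month or year");
        int[] days = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
        boolean leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        return (month == 2 && leap) ? 29 : days[month - 1];
    }
}

public class MonthBoundaryTest {
    private final MonthCalendar cal = new SimpleMonthCalendar();

    @Test
    public void year2000IsALeapYear() {                    // multiple of 400
        assertEquals(29, cal.getNumDaysInMonth(2, 2000));
    }

    @Test
    public void year1900IsNotALeapYear() {                 // multiple of 100 but not of 400
        assertEquals(28, cal.getNumDaysInMonth(2, 1900));
    }

    @Test(expected = IllegalArgumentException.class)
    public void month0IsRejected() {                       // boundary just below the valid range
        cal.getNumDaysInMonth(0, 1999);
    }

    @Test(expected = IllegalArgumentException.class)
    public void month13IsRejected() {                      // boundary just above the valid range
        cal.getNumDaysInMonth(13, 1999);
    }
}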
iii. Path testing
The starting point for path testing is the flow graph. A flow graph consists of
nodes representing executable blocks and associations representing flow of control. A
flow graph is constructed from the code of a component by mapping decision
statements (e.g., if statements, while loops) to nodes. Statements between each
decision (e.g., the then block, the else block) are mapped to other nodes. The following figures
depict the example faulty implementation of the getNumDaysInMonth() method and
the equivalent flow graph as a UML activity diagram. In the activity diagram, decisions
are modeled with UML branches, blocks with UML action states, and control flow with
UML transitions.
Figure 4.18: Equivalent flow graph for the (faulty) implementation of the
getNumDaysInMonth() method of the above implementation (UML activity diagram)
Complete path testing is done by examining the condition associated with each
branch point and selecting an input for the true branch and another input for the false
branch. For example, examining the first branch point in the UML activity diagram, two
inputs are selected: ‘year=0’ (such that year<1 is true) and ‘year=1901’ (such that year<1
is false). The process is then repeated for the second branch and the inputs such as
‘month=1’ and ‘month=2’ are selected. The input (year=0, month=1) produces the path
{throw1}. The input (year=1901, month=1) produces a second complete path {n=32
return}, which uncovers one of the faults in the getNumDaysInMonth() method. The
following table depicts the test cases and equivalent paths generated by repeating the
above process for each node.
Using graph theory, it can be shown that the minimum number of tests necessary to
cover all edges is equal to the number of independent paths through the flow graph.
This is defined as the cyclomatic complexity CC of the flow graph, which is
CC = (number of edges) - (number of nodes) + 2,
where the number of nodes is the number of branches and action states, and the
number of edges is the number of transitions in the activity diagram. The cyclomatic
complexity of the getNumDaysInMonth() method is 6, which is also the number of test
cases found (shown in the above table). Path testing and whitebox methods can detect
only faults resulting from exercising a path in the program, such as the faulty
numDays=32 statement. Whitebox testing methods cannot detect omissions, such as the
failure to handle the non-leap year 1900. Also path testing is heavily based on the
control structure of the program. However, no testing method short of exhaustive
testing can guarantee the discovery of all faults.
iv. State-based testing
This testing technique was recently developed for object-oriented systems. State-
based testing compares the resulting state of the system with the expected state. In the
context of a class, state-based testing consists of deriving test cases from the UML
statechart diagram for the class. For each state, a representative set of stimuli is derived
for each transition (similar to equivalence testing). The attributes of the class are then
instrumented and tested after each stimulus has been applied to ensure that the class
has reached the specified state.
Figure 4.19: UML statechart diagram and resulting tests for 2Bwatch SetTime function
(only the first eight stimuli are shown)
2Bwatch is a watch with two buttons (hence the name). Setting the time on 2Bwatch requires
the actor 2BwatchOwner to first press both buttons simultaneously, after which 2Bwatch
enters the set time mode. In the set time mode, 2Bwatch blinks the number being changed
(e.g., the hours, minutes, seconds, day, month, or year). Initially, when the 2BWatchOwner
enters the set time mode, the hours blink. If the actor presses the first button, the next
number blinks (e.g., if the hours are blinking and the actor presses the first button, the hours
stop blinking and the minutes start blinking). If the actor presses the second button, the
blinking number is incremented by one unit. If the blinking number reaches the end of its
range, it is reset to the beginning of its range (e.g., assume the minutes are blinking and their
current value is 59; the new value is set to 0 if the actor presses the second button). The
actor exits the set time mode by pressing both buttons simultaneously.
The statechart diagram specifies which stimuli change the watch from the high-level
state MeasureTime to the high-level state SetTime. It does not show the low-level states
of the watch when the date and time change, either because of actions of the user or
because of time passing. The test inputs shown in the figure were generated such that
each transition is traversed at least once. After each input, instrumentation code checks
if the watch is in the predicted state and reports a failure otherwise. Some transitions
(e.g., transition 3) are traversed several times, as it is necessary to put the watch back
into the SetTime state (e.g., to test transitions 4, 5, and 6). The test inputs for the
DeadBattery state were not generated (only the first eight stimuli are displayed).
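As a rough illustration of the instrumentation idea (the Watch class below is a hypothetical stand-in, not the 2Bwatch implementation from the figure), a state-based test applies a stimulus and then checks that the object has reached the predicted state.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical watch with two high-level states.
class Watch {
    enum State { MEASURE_TIME, SET_TIME }
    private State state = State.MEASURE_TIME;

    // Pressing both buttons toggles between measuring time and setting time.
    void pressBothButtons() {
        state = (state == State.MEASURE_TIME) ? State.SET_TIME : State.MEASURE_TIME;
    }

    State state() { return state; }
}

public class WatchStateTest {
    @Test
    public void pressingBothButtonsEntersAndLeavesSetTime() {
        Watch w = new Watch();
        w.pressBothButtons();                              // stimulus 1
        assertEquals(Watch.State.SET_TIME, w.state());     // check the predicted state
        w.pressBothButtons();                              // stimulus 2
        assertEquals(Watch.State.MEASURE_TIME, w.state()); // back to measuring time
    }
}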
v. Polymorphism testing
Figure 4.21: Java source code for the NetworkConnection.send() message (left) and
equivalent Java source code without polymorphism (right).
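The figure itself is not reproduced here. The following hypothetical fragment (the Link and Connection names are invented; this is not the NetworkConnection code from the figure) illustrates the same idea: a single polymorphic call hides several possible bindings, and writing the dispatch out explicitly shows that each binding adds a path that must be tested.

interface Link {
    void send(byte[] data);
}

class TcpLink implements Link { public void send(byte[] data) { /* send over TCP */ } }
class UdpLink implements Link { public void send(byte[] data) { /* send over UDP */ } }
class SerialLink implements Link { public void send(byte[] data) { /* send over a serial line */ } }

class Connection {
    // Polymorphic version: one statement, but three possible bindings of link.send().
    void send(Link link, byte[] data) {
        link.send(data);
    }

    // Equivalent version without polymorphism: each binding becomes an explicit
    // path, which is what path testing of the polymorphic call has to cover.
    void sendExpanded(Link link, byte[] data) {
        if (link instanceof TcpLink) {
            ((TcpLink) link).send(data);
        } else if (link instanceof UdpLink) {
            ((UdpLink) link).send(data);
        } else if (link instanceof SerialLink) {
            ((SerialLink) link).send(data);
        }
    }
}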
Expanding the polymorphic calls in even a method of medium complexity can result in
an explosion of paths. The equivalent flow graph is shown below:
Figure 4.22: Equivalent flow graph for the expanded source code of the
NetworkConnection.send() method of the Java implementation (UML activity diagram)
Integration Testing:
Integration testing detects faults that have not been detected during unit testing
by focusing on small groups of components. Two or more components are integrated
and tested, and when no new faults are revealed, additional components are added to
the group. Since developing test stubs and drivers is time consuming, Extreme
Programming is used. The order in which components are tested influences integration
testing. A careful ordering of components can reduce the resources needed for the
overall integration test.
The big-bang testing strategy assumes that all components are first tested
individually and then tested together as a single system. The advantage is that no
additional test stubs or drivers are needed. Although this sounds simple, it is expensive.
If a test uncovers a failure, it is impossible to distinguish failures in the interface from
failures within a component. Moreover, it is difficult to pinpoint a specific component.
The bottom-up testing first tests each component of the bottom layer
individually, and then integrates them with components of the next layer up. If two
components are tested together, it is called a double test. Similarly, three components
means triple test and four means quadruple test. This is repeated until all components
from all layers are combined. Test drivers are used to simulate the components of
higher layers that have not yet been integrated. Test stubs are not necessary during this
testing. The advantage of bottom-up testing is that interface faults can be more easily
found. The disadvantage is that it tests the most important subsystem, namely the
components of the user interface, last.
The above figure illustrates bottom-up testing. Here the subsystems E, F, and G
are unit tested first, and then the triple test B-E-F and the double test D-G are executed
and so on.
The top-down testing strategy unit tests the components of the top layer first,
and then integrates the components of the next layer down. When all components of the
new layer have been tested together, the next layer is selected. Again, the tests
incrementally add one component at a time. This is repeated until all layers are
combined and involved in the test. Test stubs are used to simulate the components of
lower layers that have not yet been integrated. Test drivers are not required during this
testing process. The advantage of top-down testing is that it starts with user interface
components. The same set of tests, derived from the requirements, can be used in
testing the increasingly more complex set of subsystems. The disadvantage is that the
development of test stubs is time consuming and prone to error.
The above figure illustrates top-down testing strategy. Here, the subsystem A is
unit tested, then double tests A-B, A-C, and A-D are executed, then the quad test A-B-C-
D is executed, and so on.
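A minimal, hypothetical Java sketch of the two helper artifacts mentioned above may be useful: a test stub stands in for a lower-layer component that has not yet been integrated, while a test driver stands in for a higher-layer caller. All names below are invented for illustration.

// Interface of a lower-layer component.
interface Storage {
    void save(String record);
}

// Test stub: replaces the real Storage during top-down testing.
class StorageStub implements Storage {
    int savedCount = 0;
    public void save(String record) { savedCount++; }   // records the call instead of persisting
}

// Component under test (the target layer).
class OrderService {
    private final Storage storage;
    OrderService(Storage storage) { this.storage = storage; }
    void placeOrder(String order) { storage.save(order); }
}

// Test driver: replaces the user-interface layer during bottom-up testing
// by calling the component directly and checking the effect.
class OrderServiceDriver {
    public static void main(String[] args) {
        StorageStub stub = new StorageStub();
        new OrderService(stub).placeOrder("order-1");
        System.out.println(stub.savedCount == 1 ? "PASS" : "FAIL");
    }
}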
The sandwich testing strategy combines the top-down and bottom-up strategies
to make use of the best of both. During this testing, the tester must be able to map the
subsystem decomposition into three layers, a target layer (“the meat”), a layer above the
target layer (“the top slice of bread”), and a layer below the target layer (“the bottom
slice of bread”). Top-down integration testing is done by testing the top layer
incrementally with the components of the target layer, and bottom-up testing is used
for testing the bottom layer incrementally with the components of the target layer. As a
result, test stubs and drivers need not be written for the top and bottom layers, because
they use the actual components from the target layer. The problem with this testing is
that it does not thoroughly test the individual components of the target layer before
integration. For example, the sandwich test shown below does not unit test component
C of the target layer.
The modified sandwich testing strategy tests the three layers individually before
combining them in incremental tests with one another. The individual layer tests consist
of a group of three tests:
• The top layer accessing the target layer. This test can reuse the target layer tests from the
individual layer tests, replacing the drivers with components from the top layer.
• The bottom layer accessed by the target layer. This test can reuse the target layer
tests from the individual layer tests, replacing the stub with components from
the bottom layer.
The advantage of modified sandwich testing is that many testing activities can be
performed in parallel, as indicated by the UML activity diagrams. The disadvantage is
the need for additional test stubs and drivers. However, modified sandwich testing leads to a
significantly shorter overall testing time than top-down or bottom-up testing.
System testing:
Once components have been integrated, system testing ensures that the complete
system complies with the functional and nonfunctional requirements. System testing is
a blackbox technique: test cases are derived from the use case model. Several system
testing activities are performed which are listed below:
• Functional testing
• Performance testing
• Pilot testing
• Acceptance testing
• Installation testing
Functional testing:
Functional testing, also called requirements testing, finds differences between the
functional requirements and the system. The goal of the tester is to select those tests that
are relevant to the user and have a high probability of uncovering a failure. Functional
testing is different from usability testing. Functional testing finds differences between
the use case model and the observed system behavior, whereas usability testing finds
difference between the use case model and user’s expectation of the system.
To identify functional tests, the use case model is inspected and use case
instances that are likely to cause failures are identified. Test cases identified should
exercise both common and exceptional use cases. For example, the use case model for a
‘subway ticket distributor’ (shown by the UML use case diagram) is considered. Here
the common functionality is ‘PurchaseTicket’ use case, describing the steps necessary
for a passenger to successfully purchase a ticket and the various exceptional conditions
are ‘TimeOut’, Cancel, OutOfOrder, and NoChange use cases. The exceptional
conditions result from the state of the distributor or actions by the Passenger.
Figure 4.27: An example of use case model for a subway ticket distributor (UML use
case diagram)
The following is the PurchaseTicket use case which describes the normal interaction
between the Passenger actor and the Distributor.
Figure 4.28: An example of use case from the ticket distributor use case model
PurchaseTicket.
The following are the three features of the Distributor that are likely to fail and should
be tested:
• The Passenger may press multiple zone buttons before inserting money, in which
case the Distributor should display the amount of the last zone.
• The Passenger may select another zone button after beginning to insert money, in
which case the Distributor should return all money inserted by the Passenger.
• The Passenger may insert more money than needed, in which case the
Distributor should return the correct change.
The following is the test case PurchaseTicket_CommonCase, which exercises the above
three features. The flow of events describes both the input to the system (stimuli that
the Passenger sends to the Distributor) and desired outputs (correct responses from the
Distributor).
Flow of events:
1. The Passenger presses in succession the zone buttons 2, 4, 1, and 2.
2. The Distributor should display in succession $1.25, $2.25, $0.75, and $1.25.
3. The Passenger inserts a $5 bill.
4. The Distributor returns three $1 bills and three quarters and issues a 2-zone ticket.
5. The Passenger repeats steps 1–4 using his second $5 bill.
6. The Passenger repeats steps 1–3 using four quarters and three dimes. The Distributor issues a 2-zone ticket and returns a nickel.
7. The Passenger selects zone 1 and inserts a dollar bill. The Distributor issues a 1-zone ticket and returns a quarter.
8. The Passenger selects zone 4 and inserts two $1 bills and a quarter. The Distributor issues a 4-zone ticket.
9. The Passenger selects zone 4. The Distributor displays $2.25. The Passenger inserts a $1 bill and a nickel, and selects zone 2. The Distributor returns the $1 bill and the nickel and displays $1.25.
Exit condition: The Passenger has three 2-zone tickets, one 1-zone ticket, and one 4-zone ticket.
Figure 4.29: An example of test case derived from the PurchaseTicket use case.
Similar test cases can also be derived for the exceptional use cases NoChange,
OutOfOrder, TimeOut, and Cancel. Test cases such as PurchaseTicket_CommonCase
are derived for all use cases, including use cases representing exceptional behavior. Test
cases are associated with the use cases from which they are derived, making it easier to
update the test cases when use cases are modified.
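As an illustration only, the following sketch shows how the first steps of PurchaseTicket_CommonCase might be automated; the Distributor class and its methods (press_zone, insert, request_ticket) are hypothetical stand-ins, since a real functional test would drive the actual distributor system rather than an in-process fake.

```python
import unittest

# Hypothetical Distributor implementation used only to make the sketch runnable.
class Distributor:
    PRICES = {1: 0.75, 2: 1.25, 4: 2.25}

    def __init__(self):
        self.selected_zone = None
        self.inserted = 0.0

    def press_zone(self, zone):
        self.selected_zone = zone
        return self.PRICES[zone]          # amount shown on the display

    def insert(self, amount):
        self.inserted = round(self.inserted + amount, 2)

    def request_ticket(self):
        change = round(self.inserted - self.PRICES[self.selected_zone], 2)
        ticket = f"{self.selected_zone}-zone ticket"
        self.inserted = 0.0
        return ticket, change

class PurchaseTicketCommonCase(unittest.TestCase):
    def test_two_zone_ticket_paid_with_five_dollar_bill(self):
        distributor = Distributor()
        # Stimuli from the Passenger and the responses required by the use case.
        self.assertEqual(distributor.press_zone(2), 1.25)   # display shows $1.25
        distributor.insert(5.00)                            # Passenger inserts a $5 bill
        ticket, change = distributor.request_ticket()
        self.assertEqual(ticket, "2-zone ticket")
        self.assertEqual(change, 3.75)                      # three $1 bills and three quarters

if __name__ == "__main__":
    unittest.main()
```

The assertion on change encodes the arithmetic from the flow of events: a $5.00 bill less the $1.25 fare leaves $3.75, returned as three $1 bills and three quarters.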
Performance testing:
Performance testing finds differences between the design goals selected during
system design and the system. Because the design goals are derived from the
nonfunctional requirements, the test cases can be derived from the SDD (System Design
Document) or from the RAD (Requirements Analysis Document). The following tests are
performed during performance testing:
• Stress testing checks if the system can respond to many simultaneous requests.
• Volume testing attempts to find faults associated with large amounts of data, such
as static limits imposed by the data structure, or high-complexity algorithms, or
high disk fragmentation.
• Security testing attempts to find security faults in the system. Usually this test is
accomplished by “tiger teams” who attempt to break into the system, using their
experience and knowledge of typical security flaws.
• Timing testing attempts to find behaviors that violate timing constraints described
by the nonfunctional requirements.
• Recovery testing evaluates the ability of the system to recover from erroneous
states, such as the unavailability of resources, a hardware failure, or a network
failure.
After all the functional and performance tests have been performed, and no failures
have been detected during these tests, the system is said to be validated.
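A minimal sketch of the stress-testing activity described above, assuming a hypothetical handle_request entry point; a real stress test would drive the deployed system (for example, over the network) rather than an in-process function.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical system entry point standing in for the real system under test.
def handle_request(request_id):
    time.sleep(0.01)          # simulate a small amount of work per request
    return f"ok-{request_id}"

def stress_test(num_requests=200, max_workers=50, time_budget_seconds=5.0):
    """Fire many simultaneous requests and check that all complete in time."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(handle_request, range(num_requests)))
    elapsed = time.time() - start
    assert all(r.startswith("ok") for r in results), "some requests failed"
    assert elapsed < time_budget_seconds, f"too slow: {elapsed:.2f}s"
    print(f"{num_requests} requests completed in {elapsed:.2f}s")

if __name__ == "__main__":
    stress_test()
```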
Pilot testing:
During the pilot test, also called the field test, the system is installed and used by
a selected set of users. Users exercise the system as if it had been permanently installed.
No explicit guidelines or test scenarios are given to the users. A group of people is
invited to use the system for a limited time and to give their feedback to the developers.
This test is useful for systems without a specific set of requirements.
An alpha test is a pilot test in which users exercise the system in the development
environment. In a beta test, the pilot test is performed by a limited number of end
users in the target environment. The Internet has made the distribution of software very
easy. As a result, beta tests are more and more common.
Acceptance testing:
There are three ways the client evaluates a system during acceptance testing. In a
benchmark test, the client prepares a set of test cases that represent typical conditions
under which the system should operate. Benchmark tests can be performed with actual
users or by a special test team exercising the system functions. The other two ways are
the pilot test, described above, and the parallel test, in which the new system operates
alongside the existing system so that users can compare the outputs of the two.
After acceptance testing, the client reports to the project manager which
requirements are not satisfied. If requirements must be changed, the changes should be
reported in the minutes to the client acceptance review and should form the basis for
another iteration of the software life-cycle process. If the customer is satisfied, the
system is accepted, possibly contingent on a list of changes recorded in the minutes of
the acceptance test.
Installation testing:
After the system is accepted, it is installed in the target environment. The desired
outcome of the installation test is that the installed system correctly addresses all
requirements. In most cases, the installation test repeats the test cases executed during
function and performance testing in the target environment. Once the customer is
satisfied with the results of the installation test, system testing is complete, and the
system is formally delivered and ready for operation.
Regression Testing:
Whenever a component or the system is modified, previously executed test cases are
rerun to verify that the change has not introduced new faults into functionality that was
already working. The test cases accumulated for earlier versions therefore form a
regression test suite that is maintained and reused throughout integration and
maintenance.
Product testing:
Before the product is released for acceptance testing, the developers test the complete
product against its specifications; this product test gives them confidence that the
product is ready to be handed to the client.
Maintenance
Once the product has passed its acceptance test, it is handed over to the client. The
product is installed and used for the purpose for which it was constructed. Any useful
product, however, is almost certain to undergo maintenance, either to fix faults
(corrective maintenance) or to extend the functionality of the product (enhancement).
i. Corrective maintenance: Here, changes are made to the code to remove residual faults
while leaving the specifications unchanged. The study showed that about 17.5 percent
of maintenance time was spent on this type of maintenance.
ii. Perfective maintenance: Here, changes are made to the code to improve the
effectiveness of the product. For instance, the client may wish for additional functionality
or request that the product be modified so that it runs faster. Improving the
maintainability of a product is another example of perfective maintenance. The study
showed that 60.5 percent of maintenance time was spent on this type of maintenance.
iii. Adaptive maintenance: Here, changes are made to the product to react to changes in
the environment in which the product operates. For example, a product almost certainly
has to be modified if it is ported to a new compiler, operating system, or hardware.
Adaptive maintenance is not requested by a client; instead, it is externally imposed on
the client. The study showed that 18 percent of software maintenance was adaptive in
nature.
The first thing needed when maintaining a product is a mechanism for changing
the product. With regard to corrective maintenance, that is, removing residual faults, if
the product appears to be functioning incorrectly, then a fault/defect report should be
filed by the user. This must include enough information to enable the maintenance
programmer to recreate the problem, which usually will be some sort of software failure.
The maintenance programmer should first consult the fault report file. This
contains all reported faults that have not yet been fixed, together with suggestions for
working around them, that is, ways for the user to bypass the portion of the product
that apparently is responsible for the failure, until such time as the fault can be fixed. If
the fault has been reported previously, any information in the fault report file should be
given to the user. But if what the user reports appears to be a new fault, then the
maintenance programmer should study the problem and attempt to find the cause and
a way to fix it. In addition, an attempt should be made to find a way to work around the
problem, because it may take 6 or 9 months before someone can be assigned to make
the necessary changes to the software. In the light of the serious shortage of
programmers, and in particular programmers good enough to perform maintenance,
suggesting a way to live with the fault until it can be solved often is the only way to
deal with fault reports that are not true emergencies.
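The fault report file described above can be pictured as a simple collection of structured records. The following sketch is illustrative only; the field names (steps_to_reproduce, workaround, and so on) are assumptions, not taken from the text.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical structure for entries in a fault report file.
@dataclass
class FaultReport:
    report_id: int
    reported_by: str
    description: str                  # what the user observed
    steps_to_reproduce: List[str]     # enough detail to recreate the failure
    workaround: Optional[str] = None  # suggested way to bypass the faulty portion
    fixed: bool = False

def find_known_fault(fault_file: List[FaultReport], description: str) -> Optional[FaultReport]:
    """Return a previously reported, still-open fault matching the description, if any."""
    for report in fault_file:
        if not report.fixed and description.lower() in report.description.lower():
            return report
    return None

# Example: the maintenance programmer consults the fault report file first.
fault_file = [
    FaultReport(1, "user_a", "crash when printing empty report",
                ["open report view", "press Print with no data"],
                workaround="add at least one record before printing"),
]
known = find_known_fault(fault_file, "crash when printing")
if known and known.workaround:
    print(f"Known fault #{known.report_id}; workaround: {known.workaround}")
```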
When the fault is finally repaired, a new version of the product must, of course, be
distributed to all sites. Also, organizations often prefer to accumulate noncritical
maintenance tasks and then implement the changes as a group.
The reengineering paradigm shown in the figure is a cyclical model. This means
that each of the activities presented as a part of the paradigm may be revisited. For any
particular cycle, the process can terminate after any one of these activities. Each activity
is described below.
Inventory analysis:
Every software organization should have an inventory of all applications. The
inventory can be nothing more than a spreadsheet model containing information that
provides a detailed description (e.g., size, age, business criticality) of every active
application. By sorting this information according to business criticality, longevity,
current maintainability, and other locally important criteria, candidates for
reengineering appear. Resources can then be allocated to candidate applications for
reengineering work.
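As a rough sketch of such an inventory, the following Python fragment sorts a handful of hypothetical applications by business criticality, maintainability, and age so that reengineering candidates rise to the top; the attribute names and scoring scales are assumptions, not part of the text.

```python
from dataclasses import dataclass

# Hypothetical inventory entry; a real inventory would carry many more attributes
# (size, documentation quality, change frequency, and so on).
@dataclass
class Application:
    name: str
    business_criticality: int   # 1 (low) .. 5 (high)
    maintainability: int        # 1 (poor) .. 5 (good)
    age_years: int

inventory = [
    Application("payroll", business_criticality=5, maintainability=1, age_years=18),
    Application("intranet wiki", business_criticality=2, maintainability=4, age_years=6),
    Application("order entry", business_criticality=5, maintainability=2, age_years=12),
]

# Sort so that critical, hard-to-maintain, old applications float to the top:
# these are the strongest candidates for reengineering.
candidates = sorted(
    inventory,
    key=lambda a: (-a.business_criticality, a.maintainability, -a.age_years),
)
for app in candidates:
    print(app.name, app.business_criticality, app.maintainability, app.age_years)
```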
Document restructuring:
Weak documentation is the trademark of many legacy systems. The following options
can be adopted:
• Creating documentation is far too time consuming. If the system works, we’ll
live with what we have.
• Documentation must be updated, but we have limited resources. We’ll use a
“document when touched” approach. (i.e., documenting only changed portions
of the system)
• The system is business critical and must be fully redocumented. (Even in this
case, an intelligent approach is to pare documentation to an essential minimum)
Each of these options is viable. A software organization must choose the one that is
most appropriate for each case.
Reverse Engineering:
The term reverse engineering has its origins in the hardware world. A company
disassembles a competitive hardware product in an effort to understand its competitor's
design and manufacturing "secrets." These secrets could be easily understood if the
competitor's design and manufacturing specifications were obtained. But these
documents are proprietary and unavailable to the company doing the reverse
engineering. In essence, successful reverse engineering derives one or more design and
manufacturing specifications for a product by examining actual specimens of the
product.
Reverse engineering for software is quite similar. In most cases, however, the
program to be reverse engineered is not a competitor's. Rather, it is the company's own
work (often done many years earlier). The "secrets" to be understood are obscure
because no specification was ever developed. Therefore, reverse engineering for
software is the process of analyzing a program in an effort to create a representation of
the program at a higher level of abstraction than source code. Reverse engineering is a
process of design recovery. Reverse engineering tools extract data, architectural, and
procedural design information from an existing program.
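As a small, illustrative example of design recovery, the sketch below uses Python's standard ast module to extract classes, methods, and call information from a toy "legacy" source string; real reverse engineering tools analyze entire programs and recover far richer data, architectural, and procedural models.

```python
import ast

# Toy "legacy" source string; a real tool would analyze the program's actual files.
legacy_source = """
class Account:
    def deposit(self, amount):
        self.log(amount)
    def log(self, amount):
        print(amount)

def transfer(src, dst, amount):
    src.deposit(-amount)
    dst.deposit(amount)
"""

tree = ast.parse(legacy_source)

# Recover a crude structural model: classes with their methods, and the
# methods each top-level function calls.
for node in tree.body:
    if isinstance(node, ast.ClassDef):
        methods = [n.name for n in node.body if isinstance(n, ast.FunctionDef)]
        print(f"class {node.name}: methods = {methods}")
    elif isinstance(node, ast.FunctionDef):
        calls = sorted({c.func.attr for c in ast.walk(node)
                        if isinstance(c, ast.Call) and isinstance(c.func, ast.Attribute)})
        print(f"function {node.name}: calls = {calls}")
```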
Code restructuring:
The most common type of reengineering is code restructuring. Some legacy
systems have relatively solid program architecture, but individual modules were coded
in a way that makes them difficult to understand, test, and maintain. In such cases, the
code within the suspect modules can be restructured.
To accomplish this activity, the source code is analyzed using a restructuring
tool. Violations of structured programming constructs are noted and code is then
restructured (this can be done automatically). The resultant restructured code is
reviewed and tested to ensure that no anomalies have been introduced. Internal code
documentation is updated.
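The following before-and-after sketch (a hand-made illustration, not the output of any particular restructuring tool) shows the kind of transformation involved: deeply nested control flow is rewritten with guard clauses while the external behavior is preserved.

```python
# Before: deeply nested control flow that is hard to read, test, and maintain.
def discount_before(customer, order_total):
    if customer is not None:
        if customer.get("active"):
            if order_total > 100:
                return 0.10
            else:
                return 0.05
        else:
            return 0.0
    else:
        return 0.0

# After: the same behavior restructured with guard clauses; only the internal
# structure changes, not what the function computes.
def discount_after(customer, order_total):
    if customer is None or not customer.get("active"):
        return 0.0
    return 0.10 if order_total > 100 else 0.05

# A quick check that the restructured code preserves behavior.
cases = [(None, 50), ({"active": False}, 200), ({"active": True}, 50), ({"active": True}, 200)]
assert all(discount_before(c, t) == discount_after(c, t) for c, t in cases)
```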
Data restructuring:
A program with weak data architecture will be difficult to adapt and enhance. In
fact, for many applications, data architecture has more to do with the long-term
viability of a program than the source code itself.
Unlike code restructuring, which occurs at a relatively low level of abstraction,
data restructuring is a full-scale reengineering activity. In most cases, data restructuring
begins with a reverse engineering activity. Current data architecture is dissected and
necessary data models are defined. Data objects and attributes are identified, and
existing data structures are reviewed for quality.
When data structure is weak, the data are reengineered. Because data
architecture has a strong influence on program architecture and the algorithms that
populate it, changes to the data will invariably result in either architectural or code-
level changes.
Forward engineering:
In an ideal world, applications would be rebuilt using an automated
“reengineering engine.” The old program would be fed into the engine, analyzed,
restructured, and then regenerated in a form that exhibited the best aspects of software
quality. In the short term, it is unlikely that such an “engine” will appear, but CASE
vendors have introduced tools that provide a limited subset of these capabilities that
addresses specific application domains (e.g., applications that are implemented using a
specific database system). More important, these reengineering tools are becoming
increasingly more sophisticated.
Forward engineering, also called renovation or reclamation, not only recovers
design information from existing software, but uses this information to alter or
reconstitute the existing system in an effort to improve its overall quality. In most cases,
reengineered software reimplements the function of the existing system and also adds
new functions and/or improves overall performance.
12. Questions:
1. What is reengineering? List the activities of a software reengineering process model.
Ans:
It is concerned with adapting existing systems to changes in their external
environment and making enhancements requested by users.
– Forward engineering
– Data restructuring
– Code restructuring
– Inventory analysis
– Document restructuring
– Reverse engineering
2. What are different types of maintenance and also explain the different steps involved
in creating a maintenance log? (Nov 2010) (10 marks)
Ans:
Types of maintenance:
• Corrective
• Perfective
• Adaptive
• Preventive
Maintenance log:
3. Write short notes on Reverse and Re-engineering. (Nov 2010) (10 marks)
Ans:
Reengineering is concerned with adapting existing systems to changes in their external
environment and making enhancements requested by users. Only about 20 percent of
all maintenance work is spent “fixing mistakes”; the remaining 80 percent is spent
adapting existing systems to changes in their external environment, making user-requested
enhancements, and reengineering applications for future use.
A software reengineering process model defines six activities, as follows:
– Forward engineering
– Inventory analysis
– Data restructuring
– Document restructuring
– Code restructuring
– Reverse engineering
4. Explain various software testing strategies. (May 2010, Nov 2010) (10 marks)
Ans:
– Unit testing
– Regression testing (additional test cases for software functions)
– Integration testing (bottom-up, top down, sandwich)
– Validation testing
o Alpha – conducted by end users at the developer’s site
o Beta – conducted by end users at their own sites
– Acceptance testing
o Benchmark
o Pilot
o Parallel
– System testing
o Recovery
o Security
o Stress
o Performance
5. Explain objectives for testing. Also explain the following terms: (Nov 2010) (10 marks)
(i) System testing
(ii) Scalability
(iii) Regression
(iv) Black box testing.
Ans:
Objectives for testing:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
Equivalence partitioning divides the input domain of a program into classes of data
from which test cases can be derived; one representative test case per class exercises
the entire class, reducing the total number of test cases needed.
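A minimal sketch of equivalence partitioning, assuming a hypothetical is_valid_age function that accepts ages from 0 to 120: one representative value is chosen from each class.

```python
import unittest

# Hypothetical function under test: accepts ages from 0 to 120 inclusive.
def is_valid_age(age):
    return 0 <= age <= 120

# One representative test value per equivalence class covers the whole class.
class AgeEquivalencePartitionTest(unittest.TestCase):
    def test_below_valid_range(self):      # class: age < 0
        self.assertFalse(is_valid_age(-5))

    def test_within_valid_range(self):     # class: 0 <= age <= 120
        self.assertTrue(is_valid_age(35))

    def test_above_valid_range(self):      # class: age > 120
        self.assertFalse(is_valid_age(200))

if __name__ == "__main__":
    unittest.main()
```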
The tester looks for plausible faults (i.e., aspects of the implementation of the system
that may result in defects). To determine whether these faults exist, test cases are
designed to exercise the design or code.
Inheritance does not obviate the need for thorough testing of all derived classes. In fact,
it can actually complicate the testing process.
Scenario-based testing concentrates on what the user does, not what the product does.
This means capturing the tasks (via use-cases) that the user has to perform, then
applying them and their variants as tests.
8 Web Engineering
1. Motivation:
2. Learning Objective:
This module explores the area of Web Engineering.
software engineering.
3. The student will be able to understand the meaning of test-driven
development along with software engineering with aspects.
3. Prerequisite:
Activities involved in requirement, analysis and design phases of software life cycle
4. Syllabus:
5. Learning Outcome:
This topic gives the knowledge of developing web applications, security
engineering and development with aspects.
6. Weightage: Marks
7. Abbreviations:
(1) TDD – test-driven development
8. Key Definitions:
9. Theory
During TDD, code is developed in very small increments (one subfunction at a time),
and no code is written until a test exists to exercise it. You should note that each
iteration results in one or more new tests that are added to a regression test suite that is
run with every change. In TDD, tests drive the detailed component design and the
resultant source code.
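The following sketch illustrates one TDD increment with a hypothetical ShoppingCart class: the test is written first, and only then is just enough production code written to make it pass; the new tests then join the regression suite.

```python
import unittest

# Step 1 of a TDD increment: write the test first, for one small piece of behavior.
class ShoppingCartTest(unittest.TestCase):
    def test_total_of_empty_cart_is_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add_item(price=3)
        cart.add_item(price=7)
        self.assertEqual(cart.total(), 10)

# Step 2: write just enough production code to make the tests pass.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add_item(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

# The tests remain in the regression suite and are rerun with every change.
if __name__ == "__main__":
    unittest.main()
```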
Security Engineering
Phishing –
The user is misled to another site that is designed to look like the original site, and
important customer data is captured there.
According to the Microsoft Developer Network the patterns & practices of Security
Engineering consists of the following activities:
Security Objectives
Security Design Guidelines
Security Modeling
Security Testing
Security Tuning
These activities are designed to help meet security objectives throughout the software life cycle.
The process is not yet mature, but it is likely that such a process will adopt
characteristics of both evolutionary and concurrent process models.
Questions: