Introduction to Software Engineering Basics
• Analysis / Context Diagram
• Flow Charts
• Entity-Relationship Diagrams
• Documentation
• Cross-Reference Listing
• Test Data
• Test Results
• Reference Guide
• Operating Procedures
• Installation Guide
• Operational Manuals
• Reusability of components
Software reusability has given rise to another area, known as component-based
software engineering. In software, reuse has made only a humble beginning: for
example, graphical user interfaces are built using reusable components that
enable the creation of graphics windows, pull-down menus, and a
wide variety of interaction mechanisms.
• Software is flexible
Software Myths
1. Software is easy to change
It is true that source code files are easy to edit, but
that is quite different from saying that software is easy to
change. Every change requires that the complete system be
re-verified; if we do not take proper care, this becomes an
extremely tedious and expensive process.
2. Computers provide greater reliability than the devices they
replace.
It is true that software does not fail in the traditional sense.
There are no limits to how many times a given piece of code
can be executed before it “wears out”. In any event, the
simple expression of this myth is that our general ledgers are
still not perfectly accurate, even though they have been
computerized. Back in the days of manual accounting
systems, human error was a fact of life.
3. Testing software can remove all the errors.
Testing can only show the presence of errors. It cannot show the
absence of errors. Our aim is to design effective test cases in order
to find maximum possible errors. The more we test, the more we
are confident about our design.
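The point that testing shows only the presence of errors, never their absence, can be illustrated with a small, hypothetical example: a function passes one test yet still contains a defect that only a different test case exposes.

```python
# Hypothetical example: a passing test does not prove absence of errors.

def average(numbers):
    # Defect: an empty list is not handled; this is only discovered
    # if we happen to design a test case that exercises it.
    return sum(numbers) / len(numbers)

# This test passes, yet the function still contains an error.
assert average([2, 4, 6]) == 4

# Only a test that exercises the empty-list case exposes the defect.
try:
    average([])
    print("no error found")
except ZeroDivisionError:
    print("error found: average([]) divides by zero")
```

The more such boundary cases the test cases cover, the more confidence we gain, but no finite set of tests can prove the code error-free.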
Contributing factors:
• Change in ratio of hardware to software costs
• Increasing importance of maintenance
• Advances in software techniques
• Increased demand for software
• Demand for larger and more complex software systems.
The Software Process
A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.
An activity strives to achieve a broad objective (e.g., communication with stakeholders) and is
applied regardless of the application domain, size of the project, complexity of the effort, or degree
of rigor with which software engineering is to be applied.
An action (e.g., architectural design) encompasses a set of tasks that produce a major work product
(e.g., an architectural model).
A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that produces a
tangible outcome.
In the context of software engineering, a process is not a rigid prescription for how to build
computer software. Rather, it is an adaptable approach that enables the people doing the work (the
software team) to pick and choose the appropriate set of work actions and tasks. The intent is
always to deliver software in a timely manner and with sufficient quality to satisfy those who have
sponsored its creation and those who will use it.
A quality focus: Total quality management, Six Sigma, and similar philosophies foster a continuous
process improvement culture, and it is this culture that ultimately leads to the development of
increasingly more effective approaches to software engineering. The bedrock that supports software
engineering is a quality focus.
Process: The foundation for software engineering is the process layer. The software engineering
process is the glue that holds the technology layers together and enables rational and timely
development of computer software. Process defines a framework that must be established for
effective delivery of software engineering technology. The software process forms the basis for
management control of software projects and establishes the context in which technical methods
are applied, work products (models, documents, data, reports, forms, etc.) are produced, milestones are
established, quality is ensured, and change is properly managed.
Methods: Software engineering methods provide the technical “how to’s” for building software.
Methods encompass a broad array of tasks that include communication, requirements analysis,
design modelling, program construction, testing, and support. Software engineering methods rely on
a set of basic principles that govern each area of the technology and include modelling activities and
other descriptive techniques.
Tools: Software engineering tools provide automated or semi-automated support for the process
and the methods. When tools are integrated so that information created by one tool can be used by
another, a system for the support of software development, called computer-aided software
engineering, is established.
The Process Framework
A process framework establishes the foundation for a complete software
engineering process by identifying a small number of framework activities that
are applicable to all software projects, regardless of their size or complexity. The
process framework encompasses a set of umbrella activities that are applicable
across the entire software process.
A generic process framework for software engineering encompasses five
activities:
• Communication
• Planning
• Modelling
• Construction
• Deployment
Communication
Before any technical work, it is important to communicate and collaborate with
the customer (and other stakeholders). The intent is to understand stakeholders’
objectives for the project and to gather requirements that help define software
features and functions.
Planning
A software project is a complicated journey, and the planning activity creates a
“map” that helps guide the team as it makes the journey. The map—called a
software project plan—defines the software engineering work by describing the
technical tasks to be conducted, the risks that are likely, the resources that will be
required, the work products to be produced, and a work schedule.
Modelling
Whether you’re a landscaper, a bridge builder, an aeronautical engineer, a
carpenter, or an architect, you work with models every day. You create a “sketch”
of the thing so that you’ll understand the big picture—what it will look like
architecturally, how the constituent parts fit together, and many other
characteristics. If required, you refine the sketch into greater and greater detail in
an effort to better understand the problem and how you’re going to solve it. A
software engineer does the same thing by creating models to better understand
software requirements and the design that will achieve those requirements.
Construction
What you design must be built. This activity combines code generation (either
manual or automated) and the testing that is required to uncover errors in the
code.
Deployment
The software (as a complete entity or as a partially completed increment) is
delivered to the customer who evaluates the delivered product and provides
feedback based on the evaluation.
These five generic framework activities can be used during the development of
small, simple programs, the creation of Web applications, and for the engineering
of large, complex computer-based systems. The details of the software process
will be quite different in each case, but the framework activities remain the same.
For many software projects, framework activities are applied
iteratively as a project progresses. That is, communication, planning, modelling,
construction, and deployment are applied repeatedly through a number of project
iterations. Each iteration produces a software increment that provides
stakeholders with a subset of overall software features and functionality. As each
increment is produced, the software becomes more and more complete.
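The iterative application of the five framework activities can be sketched as a simple loop. The activity names come from the text; the increments and feature names below are purely illustrative.

```python
# Illustrative sketch: applying the five generic framework activities
# repeatedly, where each iteration yields a software increment that
# delivers a subset of the overall features (feature names are hypothetical).

ACTIVITIES = ["communication", "planning", "modelling",
              "construction", "deployment"]

def run_iteration(number, features):
    """Apply each framework activity in turn and return the increment."""
    for activity in ACTIVITIES:
        pass  # each activity would do real work here
    return {"iteration": number, "features": features}

# Three iterations, each adding features; the product grows more complete.
backlog = [["login"], ["login", "search"], ["login", "search", "reports"]]
increments = [run_iteration(i + 1, f) for i, f in enumerate(backlog)]
print(increments[-1]["features"])  # → ['login', 'search', 'reports']
```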
Umbrella activities
Software engineering process framework activities are complemented by a
number of umbrella activities. In general, umbrella activities are applied
throughout a software project and help a software team manage and control
progress, quality, change, and risk. Typical umbrella activities include
Software project tracking and control—allows the software team to assess
progress against the project plan and take any necessary action to maintain the
schedule.
Risk management—assesses risks that may affect the outcome of the project or
the quality of the product.
Software quality assurance—defines and conducts the activities required to
ensure software quality.
Technical reviews—assess software engineering work products in an effort to
uncover and remove errors before they are propagated to the next activity.
Measurement—defines and collects process, project, and product measures that
assist the team in delivering software that meets stakeholders’ needs; can be used
in conjunction with all other framework and umbrella activities
Software configuration management—manages the effects of change
throughout the software process.
Reusability management—defines criteria for work product reuse (including
software components) and establishes mechanisms to achieve reusable
components.
Work product preparation and production—encompass the activities required
to create work products such as models, documents, logs, forms, and lists.
Software Product
Software Products are nothing but software systems delivered to the customer with
the documentation that describes how to install and use the system. In certain cases,
software products may be part of system products where hardware, as well as
software, is delivered to a customer. Software products are produced with the help of
the software process.
Software Process
Software is a set of instructions, in the form of programs, that makes the computer
system work and drives its hardware components. To produce a software product, a set of
activities is used; this set is called a software process.
PROCESS MODELS
1. Definition
• Process models were originally proposed to bring order to the
chaos of software development.
• History has indicated that these models have brought a certain
amount of useful structure to software engineering work and have
provided a reasonably effective road map for software teams.
• However, software engineering work and the products that are
produced remain on “the edge of chaos.”
• A process model provides a specific roadmap for software
engineering work. It defines the flow of all activities, actions and
tasks, the degree of iteration, the work products, and the
organization of the work that must be done.
• Software engineers and their managers adapt a process model to
their needs and then follow it. In addition, the people who have
requested the software have a role to play in the process of
defining, building, and testing it.
• From the point of view of a software engineer, the work product is
a customized description of the activities and tasks defined by the
process.
1.1 PRESCRIPTIVE PROCESS MODELS
• A prescriptive process model strives for structure and order in
software development. Activities and tasks occur sequentially
with defined guidelines for progress.
• We call them “prescriptive” because they prescribe a set of
process elements—framework activities, software engineering
actions, tasks, work products, quality assurance, and change
control mechanisms for each project.
• Each process model also prescribes a process flow (also called a
work flow)—that is, the manner in which the process elements
are interrelated to one another.
• All software process models can accommodate the generic
framework activities, but each applies a different emphasis to
these activities and defines a process flow that invokes each
framework activity (as well as software engineering actions and
tasks) in a different manner.
• Prescriptive process models are sometimes referred to as
“traditional” process models.
• Prescriptive process models define a prescribed set of process
elements and a predictable process work flow.
1.1.1 The Waterfall Model
• The Waterfall Model was the first Process Model to be
introduced. It is also referred to as a linear-sequential life
cycle model.
• There are times when the requirements for a problem are
well understood— when work flows from communication
through deployment in a reasonably linear fashion. This
situation is sometimes encountered when well-defined
adaptations or enhancements to an existing system must be
made (e.g., an adaptation to accounting software that has
been mandated because of changes to government
regulations). It may also occur in a limited number of new
development efforts, but only when requirements are well
defined and reasonably stable.
• The waterfall model, sometimes called the classic life cycle,
suggests a systematic, sequential approach to software
development that begins with customer specification of
requirements and progresses through planning, modelling,
construction, and deployment, culminating in ongoing
support of the completed software.
• The waterfall model is the oldest paradigm for software
engineering.
• A variation in the representation of the waterfall model is
called the V-model.
• V-model depicts the relationship of quality assurance actions
to the actions associated with communication, modelling,
and early construction activities.
• As a software team moves down the left side of the V, basic
problem requirements are refined into progressively more
detailed and technical representations of the problem and its
solution. Once code has been generated, the team moves up
the right side of the V, essentially performing a series of tests
(quality assurance actions) that validate each of the models
created as the team moves down the left side. In reality,
there is no fundamental difference between the classic life
cycle and the V-model.
• The V-model provides a way of visualizing how verification
and validation actions are applied to earlier engineering
work.
Waterfall model
V-model
1.1.2 Evolutionary Process Models
• Evolutionary process models produce an increasingly
more complete version of the software with each iteration.
• Evolutionary models are iterative. They are characterized
in a manner that enables you to develop increasingly
more complete versions of the software.
• We present two common evolutionary process models:
prototyping and the spiral model.
Prototyping Model :
• Often, a customer defines a set of general objectives
for software, but does not identify detailed
requirements for functions and features. In other cases,
the developer may be unsure of the efficiency of an
algorithm, the adaptability of an operating system, or
the form that human-machine interaction should take.
In these, and many other situations, a prototyping
paradigm may offer the best approach.
• Although prototyping can be used as a stand-alone process
model, it is more commonly used as a technique that
can be implemented within the context of any one of
the process models.
• The prototyping paradigm begins with communication.
You meet with other stakeholders to define the overall
objectives for the software, identify whatever
requirements are known, and outline areas where
further definition is mandatory.
• A prototyping iteration is planned quickly, and
modeling (in the form of a “quick design”) occurs.
• A quick design focuses on a representation of those
aspects of the software that will be visible to end users.
Prototyping can be problematic for the following
reasons:
➢ Stakeholders see what appears to be a working
version of the software, unaware that the
prototype is held together haphazardly, unaware
that in the rush to get it working you haven’t
considered overall software quality or long-term
maintainability.
➢ As a software engineer, you often make
implementation compromises in order to get a
prototype working quickly. An inappropriate
operating system or programming language may
be used simply because it is available and known;
an inefficient algorithm may be implemented
simply to demonstrate capability.
• Although problems can occur, prototyping can be an effective
paradigm for software engineering.
The Spiral Model :
• Originally proposed by Barry Boehm.
• The spiral model is an evolutionary software process
model that couples the iterative nature of prototyping
with the controlled and systematic aspects of the
waterfall model.
• It provides the potential for rapid development of
increasingly more complete versions of the software.
Boehm describes the model in the following manner.
• The spiral development model is a risk-driven process
model generator that is used to guide multi-stakeholder
concurrent engineering of software-intensive systems.
• It has two main distinguishing features. One is a cyclic
approach for incrementally growing a system’s degree
of definition and implementation while decreasing its
degree of risk. The other is a set of anchor point
milestones for ensuring stakeholder commitment to
feasible and mutually satisfactory system solutions.
Key assumptions:
• It is difficult to predict in advance which software requirements will persist and
which will change. It is equally difficult to predict how customer priorities will
change as the project proceeds.
• For many types of software, design and construction are interleaved. That is,
both activities should be performed in tandem so that design models are proven
as they are created. It is difficult to predict how much design is necessary before
construction is used to prove the design.
Agile principles
• Throughout the project, business people and developers work together on a daily basis.
• Projects are built around motivated people; they are given the proper environment
and support, and trusted to get the job done.
• The most efficient and effective method of conveying information is face-to-face
conversation in the development team.
1. Competence: team members must have talent, skill, and knowledge of the
processes used. A team cannot work in an agile way if it does not know the key
concepts of this process. In many companies, the team has all the technical skills
but does not know the process. This can be addressed with a simple workshop led
by someone who already knows the process well.
2. Collaboration: the good old ability to work in a team is also essential. People
should cooperate among themselves and with all involved, for the sake of the
project. This requires above all humility. Even the most senior developers have
something to learn from the rest of the team.
3. Focus: all team members must be focused on one common goal: to deliver the
working software that is needed. Remember that the team itself must stop from
time to time (e.g., every 15 days) to reflect on what is good and what can be
improved in the work process.
4. Decision making: the development team should have the freedom to control
its own destiny and should have autonomy in technical and project matters. It is
the staff who should define the best way to control versions of code, make
builds, deploy, run tests, document requirements, etc. The company can
(and should) suggest good practices, but in the end it is the (self-organizing)
staff that will adopt the methods or processes it thinks best. It is necessary
that the staff record the main lessons learned along the way.
5. Trust and respect: the team must be cohesive and must demonstrate the trust and
respect needed to make a strong team. Remember that the main objective is to
make the team strong enough that the whole is greater than the sum of its parts.
6. Self-organization: it is the team itself that should organize to perform the work,
looking at every moment for what else can be improved in the process.
Self-organization is very important to improve collaboration; the team itself
selects how much work it takes on in each iteration.
7. Fuzzy problem-solving ability: any good software team must be allowed the
freedom to control its own destiny. This implies that the team is given
decision-making authority even when the problem at hand is ambiguous.
Extreme programming (XP) is one of the most important software development frameworks among the
Agile models. It is used to improve software quality and responsiveness to customer requirements. The
extreme programming model recommends taking the best practices that have worked well in past program
development projects to extreme levels.
Extreme Programming uses an object-oriented approach as its preferred development paradigm and
encompasses a set of rules and practices that occur within the context of four framework activities:
• Planning
• Design
• Coding
• Testing
Planning
Design
Coding
➢ Recommends the construction of a unit test for a story before coding commences
➢ Encourages pair programming
Testing
➢ All unit tests are executed daily
➢ Acceptance tests are defined by the customer and executed to assess customer visible
functionality
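XP's test-first rule (the unit test for a story is constructed before coding commences) might look like the sketch below; the story, the function name, and the numbers are assumptions made for illustration.

```python
# Sketch of the XP test-first practice for a hypothetical story:
# "An order total applies a 10% discount when the amount exceeds 100."

# Step 1: the unit test is written BEFORE the production code.
def test_order_total():
    assert order_total(50) == 50    # no discount below the threshold
    assert order_total(200) == 180  # 10% discount applied

# Step 2: only then is the code written to make the test pass.
def order_total(amount):
    return amount * 0.9 if amount > 100 else amount

test_order_total()  # executed daily, as XP recommends for all unit tests
```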
Some of the projects that are suitable to develop using XP model are given below:
Small projects: The XP model is very useful in small projects with small teams, as face-to-face meetings
are easier to achieve.
Projects involving new technology or research projects: These projects face rapidly changing
requirements and technical problems, so the XP model is well suited to them.
Scrum (software development)
Scrum is a type of Agile framework. It is a framework within which people can address complex adaptive
problems while keeping the productivity and creativity of product delivery at the highest possible values.
Scrum uses an iterative process.
Lifecycle of Scrum
Sprint
A Sprint is a time-box of one month or less. A new Sprint starts immediately after the completion of the
previous Sprint.
Release
Sprint Review
If the product still has some unfinished features, they are checked at this stage, and then the
product is passed to the Sprint Retrospective stage.
Sprint Retrospective
Product Backlog
Sprint Backlog
Sprint Backlog is divided into two parts: the product features assigned to the sprint and the sprint planning meeting.
• Requirements elicitation
• Requirements Analysis
• Requirements Documentation
• Requirements Review
Requirements Elicitation:
This is also known as the gathering of requirements. Here, requirements
are identified with the help of the customer and of existing system processes,
where such information is available.
Requirements Analysis:
The analysis of requirements starts after requirements elicitation. The
requirements are analysed in order to identify inconsistencies, defects,
omissions, etc. We describe requirements in terms of relationships and
also resolve conflicts, if any.
Requirements Documentation:
This is the end product of requirements elicitation and analysis. The
documentation is very important as it will be the foundation for the
design of the software. The document is known as the software
requirements specification (SRS).
Requirements Review:
The review process is carried out to improve the quality of the SRS. It
may also be called requirements verification. For maximum benefit,
review and verification should not be treated as a discrete activity to be
done only at the end of the preparation of the SRS. They should be treated
as a continuous activity that is incorporated into elicitation, analysis, and
documentation.
Requirement Engineering
• The process of collecting the software requirements from the client and
then understanding, evaluating, and documenting them is called
requirement engineering.
• Requirement engineering constructs a bridge to design and
construction.
2. Elicitation
Elicitation means finding the requirements from stakeholders.
Eliciting requirements is difficult because several problems can occur
during elicitation.
3. Elaboration
• In this task, the information taken from the user during inception and
elicitation is expanded and refined.
• Create and refine user scenarios.
• Find analysis classes, their attributes, and their services.
• Its main task is to develop a refined model of the software covering the
functions, features, and constraints of the software.
4. Negotiation
• In the negotiation task, a software engineer decides how the project
will be achieved with limited business resources.
• Reconcile requirements conflicts through negotiation.
• Prioritize requirements.
• Assess their cost and risk.
• Create rough estimates of development effort and assess the impact of
each requirement on project cost and delivery time.
5. Specification
• In this task, the requirements engineer constructs a final work product.
• The work product is in the form of a software requirements specification.
• In this task, the requirements of the proposed software are formalized,
covering the informative, functional, and behavioral aspects.
• The requirements are formalized in both graphical and textual formats:
• A written document
• A set of graphical models
6. Validation
• The work product built as an output of requirement engineering is
assessed for quality through a validation step.
• Formal technical reviews by the software engineer, customer, and
other stakeholders serve as the primary requirements validation
mechanism.
7. Requirement management
• It is a set of activities that help the project team identify, control, and
track the requirements and the changes made to them at any time during
the ongoing project.
• These tasks start with identification: a unique identifier is assigned to
each requirement.
• After the requirements are finalized, a traceability table is developed.
• A traceability table tracks, for example, the features, sources,
dependencies, subsystems, and interfaces of the requirements.
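A requirements traceability table can be represented as rows keyed by each requirement's unique identifier. The requirement IDs, features, and sources below are hypothetical, invented for illustration.

```python
# Hypothetical traceability table: each requirement gets a unique ID,
# and its source, dependencies, and subsystem are tracked so that
# the impact of any change can be traced.

traceability = {
    "REQ-001": {"feature": "user login", "source": "customer interview",
                "depends_on": [], "subsystem": "auth"},
    "REQ-002": {"feature": "password reset", "source": "help-desk logs",
                "depends_on": ["REQ-001"], "subsystem": "auth"},
}

def impacted_by(req_id):
    """Return requirements that depend on req_id (change-impact tracing)."""
    return [rid for rid, row in traceability.items()
            if req_id in row["depends_on"]]

print(impacted_by("REQ-001"))  # → ['REQ-002']
```

When a requirement changes, such a lookup tells the team which dependent requirements must be re-examined.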
TYPES OF REQUIREMENTS
1. Known requirements
2. Unknown requirements
3. Undreamt requirements
STAKEHOLDER:
1. Known requirements
2. Unknown requirements (forgotten by the stakeholder)
3. Undreamt requirements
FUNCTIONAL REQUIREMENTS
These are the requirements that the end user specifically demands as
basic facilities that the system should offer. All these functionalities
need to be necessarily incorporated into the system as a part of the
contract. These are represented or stated in the form of input to be given
to the system, the operation performed and the output expected. They are
basically the requirements stated by the user which one can see directly
in the final product, unlike the non-functional requirements.
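Because a functional requirement is stated as input, operation, and output, it maps naturally onto a function plus its expected results. The withdrawal requirement below is a hypothetical example.

```python
# A functional requirement stated as input -> operation -> output:
# "Given an account balance and a withdrawal amount (input), deduct the
#  amount if funds are sufficient (operation), and return the new balance
#  or refuse the withdrawal (output)."  (hypothetical requirement)

def withdraw(balance, amount):
    if amount > balance:
        return balance, "refused: insufficient funds"
    return balance - amount, "ok"

print(withdraw(100, 30))   # → (70, 'ok')
print(withdraw(100, 300))  # → (100, 'refused: insufficient funds')
```

Stating the requirement this concretely makes it directly testable, unlike a non-functional quality constraint.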
NON-FUNCTIONAL REQUIREMENTS
These are basically the quality constraints that the system must satisfy
according to the project contract. The priority or extent to which these
factors are implemented varies from one project to another. They are also
called non-behavioral requirements. They basically deal with issues
like:
• Portability
• Security
• Maintainability
• Reliability
• Scalability
• Performance
• Reusability
• Flexibility
USER REQUIREMENTS
SYSTEM REQUIREMENTS
Feasibility Study
A feasibility study in software engineering is a study to evaluate the
feasibility of a proposed project or system. The feasibility study is one of
the four important stages of the software project management process.
As the name suggests, a feasibility study is a feasibility analysis: a
measure of how beneficial the development of the software product will
be for the organization from a practical point of view. A feasibility study
is carried out for many purposes, to analyze whether the software
product will be right in terms of development, implementation, its
contribution to the organization, etc.
Types of Feasibility Study :
The feasibility study mainly concentrates on the five areas mentioned
below. Among these, the economic feasibility study is the most important
part of the feasibility analysis, and the legal feasibility study is the least
considered.
1. Technical Feasibility –
In a technical feasibility study, current resources (both hardware and
software) along with the required technology are analyzed/assessed for
developing the project. The study reports whether the correct resources
and technologies required for project development exist. It also analyzes
the technical skills and capabilities of the technical team, whether
existing technology can be used, whether maintenance and upgrades of
the chosen technology are easy, etc.
2. Operational Feasibility –
In an operational feasibility study, the degree to which the product will
serve the requirements is analyzed, along with how easy the product will
be to operate and maintain after deployment. Other operational concerns
include determining the usability of the product and whether the solution
suggested by the software development team is acceptable.
3. Economic Feasibility –
In an economic feasibility study, the cost and benefit of the project are
analyzed. A detailed analysis is carried out of what the cost of
developing the project will be, including all costs of final development
such as the hardware and software resources required, design and
development cost, operational cost, and so on. It is then analyzed
whether the project will be financially beneficial for the organization.
4. Legal Feasibility –
In a legal feasibility study, the project is analyzed from a legal point of
view. This includes analyzing barriers to legal implementation of the
project, data protection acts or social media laws, project certification,
licensing, copyright, etc. Overall, the legal feasibility study determines
whether the proposed project conforms to legal and ethical
requirements.
5. Schedule Feasibility –
In a schedule feasibility study, mainly the timelines/deadlines of the
proposed project are analyzed, including how much time the team will
take to complete the final project. This has a great impact on the
organization, as the purpose of the project may be defeated if it cannot
be completed on time.
Feasibility Study Process :
The below steps are carried out during entire feasibility analysis.
1. Information assessment
2. Information collection
3. Report writing
4. General information
There are a number of requirements elicitation methods. Few of them are listed
below –
1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach
2. Brainstorming Sessions:
• It is a group technique
• It is intended to generate lots of new ideas hence providing a platform to
share views
• A highly trained facilitator is required to handle group bias and group
conflicts.
• Every idea is documented so that everyone can see it.
• Finally, a document is prepared which consists of the list of
requirements and their priority if possible.
5. Use Case Approach:
Use cases describe the ‘what’ of a system, not the ‘how’; hence, they
give only a functional view of the system.
The use case design has three major components: actors, use cases,
and the use case diagram.
1. Actor – An actor is an external agent that lies outside the system but
interacts with it in some way. An actor may be a person, a machine,
etc. It is represented as a stick figure. Actors can be primary
actors or secondary actors.
Primary actor – one that requires assistance from the
system to achieve a goal.
Secondary actor – one from which the system needs assistance.
2. Use cases – They describe the sequence of interactions between
actors and the system. They capture who(actors) do
what(interaction) with the system. A complete set of use cases
specifies all possible ways to use the system.
3. Use case diagram – A use case diagram graphically represents
what happens when an actor interacts with a system. It captures
the functional aspect of the system.
• A stick figure is used to represent an actor.
• An oval is used to represent a use case.
• A line is used to represent a relationship between an actor and a use
case.
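The three use-case components (actors, use cases, and the relationships drawn between them) can be captured as plain data before any diagram is drawn. The ATM-style actors and use cases below are assumptions for illustration.

```python
# Sketch: actors, use cases, and actor/use-case relationships
# for a hypothetical ATM system (names are invented).

actors = {"customer": "primary", "bank server": "secondary"}
use_cases = ["withdraw cash", "check balance"]
relationships = [  # the lines of the use case diagram
    ("customer", "withdraw cash"),
    ("customer", "check balance"),
    ("bank server", "withdraw cash"),
]

def use_cases_for(actor):
    """All use cases an actor participates in."""
    return [uc for a, uc in relationships if a == actor]

print(use_cases_for("customer"))  # → ['withdraw cash', 'check balance']
```

Enumerating the relationships this way is a textual equivalent of the stick-figure/oval/line notation described above.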
3. Facilitated Application Specification Technique:
Its objective is to bridge the expectation gap – the difference
between what the developers think they are supposed to build
and what customers think they are going to get.
A team-oriented approach to requirements gathering is used.
Each attendee is asked to make a list of objects that are:
1. Part of the environment that surrounds the system
2. Produced by the system
3. Used by the system
Each participant prepares his/her list, different lists are then
combined, redundant entries are eliminated, team is divided into
smaller sub-teams to develop mini-specifications and finally a
draft of specifications is written down using all the inputs from the
meeting.
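The consolidation step described above (combine the participants' lists and eliminate redundant entries) can be sketched in a few lines. The participant names and list contents below are made-up examples:

```python
# Sketch of the FAST consolidation step: merge each participant's
# object list and drop redundant entries, preserving first-seen order.
# The resulting combined list is what sub-teams would turn into
# mini-specifications. Example data is hypothetical.

def combine_lists(*participant_lists):
    """Merge lists, eliminating duplicates while keeping order."""
    combined = []
    for lst in participant_lists:
        for item in lst:
            if item not in combined:
                combined.append(item)
    return combined

alice = ["printer", "report", "user account"]
bob = ["report", "database", "printer"]

draft_objects = combine_lists(alice, bob)
print(draft_objects)  # ['printer', 'report', 'user account', 'database']
```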
2. Quality Function Deployment:
Quality Function Deployment (QFD) is a process, or set of tools,
used to define the customer requirements for a product and to convert
those requirements into engineering specifications and plans such
that the customer requirements for the product are satisfied.
Product planning:
Translating what the customer wants or needs into a set of prioritized
design requirements, which describe the look and design of the product.
This step involves benchmarking – comparing the product's performance
with competitors' products – and setting targets for improvement and
for achieving a competitive edge.
Part Planning:
Translating the product requirement specifications into part
characteristics.
For example, if requirement is that product should be portable,
then characteristics could be light-weight, small size, compact,
etc.
Process Planning :
Translating part characteristics into an effective
and efficient process. The ability to deliver six
sigma quality should be maximized.
Production Planning:
Translating the process into manufacturing or service delivery methods.
In this step too, the ability to deliver six sigma quality
should be improved.
Benefits of QFD:
Customer-focused –
The very first step of QFD is understanding and collecting
all user requirements and expectations for the product. The company
does not focus on what it thinks the customer wants; instead, it
asks customers and focuses on the requirements and expectations they
put forward (the voice of the customer).
Competitor Analysis –
The House of Quality is a significant tool used to compare the voice
of the customer with the design specifications.
Structure and Documentation –
Tools used in Quality Function Deployment are very well
structured for capturing decisions made and lessons learned
during development of product. This documentation can assist in
development of future products.
Low Development Cost –
Since QFD focuses and pays close attention to customer
requirements and expectations from the initial steps, the
chances of late design changes or modifications are greatly
reduced, resulting in lower product development cost.
Shorter Development Time –
The QFD process prevents wastage of time and resources because enough
emphasis is placed on customer needs and wants for the product.
Since the customer requirements are understood and developed in the
right way, development of non-value-added features or
unnecessary functions is avoided, so the product development team
wastes no time.
A QFD Tool – House of Quality (HOQ):
The House of Quality (HOQ) is a conceptual map, or matrix, that
provides an understanding of how customer requirements
(WHATs) are related to the various technical descriptors or design
parameters (HOWs) and their priority levels. The House of Quality is
also known as the Quality Matrix. The matrix gets its name from the
fact that it resembles the shape of a house.
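The core of the HOQ matrix – weighting customer requirements and scoring each technical descriptor against them – can be sketched numerically. The requirements, weights, and the common 9/3/1 relationship scale below are illustrative assumptions, not part of the text:

```python
# Sketch of a House of Quality relationship matrix: customer
# requirements (WHATs) carry importance weights; each technical
# descriptor (HOW) is scored by summing weight * relationship
# strength. The 9/3/1/0 strength scale is common QFD practice;
# all entries here are made-up examples.

whats = {"portable": 5, "durable": 3}          # importance 1..5
hows = ["weight", "material strength"]

# relationship strength: 9 strong, 3 moderate, 1 weak, 0 none
matrix = {
    "portable": {"weight": 9, "material strength": 1},
    "durable":  {"weight": 1, "material strength": 9},
}

def how_priorities(whats, hows, matrix):
    """Score each technical descriptor: sum of weight * strength."""
    return {h: sum(w * matrix[req][h] for req, w in whats.items())
            for h in hows}

priorities = how_priorities(whats, hows, matrix)
print(priorities)  # {'weight': 48, 'material strength': 32}
```

The descriptor with the highest score ("weight" here) is the design parameter that most strongly serves the prioritized customer requirements.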
entire system. After the completion of analysis, it is expected that the understandability of the project may improve significantly.
Here, we may also interact with the customer to clarify points of confusion and to understand which requirements are more
important than others. The various steps of requirements analysis are shown in Fig. 3.4.
(i) Draw the context diagram. The context diagram is a simple model that defines the boundaries and interfaces of the
proposed system with the external world. It identifies the entities outside the proposed system that interact with the system. The
context diagram of student result management system (as discussed earlier) is given below:
(ii) Development of a prototype (optional). One effective way to find out what the customer really wants is to construct a
prototype, something that looks and preferably acts like a part of the system they say they want.
We can use their feedback to continuously modify the prototype until the customer is satisfied. Hence, a prototype helps the
client to visualise the proposed system and increases the understanding of the requirements. If developers and users are not certain about
some of the requirements, a prototype may help both parties reach a final decision.
Some projects are developed for the general market. In such cases, the prototype should be shown to a representative
sample of the population of potential purchasers. Even though the persons who try out a prototype may not buy the final system,
their feedback may allow us to make the product more attractive to others. Other projects are developed for a specific customer
under contract. On such projects, only that customer's opinion counts, so the prototype should be shown to the prospective
users in the customer organisation.
The prototype should be built quickly and at a relatively low cost. Hence it will always have limitations and would not be
acceptable in the final system. This is an optional activity, although many organisations develop prototypes for better
understanding before the finalisation of the SRS.
(iii) Model the requirements. This process usually consists of various graphical representations of the functions, data
entities, external entities and the relationships between them. The graphical view may help to find incorrect, inconsistent, missing
and superfluous requirements. Such models include data flow diagrams, entity relationship diagrams, data dictionaries, state-transition diagrams etc.
(iv) Finalise the requirements. After modeling the requirements, we will have a better understanding of the system behaviour.
The inconsistencies and ambiguities have been identified and corrected. The flow of data amongst various modules has been analysed.
Elicitation and analysis activities have provided better insight into the system. Now we finalise the analysed requirements, and the next
step is to document these requirements in a prescribed format.
1. All names should be unique. This makes it easier to refer to items in the DFD.
2. Remember that a DFD is not a flow chart. Arrows in a flow chart represent the order of events; arrows in DFD
represent flowing data. A DFD does not imply any order of events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a DFD, suppress that urge! A
diamond-shaped
box is used in flow charts to represent decision points with multiple exit paths of which only one is taken. This implies an
ordering of events, which makes no sense in a DFD.
4. Do not become bogged down with details. Defer error conditions and error handling until the end of the analysis.
Standard symbols for DFDs are derived from electric circuit diagram analysis and are shown in Fig. 3.5 [SAGE90].
Symbol            Name                            Description
Arrow             Data flow                       Used to connect processes to each other and to sources or sinks; the arrowhead indicates the direction of data flow.
Circle (bubble)   Process                         Transforms data inputs into data outputs.
Parallel lines    Data store                      A repository of data; the arrowheads indicate net inputs to and net outputs from the store.
Rectangle         Source or sink (external entity) Acts as a source of system inputs or a sink of system outputs.
A circle (bubble) shows a process that transforms data inputs into data outputs. A curved line shows the flow of data into or out of a process or
data store. A set of parallel lines shows a place for the collection of data items. A data store indicates that the data is stored and can
be used at a later stage or by other processes in a different order. The data store can hold an element or a group of elements. A source or
sink is an external entity and acts as a source of system inputs or a sink of system outputs.
Leveling
The DFD may be used to represent a system or software at any level of abstraction. In fact, DFDs may be partitioned into
levels that represent increasing information flow and functional detail. A level-0 DFD, also called a fundamental system model or
context diagram, represents the entire software element as a single bubble with input and output data indicated by incoming and
outgoing arrows, respectively [PRES2K]. Then the system is decomposed and represented as a DFD with multiple bubbles. Parts of
the system represented by each of these bubbles are then decomposed and documented as more and more detailed DFDs. This
process may be repeated at as many levels as necessary until the problem at hand is well understood. It is important to preserve the
number of inputs and outputs between levels; this concept is called leveling by DeMarco. Thus, if bubble "A" has two inputs x1 and x2, and
one output y, then the expanded DFD that represents "A" should have exactly two external inputs and one external output, as
shown in Fig. 3.6 [DEMA79, DAVI90].
The level-0 DFD, also called the context diagram, of the result management system is shown in Fig. 3.7. As the bubbles are
decomposed into less and less abstract bubbles, the corresponding data flows may also need to be decomposed. The level-1 DFD of the result
management system is given in Fig. 3.8.
This provides a detailed view of the requirements and the flow of data from one bubble to another.
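DeMarco's balancing rule above can be checked mechanically: the expansion of a bubble must have exactly the same external inputs and outputs as the bubble it refines. A minimal sketch, with flow names invented for illustration:

```python
# Sketch of DFD leveling (balancing): a parent bubble and its expanded
# DFD are each described by the sets of named data flows crossing
# their boundary. They are balanced when those sets match exactly.
# Flow names are hypothetical.

def is_balanced(parent, expansion):
    """True when the expansion preserves the parent's external flows."""
    return (set(parent["inputs"]) == set(expansion["inputs"]) and
            set(parent["outputs"]) == set(expansion["outputs"]))

bubble_a = {"inputs": {"x1", "x2"}, "outputs": {"y"}}
level_2  = {"inputs": {"x1", "x2"}, "outputs": {"y"}}

print(is_balanced(bubble_a, level_2))  # True
print(is_balanced(bubble_a, {"inputs": {"x1"}, "outputs": {"y"}}))  # False
```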
Data Dictionary
Range of values: records all possible values, e.g. total marks must lie
between 0 and 100.
The mathematical operators used within the data dictionary are
defined in the table:
Notation    Meaning
x = y[a]z   x consists of some occurrences of data element a, numbering between y and z
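A data-dictionary entry like "total marks must lie between 0 and 100" is exactly the kind of constraint that can be enforced in code. The sketch below assumes a hypothetical dictionary format (field name mapped to type and range):

```python
# Sketch of enforcing a data-dictionary range-of-values constraint.
# The entry format (type/min/max) is a hypothetical representation of
# the dictionary described in the text.

data_dictionary = {
    "total_marks": {"type": int, "min": 0, "max": 100},
}

def validate(field, value, dd=data_dictionary):
    """Check a value against its data-dictionary entry."""
    entry = dd[field]
    if not isinstance(value, entry["type"]):
        return False
    return entry["min"] <= value <= entry["max"]

print(validate("total_marks", 85))   # True
print(validate("total_marks", 120))  # False: out of range
```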
Purpose of ER Diagram
• Entity
• Attributes
• Relationship
1. ENTITY
Entity Set
2. ATTRIBUTES
1. Key attribute
2. Composite attribute
3. Single-valued attribute
4. Multi-valued attribute
5. Derived attribute
1. Key Attribute
5. Derived attribute: Derived attributes are attributes that do not exist in the
physical database; their values are derived from other attributes present in the
database. For example, age can be derived from date_of_birth. In the ER diagram,
derived attributes are depicted by a dashed ellipse.
3. Relationship
Relationship Set:
Degree of Relationship:
1. Unary (degree1)
2. Binary (degree2)
3. Ternary (degree3)
Cardinality
Cardinality describes the number of entities in one entity set, which can be
associated with the number of entities of other sets via relationship set.
Types of cardinalities
1. One to One: One entity from entity set A can be associated with at most one
entity of entity set B, and vice versa. For example, each student has only one
student ID, and each student ID is assigned to only one person, so the relationship
is one to one.
2. One to Many: When a single instance of an entity is associated with more than
one instance of another entity, it is called a one-to-many relationship. For
example, a client can place many orders, but an order cannot be placed by many
clients.
3. Many to One: More than one entity from entity set A can be associated with at
most one entity of entity set B, however an entity from entity set B can be
associated with more than one entity from entity set A. For example - many
students can study in a single college, but a student cannot study in many colleges
at the same time.
4. Many to Many: One entity from A can be associated with more than one entity
from B and vice-versa. For example, the student can be assigned to many projects,
and a project can be assigned to many students.
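The four cardinalities above map directly onto plain data structures: one-to-one is a bijective mapping, one-to-many maps a key to a list, many-to-one maps many keys to the same value, and many-to-many needs a set of pairs. All example data below is invented for illustration:

```python
# Sketch of the four cardinalities as mappings. Example data is made up.

# One to one: each student has exactly one student ID
student_id = {"Asha": "S001", "Ravi": "S002"}

# One to many: a client places many orders; each order has one client
orders_by_client = {"Asha": ["O1", "O2"], "Ravi": ["O3"]}

# Many to one: many students study in a single college
college_of = {"Asha": "City College", "Ravi": "City College"}

# Many to many: students <-> projects, stored as (student, project) pairs
assignments = [("Asha", "P1"), ("Asha", "P2"), ("Ravi", "P1")]

def students_on(project):
    """All students assigned to a project (many-to-many lookup)."""
    return [s for s, p in assignments if p == project]

print(students_on("P1"))  # ['Asha', 'Ravi']
```

Note that the many-to-many case cannot be stored as a simple dict in either direction, which is why relational designs resolve it with a separate association table of pairs.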
Prototyping
Two Approaches:
>Throwaway Prototyping
>Evolutionary Prototyping
Throwaway Prototyping
Evolutionary Prototyping
The name for this type of software prototyping is also quite self-
explanatory. An evolutionary prototype is much more functional
than a throwaway prototype, with some primary features coded
into it from the get-go instead of it being a mere dummy focused
solely on design. The screens (user interface) also have actual
code behind them. The user can see and interact with the
prototype as if it were the actual product. Over time and
multiple feedback cycles, the prototype may have more
advanced functionality added to it as needed by the client. The
process thus results in the finished product.
Requirement Documentation
Requirements documentation is a very important activity after requirements elicitation and
analysis. It is the way to represent requirements in a consistent format. Requirements
document is called Software Requirement Specification(SRS).
The SRS is a specification for a particular software product, program or set of programs that
performs certain functions in a specific environment. It serves a number of purposes depending
on who is writing it. First, the SRS could be written by the customer of a system. Second, the SRS
could be written by the developer of the system. The two scenarios create entirely different
situations and establish entirely different purposes for the document. In the first case, the SRS is used to
define the needs and expectations of the users. In the second case, the SRS is written for a different
purpose and serves as a contract document between customer and developer.
This reduces the probability of the customer being disappointed with the final product. The SRS
written by developer(second case) is of our interest and discussed in the subsequent sections.
• Correct
• Unambiguous
• Complete
• Consistent
• Ranked for importance and/or stability
• Verifiable
• Modifiable
• Traceable
Each of the above mentioned characteristics is discussed below :
Correct
The SRS is correct if and only if every requirement stated therein is one that the software shall
meet. There is no tool or procedure that assures correctness. If the software must respond to
all button presses within 5 seconds and the SRS stated that “the software shall respond to all
button presses within 10 seconds”, then that requirement is incorrect.
Unambiguous
The SRS is unambiguous if and only if every requirement stated therein has only one
interpretation. Each sentence in the SRS should have a unique interpretation. Imagine that a
sentence extracted from the SRS is given to 10 people who are asked for their interpretation. If
there is more than one interpretation, then that sentence is probably ambiguous.
In cases, where a term used in a particular context could have multiple meanings, the term
should be included in a glossary where its meaning is made more specific. The SRS should be
unambiguous to both those who create it and to those who use it.
Requirements are often written in natural language (e.g., English). Natural language is inherently
ambiguous. A natural-language SRS should be reviewed by an independent party to identify
ambiguous use of language so that it can be corrected. Ambiguity can also be avoided by using a
formal requirement specification language, whose language processors automatically detect
many lexical, syntactic, and semantic errors.
Complete
The SRS is complete if, and only if, it includes the following elements:
1. All significant requirements, whether relating to functionality, performance, design
constraints, attributes or external interfaces.
2. Definition of the responses of the software to all realizable classes of input data in all
realizable classes of situations. Note that it is important to specify the responses to both
valid and invalid input values.
3. Full labels and references to all figures, tables and diagrams in the SRS and definition of
all terms and units of measure.
Consistent
The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict.
There are three types of likely conflicts in the SRS :
1. The specified characteristics of real-world objects may conflict. For example,
a. The format of an output report may be described in one requirement as tabular but
in another as textual.
b. One requirement may state that all lights shall be green while another states that all
lights shall be blue.
2. There may be logical or temporal conflict between two specified actions, for example,
a. One requirement may specify that the program will add two inputs and another may
specify that the program will multiply them.
b. One requirement may state that “A” must always follow “B”, while another requires
that “A and B” occur simultaneously.
3. Two or more requirements may describe the same real-world object but use different
terms for that object. For example, a program’s request for a user input may be called a
“prompt” in one requirement and a “cue” in another. The use of standard terminology
and definitions promotes consistency.
Modifiable
The SRS is modifiable if and only if its structure and style are such that any changes to the
requirements can be made easily, completely and consistently while retaining the structure and
style.
The requirements should not be redundant. Redundancy itself is not an error but, it can easily
lead to errors. Redundancy can occasionally help to make an SRS more readable, but a problem
can arise when the redundant document is updated. For instance, a requirement may be
altered in only one of the places out of many places where it appears.
The SRS then becomes inconsistent. Whenever redundancy is necessary, the SRS should include
explicit cross-references to make it modifiable.
Traceable
The SRS is traceable if the origin of each requirement is clear and if it facilitates the
referencing of each requirement in future development or enhancement documentation. Two
types of traceability are recommended :
1. Backward traceability : This depends upon each requirement explicitly referencing its
source in earlier documents.
2. Forward traceability : This depends upon each requirement in the SRS having a unique
name or reference number.
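Both traceability directions can be sketched with simple lookup tables: each requirement carries a unique ID (enabling forward traceability) and a reference to its source in an earlier document (backward traceability). The IDs, source labels, and module names below are hypothetical:

```python
# Sketch of backward and forward traceability. Requirement IDs,
# source references, and design-module names are invented examples.

requirements = [
    {"id": "SRS-001", "source": "UserNeeds-3.2",
     "text": "The system shall authenticate users."},
    {"id": "SRS-002", "source": "UserNeeds-4.1",
     "text": "The system shall print result reports."},
]

# Backward traceability: requirement ID -> its origin in an earlier document
backward = {r["id"]: r["source"] for r in requirements}

# Forward traceability: design elements reference requirement IDs, so a
# change to a design element can be traced to the requirements it affects
design_refs = {"module_login": ["SRS-001"], "module_report": ["SRS-002"]}

def affected_requirements(module):
    return design_refs.get(module, [])

print(backward["SRS-001"])                    # UserNeeds-3.2
print(affected_requirements("module_login"))  # ['SRS-001']
```

This is exactly the lookup needed in the maintenance phase: given a modified design document or module, recover the complete set of requirements it may affect.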
The forward traceability of the SRS is especially important when the software product enters
the operation and maintenance phase. As code and design documents are modified, it is
essential to be able to ascertain the complete set of requirements that may be affected by
those modifications.
Organization of the SRS
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions, Acronyms, and Abbreviations
1.4 References
1.5 Overview
2. The Overall Description
2.1 Product Perspective
2.1.1 System Interfaces
2.1.2 Interfaces
2.1.3 Hardware Interfaces
2.1.4 Software Interfaces
2.1.5 Communications Interfaces
2.1.6 Memory Constraints
2.1.7 Operations
2.1.8 Site Adaptation Requirements
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
2.5 Assumptions and Dependencies.
2.6 Apportioning of Requirements
3. Specific Requirements
3.1 External interfaces
3.2 Functions
3.3 Performance Requirements
3.4 Logical Database Requirements
3.5 Design Constraints
3.5.1 Standards Compliance
3.6 Software System Attributes
3.6.1 Reliability
3.6.2 Availability
3.6.3 Security
3.6.4 Maintainability
3.6.5 Portability
3.7 Organizing the Specific Requirements
3.7.1 System Mode
3.7.2 User Class
3.7.3 Objects
3.7.4 Feature
3.7.5 Stimulus
3.7.6 Response
3.7.7 Functional Hierarchy
3.8 Additional Comments
4. Change Management Process
5. Document Approvals
6. Supporting Information
1. Introduction
The following subsections of the Software Requirements Specifications(SRS) document should
provide an overview of the entire SRS.
1.1 Purpose
Identify the purpose of this SRS and its intended audience. In this subsection, describe the
purpose of the particular SRS and specify the intended audience for the SRS.
1.2 Scope
In this subsection:
(i) Identify the software product(s) to be produced by name
(ii) Explain what the software product(s) will, and, if necessary, will not do
(iii) Describe the application of the software being specified, including relevant benefits,
objectives, and goals
(iv) Be consistent with similar statements in higher-level specifications if they exist.
2.1 Product Perspective
Put the product into perspective with other related products. If the product is independent
and totally self-contained, it should be so stated here. If the SRS defines a product that is a
component of a larger system, as frequently occurs, then this subsection relates the require-
ments of the larger system to the functionality of the software and identifies the interfaces
between that system and the software.
A block diagram showing the major components of the large system, interconnections, and
external interfaces can be helpful.
The following subsections describe how the software operates inside various constraints.
2.1.1 System Interfaces
List each system interface and identify the functionality of the software to accomplish the
system requirement and the interface description to match the system.
2.1.2 Interfaces
Specify:
(i) The logical characteristics of each interface between the software product and its
users.
(ii) All the aspects of optimizing the interface with the person who must use the system.
3. Specific Requirements
This section contains all the software requirements at a level of detail sufficient to enable
designers to design a system to satisfy those requirements, and testers to test that the system
satisfies those requirements. Throughout this section, every stated requirement should be
externally perceivable by users, operators, or other external systems. These requirements should
include at a minimum a description of every input into the system, every output from the
system and all functions performed by the system in response to an input or in support of an
output. The following principles apply:
(i) Specific requirements should be stated with all the characteristics of a good SRS
• correct
• unambiguous
• complete
• consistent
• ranked for importance and/or stability
• verifiable
• modifiable
• traceable
(ii) Specific requirements should be cross-referenced to earlier documents that relate
(iii) All requirements should be uniquely identifiable
(iv) Careful attention should be given to organizing the requirements to
maximize readability.
Before examining specific ways of organizing the requirements, it is helpful to understand
the various items that comprise requirements, as described in the following subsections.
3.1 External Interfaces
This contains a detailed description of all inputs into and outputs from the software system. It
complements the interface descriptions in section 2 but does not repeat information there.
It contains both content and format as follows:
• Name of item
• Description of purpose
• Source of input or destination of output
• Valid range, accuracy and/or tolerance
• Units of measure
• Timing
• Relationships to other inputs/outputs
• Screen formats/organization
• Window formats/organization
• Data formats
• Command formats
• End messages.
3.2 Functions
Functional requirements define the fundamental actions that must take place in the software
in accepting and processing the inputs and in processing and generating the outputs. These
are generally listed as "shall" statements starting with "The system shall..."
These include:
• Validity checks on the inputs
• Exact sequence of operations
• Responses to abnormal situations, including
• Overflow
• Communication facilities
• Error handling and recovery
• Effect of parameters
• Relationship of outputs to inputs, including
• Input/output sequences
• Formulas for input-to-output conversion.
It may be appropriate to partition the functional requirements into sub-functions or sub-processes.
This does not imply that the software design will also be partitioned that way.
3.3 Performance Requirements
This subsection specifies both the static and the dynamic numerical requirements placed on
the software or on human interaction with the software, as a whole. Static numerical requirements
may include:
(i) The number of terminals to be supported
(ii) The number of simultaneous users to be supported
(iii) The amount and type of information to be handled
Static numerical requirements are sometimes identified under a separate section entitled
capacity.
Dynamic numerical requirements may include, for example, the number of transactions
and tasks and the amount of data to be processed within certain time periods for both normal
and peak workload conditions.
All of these requirements should be stated in measurable terms.
For example,
"95% of the transactions shall be processed in less than 1 second" rather than
"An operator shall not have to wait for the transaction to complete."
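A measurable requirement like the one above can be checked directly against observed data. The sketch below assumes a hypothetical list of measured transaction times:

```python
# Sketch of verifying the measurable performance requirement:
# at least 95% of transactions processed in under 1 second.
# The latency values are made-up measurements.

def meets_requirement(latencies_s, fraction=0.95, limit_s=1.0):
    """True when the required fraction of latencies is under the limit."""
    within = sum(1 for t in latencies_s if t < limit_s)
    return within / len(latencies_s) >= fraction

timings = [0.2, 0.4, 0.9, 0.3, 0.5, 0.8, 0.6, 0.7, 0.4, 1.5]
print(meets_requirement(timings))  # False: only 9 of 10 are under 1 s
```

The vague form ("an operator shall not have to wait") admits no such check, which is precisely why the measurable form is preferred.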
(Note: Numerical limits applied to one specific function are normally specified as part of
the processing subparagraph description of that function.)
3.4 Logical Database Requirements
This section specifies the logical requirements for any information that is to be placed into a
database. This may include:
• Types of information used by various functions
• Frequency of use
• Accessing capabilities
• Data entities and their relationships
• Integrity constraints
• Data retention requirements.
3.5 Design Constraints
Specify design constraints that can be imposed by other standards, hardware limitations, etc.
3.5.1 Standards Compliance
Specify the requirements derived from existing standards or regulations. They might include:
(i) Report format
(ii) Data naming
(iii) Accounting procedures
(iv) Audit tracing
For example, this could specify the requirement for the software to trace processing activity. Such traces
are needed for some applications to meet minimum regulatory or financial standards. An audit
trace requirement may, for example, state that all changes to a payroll database must be
recorded in a trace file with before and after values.
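The payroll audit-trace requirement above can be sketched in a few lines: every update records the employee, the value before the change, and the value after it. Field names and the in-memory trace are illustrative assumptions (a real system would write to a protected trace file):

```python
# Sketch of an audit trace: each change to a payroll record is logged
# with before and after values. Field names and data are hypothetical;
# a real implementation would persist the trail, not keep it in memory.

audit_trail = []

def update_salary(payroll, employee, new_salary):
    """Apply an update and record it in the audit trail."""
    before = payroll[employee]
    payroll[employee] = new_salary
    audit_trail.append({"employee": employee,
                        "before": before, "after": new_salary})

payroll = {"E42": 50000}
update_salary(payroll, "E42", 55000)
print(audit_trail)
# [{'employee': 'E42', 'before': 50000, 'after': 55000}]
```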
3.6 Software System Attributes
There are a number of quality attributes of software that can serve as requirements. It is
important that required attributes be specified so that their achievement can be objectively
verified. Fig. 3.19 has the definitions of the quality attributes of the software discussed in this
subsection [ROBE02]. The following items provide a partial list of examples.
3.6.1 Reliability
Specify the factors required to establish the required reliability of the software system at the
time of delivery.
3.6.2 Availability
Specify the factors required to guarantee a defined availability level for the entire system such
as checkpoint, recovery, and restart.
3.6.3 Security
Specify the factors that would protect the software from accidental or malicious access, use,
modification, destruction , or disclosure. Specific requirements in this area could include the
need to:
• Utilize certain cryptographic techniques
• Keep specific log or history data sets
• Assign certain functions to different modules
• Restrict communications between some areas of the program
• Check data integrity for critical variables.
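One of the measures listed above, checking data integrity for critical variables, can be sketched with a checksum: store a hash of the value, and verify it before use. This is a minimal illustration using Python's standard `hashlib`; the variable and its value are hypothetical:

```python
# Sketch of a data-integrity check for a critical variable: compute a
# checksum when the value is stored, verify it before the value is used.
# The critical value here is a made-up example.

import hashlib

def checksum(value):
    """SHA-256 digest of the value's canonical representation."""
    return hashlib.sha256(repr(value).encode()).hexdigest()

critical_value = 42
stored_sum = checksum(critical_value)

def is_intact(value, expected_sum):
    """True when the value still matches its recorded checksum."""
    return checksum(value) == expected_sum

print(is_intact(42, stored_sum))  # True
print(is_intact(43, stored_sum))  # False: value was altered
```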
3.6.4 Maintainability
Specify attributes of software that relate to the ease of maintenance of the software itself.
There may be some requirement for certain modularity, interfaces, complexity, etc. Requirements
should not be placed here just because they are thought to be good design practices.
3.6.5 Portability
Specify attributes of software that relate to the ease of porting the software to other host
machines and/or operating systems. This may include:
• Percentage of components with host-dependent code
• Percentage of code that is host dependent
• Use of a proven portable language
• Use of a particular compiler or language subset
• Use of a particular operating system.
3.7 Organizing the Specific Requirements
For anything but trivial systems the detailed requirements tend to be extensive. For this reason,
it is recommended that careful consideration be given to organizing these in a manner
optimal for understanding. There is no one optimal organization for all systems. Different
classes of systems lend themselves to different organizations of requirements. Some of these
organizations are described in the following subsections.
3.7.1 System Mode
Some systems behave quite differently depending on the mode of operation. When organizing
by mode there are two possible outlines. The choice depends on whether interfaces and
performance are dependent on mode.
3.7.2 User Class
Some systems provide different sets of functions to different classes of users.
3.7.3 Objects
Objects are real-world entities that have a counterpart within the system. Associated with
each object is a set of attributes and functions. These functions are also called services, methods,
or processes. Note that sets of objects may share attributes and services. These are
grouped together as classes.
3.7.4 Feature
A feature is an externally desired service by the system that may require a sequence of inputs
to effect the desired result. Each feature is generally described as a sequence of stimulus-response
pairs.
3.7.5 Stimulus
Some systems can be best organized by describing their functions in terms of stimuli.
3.7.6 Response
Some systems can be best organized by describing their functions in support of the generation
of a response.
3.7.7 Functional Hierarchy
When none of the above organization schemes prove helpful, the overall functionality can be
organized into a hierarchy of functions organized by either common inputs, common outputs,
or common internal data access. Data flow diagrams and data dictionaries can be used to show
the relationships between and among the functions and data.
3.8 Additional Comments
Whenever a new SRS is contemplated, more than one of the organizational techniques given
in 3.7 may be appropriate. In such cases, organize the specific requirements for multiple
hierarchies tailored to the specific needs of the system under specification.
There are many notations, methods, and automated support tools available to aid in the
documentation of requirements. For the most part, their usefulness is a function of organization.
For example, when organizing by mode, finite state machines or state charts may prove
helpful; when organizing by object, object-oriented analysis may prove helpful; when organizing
by feature, stimulus-response sequences may prove helpful; when organizing by functional
hierarchy, data flow diagrams and data dictionaries may prove helpful.
In any of the outlines below, those sections called "Functional Requirements" may be
described in native language, in pseudocode, in a system definition language, or in four
subsections titled: Introduction, Inputs, Processing, Outputs.
5. Document Approval
Identify the approvers of the SRS document. Approver's name, signature, and date should be
used.
6. Supporting Information
The supporting information makes the SRS easier to use. It includes:
• Table of Contents
• Index
• Appendices
The Appendices are not always considered part of the actual requirements specification and
are not always necessary. They may include:
(a) Sample I/O formats, descriptions of cost analysis studies, results of user surveys
(b) Supporting or background information that can help the readers of the SRS
(c) A description of the problems to be solved by the software
(d) Special packaging instructions for the code and the media to meet security, export,
initial loading, or other requirements.
When Appendices are included, the SRS should explicitly state whether or not the Appendices
are to be considered part of the requirements.
Tables on the following pages provide alternate ways to structure section 3 on the specific
requirements.
Copyrighted material
REQUIREMENTS REVIEW PROCESS
[Figure: Requirements review process — plan the review, distribute the SRS, read the documents, review, document the findings.]
Software design is a mechanism to transform user requirements into a suitable form that
helps the programmer in software coding and implementation. It deals with representing the
client's requirements, as described in the SRS (Software Requirement Specification) document,
in a form that is easily implementable using a programming language.
1. Conceptual Design :
Conceptual design is the initial phase of the design process, during which the broad outlines
of the system's function and form are worked out. It tells the customers what the system
will actually do.
Common methods used for conceptual designs are-
• Wireframes
• Component diagrams
2. Technical Design :
Technical design is the phase in which the development team works out the minute details of
either the whole design or some parts of it, to the point where code can be written. It tells
the designers what the system will actually do.
Common methods of technical designs are-
• Class Diagrams
• Activity diagram
• Sequence diagram
• State Diagram
Objectives of Software Design
1. Correctness:
A good design should be correct i.e. it should correctly implement all the functionalities
of the system.
2. Efficiency:
A good software design should address the resources, time and cost optimization issues.
3. Understandability:
A good design should be easily understandable, for which it should be modular and all
the modules are arranged in layers.
4. Completeness:
The design should have all the components like data structures, modules, and external
interfaces, etc.
5. Maintainability:
A good software design should be easily amenable to change whenever a change
request is made from the customer side
i. Fortran subroutine
ii. Ada package
iii. Procedures & functions of PASCAL & C
iv. C++ / Java classes
v. Java packages
vi. Work assignment for an individual programmer
All these definitions are correct. A modular system consists of well-defined,
manageable units with well-defined interfaces among the units.
MODULE COUPLING
[Fig. 5: Module coupling — (B) loosely coupled: some dependencies]
Different types of coupling are content, common, external, control, stamp and data.
The list below orders them from the lowest coupling (best) to the highest coupling (worst).
Data coupling
With data coupling, modules interact by passing only the elementary data items
(simple arguments) that each module actually needs.
Stamp coupling
Stamp coupling occurs between module A and B when complete data structure is
passed from one module to another. Since not all data making up the structure are
usually necessary in communication between the modules, stamp coupling
typically involves tramp data. If one procedure only needs a part of a data structure,
calling module should pass just that part, not the complete data structure.
Control coupling
Control coupling occurs when module A passes a control flag to module B that
directs B's internal logic, for example telling B which of its actions to perform.
External coupling
External coupling occurs when modules communicate through an externally imposed
data format, communication protocol, or device interface.
Common coupling
With common coupling, module A and module B have shared data. Global data
areas are commonly found in programming languages. Making a change to the
common data means tracing back to all the modules which access that data to
evaluate the effect of change. With common coupling, it can be difficult to
determine which module is responsible for having set a variable to a particular
value.
Content coupling
Content coupling, the worst form, occurs when one module directly modifies or
relies on the internal workings of another, for example by branching into it or
changing its local data.
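The contrast between stamp and data coupling can be sketched in a few lines of Python (the record and function names here are hypothetical, purely for illustration):

```python
# Hypothetical customer record used to illustrate the two coupling styles.
customer = {"name": "Asha", "email": "asha@example.com",
            "address": "12 Elm St", "balance": 250.0}

def send_reminder_stamp(customer_record):
    # Stamp coupling: the whole record is passed, though only the
    # email field is actually used -- the rest is "tramp data".
    return f"Reminder sent to {customer_record['email']}"

def send_reminder_data(email):
    # Data coupling: only the elementary item the module needs is passed.
    return f"Reminder sent to {email}"

print(send_reminder_stamp(customer))
print(send_reminder_data(customer["email"]))
```

The second version is easier to test and to change: a restructuring of the customer record cannot break it.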
The types of cohesion, from the strongest (best) to the weakest (worst), are:
Functional cohesion
Sequential cohesion
Procedural cohesion
Temporal cohesion
Logical cohesion
Coincidental cohesion
Functional Cohesion
A and B are part of a single functional task. This is a very good reason for them to
be contained in the same procedure.
Sequential Cohesion
Module A outputs some data which forms the input to B. This is the reason for
them to be contained in the same procedure.
Procedural Cohesion
Procedural cohesion exists when the elements of a module are related only by the
order in which they must be executed, without necessarily sharing data.
Temporal Cohesion
Module exhibits temporal cohesion when it contains tasks that are related by the
fact that all tasks must be executed in the same time-span.
Logical Cohesion
Logical cohesion exists when a module performs a set of logically similar tasks
(for example, all input routines), one of which is selected by the calling module.
Coincidental Cohesion
Coincidental cohesion exists in modules that contain instructions that have little or
no relationship to one another.
A good example of a system that has high cohesion and low coupling is the
‘plug and play’ feature of the computer system. The various slots on the motherboard
make it simple to add or remove services/functionalities without affecting the
entire system. This is because the add-on components provide their services in a
highly cohesive manner. Fig. 12 provides a graphical review of cohesion and
coupling.
Module design with high cohesion and low coupling characterizes a module as a
black box when the entire structure of the system is described. Each module can be
dealt with separately when the module functionality is described.
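As a sketch of what functional cohesion looks like in code, the hypothetical module below contains only functions that contribute to one task, computing statistics of a sample:

```python
import math

# Functionally cohesive module: every function serves one task,
# computing statistics of a numeric sample (names are illustrative).
def mean(xs):
    return sum(xs) / len(xs)

def std_dev(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# A coincidentally cohesive "utilities" module, by contrast, would lump
# unrelated instructions together -- say, date formatting next to tax
# calculation -- giving callers no single reason to use it.
print(mean([2, 4, 6]))   # 4.0
print(std_dev([2, 4, 6]))
```

Because the module has a single purpose, it can be tested and reused as a black box, exactly as described above.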
STRATEGY OF DESIGN
A good system design strategy is to organize the program modules in such a way
that they are easy to develop and, later, to change. Structured design techniques help
developers to deal with the size and complexity of programs. Analysts create
instructions for the developers about how code should be written and how pieces of
code should fit together to form a program. It is important for two reasons:
First, even pre-existing code, if any, needs to be understood, organized and pieced
together.
Second, it is still common for the project team to have to write some code and
produce original programs that support the application logic of the system.
Bottom-up Design
Since the design progresses from the bottom layer upwards, the method is
called bottom-up design. The main argument for this design is that if we
start coding a module soon after its design, the chances of recoding are high, but the
coded module can be tested and its design validated sooner than a module
whose sub-modules have not yet been designed.
This method has one terrible weakness; we need to use a lot of intuition to
decide exactly what functionality a module should provide.
If we get it wrong, then at a higher level we will find that it is not as per the
requirements, and we will have to redesign at a lower level. If a system is to be built
from an existing system, this approach is more suitable, as it starts from some
existing modules.
Top-Down Design
A top down design approach starts by identifying the major modules of the system,
decomposing them into their lower level modules and iterating until the desired
level of detail is achieved. This is stepwise refinement; starting from an abstract
design, in each step the design is refined to a more concrete level, until we reach a
level where no more refinement is needed and the design can be implemented
directly.
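Stepwise refinement can be illustrated with a small hypothetical example: the top-level module is written first in terms of subordinate modules, which are then refined in later steps until each can be implemented directly:

```python
# Top level: the major module, expressed in terms of lower-level modules
# (a hypothetical payroll example, for illustration only).
def run_payroll(employees):
    records = read_timesheets(employees)
    pay = compute_pay(records)
    return issue_payments(pay)

# Each subordinate module is then refined until it is directly implementable.
def read_timesheets(employees):
    return [(name, hours) for name, hours in employees]

def compute_pay(records, rate=20):
    return [(name, hours * rate) for name, hours in records]

def issue_payments(pay):
    return [f"Paid {name}: {amount}" for name, amount in pay]

print(run_payroll([("Asha", 10), ("Ben", 8)]))
```

Note how the top-level function was meaningful, and even reviewable, before any of the lower levels existed.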
Hybrid Design
A hybrid design combines the two approaches: top-down design is used for the upper
levels of the hierarchy, and bottom-up design near the bottom, where the intuition is
simpler and the need for bottom-up testing is greater, because there are more
modules at the low levels than at the high levels.
1. Objects: All entities involved in the solution design are known as objects.
For example, person, banks, company, and users are considered as
objects. Every entity has some attributes associated with it and has some
methods to perform on the attributes.
i. Create use case model :First step is to identify the actors interacting with
the system. We should then write the use case and draw the use case
diagram.
ii. Draw activity diagram (if required) : An activity diagram illustrates the
dynamic nature of a system by modeling the flow of control from activity to
activity. An activity represents an operation on some class in the system
that results in a change in the state of the system.
iii. Draw the interaction diagram :An interaction diagram shows an
interaction, consisting of a set of objects and their relationship, including
the messages that may be dispatched among them. Interaction diagrams
address the dynamic view of a system.
1) Firstly, we should identify the objects with respect to each use
case.
2) We draw the sequence diagrams for every use case.
iv. Draw the class diagram :The class diagram shows the relationship
amongst classes.
v. Draw the state chart diagram : A state chart diagram is used to show the
state space of a given class, the events that cause a transition from one state
to another, and the actions that result from a state change. A state transition
diagram for a “book” in the library system is given below:
vi. Draw the component and deployment diagrams
A software testing strategy should be flexible enough to promote a customized testing approach.
At the same time, it must be rigid enough to encourage reasonable planning and management
tracking as the project progresses.
In many ways, testing is an individualistic process, and the number of different types of tests
varies as much as the different development approaches. For many years, our only defense
against programming errors was careful design and the native intelligence of the programmer.
We are now in an era in which modern design techniques [and technical reviews] are helping us
to reduce the number of initial errors that are inherent in the code. Similarly, different test
methods are beginning to cluster themselves into several distinct approaches and philosophies.
Testing is a set of activities that can be planned in advance and conducted systematically. For
this reason a template for software testing—a set of steps into which we can place specific test-
case design techniques and testing methods—should be defined for the software process.
• What is it? :
Software is tested to uncover errors that were made inadvertently as it was designed and
constructed. But how do you conduct the tests? Should you develop a formal plan for your tests?
Should you test the entire program as a whole or run tests only on a small part of it? Should you
rerun tests you’ve already conducted as you add new components to a large system? When
should you involve the customer? These and many other questions are answered when you
develop a software testing strategy.
• Who does it? :
A strategy for software testing is developed by the project manager, software engineers, and
testing specialists.
• Why is it important? :
Testing often accounts for more project effort than any other software engineering action. If it is
conducted haphazardly, time is wasted, unnecessary effort is expended, and even worse, errors
sneak through undetected. It would therefore seem reasonable to establish a systematic strategy
for testing software.
Testing begins “in the small” and progresses “to the large.” By this we mean that early testing
focuses on a single component or on a small group of related components and applies tests to
uncover errors in the data and processing logic that have been encapsulated by the component(s).
After components are tested they must be integrated until the complete system is constructed. At
this point, a series of high-order tests are executed to uncover errors in meeting customer
requirements. As errors are uncovered, they must be diagnosed and corrected using a process
that is called debugging.
A Test specification documents the software team’s approach to testing by defining a plan that
describes an overall strategy and a procedure that defines specific testing steps and the types of
tests that will be conducted.
By reviewing the Test Specification prior to testing, you can assess the completeness of test
cases and testing tasks. An effective test plan and procedure will lead to the orderly construction
of the software and the discovery of errors at each stage in the construction process.
GENERIC CHARACTERISTICS
A number of software testing strategies have been proposed in the literature. All provide you
with a template for testing and all have the following generic characteristics:
• To perform effective testing, you should conduct effective technical reviews. By doing this,
many errors will be eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of the
entire computer-based system.
• Different testing techniques are appropriate for different software engineering approaches and
at different points in time.
• Testing is conducted by the developer of the software and (for large projects) an independent
test group.
• Testing and debugging are different activities, but debugging must be accommodated in any
testing strategy.
NOTE: A strategy for software testing must accommodate low-level tests that are necessary to
verify that a small source code segment has been correctly implemented as well as high-level tests
that validate major system functions against customer requirements. A strategy should provide
guidance for the practitioner and a set of milestones for the manager. Because the steps of the
test strategy occur at a time when deadline pressure begins to rise, progress must be measurable
and problems should surface as early as possible.
Verification and Validation
Software testing is one element of a broader topic that is often referred to as verification and
validation (V&V). Verification refers to the set of tasks that ensure that software correctly
implements a specific function. Validation refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements. Boehm [Boe81] states this
another way:
Verification and validation includes a wide array of SQA activities: technical reviews, quality and
configuration audits, performance monitoring, simulation, feasibility study, documentation
review, database review, algorithm analysis, development testing, usability testing, qualification
testing, acceptance testing, and installation testing. Although testing plays an extremely
important role in V&V, many other activities are also necessary.
Testing does provide the last bastion from which quality can be assessed and, more
pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. As they
say, “You can’t test in quality. If it’s not there before you begin testing, it won’t be there when
you’re finished testing.” Quality is incorporated into software throughout the process of software
engineering. Proper application of methods and tools, effective technical reviews, and solid
management and measurement all lead to quality that is confirmed during testing.
Miller [Mil77] relates software testing to quality assurance by stating that “the underlying
motivation of program testing is to affirm software quality with methods that can be
economically and effectively applied to both large-scale and small-scale systems.”
[Figure: V-model work products — CRS (Customer Requirements Specification), SRS (Software Requirements Specification), HLD (High Level Design), LLD (Low Level Design).]
DIFFERENCE BETWEEN VERIFICATION AND VALIDATION
Verification:
• It includes checking documents, design, code, and programs.
• Methods used are reviews, walkthroughs, inspections, and desk-checking.
• It can find bugs in the early stages of development.
Validation:
• It includes testing and validating the actual product.
• Methods used are black-box testing, white-box testing, and non-functional testing.
• It can only find the bugs that could not be found by the verification process.
Figure 1
A strategy for software testing may also be viewed in the context of the spiral ( Figure 1 ). Unit
testing begins at the vortex of the spiral and concentrates on each unit (e.g., component, class, or
WebApp content object) of the software as implemented in source code. Testing progresses by
moving outward along the spiral to integration testing, where the focus is on design and the
construction of the software architecture. Taking another turn outward on the spiral, you
encounter validation testing, where requirements established as part of requirements modeling
are validated against the software that has been constructed. Finally, you arrive at system testing,
where the software and other system elements are tested as a whole. To test computer software,
you spiral out along streamlines that broaden the scope of testing with each turn.
Considering the process from a procedural point of view, testing within the context of software
engineering is actually a series of four steps that are implemented sequentially. The steps are
shown in Figure 2 . Initially, tests focus on each component individually, ensuring that it
functions properly as a unit. Hence, the name unit testing. Unit testing makes heavy use of
testing techniques that exercise specific paths in a component’s control structure to ensure
complete coverage and maximum error detection. Next, components must be assembled or
integrated to form the complete software package. Integration testing addresses the issues
associated with the dual problems of verification and program construction. Test-case design
techniques that focus on inputs and outputs are more prevalent during integration, although
techniques that exercise specific program paths may be used to ensure coverage of major control
paths. After the software has been integrated (constructed), a set of high-order tests is conducted.
Validation criteria (established during requirements analysis) must be evaluated. Validation
testing provides final assurance that software meets all functional, behavioral, and performance
requirements.
Figure 2
The last high-order testing step falls outside the boundary of software engineering and into the
broader context of computer system engineering. Software, once validated, must be combined
with other system elements (e.g. hardware , people ,databases). System testing verifies that all
elements mesh properly and that overall system function/performance is achieved.
Criteria for Completion of Testing
A classic question arises every time software testing is discussed: “When are we done testing—
how do we know that we’ve tested enough?” Sadly, there is no definitive answer to this question,
but there are a few pragmatic responses and early attempts at empirical guidance.
One response to the question is: “You're never done testing; the burden simply shifts from you
(the software engineer) to the end user.” Every time the user executes a computer program, the
program is being tested. This sobering fact underlines the importance of other software quality
assurance activities. Another response (somewhat cynical but nonetheless accurate) is: “You’re
done testing when you run out of time or you run out of money.” Although few practitioners
would argue with these responses, you need more rigorous criteria for determining when
sufficient testing has been conducted. The clean room software engineering approach suggests
statistical use techniques [Kel00] that execute a series of tests derived from a statistical sample of
all possible program executions by all users from a targeted population. By collecting metrics
during software testing and making use of existing statistical models, it is possible to develop
meaningful guidelines for answering the question: “When are we done testing?”
STRATEGIC ISSUES
Later in this chapter, we present a systematic strategy for software testing. But even the best strategy will
fail if a series of overriding issues are not addressed. Tom Gilb [Gil95] argues that a software
testing strategy will succeed only when software testers:
(1) Specify product requirements in a quantifiable manner long before testing commences;
(2) Understand the users of the software and develop a profile for each user category;
(3) Conduct technical reviews to assess the test strategy and test cases themselves;
(4) Develop a continuous improvement approach for the testing process.
Test strategies for conventional software
A testing strategy that is chosen by many software teams falls between the two extremes. It
takes an incremental view of testing, beginning with the testing of individual program units,
moving to tests designed to facilitate the integration of the units (sometimes on a daily basis),
and culminating with tests that exercise the constructed system.
Unit Testing
The first thing to know about unit testing is when it is performed. Unit testing is
the initial level of software testing, performed on the application source code mainly
by the developer. The main motive of unit testing is to isolate a section of code and
verify its correctness.
A level of software testing where the individual units or components of a software or web
applications are tested by developer is called Unit Testing. It is an important aspect of
Software Testing. It is a component of test-driven development (TDD).
Benefits of Unit Testing:
1. Defects revealed by a unit test are easy to locate and relatively easy to repair. Unit testing
verifies the accuracy of each unit.
2. In the unit testing procedure, the developer writes test cases for all functions and
methods, so that whenever a change is required it can be made quickly at a later date while
the module continues to work correctly.
3. Unit testing improves the quality of the code. It helps the programmer to write better
code and identifies defects before the code is sent further for regression testing.
4. If a test fails, only the latest changes to the code need to be debugged, so
unit testing helps to simplify the debugging process.
5. Code becomes more reusable. To make unit testing possible, code needs to be modular,
which means it is easier to reuse.
Developers should make unit testing a part of their routine to produce neat, clean,
reusable, and bug-free code. Unit testing also improves the quality of the code and
helps to reduce the cost of bug fixes.
Unit Test Case Best Practices:
• Always follow proper naming conventions for your unit tests, i.e. clear and
consistent names for the unit tests.
• Your test cases should be independent: if the requirements change, your unit test
cases should not be affected.
• Always follow a “test as you code” approach. The more code you write without testing, the
more paths you have to check for errors.
• Before changing the code of a module, make sure a unit test case exists for that
module and that it passes before and after the change is made.
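A minimal unit test, assuming Python's standard unittest module and a hypothetical apply_discount function as the unit under test, might look like this:

```python
import unittest

# Unit under test (hypothetical): a small, isolated function.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Descriptive test names make a failing test easy to locate.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Each test exercises the unit in isolation, including its error path, which is exactly the "isolate a section of code and verify its correctness" motive described above.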
Integration testing
Integration testing is a level of software testing in which individual
components or units of code are tested together to validate the interactions among
different software system modules. In this process, the system components are either
tested as a single group or organized iteratively.
Typically, system integration testing is taken up to validate the performance of the entire
software system as a whole. The main purpose of this testing method is to expand the process
and validate the integration of the modules with other groups. It is performed to verify that all
the units operate in accordance with their defined specifications.
Top-Down Integration.
Top-down integration is an incremental approach in which modules are integrated by
moving downward through the control hierarchy, beginning with the main control module.
Depth-first integration integrates all components on a major control path of the program
structure. Selection of a major path is somewhat arbitrary and depends on application-specific
characteristics. For example, selecting the left-hand path, components M1, M2, M5 would be
integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be
integrated. Then, the central and right-hand control paths are built. Breadth-first integration
incorporates all components directly subordinate at each level, moving across the structure
horizontally. From the figure, components M2, M3, and M4 would be integrated first. The
next control level, M5, M6, and so on, follows. The integration process is performed in a series
of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components
directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate
stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
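The role of stubs in the steps above can be sketched as follows (the M1/M2 naming echoes the example, but the functions themselves are hypothetical):

```python
# Step 1: the main control module is exercised with a stub standing in
# for its subordinate component.
def m2_stub(data):
    # Stub: returns a fixed, predictable value instead of real processing.
    return "processed"

def m2_real(data):
    # The actual component that later replaces the stub.
    return f"processed:{data.upper()}"

def m1_main(data, subordinate):
    # The subordinate is passed in, so a stub can be swapped for the
    # real component as integration proceeds (steps 2-4).
    return f"result={subordinate(data)}"

print(m1_main("abc", m2_stub))   # early test against the stub
print(m1_main("abc", m2_real))   # stub replaced by the actual component
```

The same test can be re-run after each swap, which is the essence of step 5's regression check.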
Bottom-Up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic
modules (i.e., components at the lowest levels in the program structure). Because components
are integrated from the bottom up, the functionality provided by components subordinate to a
given level is always available and the need for stubs is eliminated. A bottom-up integration
strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
In this Figure the components are combined to form clusters 1, 2, and 3. Each of the clusters
is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are
subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to
Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both
Ma and Mb will ultimately be integrated with component Mc, and so forth. As integration
moves upward, the need for separate test drivers lessens. In fact, if the top two levels of
program structure are integrated top down, the number of drivers can be reduced substantially
and integration of clusters is greatly simplified.
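A minimal sketch of a low-level cluster and its throwaway driver (all names hypothetical):

```python
# Low-level cluster: two atomic modules performing one subfunction.
def parse_line(line):
    return [field.strip() for field in line.split(",")]

def total(fields):
    return sum(int(f) for f in fields)

def cluster_driver(lines):
    # Driver: a small control program that feeds test-case input to the
    # cluster and collects its output; it is discarded once the cluster
    # is interfaced to the module above it.
    return [total(parse_line(line)) for line in lines]

print(cluster_driver(["1, 2, 3", "10, 20"]))
```

Because the atomic modules are real (not stubs), their combined behaviour is fully exercised before anything above them exists.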
Regression testing
Regression testing is a testing type performed to validate that existing functionalities
still work after code is modified. It is one of the most common terms used in
software testing and quality assurance. Regression testing ensures that the existing
functionality of the software or application works as expected whenever new code is
introduced, a defect fix is made, or new functionality is added. It is the task of a
quality assurance engineer to check the already tested features after modifications
and ensure that the code changes have not impacted the existing features.
The need for regression testing arises whenever the code changes. It focuses on
minimising the risk of defects or broken dependencies caused by those changes. This
testing is conducted after maintenance, changes to functionalities, or enhancements
are made to a product, to make sure that there are no unexpected outcomes.
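One simple way to realise regression testing is to keep a table of input/expected pairs for behaviour that already works, and re-run it after every change. A hypothetical sketch:

```python
# Existing functionality; a later "enhancement" must not break it.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Regression suite: cases captured from behaviour that already works.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Trimmed  ", "trimmed"),
]

def run_regression():
    # Returns (input, expected, actual) for every case that now fails;
    # an empty list means the existing features still work.
    return [(t, e, slugify(t)) for t, e in REGRESSION_CASES if slugify(t) != e]

print(run_regression())  # [] -- no regressions
```

After any modification to slugify, re-running this suite immediately reveals whether previously working behaviour has been impacted.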
Smoke Testing
Smoke testing is an integration testing approach that is commonly used when product software
is developed. It is designed as a pacing mechanism for time-critical projects, allowing the
software team to assess the project on a frequent basis. In essence, the smoke-testing approach
encompasses the following activities:
1. Software components that have been translated into code are integrated into a build. A
build includes all data files, libraries, reusable modules, and engineered components that are
required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “show-stopper” errors that have the
highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is
smoke tested daily. The integration approach may be top down or bottom up.
• Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and
other show-stopper errors are uncovered early, thereby reducing the likelihood of serious
schedule impact when errors are uncovered.
• The quality of the end product is improved. Because the approach is construction (integration)
oriented, smoke testing is likely to uncover functional errors as well as architectural and
component-level design errors. If these errors are corrected early, better product quality will
result.
• Error diagnosis and correction are simplified. Like all integration testing approaches, errors
uncovered during smoke testing are likely to be associated with “new software increments”—
that is, the software that has just been added to the build(s) is a probable cause of a newly
discovered error.
• Progress is easier to assess. With each passing day, more of the software has been integrated
and more has been demonstrated to work. This improves team morale and gives managers a
good indication that progress is being made.
Validation Testing
Validation is the process of evaluating a system or component during or at the end of
development process to determine whether it satisfies the specified requirements.
The process of evaluating software during the development process or at the end of
the development process to determine whether it satisfies specified business
requirements. Validation Testing ensures that the product actually meets the client's
needs.
Validation testing begins at the culmination of integration testing, when individual
components have been exercised, the software is completely assembled as a
package, and interfacing errors have been uncovered and corrected. At the validation
or system level, the distinction between different software categories disappears.
Testing focuses on user-visible actions and user-recognizable output from the system.
Validation-Test Criteria
• Ensure that all functional requirements are satisfied
• All behavioural characteristics are achieved
• All content is accurate and properly presented
• All performance requirements are attained
• Documentation is correct, and usability and other requirements are met.
System Testing
System Testing is a black box testing technique performed to evaluate the complete
system's compliance with the specified requirements. In system testing,
the functionalities of the system are tested from an end-to-end perspective.
A classic system-testing problem is “finger pointing.” This occurs when an error is
uncovered and each system element's developer blames the others for the problem.
System Testing is usually carried out by a team that is independent of the development
team in order to measure the quality of the system unbiased. It includes both functional
and Non-Functional testing.
• Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed. Recovery testing is a
type of non-functional testing technique performed in order to determine how
quickly the system can recover after it has gone through system crash or
hardware failure. Recovery testing is the forced failure of the software to verify
if the recovery is successful.
• Security Testing
Security Testing is a type of system testing that uncovers vulnerabilities of the
system and determines that the data and resources of the system are protected
from possible intruders. It ensures that the software system and application are
free from any threats or risks that can cause a loss. Security testing attempts
to verify that protection mechanisms built into a system will, in fact, protect it
from improper penetration.
• Stress Testing
Stress tests are designed to confront programs with abnormal situations.
Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume. During stress testing, the system is
monitored after subjecting the system to overload to ensure that the system
can sustain the stress.
• Performance Testing
Performance testing is designed to test the run-time performance of software
within the context of an integrated system. Performance testing occurs
throughout all steps in the testing process. Even at the unit level, the
performance of an individual module may be assessed as tests are conducted.
Performance testing is a non-functional testing technique performed to
determine system parameters such as responsiveness and stability under various
workloads. It measures quality attributes of the system, such as scalability,
reliability, and resource usage.
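The responsiveness measures just described can be sketched as a tiny load probe. This is only an illustrative sketch: `process_order` and `performance_probe` are hypothetical names, and a real performance or stress test would use a dedicated tool driving the system under production-like load.

```python
import time

def process_order(order_id):
    # Hypothetical unit under test; stands in for any request handler.
    return {"order_id": order_id, "status": "ok"}

def performance_probe(func, n_requests=1000):
    """Drive func repeatedly and record per-call latency (stress-style load)."""
    latencies = []
    for i in range(n_requests):
        start = time.perf_counter()
        result = func(i)
        latencies.append(time.perf_counter() - start)
        assert result["status"] == "ok"  # functional check alongside timing
    return {
        "requests": n_requests,
        "max_latency": max(latencies),
        "avg_latency": sum(latencies) / n_requests,
    }

report = performance_probe(process_order)
```

A stress variant of the same idea would raise `n_requests` (or run many probes concurrently) until the system is pushed past its normal operating envelope.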
SOFTWARE TESTING FUNDAMENTALS
The goal of testing is to find errors, and a good test is one that has a high probability of finding
an error. Therefore, you should design and implement a computer-based system or a product
with “testability” in mind. At the same time, the tests themselves must exhibit a set of
characteristics that achieve the goal of finding the most errors with a minimum of effort.
Testability: Software testability is simply how easily [a computer program] can be tested.
• Operability. “The better it works, the more efficiently it can be tested.” If a system is
designed and implemented with quality in mind, relatively few bugs will block the
execution of tests, allowing testing to progress without fits and starts.
• Observability: “What you see is what you test.” Inputs provided as part of testing produce
distinct outputs. System states and variables are visible or queriable during execution.
Incorrect output is easily identified. Internal errors are automatically detected and reported.
Source code is accessible.
• Controllability. “The better we can control the software, the more the testing can be
automated and optimized.” All possible outputs can be generated through some
combination of input, and I/O formats are consistent and structured. All code is executable
through some combination of input. Software and hardware states and variables can be
controlled directly by the test engineer. Tests can be conveniently specified, automated, and
reproduced.
• Decomposability. “By controlling the scope of testing, we can more quickly isolate
problems and perform smarter retesting.” The software system is built from independent
modules that can be tested independently.
• Simplicity. “The less there is to test, the more quickly we can test it.” The program should
exhibit functional simplicity (e.g., the feature set is the minimum necessary to meet
requirements); structural simplicity (e.g., architecture is modularized to limit the
propagation of faults), and code simplicity (e.g., a coding standard is adopted for ease of
inspection and maintenance).
• Stability. “The fewer the changes, the fewer the disruptions to testing.” Changes to the
software are infrequent, controlled when they do occur, and do not invalidate existing tests.
The software recovers well from failures.
• Understandability. “The more information we have, the smarter we will test.” The
architectural design and the dependencies between internal, external, and shared
components are well understood. Technical documentation is instantly accessible, well
organized, specific and detailed, and accurate. Changes to the design are communicated to
testers.
A. Test Characteristics.
o A good test has a high probability of finding an error. To achieve this goal, the tester must
understand the software and attempt to develop a mental picture of how the software might
fail.
o A good test is not redundant. Testing time and resources are limited. There is no point in
conducting a test that has the same purpose as another test. Every test should have a
different purpose (even if it is subtly different).
o A good test should be “best of breed” [Kan93]. In a group of tests that have a similar
intent, time and resource limitations may dictate the execution of only those tests that have
the highest likelihood of uncovering a whole class of errors.
o A good test should be neither too simple nor too complex. Although it is sometimes
possible to combine a series of tests into one test case, the possible side effects associated
with this approach may mask errors. In general, each test should be executed separately.
The first test approach takes an external view and is called black-box testing. The second
requires an internal view and is termed white-box testing.
Black-box testing alludes to tests that are conducted at the software interface. A black-box
test examines some fundamental aspect of a system with little regard for the internal logical
structure of the software.
In white-box testing, the design and structure of the code are known to the tester. This
test is usually conducted by the programmers of the code.
Below are some white-box testing techniques:
• Control-flow testing - The purpose of control-flow testing is to set up test cases
that cover all statements and branch conditions. Each branch condition is tested
for being both true and false, so that all statements can be covered.
• Data-flow testing - This technique aims to cover all the data variables
included in the program. It tests where the variables were declared and defined and
where they were used or changed.
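The control-flow technique can be illustrated with a minimal sketch. The `classify` function below is a hypothetical unit under test with a single branch condition; the test table drives that condition both true and false, so every statement is exercised.

```python
def classify(balance):
    # Hypothetical unit under test with a single branch condition.
    if balance < 0:
        status = "overdrawn"   # reached only when the condition is true
    else:
        status = "in_credit"   # reached only when the condition is false
    return status

# Control-flow testing: one case per branch outcome, so both sides of
# the condition (and therefore all statements) are covered.
branch_tests = [
    (-50, "overdrawn"),   # drives the condition true
    (100, "in_credit"),   # drives the condition false
]
for value, expected in branch_tests:
    assert classify(value) == expected
```

A data-flow test of the same function would additionally track where `status` is defined (in each branch) and where it is used (in the `return`).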
Before the basis path method can be introduced, a simple notation for the representation of
control flow, called a flow graph (or program graph) must be introduced.
Consider the procedural design representation in Figure 23.2a. Here, a flowchart is used to depict
program control structure. Figure 23.2b maps the flowchart into a corresponding flow graph
(assuming that no compound conditions are contained in the decision diamonds of the
flowchart). Referring to Figure 23.2b, each circle, called a flow graph node, represents one or
more procedural statements. A sequence of process boxes and a decision diamond can map into a
single node. The arrows on the flow graph, called edges or links, represent flow of control and
are analogous to flowchart arrows. An edge must terminate at a node, even if the node does not
represent any procedural statements (e.g., see the flow graph symbol for the if-then-else
construct).
Areas bounded by edges and nodes are called regions. When counting regions, we include the
area outside the graph as a region. When compound conditions are encountered in a procedural
design, the generation of a flow graph becomes slightly more complicated. A compound
condition occurs when one or more Boolean operators (logical OR, AND, NAND, NOR) is
present in a conditional statement. Referring to Figure 23.3, the program design language (PDL)
segment translates into the flow graph shown. Note that a separate node is created for each of the
conditions a and b in the statement IF a OR b. Each node that contains a condition is called a
predicate node and is characterized by two or more edges emanating from it.
An independent path is any path through the program that introduces at least one new set of
processing statements or a new condition. When stated in terms of a flow graph, an
independent path must move along at least one edge that has not been traversed before the
path is defined. For example, a set of independent paths for the flow graph illustrated in
Figure 23.2b is
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge.
The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11
is not considered to be an independent path because it is simply a combination of already
specified paths and does not traverse any new edges. Paths 1 through 4 constitute a basis
set for the flow graph in Figure 23.2b . That is, if you can design tests to force execution of
these paths (a basis set), every statement in the program will have been guaranteed to be
executed at least one time and every condition will have been executed on its true and false
sides. It should be noted that the basis set is not unique. In fact, a number of different basis
sets can be derived for a given procedural design. How do you know how many paths to
look for? The computation of cyclomatic complexity provides the answer. Cyclomatic
complexity is a software metric that provides a quantitative measure of the logical
complexity of a program. When used in the context of the basis path testing method, the
value computed for cyclomatic complexity defines the number of independent paths in the
basis set of a program and provides you with an upper bound for the number of tests that
must be conducted to ensure that all statements have been executed at least once.
Cyclomatic complexity has a foundation in graph theory and provides you with an
extremely useful software metric.
For example
Referring once more to the flow graph in Figure 23.2b , the cyclomatic complexity can be
computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V( G) =11 edges - 9 nodes + 2 =4.
3. V( G) = 3 predicate nodes +1 = 4.
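The three computations above can be checked mechanically. The counts below are the ones stated in the text for the Figure 23.2b flow graph (11 edges, 9 nodes, 3 predicate nodes); the helper functions simply encode the two formulas.

```python
def complexity_from_edges(num_edges, num_nodes):
    """V(G) = E - N + 2 (graph-theoretic definition)."""
    return num_edges - num_nodes + 2

def complexity_from_predicates(num_predicate_nodes):
    """V(G) = P + 1, where P is the number of predicate nodes."""
    return num_predicate_nodes + 1

# Counts for the flow graph of Figure 23.2b, as given in the text.
v_g = complexity_from_edges(11, 9)
assert v_g == 4
assert complexity_from_predicates(3) == v_g
# The region count (4, including the area outside the graph) agrees too.
```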
• Using the design or code as a foundation, draw a corresponding flow graph. A flow graph is created
using the symbols and construction rules presented in Section 23.4.1. Referring to the PDL for
average in Figure 23.4 , a flow graph is created by numbering those PDL statements that will be
mapped into corresponding flow graph nodes.
• Determine the cyclomatic complexity of the resultant flow graph. The cyclomatic complexity V(G)
is determined by applying the algorithms described in Section 23.4.2. It should be noted that V(G)
can also be determined without developing the flow graph, by counting the conditional statements
in the PDL and adding 1.
• Determine a basis set of linearly independent paths.
The value of V( G) provides the number of linearly independent paths through the program
control structure. In the case of procedure average, we expect to specify six paths:
Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2-. . .
Path 5: 1-2-3-4-5-6-8-9-2-. . .
Path 6: 1-2-3-4-5-6-7-8-9-2-. . .
The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path through the remainder
of the control structure is acceptable. It is often worthwhile to identify predicate nodes as
an aid in the derivation of test cases. In this case, nodes 2, 3, 5, 6, and 10 are predicate
nodes.
• Prepare test cases that will force execution of each path in the basis set.
Data should be chosen so that conditions at the predicate nodes are appropriately set as
each path is tested. Each test case is executed and compared to expected results. Once all
test cases have been completed, the tester can be sure that all statements in the program
have been executed at least once.
It is important to note that some independent paths (e.g., path 1 in our example) cannot be
tested in stand-alone fashion. That is, the combination of data required to traverse the path
cannot be achieved in the normal flow of the program. In such cases, these paths are tested
as part of another path test.
4. Graph Matrices
The procedure for deriving the flow graph and even determining a set of basis paths is
amenable to mechanization. A data structure, called a graph matrix, can be quite useful for
developing a software tool that assists in basis path testing. A graph matrix is a square
matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on
the flow graph. Each row and column corresponds to an identified node, and matrix entries
correspond to connections (an edge) between nodes. A simple example of a flow graph and
its corresponding graph matrix [Bei90] is shown in Figure 23.6 . Referring to the figure,
each node on the flow graph is identified by numbers, while each edge is identified by
letters. A letter entry is made in the matrix to correspond to a connection between two
nodes. For example, node 3 is connected to node 4 by edge b. To this point, the graph
matrix is nothing more than a tabular representation of a flow graph. However, by adding a
link weight to each matrix entry, the graph
matrix can become a powerful tool for evaluating program control structure during testing.
The link weight provides additional information about control flow. In its simplest form,
the link weight is 1 (a connection exists) or 0 (a connection does not exist). But link
weights can be assigned other, more interesting properties:
• The probability that a link (edge) will be executed.
• The processing time expended during traversal of a link.
• The memory required during traversal of a link.
• The resources required during traversal of a link.
Beizer [Bei90] provides a thorough treatment of additional mathematical algorithms that can be applied to graph matrices.
Using these techniques, the analysis required to design test cases can be partially or fully
automated.
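As a sketch of such mechanization, a graph matrix can be held as a nested list of 0/1 link weights. The five-node flow graph below is hypothetical (it is not Figure 23.6), and the function applies Beizer's rule for deriving cyclomatic complexity from the matrix: sum (connections − 1) over each row that has connections, then add 1.

```python
# Hypothetical 5-node flow graph: node 2 is a predicate node
# (two outgoing edges) and node 5 is the single exit node.
#            1  2  3  4  5
matrix = [ [0, 1, 0, 0, 0],    # 1 -> 2
           [0, 0, 1, 1, 0],    # 2 -> 3 and 2 -> 4 (predicate node)
           [0, 0, 0, 0, 1],    # 3 -> 5
           [0, 0, 0, 0, 1],    # 4 -> 5
           [0, 0, 0, 0, 0] ]   # 5: exit node, no outgoing edges

def cyclomatic_from_matrix(m):
    """Sum (connections - 1) over rows that have connections, plus 1."""
    total = 0
    for row in m:
        connections = sum(1 for weight in row if weight != 0)
        if connections:
            total += connections - 1
    return total + 1

# Cross-check against V(G) = E - N + 2: 5 edges, 5 nodes -> V(G) = 2.
```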
D. BLACK-BOX TESTING
Black-box testing, also called behavioral testing or functional testing, focuses on the
functional requirements of the software. That is, black-box testing techniques enable you to
derive sets of input conditions that will fully exercise all functional requirements for a
program. Black-box testing is not an alternative to white-box techniques. Rather, it is a
complementary approach that is likely to uncover a different class of errors than white-box
methods.
Black box testing is a technique of software testing which examines the functionality of
software without peering into its internal structure or coding. The primary source of black
box testing is a specification of requirements that is stated by the customer.
In this method, the tester selects a function, gives it input values, and checks whether the
function produces the expected output. If it does, the function passes the test; otherwise, it
fails. The test team reports the result to the development team and then tests the next
function. After testing of all functions is complete, if there are severe problems, the
software is given back to the development team for correction.
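That pass/fail workflow can be sketched as a small table-driven harness. Everything here is hypothetical: `discount` stands in for any function under test, and the tester compares only inputs against expected outputs derived from the specification, never looking inside the implementation.

```python
def discount(total):
    # Hypothetical function under test: 10% off orders of 100 or more.
    return total * 0.9 if total >= 100 else float(total)

# Black-box test table: derived from the specification, not the code.
cases = [
    {"input": 50,  "expected": 50.0},
    {"input": 100, "expected": 90.0},
    {"input": 200, "expected": 180.0},
]

# Run each case and record a pass/fail result to report back.
results = [{"input": c["input"],
            "passed": discount(c["input"]) == c["expected"]}
           for c in cases]
```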
1. Graph-Based Testing Methods:
Transaction flow modeling. The nodes represent steps in some transaction (e.g., the steps
required to make an airline reservation using an online service), and the links represent the
logical connection between steps. For example, a data object flightInformationInput is followed
by the operation validationAvailabilityProcessing().
Finite state modeling. The nodes represent different user-observable states of the software
(e.g., each of the “screens” that appear as an order entry clerk takes a phone order), and the links
represent the transitions that occur to move from state to state (e.g., orderInformation is
verified during inventoryAvailabilityLook-up() and is followed by customerBillingInformation
input). The state diagram (Chapter 11) can be used to assist in creating graphs of this type.
Data flow modeling. The nodes are data objects, and the links are the transformations that occur
to translate one data object into another. For example, the node FICATaxWithheld ( FTW) is
computed from gross wages ( GW) using the relationship, FTW = 0.62 * GW.
Timing modeling: The nodes are program objects, and the links are the sequential connections
between those objects. Link weights are used to specify the required execution times as the
program executes.
2. Equivalence Partitioning:[imp]
Equivalence partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived. An ideal test case single-
handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might
otherwise require many test cases to be executed before the general error is observed. Test-case
design for equivalence partitioning is based on an evaluation of equivalence classes for an input
condition. Using concepts introduced in the preceding section, if a set of objects can be linked
by relationships that are symmetric, transitive, and reflexive, an equivalence class is present
[Bei95]. An equivalence class represents a set of valid or invalid states for input conditions.
Typically, an input condition is either a specific numeric value, a range of values, a set of related
values, or a Boolean condition. Equivalence classes may be defined according to the following
guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes
are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class
are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined
By applying the guidelines for the derivation of equivalence classes, test cases for each input
domain data item can be developed and executed. Test cases are selected so that the largest
number of attributes of an equivalence class are exercised at once.
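Guideline 1 can be illustrated with a hypothetical input condition, "age must be between 18 and 60": the range yields one valid class and two invalid classes, and a single representative value is drawn from each.

```python
def accept_age(age):
    # Hypothetical validator; the specification says 18 <= age <= 60.
    return 18 <= age <= 60

# One valid and two invalid equivalence classes for the range input,
# each represented by a single test value (guideline 1).
equivalence_classes = {
    "valid: 18..60":        {"representative": 35, "expected": True},
    "invalid: below range": {"representative": 10, "expected": False},
    "invalid: above range": {"representative": 75, "expected": False},
}

for name, cls in equivalence_classes.items():
    assert accept_age(cls["representative"]) == cls["expected"], name
```

Three test cases stand in for the whole input domain: any other value in a class is assumed to be processed the same way as its representative.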
4. If internal program data structures have prescribed boundaries (e.g., a table has a defined
limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.
Most software engineers intuitively perform BVA to some degree. By applying these guidelines,
boundary testing will be more complete, with a higher likelihood of error detection.
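The table-boundary advice above can be sketched as a small generator: for a range or size limit, boundary value analysis exercises each boundary exactly, one step inside it, and one step outside it. The function name is illustrative.

```python
def boundary_values(lo, hi):
    """BVA candidates for a numeric range [lo, hi]: each boundary,
    one step inside, and one step outside."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# For a table with a defined limit of 100 entries (the example above),
# the table sizes worth testing cluster at the boundaries.
sizes = boundary_values(1, 100)
# sizes -> [0, 1, 2, 99, 100, 101]
```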
The orthogonal array testing approach enables you to provide good test coverage with far fewer
test cases than the exhaustive strategy. An L9 orthogonal array for the fax send function is
illustrated in Figure 23.10 .
MODULE:5
Object-oriented modelling: use cases: Actors, Scenarios & use cases,
Drawing use case diagrams
Object-oriented modeling (OOM) is an approach to modeling an application that is used at the beginning
of the software life cycle when using an object-oriented approach to software development.
The software life cycle is typically divided up into stages going from abstract descriptions of the problem to
designs then to code and testing and finally to deployment. Modeling is done at the beginning of the process.
Object-oriented modeling is typically done via use cases and abstract definitions of the most important
objects. The most common language used to do object-oriented modeling is the Object Management
Group's Unified Modeling Language (UML).
A use case diagram is a dynamic or behavior diagram in UML. Use case diagrams model the functionality
of a system using actors and use cases. Use cases are a set of actions, services, and functions that the system
needs to perform. In this context, a "system" is something being developed or operated, such as a web site.
The "actors" are people or entities operating under defined roles within the system.
Actors represent the role of the future users of the system. Actors model the user's perspective of the
system. Actors are located outside the system; therefore, in order to depict actors, it is important to define
the boundaries between actors and the system.
A Scenario is a formal description of the flow of events that occur during the execution of a use
case instance. It defines the specific sequence of events between the system and the external actors. It is
normally described in text and corresponds to the textual representation of the sequence diagram.
A use case is a written description of how users will perform tasks on your website. It outlines, from
a user's point of view, a system's behavior as it responds to a request. Each use case is represented as a
sequence of simple steps, beginning with a user's goal and ending when that goal is fulfilled.
Use case diagrams are valuable for visualizing the functional requirements of a system that will translate
into design choices and development priorities.
They also help identify any internal or external factors that may influence the system and should be taken
into consideration.
They provide a good high level analysis from outside the system. Use case diagrams specify how the system
interacts with actors without worrying about the details of how that functionality is implemented.
System
Draw your system's boundaries using a rectangle that contains use cases. Place actors outside the system's
boundaries.
Use Case
Draw use cases using ovals. Label the ovals with verbs that represent the system's functions.
Actors
Actors are the users of a system. When one system is the actor of another system, label the actor system with
the actor stereotype.
Relationships
Illustrate relationships between an actor and a use case with a simple line. For relationships among use cases,
use arrows labeled either "uses" or "extends." A "uses" relationship indicates that one use case is needed by
another in order to perform a task. An "extends" relationship indicates alternative options under a certain use
case.
Purpose of Use Case Diagrams
The main purpose of a use case diagram is to portray the dynamic aspect of a system. It gathers the
system's requirements, which include both internal as well as external influences. It involves the
actors, the use cases, and the other elements accountable for the implementation of the use case
diagram, and it represents how an entity from the external environment can interact with a part of the system.
It is essential to analyze the whole system before starting with drawing a use case diagram, and then the
system's functionalities are found. And once every single functionality is identified, they are then
transformed into the use cases to be used in the use case diagram.
After that, we will enlist the actors that will interact with the system. The actors are the person or a thing that
invokes the functionality of a system. It may be a system or a private entity, such that it requires an entity to
be pertinent to the functionalities of the system to which it is going to interact.
Once both the actors and use cases are enlisted, the relation between the actor and the use case/system
is inspected. It identifies the number of times an actor communicates with the system; an actor can
interact multiple times with a use case or system at a particular instant of time.
Following are some rules that must be followed while drawing a use case diagram:
1. A pertinent and meaningful name should be assigned to the actor or a use case of a system.
2. The communication of an actor with a use case must be defined in an understandable way.
3. Specified notations to be used as and when required.
4. The most significant interactions should be represented among the multiple interactions
between the use case and actors.
Example of a Use Case Diagram
A use case diagram depicting the Online Shopping website is given below.
Here the Web Customer actor makes use of the online shopping website to purchase online. The top-level
use cases are: View Items, Make Purchase, Checkout, and Client Register. The View Items use case is
used by the customer who searches for and views products. The Client Register use case allows the customer
to register with the website to avail gift vouchers, coupons, or a private sale invitation. It is
to be noted that Checkout is an included use case, which is part of Make Purchase, and it is not
available by itself.
View Items is further extended by several use cases such as: Search Items, Browse Items, View
Recommended Items, Add to Shopping Cart, and Add to Wish List. All of these extending use cases provide
functions to customers that allow them to search for an item.
Both View Recommended Item and Add to Wish List include the Customer Authentication use case, as
they require authenticated customers, whereas an item can be added to the shopping cart without
any user authentication.
Similarly, the Checkout use case also includes the following use cases, as shown below. It requires an
authenticated Web Customer, which can be done by login page, user authentication cookie ("Remember
me"), or Single Sign-On (SSO). SSO needs an external identity provider's participation, while Web site
authentication service is utilized in all these use cases.
The Checkout use case includes the Payment use case, which can be completed either by credit card
through external credit payment services or with PayPal.
Important tips for drawing a Use Case diagram
Following are some important tips that are to be kept in mind while drawing a use case diagram:
• Brief - Write two to four sentences per use case, capturing key activities
and key-extension handling.
• Fully dressed - All steps and variations are written in detail, and
there are supporting sections, such as preconditions and success
guarantees.
• A carefully structured and detailed description enabling a deep
understanding of the goals, tasks, and requirements.
• Use case diagrams can be embedded at any level.
• Simple projects may only need a brief or casual use
case. Complex projects are likely to need a fully dressed use
case to define requirements.
• The case level may also depend on the progress of the
project. The first use case may be brief, and become more
detailed later when solution owners need more specific and
detailed guidance.
➢ The System Sequence Diagram (SSD): -
A system sequence diagram should specify and show the following:
• External actors
• Messages (methods) invoked by these actors
• Return values (if any) associated with previous messages
• Indication of any loops or iteration area
➢ How to name system events in SSD?
System events should be expressed at the abstract level of
intention rather than in terms of the physical input device.
UML Interaction Diagram
As the name suggests, the interaction diagram portrays the interactions between distinct
entities present in the model. It amalgamates both the activity and sequence diagrams. A set
of messages that are interchanged between the entities to achieve certain specified tasks in
the system is termed an interaction; it is a unit of the behavior of a classifier and may
incorporate any feature of the classifier to which it has access. In the interaction diagram,
the critical components are the messages and the lifelines.
In UML, the interaction overview diagram initiates the interaction between the objects
utilizing message passing. While drawing an interaction diagram, the entire focus is to
represent the relationship among different objects which are available within the system
boundary and the message exchanged by them to communicate with each other.
The message exchanged among objects is either to pass some information or to request
some information. And based on the information, the interaction diagram is categorized
into the sequence diagram, collaboration diagram, and timing diagram.
The sequence diagram envisions the order of the flow of messages inside the system by
depicting the communication between two lifelines, just like a time-ordered sequence of
events.
The collaboration diagram, which is also known as the communication diagram, represents
how lifelines connect within the system, whereas the timing diagram focuses on that instant
when a message is passed from one element to the other.
2. The interaction diagram explores and compares the use of the collaboration diagram,
the sequence diagram, and the timing diagram.
3. The interaction diagram represents the interactive (dynamic) behavior of the system.
4. The sequence diagram portrays the order of control flow from one element to the
other elements inside the system, whereas the collaboration diagrams are employed
to get an overview of the object architecture of the system.
5. The interaction diagram models the system as a time-ordered sequence of
interactions.
6. The interaction diagram systemizes the structure of the interactive elements.
1. Sequence Diagram
The sequence diagram represents the flow of messages in the system and is also termed as
an event diagram. It helps in envisioning several dynamic scenarios. It portrays the
communication between any two lifelines as a time-ordered sequence of events in which
these lifelines take part at run time. In UML, a lifeline is shown as a vertical dashed line
descending from the participant, and the messages exchanged between lifelines are shown as
horizontal arrows, ordered from the top of the page downward. It incorporates the iterations
as well as branching.
Purpose of a Sequence Diagram
1. To model high-level interaction among active objects within a system.
2. To model interaction among objects inside a collaboration realizing a use case.
3. It either models generic interactions or some certain instances of interaction.
Notations of a Sequence Diagram
a. Lifeline
An individual participant in the sequence diagram is represented by a lifeline. It is positioned
at the top of the diagram.
b. Actor
A role played by an entity that interacts with the subject is called as an actor. It is out of the
scope of the system. It represents the role, which involves human users and external
hardware or subjects. An actor may or may not represent a physical entity, but it purely
depicts the role of an entity. Several distinct roles can be played by an actor or vice versa.
c. Activation
It is represented by a thin rectangle on the lifeline. It describes that time period in which an
operation is performed by an element, such that the top and the bottom of the rectangle is
associated with the initiation and the completion time, each respectively.
d. Messages
The messages depict the interaction between the objects and are represented by arrows.
They are in the sequential order on the lifeline. The core of the sequence diagram is formed
by messages and lifelines.
Following are types of messages enlisted below:
o Recursive Message: A self message sent for recursive purpose is called a recursive
message. In other words, it can be said that the recursive message is a special case of
the self message as it represents the recursive calls.
o Create Message: It describes a communication, particularly between the lifelines of
an interaction describing that the target (lifeline) has been instantiated.
Sequence Fragments:
1. Sequence fragments have been introduced by UML 2.0, which makes it quite easy
for the creation and maintenance of an accurate sequence diagram.
2. It is represented by a box called a combined fragment, which encloses a part of the
interaction inside a sequence diagram.
3. The type of fragment is shown by a fragment operator.
Types of fragments
Following are the types of fragments enlisted below;
2. Communication Diagram
The communication diagram is used to show the relationship between the objects in a
system. Both the sequence and the communication diagrams represent the same
information but differently. Instead of showing the flow of messages, it depicts the
architecture of the object residing in the system as it is based on object-oriented
programming. An object consists of several features. Multiple objects present in the system
are connected to each other. The communication diagram, which is also known as a
collaboration diagram, is used to portray the object's architecture in the system.
Notations of a Communication Diagram
Following are the components of a communication diagram:
1. Objects: The representation of an object is done by an object symbol with its name
and class underlined, separated by a colon.
In the communication diagram, objects are utilized in the following ways:
Collaborations are used when it is essential to depict the relationship between
objects. Both the sequence and communication diagrams represent the same information,
but the way of portraying it is quite different. Communication diagrams are best suited for
analyzing use cases.
Following are some of the use cases enlisted below for which the communication diagram is
implemented:
1. To model collaboration among the objects or roles that carry the functionalities of
use cases and operations.
2. To model the mechanism inside the architectural design of the system.
3. To capture the interactions that represent the flow of messages between the objects
and the roles inside the collaboration.
4. To model different scenarios within the use case or operation, involving a
collaboration of several objects and interactions.
5. To support the identification of objects participating in the use case.
6. In the communication diagram, each message carries a sequence number: the
top-level message is numbered 1, and so on. Messages sent during the same call are
denoted with the same decimal prefix but different suffixes of 1, 2, etc., as per
their occurrence.
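The decimal numbering described above can be sketched with a hypothetical Java call chain (all class and method names here are invented for illustration): the top-level call is message 1, and the two calls it makes in turn would be labeled 1.1 and 1.2 in the diagram.

```java
// Hypothetical classes illustrating how communication-diagram message
// numbers map to nested calls.
class Inventory {
    boolean reserve(String item) { return item != null; }  // message 1.1
}

class Billing {
    boolean charge(double amount) { return amount > 0; }   // message 1.2
}

class OrderController {
    private final Inventory inventory = new Inventory();
    private final Billing billing = new Billing();

    // Message 1: the top-level call; the calls it makes are 1.1 and 1.2.
    boolean placeOrder(String item, double amount) {
        boolean reserved = inventory.reserve(item); // 1.1
        boolean charged = billing.charge(amount);   // 1.2
        return reserved && charged;
    }
}
```

Because reserve() and charge() are both invoked from within placeOrder(), they share the prefix 1 and are distinguished only by their suffixes.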
Steps for creating a Communication Diagram
1. Determine the behavior for which the realization and implementation are specified.
2. Discover the structural elements (class roles, objects, and subsystems) that
perform the functionality of the collaboration.
o Choose the context of an interaction: system, subsystem, use case, and
operation.
3. Think through alternative situations that may be involved.
o Implement the communication diagram at the instance level, if needed.
o A specification-level diagram may be made from the instance-level diagram
to summarize alternative situations.
Example of a Communication Diagram
• The sequence diagram is used when the time sequence is the main focus, whereas the
collaboration diagram is used when the object organization is the main focus.
• Sequence diagrams are better suited for analysis activities, whereas collaboration
diagrams are better suited for depicting simpler interactions of a smaller number of
objects.
Class Diagrams
What is a class diagram?
In software engineering, a class diagram in the Unified Modeling Language (UML)
is a type of static structure diagram that describes the structure of a system by
showing the system's classes, their attributes, operations (or methods), and the
relationships among objects.
Purpose of Class Diagrams
The purpose of a class diagram is to model the static view of an application. Class
diagrams are the only diagrams that can be directly mapped to object-oriented
languages, and thus they are widely used at the time of construction.
UML diagrams like the activity diagram and sequence diagram can only give the sequence
flow of the application; the class diagram is a bit different. It is the most
popular UML diagram in the coder community.
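As a sketch of this direct mapping, the following hypothetical Account class (the name, attributes, and operations are invented for illustration) shows how a class box with an attribute compartment and an operation compartment translates into Java: attributes become fields and operations become methods.

```java
// A class box "Account" (hypothetical) mapped to Java:
// the attribute compartment becomes fields, the operation
// compartment becomes methods.
class Account {
    private String owner;   // attribute: -owner : String
    private double balance; // attribute: -balance : double

    Account(String owner, double balance) {
        this.owner = owner;
        this.balance = balance;
    }

    void deposit(double amount) {   // operation: +deposit(amount : double)
        balance += amount;
    }

    double getBalance() {           // operation: +getBalance() : double
        return balance;
    }

    String getOwner() {             // operation: +getOwner() : String
        return owner;
    }
}
```

The visibility markers in the comments (- for private, + for public) follow standard UML notation.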
Directed Association
Multiplicity
This occurs when a class may have multiple functions or responsibilities. For
example, a staff member working in an airport may be a pilot, an aviation
engineer, a ticket dispatcher, a guard, or a maintenance crew member. If the
maintenance crew member is managed by the aviation engineer, there could
be a "managed by" relationship between two instances of the same class.
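The airport-staff example above can be sketched as a reflexive association in Java (class and member names are assumptions for illustration): two instances of the same class linked by a "managed by" reference.

```java
// Reflexive ("managed by") association: the association links
// two instances of the same class.
class StaffMember {
    private final String role;
    private StaffMember managedBy; // reference back to the same class

    StaffMember(String role) { this.role = role; }

    void setManagedBy(StaffMember manager) { this.managedBy = manager; }
    StaffMember getManagedBy() { return managedBy; }
    String getRole() { return role; }
}
```

For instance, a maintenance crew member instance can hold a managedBy reference to an aviation engineer instance, even though both are objects of the same StaffMember class.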
Aggregation
To show aggregation in a diagram, draw a line from the parent class to the
child class with a diamond shape near the parent class.
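In code, aggregation is commonly modeled as the parent holding references to child objects that are created, and can continue to exist, outside of it (the Department/Professor names below are illustrative, not taken from the text):

```java
import java.util.ArrayList;
import java.util.List;

// Aggregation: a Department "has" Professors, but the Professors are
// created outside the Department and can outlive it.
class Professor {
    final String name;
    Professor(String name) { this.name = name; }
}

class Department {
    private final List<Professor> members = new ArrayList<>();

    // The child is created elsewhere; the parent only holds a reference.
    void add(Professor p) { members.add(p); }
    int size() { return members.size(); }
}
```

If the Department object is discarded, the Professor objects it referenced remain valid, which is exactly the weaker "whole-part" relationship that the hollow diamond denotes.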
Composition
Inheritance / Generalization
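To contrast the two headings above in code: in composition the child's lifetime is bound to the parent (the parent typically creates it internally), while inheritance/generalization, drawn in UML as a hollow triangle pointing at the parent, maps to Java's extends keyword. A minimal sketch with invented class names:

```java
// Composition: House creates and owns its Room; the Room cannot
// outlive its House (filled diamond in UML).
class Room {
    final String name;
    Room(String name) { this.name = name; }
}

class House {
    private final Room kitchen = new Room("kitchen"); // created inside the owner
    Room getKitchen() { return kitchen; }
}

// Inheritance / generalization: Dog "is an" Animal (hollow triangle
// in UML, `extends` in Java).
class Animal {
    String speak() { return "..."; }
}

class Dog extends Animal {
    @Override
    String speak() { return "woof"; }
}
```

Aggregation, composition, and generalization thus differ both in the lifetime coupling of the objects and in the arrowhead used to draw them.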
For example, when both activities, i.e., steaming the milk and
adding the coffee, are completed, we converge them into one final
activity.
Final State or End State – The state which the system reaches
when a particular process or activity ends is known as a Final State
or End State. We use a filled circle within a circle notation to
represent the final state in a state machine diagram. A system or a
process can have multiple final states.