Introduction to Software Engineering Basics

The document provides an overview of software engineering, defining software and its characteristics, including its non-manufactured nature and reusability. It categorizes software into various types, discusses common myths surrounding software development, and outlines the software process and its framework. Additionally, it introduces prescriptive process models, particularly the Waterfall Model, emphasizing the structured approach to software development.


MODULE:1

Introduction to Software Engineering:


Definition and characteristics of software
Definition
Software is the set of computer programs that, when executed, provide desired features, function, and performance. Software consists of programs, documentation of any facet of the program, and the procedures used to set up and operate the software system. Software acts as the basis for the control of the computer, the communication of information, and the creation and control of other programs.

• Any program is a subset of software; it becomes software only when documentation and operating procedure manuals are prepared.
• A program is a combination of source code and object code.
• Documentation consists of different types of manuals:
• Formal specification
• Analysis / context diagram
• Data flow diagrams
• Flow charts
• Entity-relationship diagrams
• Source code listings
• Cross-reference listing
• Test data
• Test results

Fig. 1.2: List of documentation manuals.

Operating procedures consist of instructions to set up and use the software system, and instructions on how to react to system failure.

User manuals include:
• System overview
• Beginner's guide / tutorial
• Reference guide

Operational manuals (operating procedures) include:
• Installation guide
• System administration guide

Fig. 1.3: List of operating procedure manuals.


Characteristics of software
• Software does not wear out
Software becomes more reliable over time instead of wearing out. It becomes obsolete if the environment for which it was developed changes.

• Software is not manufactured
The life of software runs from concept exploration to the retirement of the software product. It is a one-time development effort, followed by continuous maintenance effort to keep it operational. Making 1,000 copies is not an issue and involves negligible cost.

• Reusability of components
Software reusability has opened up another area, known as component-based software engineering. In software, reuse has had only a humble beginning: for example, graphical user interfaces are built using reusable components that enable the creation of graphics windows, pull-down menus, and a wide variety of interaction mechanisms.
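As a minimal sketch of component-based reuse (the Menu and Window classes are hypothetical, not from any real GUI toolkit), the same component can be composed into different applications instead of being rewritten each time:

```python
# Illustrative sketch of component-based reuse; class names are invented.

class Menu:
    """A reusable pull-down menu component."""
    def __init__(self, items):
        self.items = list(items)

    def render(self):
        return " | ".join(self.items)

class Window:
    """A window assembled from reusable components."""
    def __init__(self, title, menu):
        self.title = title
        self.menu = menu  # the same Menu component type is reused everywhere

    def render(self):
        return f"[{self.title}] {self.menu.render()}"

# Two different applications reuse the same component rather than
# re-implementing menu logic from scratch.
editor = Window("Editor", Menu(["New", "Open", "Save"]))
viewer = Window("Viewer", Menu(["Zoom", "Rotate"]))
print(editor.render())  # [Editor] New | Open | Save
```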

• Software is flexible

A program can be developed to do almost anything. Sometimes this characteristic may be the best one, and it may help us accommodate almost any kind of change.
Broad categories of computer software


Application software: These are the stand-alone programs that solve
a specific business need. Applications in this area process business or
technical data in a way that facilitates business operations or
management/technical decision making.
Examples:
Word processors, database software, multimedia software, education and reference software, graphics software, web browsers.
System software: System software is an intermediary, or middle layer, between the user and the hardware: a collection of programs written to service other programs. It includes compilers, editors, operating systems, drivers, etc.
Engineering/scientific software: a broad array of "number-crunching" programs that range from astronomy to volcanology, from automotive stress analysis to orbital dynamics, from computer-aided design to molecular biology, and from genetic analysis to meteorology.
Embedded software: resides in the ROM of a product and controls the various functions of that product, such as an automobile or a security system. Because embedded software handles hardware components, it is also termed intelligent software.
Product-line software: a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission, and that are developed from a common set of core assets in a prescribed way.
Web/mobile applications: this network-centric software category spans a wide array of applications and encompasses both browser-based apps and software that resides on mobile devices.
Artificial intelligence software: makes use of nonnumerical algorithms to solve complex problems. Applications within this area include robotics, expert systems, pattern recognition, artificial neural networks, theorem proving, and game playing.

Software Myths
1. Software is easy to change
It is true that source code files are easy to edit, but that is quite different from saying that software is easy to change. Every change requires that the complete system be re-verified; if we do not take proper care, this becomes an extremely tedious and expensive process.
2. Computers provide greater reliability than the devices they replace.
It is true that software does not fail in the traditional sense: there are no limits to how many times a given piece of code can be executed before it "wears out". Yet the simple refutation of this myth is that our general ledgers are still not perfectly accurate, even though they have been computerized. Back in the days of manual accounting systems, human error was a fact of life; computerization has not eliminated error.
3. Testing software correctly can remove all the errors.
Testing can only show the presence of errors; it cannot show their absence. Our aim is to design effective test cases in order to find as many errors as possible. The more we test, the more confident we are about our design.
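A small illustration of this point, using a deliberately buggy leap-year function (chosen here purely as an example): a test suite can pass completely while an error remains in the code.

```python
def is_leap_year(year):
    """Deliberately buggy: the 400-year rule is missing."""
    return year % 4 == 0 and year % 100 != 0

# A test suite the buggy code passes -- so testing found no error:
assert is_leap_year(2024)       # divisible by 4 -> leap
assert not is_leap_year(2023)
assert not is_leap_year(1900)   # century year, not divisible by 400

# Yet an error is still present: 2000 IS a leap year, but no test above
# exercised that input.
print(is_leap_year(2000))  # False -- wrong, and the suite never caught it
```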

4. Reusing software increases safety.


Code reuse is a very powerful tool that can yield dramatic improvements in development efficiency, but it still requires analysis to determine its suitability and testing to determine whether it works.

5. Software can work right the first time.
Expecting software to work correctly on the very first attempt, without prototyping, testing, and rework, is unrealistic; other engineering disciplines build and refine prototypes before committing to a final product.
6. Software can be designed thoroughly enough to avoid most


integration problems.
There is no computer tool to perform consistency checks on the specifications. Special care is required to understand the specifications, and any ambiguity should be resolved before proceeding to design.

7. Software with more features is better software.


This is, of course, almost the opposite of the truth. The best, most
enduring programs are those which do one thing well.

8. Addition of more software engineers will make up the delay.


This is not true in most cases. Adding more software engineers during the project may delay it further. It serves no purpose here, although it may be true for civil engineering work.

9. Aim is to develop working programs.


The aim has shifted from developing working programs to developing good-quality, maintainable programs. Maintaining software has become a critical and crucial area for the software engineering community.
This list is endless. These myths, together with the poor quality of software, increasing costs, and delays in software delivery, have been the driving forces behind the emergence of software engineering.

Contributing factors:
• Change in ratio of hardware to software costs
• Increasing importance of maintenance
• Advances in software techniques
• Increased demand for software
• Demand for larger and more complex software systems.
The Software Process
A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.

An activity strives to achieve a broad objective (e.g., communication with stakeholders) and is
applied regardless of the application domain, size of the project, complexity of the effort, or degree
of rigor with which software engineering is to be applied.

An action (e.g., architectural design) encompasses a set of tasks that produce a major work product
(e.g., an architectural model).

A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that produces a tangible outcome.

In the context of software engineering, a process is not a rigid prescription for how to build
computer software. Rather, it is an adaptable approach that enables the people doing the work (the
software team) to pick and choose the appropriate set of work actions and tasks. The intent is
always to deliver software in a timely manner and with sufficient quality to satisfy those who have
sponsored its creation and those who will use it.

Software Engineering Layers


Software engineering is a layered technology. There are four layers in software engineering: tools, methods, process, and a quality focus.

A quality focus: Total quality management, Six Sigma, and similar philosophies foster a continuous process improvement culture, and it is this culture that ultimately leads to the development of increasingly more effective approaches to software engineering. The bedrock that supports software engineering is a quality focus.

Process: The foundation for software engineering is the process layer. The software engineering
process is the glue that holds the technology layers together and enables rational and timely
development of computer software. Process defines a framework that must be established for
effective delivery of software engineering technology. The software process forms the basis for
management control of software projects and establishes the context in which technical methods
are applied, work products (models, documents, data, reports, forms, etc.) are produced, milestones are
established, quality is ensured, and change is properly managed.

Methods: Software engineering methods provide the technical "how-tos" for building software.
Methods encompass a broad array of tasks that include communication, requirements analysis,
design modelling, program construction, testing, and support. Software engineering methods rely on
a set of basic principles that govern each area of the technology and include modelling activities and
other descriptive techniques.

Tools: Software engineering tools provide automated or semi-automated support for the process
and the methods. When tools are integrated so that information created by one tool can be used by
another, a system for the support of software development, called computer-aided software
engineering, is established.
The Process Framework
A process framework establishes the foundation for a complete software
engineering process by identifying a small number of framework activities that
are applicable to all software projects, regardless of their size or complexity. The
process framework encompasses a set of umbrella activities that are applicable
across the entire software process.
A generic process framework for software engineering encompasses five
activities:
• Communication
• Planning
• Modelling
• Construction
• Deployment
Communication
Before any technical work, it is important to communicate and collaborate with
the customer (and other stakeholders). The intent is to understand stakeholders’
objectives for the project and to gather requirements that help define software
features and functions.
Planning
A software project is a complicated journey, and the planning activity creates a
“map” that helps guide the team as it makes the journey. The map—called a
software project plan—defines the software engineering work by describing the
technical tasks to be conducted, the risks that are likely, the resources that will be
required, the work products to be produced, and a work schedule.
Modelling
Whether you’re a landscaper, a bridge builder, an aeronautical engineer, a
carpenter, or an architect, you work with models every day. You create a “sketch”
of the thing so that you’ll understand the big picture—what it will look like
architecturally, how the constituent parts fit together, and many other
characteristics. If required, you refine the sketch into greater and greater detail in
an effort to better understand the problem and how you’re going to solve it. A
software engineer does the same thing by creating models to better understand
software requirements and the design that will achieve those requirements.
Construction
What you design must be built. This activity combines code generation (either
manual or automated) and the testing that is required to uncover errors in the
code.
Deployment
The software (as a complete entity or as a partially completed increment) is
delivered to the customer who evaluates the delivered product and provides
feedback based on the evaluation.

These five generic framework activities can be used during the development of
small, simple programs, the creation of Web applications, and for the engineering
of large, complex computer-based systems. The details of the software process
will be quite different in each case, but the framework activities remain the same.
For many software projects, framework activities are applied
iteratively as a project progresses. That is, communication, planning, modelling,
construction, and deployment are applied repeatedly through a number of project
iterations. Each iteration produces a software increment that provides
stakeholders with a subset of overall software features and functionality. As each
increment is produced, the software becomes more and more complete.
Umbrella activities
Software engineering process framework activities are complemented by a
number of umbrella activities. In general, umbrella activities are applied
throughout a software project and help a software team manage and control
progress, quality, change, and risk. Typical umbrella activities include
Software project tracking and control—allows the software team to assess
progress against the project plan and take any necessary action to maintain the
schedule.
Risk management—assesses risks that may affect the outcome of the project or
the quality of the product.
Software quality assurance—defines and conducts the activities required to
ensure software quality.
Technical reviews—assess software engineering work products in an effort to
uncover and remove errors before they are propagated to the next activity.
Measurement—defines and collects process, project, and product measures that
assist the team in delivering software that meets stakeholders’ needs; can be used
in conjunction with all other framework and umbrella activities
Software configuration management—manages the effects of change
throughout the software process.
Reusability management—defines criteria for work product reuse (including
software components) and establishes mechanisms to achieve reusable
components.
Work product preparation and production—encompass the activities required
to create work products such as models, documents, logs, forms, and lists.
Software Product
Software Products are nothing but software systems delivered to the customer with
the documentation that describes how to install and use the system. In certain cases,
software products may be part of system products where hardware, as well as
software, is delivered to a customer. Software products are produced with the help of
the software process.
Software Process
Software is the set of instructions, in the form of programs, that make the computer system work and drive the hardware components. To produce a software product, a set of activities is used; this set is called a software process.
PROCESS MODELS
1. Definition
• Process models were originally proposed to bring order to the
chaos of software development.
• History has indicated that these models have brought a certain
amount of useful structure to software engineering work and have
provided a reasonably effective road map for software teams.
• However, software engineering work and the products that are
produced remain on “the edge of chaos.”
• A process model provides a specific roadmap for software
engineering work. It defines the flow of all activities, actions and
tasks, the degree of iteration, the work products, and the
organization of the work that must be done.
• Software engineers and their managers adapt a process model to
their needs and then follow it. In addition, the people who have
requested the software have a role to play in the process of
defining, building, and testing it.
• From the point of view of a software engineer, the work product is
a customized description of the activities and tasks defined by the
process.
1.1 PRESCRIPTIVE PROCESS MODELS
• A prescriptive process model strives for structure and order in
software development. Activities and tasks occur sequentially
with defined guidelines for progress.
• We call them “prescriptive” because they prescribe a set of
process elements—framework activities, software engineering
actions, tasks, work products, quality assurance, and change
control mechanisms for each project.
• Each process model also prescribes a process flow (also called a
work flow)—that is, the manner in which the process elements
are interrelated to one another.
• All software process models can accommodate the generic
framework activities, but each applies a different emphasis to
these activities and defines a process flow that invokes each
framework activity (as well as software engineering actions and
tasks) in a different manner.
• Prescriptive process models are sometimes referred to as
“traditional” process models.
• Prescriptive process models define a prescribed set of process elements and a predictable process work flow.
1.1.1 The Waterfall Model
• The Waterfall Model was the first Process Model to be
introduced. It is also referred to as a linear-sequential life
cycle model.
• There are times when the requirements for a problem are
well understood— when work flows from communication
through deployment in a reasonably linear fashion. This
situation is sometimes encountered when well-defined
adaptations or enhancements to an existing system must be
made (e.g., an adaptation to accounting software that has
been mandated because of changes to government
regulations). It may also occur in a limited number of new
development efforts, but only when requirements are well
defined and reasonably stable.
• The waterfall model, sometimes called the classic life cycle,
suggests a systematic, sequential approach to software
development that begins with customer specification of
requirements and progresses through planning, modelling,
construction, and deployment, culminating in ongoing
support of the completed software.
• The waterfall model is the oldest paradigm for software
engineering.
• A variation in the representation of the waterfall model is
called the V-model.
• V-model depicts the relationship of quality assurance actions
to the actions associated with communication, modelling,
and early construction activities.
• As a software team moves down the left side of the V, basic
problem requirements are refined into progressively more
detailed and technical representations of the problem and its
solution. Once code has been generated, the team moves up
the right side of the V, essentially performing a series of tests
(quality assurance actions) that validate each of the models
created as the team moves down the left side. In reality,
there is no fundamental difference between the classic life
cycle and the V-model.
• The V-model provides a way of visualizing how verification
and validation actions are applied to earlier engineering
work.
Waterfall model

V-model
1.1.2 Evolutionary Process Models
• Evolutionary process models produce an increasingly
more complete version of the software with each iteration.
• Evolutionary models are iterative. They are characterized
in a manner that enables you to develop increasingly
more complete versions of the software.
• We present two common evolutionary process models: prototyping and the spiral model.
Prototyping Model :
• Often, a customer defines a set of general objectives
for software, but does not identify detailed
requirements for functions and features. In other cases,
the developer may be unsure of the efficiency of an
algorithm, the adaptability of an operating system, or
the form that human-machine interaction should take.
In these, and many other situations, a prototyping
paradigm may offer the best approach.
• Although prototyping can be used as a stand-alone process model, it is more commonly used as a technique that can be implemented within the context of any one of the process models.
• The prototyping paradigm begins with communication.
You meet with other stakeholders to define the overall
objectives for the software, identify whatever
requirements are known, and outline areas where
further definition is mandatory.
• A prototyping iteration is planned quickly, and
modeling (in the form of a “quick design”) occurs.
• A quick design focuses on a representation of those
aspects of the software that will be visible to end users.
Prototyping can be problematic for the following
reasons:
➢ Stakeholders see what appears to be a working
version of the software, unaware that the
prototype is held together haphazardly, unaware
that in the rush to get it working you haven’t
considered overall software quality or long-term
maintainability.
➢ As a software engineer, you often make
implementation compromises in order to get a
prototype working quickly. An inappropriate
operating system or programming language may
be used simply because it is available and known;
an inefficient algorithm may be implemented
simply to demonstrate capability.
• Although problems can occur, prototyping can be an effective paradigm for software engineering.
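To make the compromise described above concrete (an inefficient algorithm implemented simply to demonstrate capability), here is a hypothetical prototype-quality search; the function and data are illustrative, not from the text:

```python
# Prototype-quality code: it demonstrates the feature to stakeholders,
# but its O(n) linear scan would be replaced (e.g., by an indexed or
# binary search) before the production system is built.

def prototype_find(records, name):
    """Linear scan: fine for a demo, too slow for large production data."""
    for i, rec in enumerate(records):
        if rec["name"] == name:
            return i
    return -1

demo_data = [{"name": n} for n in ["ada", "alan", "grace"]]
print(prototype_find(demo_data, "grace"))  # -> 2
```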
The Spiral Model :
• Originally proposed by Barry Boehm.
• The spiral model is an evolutionary software process
model that couples the iterative nature of prototyping
with the controlled and systematic aspects of the
waterfall model.
• It provides the potential for rapid development of
increasingly more complete versions of the software.
Boehm describes the model in the following manner.
• The spiral development model is a risk-driven process model generator that is used to guide multi-stakeholder concurrent engineering of software-intensive systems.
• It has two main distinguishing features. One is a cyclic
approach for incrementally growing a system’s degree
of definition and implementation while decreasing its
degree of risk. The other is a set of anchor point
milestones for ensuring stakeholder commitment to
feasible and mutually satisfactory system solutions.

A spiral model is divided into a set of framework activities defined by the software engineering team. As this evolutionary process begins, the software team performs activities that are implied by a circuit around the spiral in a clockwise direction, beginning at the center. Risk is considered as each revolution is made. Anchor point milestones, a combination of work products and conditions that are attained along the path of the spiral, are noted for each evolutionary pass.
The first circuit around the spiral might result in the
development of a product specification; subsequent
passes around the spiral might be used to develop a
prototype and then progressively more sophisticated
versions of the software. Each pass through the planning
region results in adjustments to the project plan.
• The spiral model can be adapted to apply
throughout the life of the computer software.
• Therefore, the first circuit around the spiral might
represent a “concept development project” that
starts at the core of the spiral and continues for
multiple iterations until concept development is
complete. The new product will evolve through a
number of iterations around the spiral.
• A later circuit around the spiral might be used to represent a "product enhancement project."
Agile Process

Any agile software process is characterized in a manner that addresses a number of key assumptions:

• It is difficult to predict in advance which software requirements will persist and which will change. It is equally difficult to predict how customer priorities will change as the project proceeds.

• For many types of software, design and construction are interleaved. That is, both activities should be performed in tandem so that design models are proven as they are created. It is difficult to predict how much design is necessary before construction is used to prove the design.

• Analysis, design, construction, and testing are not as predictable (from a planning point of view) as we might like.

Therefore, an agile process must be adaptable, and it must adapt incrementally.

Agile principles

• The highest priority of this process is to satisfy the customer.

• Accept changing requirements, even late in development.

• Frequently deliver working software in small time spans.

• Throughout the project, business people and developers work together on a daily basis.

• Projects are built around motivated people, who are given the proper environment and support.

• Face-to-face interaction is the most efficient method of moving information within the development team.

• The primary measure of progress is working software.

• Agile processes promote sustainable development.

• Continuous attention to technical excellence and good design increases agility.

• The best architectures, designs, and requirements emerge from self-organizing teams.

• Simplicity is essential in development.


Human factors

1. Competency: In an agile development context, "competent" team members must have the specific skills the software requires and must know the technologies involved in a particular project or initiative. Moreover, people should possess comprehensive knowledge of the processes used. A team cannot work in an agile way if it does not know the key concepts of the process. In many companies, the team has all the technical skills but does not know the process; this can be addressed with a simple workshop led by someone who already knows the process.

2. Collaboration: the good old ability to work in a team is also essential. People should cooperate among themselves and with all those involved, for the sake of the project. This requires, above all, humility: even the most senior developers have much to learn from other colleagues.

3. Focus: all team members must be focused on one common goal: to deliver the customer an increment of working software in the agreed time. The team should also focus on continuous adaptation, always improving the process as needed. Remember that the team itself must stop from time to time (e.g., every 15 days) to reflect on what is good and what can be improved in the work process.

4. Decision making: the development team should have the freedom to control its own destiny, and should have autonomy in technical and project matters. It is the team that should define the best way to control versions of code, make builds, deploy, run tests, document requirements, etc. The company can (and should) suggest good practices, but in the end it is the (self-organizing) team that will adopt the methods or processes it thinks best. Those involved in development must learn to deal with conflicting situations, ambiguity, and frequent changes, because such scenarios occur increasingly in day-to-day business. It is necessary that the team record the main lessons learned, which will facilitate continual process improvement.

5. Trust and respect: the team must be consistent and must demonstrate the trust and respect needed to make a strong team. Remember that the main objective is to make the team strong enough that the whole is greater than the sum of its parts.

6. Self-organization: it is the team itself that should organize to perform the work. The team needs to look, at every moment, at what else can be improved in the process so that it fits the environment better. Self-organization has technical benefits, but it is also very important for improving collaboration. The team selects how much work it believes it is capable of performing in the iteration, and commits to it.

7. Fuzzy problem-solving ability: any good software team must be allowed the freedom to control its own destiny. This implies that the team is given decision-making authority for both technical and project issues.


AGILE PROCESS MODELS
eXtreme Programming(XP)

Extreme programming (XP) is one of the most important software development frameworks among Agile models. It is used to improve software quality and responsiveness to customer requirements. The extreme programming model recommends taking the best practices that have worked well in past program development projects to extreme levels.

Extreme Programming uses an object-oriented approach as its preferred development paradigm and
encompasses a set of rules and practices that occur within the context of four framework activities:

• Planning
• Design
• Coding
• Testing

Planning

➢ Begins with the creation of user stories


➢ Agile team assesses each story and assigns a cost
➢ Stories are grouped to form a deliverable increment
➢ A commitment is made on delivery date
➢ After the first increment, project velocity is used to help define subsequent delivery dates for other increments
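The velocity calculation behind the planning step above can be sketched as follows (the numbers are illustrative, not from the text): velocity is simply story points completed per iteration, projected onto the remaining backlog.

```python
import math

# Hedged sketch of project velocity and its use in estimating
# delivery dates for subsequent increments.

def project_velocity(points_completed, iterations):
    """Average story points delivered per iteration so far."""
    return points_completed / iterations

def iterations_needed(remaining_points, velocity):
    """Estimate of iterations required for the remaining backlog."""
    return math.ceil(remaining_points / velocity)

v = project_velocity(24, 2)      # 24 points done in 2 iterations -> 12.0
print(iterations_needed(60, v))  # 60 points remaining -> 5 iterations
```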

Design

➢ Follows the KIS (keep it simple) principle


➢ Encourage the use of CRC cards
➢ For difficult design problems, suggests the creation of spike solutions — a design prototype
➢ Encourages refactoring — an iterative refinement of the internal program design
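Refactoring can be illustrated with a toy example (hypothetical code, not from the text): the external behaviour stays the same while the internal design improves.

```python
# Before refactoring: one function with a growing if/elif chain.
def area_before(shape, w, h):
    if shape == "rect":
        return w * h
    elif shape == "triangle":
        return w * h / 2

# After refactoring: each shape owns its own area computation, so adding
# a new shape no longer means editing a shared conditional.
class Rect:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def area(self):
        return self.w * self.h

class Triangle(Rect):
    def area(self):
        return self.w * self.h / 2

# Behaviour is unchanged -- the defining property of a refactoring:
assert area_before("rect", 3, 4) == Rect(3, 4).area() == 12
assert area_before("triangle", 3, 4) == Triangle(3, 4).area() == 6
```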
Coding

➢ Recommends the construction of a unit test for a story before coding commences
➢ Encourages pair programming
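The test-first rule above can be sketched as follows (the user story and function names are hypothetical): the unit test for a story is written before the code exists, and just enough code is then written to make it pass.

```python
import unittest

# Written second, just enough to satisfy the test below.
def cart_total(prices):
    return round(sum(prices), 2)

class CartStoryTest(unittest.TestCase):
    """Unit test drafted first, from the (hypothetical) user story:
    'As a shopper, I can see the total price of my cart.'"""

    def test_total(self):
        self.assertEqual(cart_total([1.50, 2.25]), 3.75)

    def test_empty_cart(self):
        self.assertEqual(cart_total([]), 0)

# In XP, these unit tests would be executed daily, e.g. via unittest.main().
```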
Testing
➢ All unit tests are executed daily
➢ Acceptance tests are defined by the customer and executed to assess customer visible
functionality

Applications of Extreme Programming (XP)

Some of the projects that are suitable to develop using XP model are given below:

Small projects: The XP model is very useful in small projects with small teams, as face-to-face meetings are easier to achieve.

Projects involving new technology or research projects: these projects face rapidly changing requirements and technical problems, so the XP model is used to complete them.
Scrum (software development)
Scrum is a type of Agile framework. It is a framework within which people can address complex adaptive problems while keeping productivity and creativity in delivering the product at the highest possible values. Scrum uses an iterative process.

Salient features of Scrum:

➢ Scrum is a light-weight framework
➢ Scrum emphasizes self-organization
➢ Scrum is simple to understand
➢ The Scrum framework helps the team to work together

Lifecycle of Scrum
Sprint

A Sprint is a time-box of one month or less. A new Sprint starts immediately after the completion of the
previous Sprint.

Release

When the product is completed, it goes to the Release stage.

Sprint Review

If the product still has some unachieved features, they are checked in this stage, and then the
product is passed to the Sprint Retrospective stage.

Sprint Retrospective

In this stage, the quality or status of the product is checked.

Product Backlog

The product backlog is organized according to the prioritized features.

Sprint Backlog

The sprint backlog is divided into two parts: the product features assigned to the sprint, and the sprint planning meeting.
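As a rough sketch of how a sprint backlog might be drawn from a prioritized product backlog during sprint planning (all stories, priorities and point values here are invented):

```python
# (story, priority, story points) - all values invented for illustration
product_backlog = [
    ("user login", 1, 5),
    ("password reset", 2, 3),
    ("export report", 3, 8),
    ("dark mode", 4, 2),
]

def plan_sprint(backlog, capacity):
    """Pull stories in priority order until the sprint capacity is filled."""
    sprint, used = [], 0
    for story, _priority, points in sorted(backlog, key=lambda s: s[1]):
        if used + points <= capacity:
            sprint.append(story)
            used += points
    return sprint

print(plan_sprint(product_backlog, 10))
```

With a capacity of 10 points, the large "export report" story is skipped and a lower-priority but smaller story fills the remaining capacity; real teams would negotiate this in the sprint planning meeting rather than follow a fixed rule.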

Advantage of using Scrum framework

➢ The Scrum framework is fast-moving and cost-efficient.


➢ The Scrum framework works by dividing the large product into small sub-products. It is like a
divide-and-conquer strategy.
➢ In Scrum customer satisfaction is very important.
➢ Scrum is adaptive in nature because it has short sprints.
➢ As the Scrum framework relies on constant feedback, the quality of the product increases in less
time
Disadvantage of using Scrum framework

➢ The Scrum framework does not allow changes within a sprint.


➢ The Scrum framework is not a fully described model. To adopt it, you need to fill in the
framework with your own details, drawing on approaches such as Extreme Programming (XP), Kanban or DSDM.
➢ It can be difficult for the Scrum to plan, structure and organize a project that lacks a clear
definition.
➢ The daily Scrum meetings and frequent reviews require substantial resources.
MODULE:2
Requirements Engineering

Requirements describe the “what” of a system, not the “how”.


Requirements engineering produces one large document, written in a
natural language, that contains a description of what the system will do
without describing how it will do it. The input to requirements engineering
is the problem statement prepared by the customer. The problem
statement may give an overview of the existing system along with broad
expectations from the new system.

Crucial Process Steps

The quality of a software product is only as good as the process that
creates it. Requirements engineering is one of the most crucial activities
in this creation process. Without well-written requirements
specifications, developers do not know what to build, customers do not
know what to expect, and there is no way to validate that the built
system satisfies the requirements.

Requirements engineering is the disciplined application of proven
principles, methods, tools, and notations to describe a proposed
system's intended behavior and its associated constraints [hsia93]. This
process consists of four steps:

• Requirements elicitation
• Requirements Analysis
• Requirements Documentation
• Requirements Review
Requirements Elicitation:
This is also known as gathering of requirements. Here, requirements
are identified with the help of the customer and of whatever documentation
of existing system processes is available.

Requirements Analysis:
Analysis of requirements starts with requirements elicitation. The
requirements are analysed in order to identify inconsistencies, defects,
omissions, etc. We describe requirements in terms of relationships and
also resolve conflicts, if any.
Requirements Documentation:
This is the end product of requirements elicitation and analysis. The
documentation is very important as it will be the foundation for the
design of the software. The document is known as the software
requirements specification (SRS).

Requirements Review:
The review process is carried out to improve the quality of the SRS. It
may also be called requirements verification. For maximum benefit,
review and verification should not be treated as a discrete activity to be
done only at the end of the preparation of the SRS. They should be treated as a
continuous activity that is incorporated into elicitation, analysis and
documentation.

Requirement Engineering

• The process of collecting the software requirements from the client and
then understanding, evaluating and documenting them is called requirement
engineering.
• Requirement engineering constructs a bridge for design and
construction.

Requirement engineering consists of seven distinct tasks, as follows:
1. Inception
• Inception is a task where the requirements engineer asks a set of
questions to establish a software process.
• In this task, the problem is understood and a proper solution is
evaluated.
• It establishes a collaborative relationship between the customer and the
developer.
• The developer and customer decide the overall scope and the nature of
the problem.

2. Elicitation
Elicitation means finding the requirements from the stakeholders.
Eliciting requirements is difficult because the following problems occur
during elicitation.

Problem of scope: The customer gives unnecessary technical detail
rather than clarity about the overall system objective.

Problem of understanding: Poor understanding between the customer
and the developer regarding various aspects of the project, such as the
capabilities and limitations of the computing environment.

Problem of volatility: The requirements change from time to time, which
makes developing the project difficult.

3. Elaboration
• In this task, the information taken from the user during inception and
elicitation is expanded and refined.
• Create and refine user scenarios.
• Identify analysis classes, their attributes and services.
• Its main task is developing a refined model of the software using the
functions, features and constraints of the software.

4. Negotiation
• In the negotiation task, the software engineer decides how the project
will be achieved with limited business resources.
• Reconcile requirements conflicts through negotiation.
• Prioritize requirements.
• Assess their cost and the work involved.
• Create rough estimates of development effort and assess the impact of the
requirements on project cost and delivery time.

5. Specification
• In this task, the requirements engineer constructs a final work product.
• The work product is in the form of a software requirements specification.
• In this task, the requirements of the proposed software, such as the
informational, functional and behavioral requirements, are formalized.
• The requirements are formalized in both graphical and textual formats:
• A written document
• A set of graphical models

6. Validation
• The work product built as an output of requirements engineering is
assessed for quality through a validation step.
• Formal technical reviews by the software engineers, customers and
other stakeholders serve as the primary requirements validation
mechanism.

7. Requirement management
• It is a set of activities that help the project team to identify, control and
track the requirements, and changes can be made to the requirements at
any time during the ongoing project.
• These tasks start with identifying and assigning a unique identifier to
each requirement.
• After the requirements are finalized, a traceability table is developed.
• Examples of entries in a traceability table are the features, sources,
dependencies, subsystems and interfaces of each requirement.
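A traceability table like the one described can be sketched as plain data; the requirement IDs, field names and helper function below are hypothetical examples, not a prescribed format.

```python
# Hypothetical traceability table: each requirement has a unique identifier
# and is linked to its source, dependencies and implementing subsystem.
traceability = {
    "REQ-001": {"feature": "User login", "source": "Customer interview",
                "depends_on": [], "subsystem": "Authentication"},
    "REQ-002": {"feature": "Password reset", "source": "Help-desk logs",
                "depends_on": ["REQ-001"], "subsystem": "Authentication"},
}

def impacted_by(req_id, table):
    """Requirements that list req_id as a dependency (change-impact tracking)."""
    return [rid for rid, row in table.items() if req_id in row["depends_on"]]

print(impacted_by("REQ-001", traceability))   # ['REQ-002']
```

Queries like `impacted_by` are what make the table useful for requirements management: when a requirement changes, the dependent requirements can be found and re-examined.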
TYPES OF REQUIREMENTS

1. Known requirements
2. Unknown requirements
3. Undreamt requirements

STAKEHOLDER:

• Those who have some direct or indirect influence on the system
requirements
• Users who interact with the system
• All those who are affected by it

1. Known requirements

* Requirements the stakeholder explicitly asks to be implemented.

2. Unknown requirements

* Forgotten by the stakeholder

* Not needed right now

* Needed only by another stakeholder.

3. Undreamt requirements

* The stakeholder did not think of these due to limited domain knowledge


FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS

• A known, unknown or undreamt requirement may be functional
or non-functional.
• Requirements analysis is a very critical process that enables the
success of a system or software project to be assessed.
Requirements are generally split into two types: FUNCTIONAL
and NON-FUNCTIONAL.

FUNCTIONAL REQUIREMENTS

These are the requirements that the end user specifically demands as
basic facilities that the system should offer. All these functionalities
need to be necessarily incorporated into the system as a part of the
contract. These are represented or stated in the form of input to be given
to the system, the operation performed and the output expected. They are
basically the requirements stated by the user which one can see directly
in the final product, unlike the non-functional requirements.

NON-FUNCTIONAL REQUIREMENTS

These are basically the quality constraints that the system must satisfy
according to the project contract. The priority or extent to which these
factors are implemented varies from one project to another. They are also
called non-behavioral requirements. They basically deal with issues
like:

• Portability
• Security
• Maintainability
• Reliability
• Scalability
• Performance
• Reusability
• Flexibility

USER REQUIREMENTS AND SYSTEM REQUIREMENTS

USER REQUIREMENTS

The user requirements document (URD) or user requirement


specification (URS) is a document usually used in software
engineering that specifies what the user expects the software to be
able to do.

Once the required information is completely gathered, it is
documented in a URD, which is meant to spell out exactly what the
software must do and becomes part of the contractual agreement. A
customer cannot demand features not in the URD, while the
developer cannot claim the product is ready if it does not meet an
item of the URD.

The URD can be used as a guide for planning cost, timetables,
milestones, testing, etc. The explicit nature of the URD allows
customers to show it to various stakeholders to make sure all
necessary features are described.

SYSTEM REQUIREMENTS

System requirements are all the requirements at the system level
that describe the functions which the system as a whole should
fulfill to satisfy the stakeholder needs and requirements. They are
expressed in an appropriate combination of textual statements,
views, and non-functional requirements, the latter expressing the
levels of safety, security, reliability, etc., that will be necessary.
System requirements play major roles in systems engineering, as
they:

• Form the basis of system architecture and design activities.


• Form the basis of system integration and verification activities.
• Act as a reference for validation and stakeholder acceptance.
• Provide a means of communication between the various technical
staff that interact throughout the project.

Feasibility Study
A feasibility study in software engineering is a study to evaluate the
feasibility of a proposed project or system. The feasibility study is one
of the four important stages of the Software Project Management
Process. As the name suggests, it is a feasibility analysis: a measure
of how beneficial development of the software product will be for the
organization from a practical point of view. A feasibility study is carried
out for many purposes: to analyze whether the software product will be
right in terms of development, implementation, contribution of the
project to the organization, etc.
Types of Feasibility Study :
The feasibility study mainly concentrates on the five areas mentioned
below. Among these, the economic feasibility study is the most important
part of the feasibility analysis, and the legal feasibility study is the least
considered.
1. Technical Feasibility –
In technical feasibility, the current resources (both hardware and
software) along with the required technology are analyzed and assessed
for developing the project. This study reports whether the right
resources and technologies exist for project development. It also
analyzes the technical skills and capabilities of the technical team,
whether existing technology can be used, and whether maintenance and
upgrading of the chosen technology are easy.
2. Operational Feasibility –
In operational feasibility, the degree to which the product will satisfy
requirements is analyzed, along with how easy the product will be to
operate and maintain after deployment. Other operational concerns
include determining the usability of the product and whether the
solution suggested by the software development team is acceptable.
3. Economic Feasibility –
In the economic feasibility study, the cost and benefit of the project are
analyzed. A detailed analysis is carried out of the project's development
cost, which includes all costs required for final development, such as
hardware and software resources, design and development cost,
operational cost and so on. It is then analyzed whether the project will
be financially beneficial to the organization.
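A minimal sketch of the cost-benefit comparison described above, with invented cost categories and figures:

```python
# Illustrative economic feasibility check: compare the total of all cost
# items against the expected benefit. All figures are invented.
def is_economically_feasible(costs, expected_benefit):
    """True when the projected benefit exceeds the sum of all cost items."""
    return expected_benefit > sum(costs.values())

costs = {
    "hardware": 20_000,
    "software_licences": 5_000,
    "design_and_development": 60_000,
    "annual_operation": 15_000,
}
print(is_economically_feasible(costs, 120_000))   # 120000 > 100000
```

A real study would also discount future costs and benefits over time; this sketch only captures the basic comparison.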

4. Legal Feasibility –
In the legal feasibility study, the project is analyzed from a legal point of
view. This includes analyzing barriers to legal implementation of the
project, data protection acts and social media laws, project certificates,
licenses, copyright, etc. Overall, the legal feasibility study determines
whether the proposed project conforms to legal and ethical
requirements.
5. Schedule Feasibility –
In the schedule feasibility study, the timelines and deadlines of the
proposed project are analyzed, including how much time the teams will
take to complete the final project. This has a great impact on the
organization, as the purpose of the project may fail if it cannot be
completed on time.
Feasibility Study Process :
The below steps are carried out during entire feasibility analysis.
1. Information assessment
2. Information collection
3. Report writing
4. General information

Purpose of Feasibility Study:

The feasibility study is an important stage of the Software Project Management
Process because, on completion, it gives a conclusion: either to go ahead
with the proposed project because it is practically feasible, to stop the
proposed project because it is not feasible to develop, or to analyze the
proposed project again.
The feasibility study also helps in identifying the risk factors involved
in developing and deploying the system and in planning for risk
analysis. It narrows the business alternatives and enhances the success
rate by analyzing the different parameters associated with the proposed
project development.
Feasibility studies focus on
• Is the product concept viable?
• Will it be possible to develop a product that matches the project’s
vision statement?
• What are the current estimated cost and schedule for the project?
• How big is the gap between the original cost and schedule target
and current estimates?
• Is the business model for software justified when the current cost
and schedule estimate are considered?
• Have the major risks to the project been identified and can they be
surmounted?
• Are the specifications complete and stable enough to support
remaining development work?
• Have users and developers been able to agree on a detailed user
interface prototype? If not, are the requirements really stable?
• Is the software development plan complete and adequate to support
further development work?
Requirements Elicitation

Requirements elicitation (also called requirements gathering)
combines elements of problem solving, elaboration, negotiation, and
specification. In order to encourage a collaborative, team-oriented
approach to requirements gathering, stakeholders work together to
identify the problem, propose elements of the solution, negotiate
different approaches, and specify a preliminary set of solution
requirements.

Requirements elicitation is perhaps the most difficult, most error-prone
and most communication-intensive aspect of software development. It can be
successful only through an effective customer-developer partnership. It
is needed to know what the users really need.

Requirements elicitation Activities:

Requirements elicitation includes the following activities; a few of them are
listed below –

• Knowledge of the overall area where the system is applied.


• The details of the precise customer problem where the system
is going to be applied must be understood.
• Interaction of system with external requirements.
• Detailed investigation of user needs.
• Define the constraints for system development.

Requirements elicitation Methods:

There are a number of requirements elicitation methods. Few of them are listed
below –
1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach

The success of an elicitation technique used depends on the maturity of


the analyst, developers, users, and the customer involved.
1. Interviews:

Objective of conducting an interview is to understand the customer’s


expectations from the software.

It is impossible to interview every stakeholder hence representatives


from groups are selected based on their expertise and credibility.

Interviews may be open-ended or structured.

• In open-ended interviews there is no pre-set agenda. Context


free questions may be asked to understand the problem.
• In structured interview, agenda of fairly open questions is
prepared. Sometimes a proper questionnaire is designed for
the interview.

2. Brainstorming Sessions:

• It is a group technique
• It is intended to generate lots of new ideas hence providing a platform to
share views
• A highly trained facilitator is required to handle group bias and group
conflicts.
• Every idea is documented so that everyone can see it.
• Finally, a document is prepared which consists of the list of
requirements and their priority if possible.

3. Facilitated Application Specification Technique:

Its objective is to bridge the expectation gap – the difference between


what the developers think they are supposed to build and what
customers think they are going to get.

A team-oriented approach is developed for requirements gathering.
Each attendee is asked to make a list of objects that are –

• Part of the environment that surrounds the system


• Produced by the system
• Used by the system

Each participant prepares his/her list, different lists are then combined,
redundant entries are eliminated, team is divided into smaller sub-
teams to develop mini-specifications and finally a draft of
specifications is written down using all the inputs from the meeting.

4. Quality Function Deployment:

In this technique, customer satisfaction is of prime concern; hence it
emphasizes the requirements which are valuable to the customer.

3 types of requirements are identified –


• Normal requirements –
In this the objective and goals of the proposed software are
discussed with the customer. Example – normal requirements
for a result management system may be entry of marks,
calculation of results, etc
• Expected requirements –
These requirements are so obvious that the customer need not explicitly
state them. Example –
protection from unauthorized access.
• Exciting requirements –
It includes features that are beyond customer’s expectations and
prove to be very satisfying when present. Example – when
unauthorized access is detected, it should backup and shutdown
all processes.
The major steps involved in this procedure are –

1. Identify all the stakeholders, e.g., users, developers, customers, etc.


2. List out all requirements from customer.
3. A value indicating degree of importance is assigned to each requirement.
4. In the end the final list of requirements is categorized as –
• It is possible to achieve
• It should be deferred and the reason for it
• It is impossible to achieve and should be dropped off

5. Use Case Approach:

This technique combines text and pictures to provide a better understanding of


the requirements.

The use cases describe the 'what' of a system and not the 'how'. Hence, they only
give a functional view of the system.

The components of the use case design includes three major things – Actor, Use
cases, use case diagram.

1. Actor – It is an external agent that lies outside the system but
interacts with it in some way. An actor may be a person, machine,
etc. It is represented as a stick figure. Actors can be primary
actors or secondary actors.
• Primary actor – It requires assistance from the system to achieve a goal.
• Secondary actor – It is an actor from which the system needs assistance.
2. Use cases – They describe the sequence of interactions between
actors and the system. They capture who(actors) do
what(interaction) with the system. A complete set of use cases
specifies all possible ways to use the system.
3. Use case diagram – A use case diagram graphically represents
what happens when an actor interacts with a system. It captures
the functional aspect of the system.
• A stick figure is used to represent an actor.
• An oval is used to represent a use case.
• A line is used to represent a relationship between an actor and a use
case.
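The actor/use-case structure above can be sketched as simple data structures; the `UseCase` class and the ATM-style examples below are illustrative assumptions, not a standard notation.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    primary_actor: str                                    # needs the system to achieve a goal
    secondary_actors: list = field(default_factory=list)  # the system needs their assistance

# Invented examples for a banking-style system.
use_cases = [
    UseCase("Withdraw cash", primary_actor="Customer",
            secondary_actors=["Bank server"]),
    UseCase("Check balance", primary_actor="Customer"),
]

def actors(use_cases):
    """All actors appearing anywhere in the use case model."""
    result = set()
    for uc in use_cases:
        result.add(uc.primary_actor)
        result.update(uc.secondary_actors)
    return result

print(sorted(actors(use_cases)))   # ['Bank server', 'Customer']
```

A complete set of such `UseCase` records would correspond to the statement above that the use cases together specify all possible ways to use the system.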
Quality Function Deployment in detail:
Quality Function Deployment (QFD) is a process or set of tools
used to define the customer requirements for a product and convert
those requirements into engineering specifications and plans such
that the customer requirements for that product are satisfied.

QFD was developed in the late 1960s by the Japanese planning specialist
Yoji Akao.
QFD aims at translating the Voice of the Customer into measurable and
detailed design targets and then drives them from the assembly
level down through the sub-assembly, component and
production process levels.
QFD helps to achieve structured planning of the product by enabling the
development team to clearly specify customer needs and
expectations of the product and then evaluate each part of the product
systematically.
Key steps in QFD :

Product planning:
Translating what the customer wants or needs into a set of prioritized
design requirements. The prioritized design requirements describe the
look and design of the product.
Involves benchmarking – comparing the product's performance with
competitors' products – and setting targets for improvements and for
achieving a competitive edge.
Part planning:
Translating product requirement specifications into part
characteristics.
For example, if the requirement is that the product should be portable,
then the characteristics could be light weight, small size, compactness,
etc.
Process Planning :
Translating part characteristics into an effective
and efficient process. The ability to deliver six
sigma quality should be maximized.
Production planning:
Translating the process into manufacturing or service delivery methods.
In this step too, the ability to deliver six sigma quality should be improved.

Benefits of QFD:

Customer-focused –
The very first step of QFD is marked by understanding and collecting
all user requirements and expectations of the product. The company
does not focus on what they think the customer wants; instead, they
ask customers and focus on the requirements and expectations put
forward by them.
Voice of Customer Competitor Analysis –
The House of Quality is a significant tool that is used to compare the voice
of the customer with design specifications.
Structure and Documentation –
Tools used in Quality Function Deployment are very well
structured for capturing decisions made and lessons learned
during development of product. This documentation can assist in
development of future products.
Low Development Cost –
Since QFD focuses and pays close attention to customer
requirements and expectations in initial steps itself, so the
chances of late design changes or modifications are highly
reduced, thereby resulting in low product development cost.
Shorter Development Time –
QFD process prevents wastage of time and resources as enough
emphasis is made on customer needs and wants for the product.
Since customer requirements are understood and developed in
right way, so any development of non-value-added features or
unnecessary functions is avoided, resulting in no time waste of
product development team.
A QFD Tool – House of Quality (HOQ):
The House of Quality (HOQ) is a conceptual map or matrix that
provides an understanding of how customer requirements
(WHATs) are related to various technical descriptors or design
parameters (HOWs) and their priority levels. The House of Quality is
also known as the Quality Matrix. The matrix gets its name from the fact
that it represents the shape of a house.

A House of Quality has the following parts :


1. WHATs –
Customer requirements and needs are listed.
Importance Factor –
The team rates each of the customer requirements (WHATs) on a
scale of 1 to 5 based on their level of importance to the customer.
Here, 1 denotes the lowest level and 5 denotes the highest level of
importance to the customer.
2. HOWs or Ceiling –
It comprises design features, technical descriptors and
specifications of product aligned with customer requirements.
3. Body –
HOWs are ranked on the basis of their correlation with satisfying each
of the listed WHATs. The Body Ranking System is a set of symbols
used to show Strong, Moderate, Weak or No correlation
between HOWs and WHATs. Each symbol also represents a
numerical value.
4. Roof –
The roof indicates how the design requirements (HOWs) are related
to each other. The Roof Ranking System uses a set of symbols to
represent different types of correlation – Strong Positive, Positive,
None, Negative or Strong Negative.
5. Competitor Comparison –
This part focuses on comparing competitors' products in regard to
fulfilling the WHATs. This is also measured on a scale of 1 to 5, where
1 denotes Highly Dissatisfied and 5 denotes Highly Satisfied.
6. Relative Importance –
This part gives results by calculating the total score for each of the HOWs:
the importance of each WHAT is multiplied by the value of its Body Ranking
symbol and the products are summed. This part is useful as it allows us
to identify the HOWs of the product which require more attention and
resources.
7. Lower Level or Foundation –
This part of HOQ lists more specific target values for technical
specifications in relation to HOWs in order to satisfy customer
requirements.
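The Relative Importance row can be sketched as a weighted sum. The 9/3/1/0 values for the Strong/Moderate/Weak/None body-ranking symbols are a commonly used convention; the requirements and design descriptors below are invented.

```python
importance = {"easy to use": 5, "fast response": 4}       # WHATs, rated 1-5

# correlation[WHAT][HOW] using the 9 / 3 / 1 / 0 body-ranking values
correlation = {
    "easy to use":   {"GUI redesign": 9, "caching layer": 0},
    "fast response": {"GUI redesign": 1, "caching layer": 9},
}

def relative_importance(importance, correlation):
    """Sum over WHATs of (importance x correlation value) for each HOW."""
    hows = next(iter(correlation.values())).keys()
    return {how: sum(importance[what] * correlation[what][how]
                     for what in importance)
            for how in hows}

print(relative_importance(importance, correlation))
# e.g. GUI redesign: 5*9 + 4*1 = 49;  caching layer: 5*0 + 4*9 = 36
```

The HOW with the highest score (here the hypothetical "GUI redesign") is the design descriptor that deserves the most attention and resources.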

REQUIREMENTS ANALYSIS
Requirements analysis is a very important and essential activity after elicitation. We analyze, refine and scrutinize the gathered
requirements in order to make them consistent and unambiguous. This activity reviews all requirements and may provide a graphical
view of the entire system. After the completion of analysis, it is expected that the understandability of the project will improve
significantly. Here, we may also interact with the customer to clarify points of confusion and to understand which requirements are
more important than others. The various steps of requirements analysis are shown in Fig. 3.4.

Fig. 3.4: Requirements analysis steps.
(i) Draw the context diagram. The context diagram is a simple model that defines the boundaries and interfaces of the
proposed system with the external world. It identifies the entities outside the proposed system that interact with the system. The
context diagram of student result management system (as discussed earlier) is given below:

(Context diagram: the Administrator entity interacts with the system through subject marks entry.)
(ii) Development of a prototype (optional). One effective way to find out what the customer really wants is to construct a
prototype, something that looks and preferably acts like a part of the system they say they want.
We can use their feedback to continuously modify the prototype until the customer is satisfied. Hence, the prototype helps the
client to visualise the proposed system and increases the understanding of requirements. If developers and users are not certain
about some of the requirements, a prototype may help both parties to take a final decision.
Some projects are developed for the general market. In such cases, the prototype should be shown to some representative
sample of the population of potential purchasers. Even though persons who try out a prototype may not buy the final system,
their feedback may allow us to make the product more attractive to others. Some projects are developed for a specific customer
under contract. On such projects, only that customer's opinion counts, so the prototype should be shown to the prospective
users in the customer organisation.
The prototype should be built quickly and at a relatively low cost. Hence it will always have limitations and would not be
acceptable in the final system. This is an optional activity, although many organisations develop prototypes for better
understanding before the finalisation of the SRS.

(iii) Model the requirements. This process usually consists of various graphical representations of the functions, data
entities, external entities and the relationships between them. The graphical view may help to find incorrect, inconsistent, missing
and superfluous requirements. Such models include data flow diagrams, entity relationship diagrams, data dictionaries, state-
transition diagrams etc.

(iv) Finalise the requirements. After modeling the requirements, we will have a better understanding of the system behaviour.
The inconsistencies and ambiguities have been identified and corrected. The flow of data amongst various modules has been analysed.
Elicitation and analysis activities have provided better insight into the system. Now we finalise the analysed requirements, and the next
step is to document these requirements in a prescribed format.

Data Flow Diagrams


Data flow diagrams (DFD) are used widely for modeling the requirements. They have been used for years prior to the
advent of computers. DFDs show the flow of data through a system. The system may be a company, an organization, a set of
procedures, a computer hardware system, a software system, or any combination of the preceding. The DFD is also known as a data
flow graph or a bubble chart.

The following observations about DFDs are important [DAV190]:

1. All names should be unique. This makes it easier to refer to items in the DFD.

2. Remember that a DFD is not a flow chart. Arrows in a flow chart represent the order of events; arrows in DFD
represent flowing data. A DFD does not imply any order of events.

3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a DFD, suppress that urge! A
diamond-shaped box is used in flow charts to represent decision points with multiple exit paths, of which only one is taken. This
implies an ordering of events, which makes no sense in a DFD.

4. Do not become bogged down with details. Defer error conditions and error handling until the end of the analysis.

Standard symbols for DFDs are derived from electric circuit diagram analysis and are shown in Fig. 3.5 [SAGE90].

Symbol Name and Description

Data Flow: Used to connect processes to each other and to sources or sinks; the arrowhead indicates the direction of data flow.

Process: Performs some transformation of input data to yield output data.

Source or Sink (External Entity): A source of system inputs or a sink of system outputs.

Data Store: A repository of data; the arrowheads indicate net inputs and net outputs to the store.

Fig. 3.5: Symbols for data flow diagrams

A circle (bubble) shows a process that transforms data inputs into data outputs. A curved line shows the flow of data into or out of a process or
data store. A set of parallel lines shows a place for the collection of data items. A data store indicates that data is stored and can
be used at a later stage or by other processes in a different order. The data store can hold an element or a group of elements. A source or
sink is an external entity that acts as a source of system inputs or a sink of system outputs.

Rules for creating DFD


• The name of each entity should be easy to understand without any extra assistance (such as comments).
• The processes should be numbered or put in an ordered list so that they can be referred to easily.
• The DFD should maintain consistency across all DFD levels.
• A single DFD should have at most 9 and at least 3 processes.
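The rules above can be sketched as a small validity check over a DFD represented as a directed graph. This is an illustrative sketch only; the node and flow names below (a fragment of a result management system) are invented for the example.

```python
# Hypothetical sketch: a DFD as nodes plus data flows, with checks for
# the rules listed above. Names and structure are invented for illustration.
from collections import namedtuple

Node = namedtuple("Node", "name kind")   # kind: "process", "entity", "store"
Flow = namedtuple("Flow", "src dst data")

def validate_dfd(nodes, flows):
    errors = []
    names = [n.name for n in nodes]
    # Rule: all names should be unique, so items are easy to refer to.
    if len(names) != len(set(names)):
        errors.append("duplicate node names")
    # Rule of thumb: a single DFD should have between 3 and 9 processes.
    n_proc = sum(1 for n in nodes if n.kind == "process")
    if not 3 <= n_proc <= 9:
        errors.append(f"{n_proc} processes (expected 3..9)")
    # Every flow must connect declared nodes.
    for f in flows:
        if f.src not in names or f.dst not in names:
            errors.append(f"flow '{f.data}' references unknown node")
    return errors

nodes = [Node("Student", "entity"), Node("Validate marks", "process"),
         Node("Compute result", "process"), Node("Print result", "process"),
         Node("Marks file", "store")]
flows = [Flow("Student", "Validate marks", "marks"),
         Flow("Validate marks", "Marks file", "valid marks"),
         Flow("Marks file", "Compute result", "valid marks"),
         Flow("Compute result", "Print result", "result")]
print(validate_dfd(nodes, flows))  # → [] when all rules hold
```

Note that the flows carry no ordering information; the check deliberately says nothing about the sequence of events, in line with rule 2 above.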
Levels of DFD
DFDs use hierarchy to maintain transparency; thus multilevel DFDs can be created. The levels of DFD are as follows:
• 0-level DFD
• 1-level DFD
• 2-level DFD
Advantages of DFD
• It helps us to understand the functioning and the limits of a system.
• It is a graphical representation that is very easy to understand, as it helps visualize the contents.
• A data flow diagram represents a detailed and well-explained view of the system components.
• It is used as part of the system documentation.
• Data flow diagrams can be understood by both technical and non-technical people because they are very easy to understand.
Disadvantages of DFD
• At times a DFD can confuse the programmers regarding the system.
• A data flow diagram takes a long time to generate, and for this reason analysts are often denied permission to work on it.

Leveling
The DFD may be used to represent a system or software at any level of abstraction. In fact, DFDs may be partitioned into
levels that represent increasing information flow and functional detail. A level-0 DFD, also called a fundamental system model or
context diagram, represents the entire software element as a single bubble with input and output data indicated by incoming and
outgoing arrows, respectively [PRES2K]. Then the system is decomposed and represented as a DFD with multiple bubbles. Parts of
the system represented by each of these bubbles are then decomposed and documented as more and more detailed DFDs. This
process may be repeated at as many levels as necessary until the problem at hand is well understood. It is important to preserve the
number of inputs and outputs between levels; this concept is called leveling by DeMarco. Thus, if bubble "A" has two inputs and
one output y, then the expanded DFD that represents "A" should have exactly two external inputs and one external output, as
shown in Fig. 3.6 [DEMA79, DAVI90].

[Fig. 3.6: Leveling of a DFD]

The level-0 DFD, also called the context diagram, of the result management system is shown in Fig. 3.7. As the bubbles are
decomposed into less and less abstract bubbles, the corresponding data flows may also need to be decomposed. The level-1 DFD of the result
management system is given in Fig. 3.8.

This provides a detailed view of requirements and flow of data from one bubble to the another.
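DeMarco's leveling rule can also be checked mechanically: the external flows of an expanded (child) diagram must match the data flows attached to the parent bubble. A minimal sketch, with hypothetical flow names:

```python
# Illustrative sketch of the leveling rule: the flows crossing the child
# diagram's boundary must equal the flows attached to the parent bubble.
def check_leveling(parent_inputs, parent_outputs, child_flows):
    """child_flows: list of (direction, data) pairs, where direction is
    'in' or 'out' for flows crossing the child diagram's boundary."""
    child_in = {data for direction, data in child_flows if direction == "in"}
    child_out = {data for direction, data in child_flows if direction == "out"}
    return child_in == set(parent_inputs) and child_out == set(parent_outputs)

# Bubble "A" has two inputs and one output; its expansion must show
# exactly the same external flows.
ok = check_leveling(["x1", "x2"], ["y"],
                    [("in", "x1"), ("in", "x2"), ("out", "y")])
print(ok)  # → True
```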

[Figs. 3.7 and 3.8: Level-0 and level-1 DFDs of the result management system (labels: data entry, marks entry operator, marks entry generated)]
Data Dictionary

The data dictionary is a major component in the structured analysis
model of the system. A data dictionary in software engineering
means a file or a set of files that holds a database's metadata
(records about other objects in the database), such as data
ownership, relationships of the data to other objects, and other
details.

Components of Data Dictionary:

In Software Engineering, the data dictionary contains the following


information as follows.

• Name of the data item


• Aliases
• Description/purpose
• Related data items
• Range of values

Name of data item: The name of the data item is self-explanatory.

Aliases: Other names by which the data item is known.

Description/purpose: A description of what the data item is about.

Related data items: Captures relationships between data items, e.g.,
total_marks must always equal internal_marks plus
external_marks.

Range of values: Records all possible values, e.g., total marks must be
positive and between 0 and 100.
The mathematical operators used within the data dictionary are
defined in the table:

Notation Meaning

x = a + b x consists of data elements a and b.

x = [a | b] x consists of either data element a or data element b.

x = (a) x consists of optional data element a.

x = y{a} x consists of y or more occurrences of data element a.

x = {a}z x consists of z or fewer occurrences of data element a.

x = y{a}z x consists of between y and z occurrences of data element a.
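As an illustration, a data dictionary entry and the two example rules from the text (total_marks equals internal_marks plus external_marks, and total marks lie between 0 and 100) can be expressed directly in code. The field and variable names below are assumptions for the example, not part of any standard:

```python
# A data dictionary entry sketched as a small record, with its
# "related data items" and "range of values" rules checked in code.
entry = {
    "name": "total_marks",
    "aliases": ["marks_obtained"],
    "description": "Total marks awarded to a student in a course",
    "related": "total_marks = internal_marks + external_marks",
    "range": (0, 100),
}

def valid_total(internal_marks, external_marks, total_marks, rng=(0, 100)):
    lo, hi = rng
    # The 'related' rule: the total must equal the sum of its parts,
    # and the 'range' rule: the total must fall within the stated bounds.
    return (total_marks == internal_marks + external_marks
            and lo <= total_marks <= hi)

print(valid_total(30, 55, 85))   # → True
print(valid_total(30, 55, 90))   # → False (violates the 'related' rule)
```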

Features of Data Dictionary :

• It helps in designing test cases and designing the software.

• It is very important for creating an ordered list from a subset of
the items list.
• It is very important for creating an ordered list from the complete
items list.
• The data dictionary is also important for finding a specific data
item object from the list.

Uses of Data Dictionary :

• Used for creating the ordered list of data items


• Used for creating the ordered list of a subset of the data items
• Used for Designing and testing of software in Software
Engineering
• Used for finding data items from a description in Software
Engineering
ER Diagrams

ER-modeling is a data modeling method used in software


engineering to produce a conceptual data model of an information
system. Diagrams created using this ER-modeling method are called
Entity-Relationship Diagrams or ER diagrams or ERDs.

Purpose of ER Diagram

▪ The database analyst gains a better understanding of the data


to be contained in the database through the step of
constructing the ERD.
▪ The ERD serves as a documentation tool.
▪ Finally, the ERD is used to communicate the logical structure of the
database to users. In particular, the ERD effectively
communicates the logic of the database to users.

The components of an ER diagram:

• Entity
• Attributes
• Relationship

1. ENTITY

An entity can be a real-world object, either animate or inanimate,
that is easily identifiable. An entity is denoted as a rectangle in
an ER diagram. For example, in a school database, students,
teachers, classes, and courses offered can be treated as entities. All
these entities have some attributes or properties that give them
their identity.

Entity Set

An entity set is a collection of similar types of entities. An entity set
may contain entities whose attributes share similar values. For
example, a Students set may contain all the students of a school;
likewise, a Teachers set may contain all the teachers of a school from
all faculties. Entity sets need not be disjoint.

2. ATTRIBUTES

Entities are represented by means of their properties, called attributes.


All attributes have values. For example, a student entity may have
name, class, and age as attributes.

There exists a domain or range of values that can be assigned to


attributes. For example, a student's name cannot be a numeric value.
It has to be alphabetic. A student's age cannot be negative, etc.

There are five types of Attributes:

1. Key attribute
2. Composite attribute
3. Single-valued attribute
4. Multi-valued attribute
5. Derived attribute

1. Key Attribute

Key is an attribute or collection of attributes that uniquely identifies an entity


among the entity set. For example, the roll_number of a student makes him
identifiable among students.

There are mainly three types of keys:

1. Super key: A set of attributes that collectively identifies an entity in the


entity set.
2. Candidate key: A minimal super key is known as a candidate key. An entity
set may have more than one candidate key.
3. Primary key: A primary key is one of the candidate keys chosen by the
database designer to uniquely identify the entity set.
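The distinction between a super key and a candidate key (a minimal super key) can be illustrated by brute force over a toy relation. The sample rows and attribute names below are invented for the sketch:

```python
# Hedged sketch: finding candidate keys of a small relation by brute force.
# A super key uniquely identifies every row; a candidate key is a minimal
# super key (no proper subset of it is also a super key).
from itertools import combinations

rows = [
    {"roll_number": 1, "name": "Asha", "class": "B"},
    {"roll_number": 2, "name": "Ravi", "class": "B"},
    {"roll_number": 3, "name": "Asha", "class": "C"},
]

def is_super_key(attrs, rows):
    # A super key projects each row onto a distinct tuple of values.
    seen = {tuple(r[a] for a in attrs) for r in rows}
    return len(seen) == len(rows)

def candidate_keys(rows):
    attrs = list(rows[0])
    keys = []
    for size in range(1, len(attrs) + 1):
        for combo in combinations(attrs, size):
            # Minimality: skip combinations that contain a known key.
            if is_super_key(combo, rows) and \
               not any(set(k) <= set(combo) for k in keys):
                keys.append(combo)
    return keys

print(candidate_keys(rows))  # → [('roll_number',), ('name', 'class')]
```

Here roll_number alone is a candidate key, and the pair (name, class) happens to be one too for this sample data; the designer would typically pick roll_number as the primary key.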

2. Composite attribute: An attribute that is a combination of other attributes is


called a composite attribute. For example, In student entity, the student address is
a composite attribute as an address is composed of other characteristics such as
pin code, state, country.

3. Single-valued attribute: Single-valued attribute contain a single value. For


example, Social_Security_Number.

4. Multi-valued Attribute: If an attribute can have more than one value, it is


known as a multi-valued attribute. Multi-valued attributes are depicted by the
double ellipse. For example, a person can have more than one phone number,
email-address, etc.

5. Derived attribute: Derived attributes are attributes that do not exist in the
physical database; their values are derived from other attributes present in the
database. For example, age can be derived from date_of_birth. In the ER diagram,
derived attributes are depicted by a dashed ellipse.
3. Relationship

The association among entities is known as a relationship. Relationships are


represented by the diamond-shaped box. For example, an employee works_at a
department, a student enrolls in a course. Here, Works_at and Enrolls are called
relationships.

Relationship Set:

A set of relationships of a similar type is known as a relationship set. Like entities,


a relationship too can have attributes. These attributes are called descriptive
attributes.

Degree of Relationship:

1. Unary (degree1)
2. Binary (degree2)
3. Ternary (degree3)

1. Unary relationship: This is also called a recursive relationship. It is a relationship
between the instances of one entity type. For example, one person is married to
only one person.
2. Binary relationship: It is a relationship between the instances of two entity
types. For example, the Teacher teaches the subject.

3. Ternary relationship: It is a relationship amongst instances of three entity types.
For example, a relationship "may have" can provide the association of three entities, i.e.,
TEACHER, STUDENT, and SUBJECT. All three entities are many-to-many
participants. There may be one or many participants in a ternary relationship.

Cardinality

Cardinality describes the number of entities in one entity set, which can be
associated with the number of entities of other sets via relationship set.
Types of cardinalities

1. One to one: One entity from entity set A can be associated with at most one
entity of entity set B, and vice versa. For example, each student has only one
student ID, and each student ID is assigned to only one person, so the relationship
is one to one.

2. One to many: When a single instance of an entity is associated with more than
one instance of another entity, it is called a one-to-many relationship. For
example, a client can place many orders, but an order cannot be placed by many
clients.

3. Many to One: More than one entity from entity set A can be associated with at
most one entity of entity set B, however an entity from entity set B can be
associated with more than one entity from entity set A. For example - many
students can study in a single college, but a student cannot study in many colleges
at the same time.
4. Many to Many: One entity from A can be associated with more than one entity
from B and vice-versa. For example, the student can be assigned to many projects,
and a project can be assigned to many students.
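The many-to-one case above (many students, one college) can be sketched with Python's built-in sqlite3 module, where a foreign key on the "many" side enforces the cardinality. The table and column names are invented for illustration:

```python
# Sketch: many students reference one college; the foreign key rejects
# a student row that points at a non-existent college.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE college (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE student (
                 id INTEGER PRIMARY KEY,
                 name TEXT,
                 college_id INTEGER NOT NULL REFERENCES college(id))""")

con.execute("INSERT INTO college VALUES (1, 'City College')")
con.executemany("INSERT INTO student VALUES (?, ?, ?)",
                [(1, 'Asha', 1), (2, 'Ravi', 1)])  # many students -> one college

# A student cannot belong to a college that does not exist.
try:
    con.execute("INSERT INTO student VALUES (3, 'Meena', 99)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

A many-to-many relationship (students and projects) would instead use a third, associative table holding pairs of foreign keys.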

Prototyping

It is a technique of constructing a partial implementation of a system

so that customers, developers or users can learn more about a
problem or its solution.

Two Approaches:

>Throwaway Prototyping

>Evolutionary Prototyping

Throwaway Prototyping

As its name suggests, a throwaway prototype is ‘thrown out’


once the product design is finalised. Also known as rapid
prototyping, this technique is carried out over a short time. The
designer can quickly hash out some design ideas in this phase,
somewhat like a rough sketch. Some of the key functionality may
also be incorporated into the software prototype.
Once the client and the developer have understood the initial
requirements with the help of the prototype and know what to
expect in the final product, this prototype is either partially
reused or discarded completely. Thus, throwaway prototyping is
fast and low-effort, allowing quick feedback gathering and
incorporation.

Evolutionary Prototyping

The name for this type of software prototyping is also quite self-
explanatory. An evolutionary prototype is much more functional
than a throwaway prototype, with some primary features coded
into it from the get-go instead of it being a mere dummy focused
solely on design. The screens (user interface) also have actual
code behind them. The user can see and interact with the
prototype as if it were the actual product. Over time and
multiple feedback cycles, the prototype may have more
advanced functionality added to it as needed by the client. The
process thus results in the finished product.
Requirement Documentation
Requirements documentation is a very important activity after requirements elicitation and
analysis. It is the way to represent requirements in a consistent format. Requirements
document is called Software Requirement Specification(SRS).
The SRS is a specification for a particular software product, program or set of programs that
performs certain functions in a specific environment. It serves a number of purposes depending
on who is writing it. First, the SRS could be written by the customer of a system. Second, the SRS
could be written by the developer of the system. The two scenarios create entirely different
situations and establish entirely different purposes for the document. In the first case, the SRS is used to
define the needs and expectations of the users. In the second case, the SRS is written for a different
purpose and serves as a contract document between customer and developer.
This reduces the probability of the customer being disappointed with the final product. The SRS
written by the developer (the second case) is of our interest and is discussed in the subsequent sections.

Nature of the SRS


The basic issues that SRS writers shall address are the following:
1. Functionality : What is the software supposed to do?
2. External Interfaces : How does the software interact with people, the system’s
hardware, other hardware, and other software?
3. Performance : What is the speed, availability, response time, recovery time, etc. of
various software functions?
4. Attributes : What are the considerations for portability, correctness, maintainability,
security, reliability etc.?
5. Design constraints imposed on an implementation : Are there any required standards
in effect, implementation language, policies for database integrity, resource limits,
operating environments etc.?
Since the SRS has a specific role to play in the software development process, SRS writers
should be careful not to go beyond the bounds of that role. This means the SRS :
1. should correctly define all the software requirements. A software requirement may exist
because of the nature of the task to be solved or because of a special characteristic of
the project.
2. should not describe any design or implementation details. These should be described in
the design stage of the project.
3. should not impose additional constraints on the software. These are properly specified
in other documents such as a software quality assurance plan.
Therefore, a properly written SRS limits the range of valid designs, but does not specify any
particular design.

Characteristics of a good SRS


The SRS should be :

• Correct
• Unambiguous
• Complete
• Consistent
• Ranked for importance and/or stability
• Verifiable
• Modifiable
• Traceable
Each of the above mentioned characteristics is discussed below :

Correct
The SRS is correct if and only if every requirement stated therein is one that the software shall
meet. There is no tool or procedure that assures correctness. If the software must respond to
all button presses within 5 seconds and the SRS stated that “the software shall respond to all
button presses within 10 seconds”, then that requirement is incorrect.

Unambiguous
The SRS is unambiguous if and only if every requirement stated therein has only one
interpretation. Each sentence in the SRS should have unique interpretation. Imagine that a
sentence extracted from the SRS, is given to 10 people who are asked for their interpretation. If
there is more than one such interpretation, then that sentence is probably ambiguous.
In cases where a term used in a particular context could have multiple meanings, the term
should be included in a glossary where its meaning is made more specific. The SRS should be
unambiguous to both those who create it and to those who use it.
Requirements are often written in natural language (e.g., English). Natural language is inherently
ambiguous. A natural language SRS should be reviewed by an independent party to identify
ambiguous use of language so that it can be corrected. This can also be avoided by using a
particular requirement specification language, whose language processors automatically detect
many lexical, syntactic, and semantic errors.
Complete
The SRS is complete if, and only if, it includes the following elements :
1. All significant requirements, whether relating to functionality, performance, design
constraints, attributes or external interfaces.
2. Definition of the responses of the software to all realizable classes of input data in all
realizable classes of situations. Note that it is important to specify the responses to both
valid and invalid input values.
3. Full labels and references to all figures, tables and diagrams in the SRS, and definition of
all terms and units of measure.

Consistent
The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict.
There are three types of likely conflicts in the SRS :
1. The specified characteristics of real-world objects may conflict. For example,
a. The format of an output report may be described in one requirement as tabular but
in another as textual.
b. One requirement may state that all lights shall be green while another states that all
lights shall be blue.
2. There may be logical or temporal conflict between two specified actions, for example,
a. One requirement may specify that the program will add two inputs and another may
specify that the program will multiply them.
b. One requirement may state that “A” must always follow “B”, while another requires
that “A and B” occur simultaneously.
3. Two or more requirements may describe the same real-world object but use different
terms for that object. For example, a program’s request for a user input may be called a
“prompt” in one requirement and a “cue” in another. The use of standard terminology
and definitions promotes consistency.

Ranked for importance and/or stability


The SRS is ranked for importance and/or stability if each requirement in it has an identifier to
indicate either the importance or the stability of that particular requirement.
Typically, all requirements are not equally important. Some requirements may be essential,
especially for life-critical applications, while others may be desirable. Each requirement should
be identified to make these differences clear and explicit. Another way to rank requirements is
to distinguish classes of requirements as essential, conditional and optional.
Verifiable
The SRS is verifiable if and only if every requirement stated therein is verifiable. A requirement is
verifiable if and only if there exists some finite cost-effective process with which a person or
machine can check that the software meets the requirements. In general, any ambiguous
requirement is not verifiable.
Non-verifiable requirements include statements such as “works well”, “good human interface”,
and “shall usually happen”. These requirements cannot be verified because it is impossible to
define the terms “good”, ”well” , or ”usually”. The statement that “the program shall never
enter an infinite loop” is non-verifiable because the testing of this quality is theoretically
impossible.
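By contrast, a requirement such as "the software shall respond to all button presses within 5 seconds" is verifiable precisely because a finite, mechanical check exists for it. A minimal sketch, in which the requirement identifier and the handler are both invented stand-ins:

```python
# Sketch: a verifiable timing requirement expressed as a finite,
# automated check against a stand-in handler.
import time

RESPONSE_LIMIT_S = 5.0   # from the (hypothetical) requirement REQ-17

def handle_button_press():
    time.sleep(0.01)     # stand-in for the real work done per press
    return "ok"

def verify_response_time(handler, limit=RESPONSE_LIMIT_S, trials=10):
    # Run the handler a fixed number of times; fail if any call is too slow.
    for _ in range(trials):
        start = time.perf_counter()
        handler()
        if time.perf_counter() - start > limit:
            return False
    return True

print(verify_response_time(handle_button_press))  # → True
```

No comparable check can be written for "works well" or "shall never enter an infinite loop", which is exactly what makes those requirements non-verifiable.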

Modifiable
The SRS is modifiable if and only if its structure and style are such that any changes to the
requirements can be made easily, completely and consistently while retaining the structure and
style.

The requirements should not be redundant. Redundancy itself is not an error but, it can easily
lead to errors. Redundancy can occasionally help to make an SRS more readable, but a problem
can arise when the redundant document is updated. For instance, a requirement may be
altered in only one of the places out of many places where it appears.
The SRS then becomes inconsistent. Whenever redundancy is necessary, the SRS should include
explicit cross-references to make it modifiable.

Traceable
The SRS is traceable if the origin of each of its requirements is clear and if it facilitates the
referencing of each requirement in future development or enhancement documentation. Two
types of traceability are recommended :
1. Backward traceability : This depends upon each requirement explicitly referencing its
source in earlier documents.
2. Forward traceability : This depends upon each requirement in the SRS having a unique
name or reference number.
The forward traceability of the SRS is especially important when the software product enters
the operation and maintenance phase. As code and design documents are modified, it is
essential to be able to ascertain the complete set of requirements that may be affected by
those modifications.
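Forward traceability (unique requirement identifiers) makes the maintenance-phase question of which requirements are affected by a change answerable by a simple lookup. A sketch of a traceability matrix, with invented identifiers, sources and artifact names:

```python
# Sketch: a traceability matrix mapping each uniquely named requirement
# to its source (backward traceability) and to the design and code
# artifacts that realise it (forward traceability).
trace = {
    "REQ-01": {"source": "customer interview notes, p. 3",
               "artifacts": ["design/login.md", "src/login.py"]},
    "REQ-02": {"source": "contract clause 4.2",
               "artifacts": ["design/report.md", "src/report.py"]},
}

def affected_requirements(artifact, trace):
    """If this artifact changes, which requirements may be affected?"""
    return [rid for rid, t in trace.items() if artifact in t["artifacts"]]

print(affected_requirements("src/report.py", trace))  # → ['REQ-02']
```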
Organization of the SRS

The Institute of Electrical and Electronics Engineers
(IEEE) has published guidelines and standards to
organize an SRS document [IEEE87, IEEE94].
Different projects may require their requirements to be
organized differently; that is, there is no one method
that is suitable for all projects. The standard provides different ways
of structuring the SRS. The first two sections of the SRS
are the same in all of them.
The specific tailoring occurs in section 3, entitled "Specific
Requirements".
The general organization of an SRS is given below.

1. Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions, Acronyms, and Abbreviations
1.4 References
1.5 Overview
2. The Overall Description
2.1 Product Perspective
2.1.1 System Interfaces
2.1.2 Interfaces
2.1.3 Hardware Interfaces
2.1.4 Software Interfaces
2.1.5 Communications Interfaces
2.1.6 Memory Constraints
2.1.7 Operations
2.1.8 Site Adaptation Requirements
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
2.5 Assumptions and Dependencies.
2.6 Apportioning of Requirements
3. Specific Requirements
3.1 External interfaces
3.2 Functions
3.3 Performance Requirements
3.4 Logical Database Requirements
3.5 Design Constraints
3.5.1 Standards Compliance
3.6 Software System Attributes
3.6.1 Reliability
3.6.2 Availability
3.6.3 Security
3.6.4 Maintainability
3.6.5 Portability
3.7 Organizing the Specific Requirements
3.7.1 System Mode
3.7.2 User Class
3.7.3 Objects
3.7.4 Feature
3.7.5 Stimulus
3.7.6 Response
3.7.7 Functional Hierarchy
3.8 Additional Comments
4. Change Management Process
5. Document Approvals
6. Supporting Information

Fig. 3.18: Organisation of SRS [IEEE Std. 830-1993].

1. Introduction
The following subsections of the Software Requirements Specifications(SRS) document should
provide an overview of the entire SRS.
1.1 Purpose
Identify the purpose of this SRS and its intended audience. 1n this subsection, describe the
purpose of the particular SRS and specify the intended audiettc e for the SRS.
1.2 Scope
In this subsection:
(i) Identify the software product(s) to be produced by name
(ii) Explain what the software product(s) will, and, if necessary, will not do
(iii) Describe the application of the software being specified, including relevant benefits,
objectives, and goals
(iv) Be consistent with similar statements in higher-level specifications if they exist.

1.3 Definitions, Acronyms, and Abbreviations


Provide the definitions of all terms, acronyms, and abbreviations required to properly interpret
the SRS. This information may be provided by reference to one or more appendices in the SRS
or by reference to other documents.
1.4 References
In this subsection:
(i) Provide a complete list of all documents referenced elsewhere in the SRS
(ii) Identify each document by title, report number (if applicable), date, and
publishing organization
(iii) Specify the sources from which the references can be obtained.
This information can be provided by reference to an appendix or to another document.
1.5 Overview
In this subsection:
(i) Describe what the rest of the SRS contains
(ii) Explain how the SRS is organized.

2. The Overall Description


Describe the general factors that affect the product and its requirements. This section does not
state specific requirements. Instead, it provides a background for those requirements, which
are defined in section 3, and makes them easier to understand.
2.1 Product Perspective

Put the product into perspective with other related products. If the product is independent
and totally self-contained, it should be so stated here. If the SRS defines a product that is a
component of a larger system, as frequently occurs, then this subsection relates the requirements
of the larger system to the functionality of the software and identifies interfaces between
that system and the software.
A block diagram showing the major components of the larger system, interconnections, and
external interfaces can be helpful.
The following subsections describe how the software operates inside various constraints.
2.1.1 System Interfaces
Listeach system interface and identify the functionality of the software toaccomplish the _s [Link]
reqtJirement and the interface description to match the system.
2.1.2 Interfaces
Specify:
(i) The logical characteristics of each interface between the software product and its
users.
(ii) All the aspects of optimizing the interface with the person who must use the system.

2.1.3 Hardware Interfaces


Specify the logical characteristics of each interface between the software product and the
hardware components of the system. This includes configuration characteristics. It also covers such
matters as what devices are to be supported, how they are to be supported, and protocols.
2.1.4 Software Interfaces
Specify the use of other required software products and interfaces with other application
systems. For each required software product, include:
(i) Name
(ii) Mnemonic
(iii) Specification number
(iv) Version number
(v) Source
For each interface, provide:
(i) Discussion of the purpose of the interfacing software as related to this software product
(ii) Definition of the interface in terms of message content and format.

2.1.5 Communications Interfaces


Specify the various interfaces to communications, such as local network protocols, etc.

2.1.6 Memory Constraints


Specify any applicable characteristics and limits on primary and secondary memory.
2.1.7 Operations
Specify the normal and special operations required by the user such as:
(i) The various modes of operations in the user organization
(ii) Periods of interactive operations and periods of unattended operations


(iii) Data processing support functions
(iv) Backup and recovery operations.

2.1.8 Site Adaptation Requirements


In this section:
(i) Define the requirements for any data or initialization sequences that are specific to
a given site, mission, or operational mode
(ii) Specify the site or mission-related features that should be modified to adapt the
software to a particular installation.
2.2 Product Functions
Provide a summary of the major functions that the software will perform. Sometimes
the function summary that is necessary for this part can be taken directly from the section of
the higher-level specification (if one exists) that allocates particular functions to the software
product.
For clarity:
(i) The functions should be organized in a way that makes the list of functions
understandable to the customer or to anyone else reading the document for the first time.
(ii) Textual or graphic methods can be used to show the different functions and their
relationships. Such a diagram is not intended to show a design of a product but
simply shows the logical relationships among variables.
2.3 User Characteristics
Describe those general characteristics of the intended users of the product, including educational
level, experience, and technical expertise. Do not state specific requirements but rather
provide the reasons why certain specific requirements are later specified in section 3.
2.4 Constraints
Provide a general description of any other items that will limit the developer's options. These
can include:
(i) Regulatory policies
(ii) Hardware limitations (for example, signal timing requirements)
(iii) Interface to other applications
(iv) Parallel operation
(v) Audit functions
(vi) Control functions
(vii) Higher-order language requirements
(viii) Signal handshake protocols (for example, XON-XOFF, ACK-NACK)
(ix) Reliability requirements
(x) Criticality of the application
(xi) Safety and security considerations.


2.5 Assumptions and Dependencies


List each of the factors that affect the requirements stated in the SRS. These factors are not
design constraints on the software; rather, any changes to them can affect the
requirements in the SRS. For example, an assumption might be that a specific operating
system would be available on the hardware designated for the software product. If, in fact,
the operating system were not available, the SRS would then have to change accordingly.
2.6 Apportioning of Requirements
Identify requirements that may be delayed until future versions of the system.

3. Specific Requirements
This section contains all the software requirements at a level of detail sufficient to enable
designers to design a system to satisfy those requirements, and testers to test that the system
satisfies those requirements. Throughout this section, every stated requirement should be
externally perceivable by users, operators, or other external systems. These requirements should
include at a minimum a description of every input into the system, every output from the
system, and all functions performed by the system in response to an input or in support of an
output. The following principles apply:
(i) Specific requirements should be stated with all the characteristics of a good SRS
• correct
• unambiguous
• complete
• consistent
• ranked for importance and/or stability
• verifiable
• modifiable
• traceable
(ii) Specific requirements should be cross-referenced to earlier documents that relate
(iii) All requirements should be uniquely identifiable
(iv) Careful attention should be given to organizing the requirements to
maximize readability.
Before examining specific ways of organizing the requirements, it is helpful to understand
the various items that comprise requirements, as described in the following subsections.
3.1 External Interfaces
This contains a detailed description of all inputs into and outputs from the software system. It
complements the interface descriptions in section 2 but does not repeat information there.
It contains both content and format as follows:
• Name of item
• Description of purpose
• Source of input or destination of output
• Valid range, accuracy and/or tolerance
• Units of measure

• Timing
• Relationships to other inputs/outputs
• Screen formats/organization
• Window formats/organization
• Data formats
• Command formats
• End messages.
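To make the checklist concrete, the content-and-format fields above can be captured in a small record type. The sketch below (in Python, with invented field names and values) is illustrative only, not part of the IEEE template:

```python
from dataclasses import dataclass

@dataclass
class ExternalInterfaceItem:
    """One input or output item, following the content/format checklist above."""
    name: str                      # Name of item
    purpose: str                   # Description of purpose
    source_or_destination: str     # Source of input or destination of output
    valid_range: tuple             # Valid range (low, high)
    units: str                     # Units of measure
    timing: str = "on demand"      # Timing
    data_format: str = "integer"   # Data format

# Example: a temperature reading received from an external sensor (invented)
temp_input = ExternalInterfaceItem(
    name="boiler_temperature",
    purpose="Current boiler temperature used by the control loop",
    source_or_destination="Sensor unit S-1",
    valid_range=(0, 400),
    units="degrees Celsius",
)
print(temp_input.name, temp_input.valid_range)
```

Recording interface items in a uniform structure like this makes it easy to check that no field of the checklist has been forgotten.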
3.2 Functions
Functional requirements define the fundamental actions that must take place in the software
in accepting and processing the inputs and in processing and generating the outputs. These
are generally listed as "shall" statements starting with "The system shall..."
These include:
• Validity checks on the inputs
• Exact sequence of operations
• Responses to abnormal situations, including
• Overflow
• Communication facilities
• Error handling and recovery
• Effect of parameters
• Relationship of outputs to inputs, including
• Input/Output sequences
• Formulas for input-to-output conversion.
It may be appropriate to partition the functional requirements into sub-functions or sub-
processes. This does not imply that the software design will also be partitioned that way.
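As an illustration of how a "shall" statement with a validity check might trace into code, consider this sketch; the requirement text and the function are invented for the example:

```python
# Hypothetical requirement: "The system shall reject any withdrawal amount
# that is not a positive multiple of 10 and shall report the reason."

def validate_withdrawal(amount: int) -> tuple[bool, str]:
    """Validity check on the input, with responses to abnormal situations."""
    if amount <= 0:
        return False, "amount must be positive"
    if amount % 10 != 0:
        return False, "amount must be a multiple of 10"
    return True, "ok"

print(validate_withdrawal(50))   # valid input
print(validate_withdrawal(-5))   # abnormal situation: negative amount
```

Each branch of the function corresponds to one validity check or abnormal-situation response listed in the requirement, which keeps the requirement testable.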
3.3 Performance Requirements
This subsection specifies both the static and the dynamic numerical requirements placed on
the software or on human interaction with the software, as a whole. Static numerical require-
ments may include:
(i) The number of terminals to be supported
(ii) The number of simultaneous users to be supported
(iii) Amount and type of information to be handled
Static numerical requirements are sometimes identified under a separate section entitled capacity.
Dynamic numerical requirements may include, for example, the number of transactions
and tasks and the amount of data to be processed within certain time periods for both normal
and peak workload conditions .
All of these requirements should be stated in measurable terms. For example:
95% of the transactions shall be processed in less than 1 second,
rather than:
An operator shall not have to wait for the transaction to complete.
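A requirement stated in measurable terms can be checked mechanically. The sketch below computes whether 95% of transactions finished within the 1-second limit; the sample times are made up:

```python
def meets_latency_requirement(times_s, limit_s=1.0, fraction=0.95):
    """Return True if at least `fraction` of transaction times are under `limit_s`."""
    within = sum(1 for t in times_s if t < limit_s)
    return within / len(times_s) >= fraction

# Made-up sample of 20 transaction times in seconds; 19 of 20 are under 1 s
sample = [0.2, 0.4, 0.3, 0.5, 0.9, 0.1, 0.6, 0.7, 0.2, 0.3,
          0.4, 0.8, 0.5, 0.2, 0.3, 0.6, 0.4, 0.9, 0.5, 1.4]
print(meets_latency_requirement(sample))  # 19/20 = 0.95, so True
```

The vague formulation ("an operator shall not have to wait") admits no such check, which is exactly why the measurable form is preferred.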


(Note: Numerical limits applied to one specific function are normally specified as part of
the processing subparagraph description of that function.)
3.4 Logical Database Requirements
This section specifies the logical requirements for any information that is to be placed into a
database. This may include:
• Types of information used by various functions
• Frequency of use
• Accessing capabilities
• Data entities and their relationships
• Integrity constraints
• Data retention requirements.
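These logical database requirements can be sketched as a schema. The example below uses SQLite, with invented table names, to show data entities, a relationship, and integrity constraints:

```python
import sqlite3

# Entities, a relationship, and integrity constraints (names are invented).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE student (
        student_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE enrolment (
        student_id INTEGER NOT NULL REFERENCES student(student_id),
        course TEXT NOT NULL,
        grade INTEGER CHECK (grade BETWEEN 0 AND 100)  -- integrity constraint
    )""")
conn.execute("INSERT INTO student VALUES (1, 'Asha')")
conn.execute("INSERT INTO enrolment VALUES (1, 'SE101', 88)")
print(conn.execute("SELECT name, course FROM enrolment "
                   "JOIN student USING (student_id)").fetchall())
```

Note that the SRS records these as logical requirements (which entities exist, how they relate, what values are legal); the concrete schema above is only one possible realization.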
3.5 Design Constraints
Specify design constraints that can be imposed by other standards, hardware limitations, etc.
3.5.1 Standards Compliance
Specify the requirements derived from existing standards or regulations. They might include:
(i) Report format
(ii) Data naming
(iii) Accounting procedures
(iv) Audit tracing
For example, this could specify the requirement for the software to trace processing activity. Such traces
are needed for some applications to meet minimum regulatory or financial standards. An audit
trace requirement may, for example, state that all changes to a payroll database must be
recorded in a trace file with before and after values.
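Such an audit trace requirement might be realized as follows; the payroll structure and field names are invented for illustration:

```python
from datetime import datetime, timezone

audit_trail = []  # stand-in for the trace file

def update_salary(payroll, emp_id, new_salary):
    """Record every change with before and after values, per the audit requirement."""
    before = payroll[emp_id]
    payroll[emp_id] = new_salary
    audit_trail.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "employee": emp_id,
        "before": before,
        "after": new_salary,
    })

payroll = {"E42": 50000}
update_salary(payroll, "E42", 52000)
print(audit_trail[0]["before"], "->", audit_trail[0]["after"])
```

The key property is that every write to the database goes through a path that also appends to the trace, so the before/after history can never silently diverge from the data.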
3.6 Software System Attributes
There are a number of quality attributes of software that can serve as requirements. It is
important that required attributes be specified so that their achievement can be objectively
verified. Fig. 3.19 has the definitions of the quality attributes of the software discussed in this
subsection [ROBE02]. The following items provide a partial list of examples.
3.6.1 Reliability
Specify the factors required to establish the required reliability of the software system at time
of delivery.
3.6.2 Availability
Specify the factors required to guarantee a defined availability level for the entire system such
as checkpoint, recovery, and restart.
3.6.3 Security
Specify the factors that would protect the software from accidental or malicious access, use,
modification, destruction, or disclosure. Specific requirements in this area could include the
need to:

• Utilize certain cryptographic techniques
• Keep specific log or history data sets
• Assign certain functions to different modules
• Restrict communications between some areas of the program
• Check data integrity for critical variables.
3.6.4 Maintainability
Specify attributes of software that relate to the ease of maintenance of the software itself.
There may be some requirement for certain modularity, interfaces, complexity, etc. Require-
ments should not be placed here just because they are thought to be good design practices.
3.6.5 Portability
Specify attributes of software that relate to the ease of porting the software to other host
machines and/or operating systems. This may include:
• Percentage of components with host-dependent code
• Percentage of code that is host dependent
• Use of a proven portable language
• Use of a particular compiler or language subset
• Use of a particular operating system.

S. No. / Quality Attribute / Definition
1. Correctness: extent to which a program satisfies specifications and fulfills the user's mission objectives
2. Efficiency: amount of computing resources and code required to perform a function
3. Flexibility: effort needed to modify an operational program
4. Interoperability: effort needed to couple one system with another
5. Reliability: extent to which a program performs with required precision
6. Reusability: extent to which it can be reused in another application
7. Testability: effort needed to test to ensure performance as intended
8. Usability: effort required to learn, operate, prepare input, and interpret output
9. Maintainability: effort required to locate and fix an error during operation
10. Portability: effort needed to transfer from one hardware or software environment to another
11. Integrity/security: extent to which access to software or data by unauthorised people can be controlled

Fig. 3.19: Definitions of quality attributes.

3.7 Organizing the Specific Requirements
For anything but trivial systems the detailed requirements tend to be extensive. For this rea-
son, it is recommended that careful consideration be given to organizing these in a manner
optimal for understanding. There is no one optimal organization for all systems. Different
classes of systems lend themselves to different organizations of requirements. Some of these
organizations are described in the following subsections.
3.7.1 System Mode
Some systems behave quite differently depending on the mode of operation. When organizing
by mode there are two possible outlines. The choice depends on whether interfaces and
performance are dependent on mode.
3.7.2 User Class
Some systems provide different sets of functions to different classes of users.
3.7.3 Objects
Objects are real-world entities that have a counterpart within the system. Associated with
each object is a set of attributes and functions. These functions are also called services,
methods, or processes. Note that sets of objects may share attributes and services. These are
grouped together as classes.
3.7.4 Feature
A feature is an externally desired service of the system that may require a sequence of inputs
to effect the desired result. Each feature is generally described as a sequence of stimulus-
response pairs.
3.7.5 Stimulus
Some systems can be best organized by describing their functions in terms of stimuli.
3.7.6 Response
Some systems can be best organized by describing their functions in support of the generation
of a response.
3.7.7 Functional Hierarchy
When none of the above organizational schemes prove helpful, the overall functionality can be
organized into a hierarchy of functions organized by either common inputs, common outputs,
or common internal data access. Data flow diagrams and data dictionaries can be used to show
the relationships between and among the functions and data.
3.8 Additional Comments
Whenever a new SRS is contemplated, more than one of the organizational techniques given
in 3.7 may be appropriate. In such cases, organize the specific requirements for multiple
hierarchies tailored to the specific needs of the system under specification.

There are many notations, methods, and automated support tools available to aid in the
documentation of requirements. For the most part, their usefulness is a function of organiza-
tion. For example, when organizing by mode, finite state machines or state charts may prove
helpful; when organizing by object, object-oriented analysis may prove helpful; when organiz-
ing by feature, stimulus-response sequences may prove helpful; when organizing by functional
hierarchy, data flow diagrams and data dictionaries may prove helpful.
In any of the outlines below, those sections called "Functional Requirements" may be
described in native language, in pseudocode, in a system definition language, or in four sub-
sections titled: Introduction, Inputs, Processing, Outputs.

4. Change Management Process


Identify the change management process to be used to identify, log, evaluate, and update the
SRS to reflect changes in project scope and requirements.

5. Document Approval
Identify the approvers of the SRS document. Approver's name, signature, and date should be
used.

6. Supporting Information
The supporting information makes the SRS easier to use. It includes:
• Table of Contents
• Index
• Appendices
The Appendices are not always considered part of the actual requirements specification and
are not always necessary. They may include:
(a) Sample I/O formats, descriptions of cost analysis studies, results of user surveys
(b) Supporting or background information that can help the readers of the SRS
(c) A description of the problems to be solved by the software
(d) Special packaging instructions for the code and the media to meet security, export,
initial loading, or other requirements.
When Appendices are included, the SRS should explicitly state whether or not the Appendices
are to be considered part of the requirements.
Tables on the following pages provide alternate ways to structure section 3 on the specific
requirements.

REQUIREMENTS REVIEW PROCESS

[Figure: Requirements review process: plan the review, distribute the SRS and review documents, read the documents, organise the review, follow up on actions, and revise the documents.]
MODULE:3
Software Design

Software design is a mechanism to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation. It deals with representing the
client's requirements, as described in the SRS (Software Requirement Specification) document,
in a form that is easily implementable using a programming language.

1. Conceptual Design:
Conceptual design is the initial phase in the process of planning, during which the broad
outlines of the function and form of something are articulated. It tells the customers what the
system will actually do.
Common methods used for conceptual design are:
• Wireframes

• Mockups & Flow chart

• Component diagrams

• Class-Responsibility-Collaboration (CRC) cards.

2. Technical Design:
Technical design is a phase in which the development team writes the code and describes the
minute details of either the whole design or some parts of it. It tells the designers how the
system will actually work.
Common methods of technical design are:
• Class Diagrams

• Activity diagram

• Sequence diagram

• State Diagram
Objectives of Software Design

1. Correctness:
A good design should be correct i.e. it should correctly implement all the functionalities
of the system.
2. Efficiency:
A good software design should address the resources, time and cost optimization issues.
3. Understandability:
A good design should be easily understandable, for which it should be modular and all
the modules are arranged in layers.
4. Completeness:
The design should have all the components like data structures, modules, and external
interfaces, etc.
5. Maintainability:
A good software design should be easily amenable to change whenever a change
request is made from the customer side.

Importance of the Software Design


1. Better designed software is more Flexible. So, you can add a new component to the
existing software without affecting the existing software.
2. Well-designed software increases Reusability. Because if you follow design patterns
strictly, your software would be more modular, i.e., consisting of small components that
do only one task. So, these small components can be reused easily.
3. Easy to understand. It has been always a headache to explain projects to new
hires/team members. But if you have good design & documentation, you can easily
communicate the idea of the software to your new team member.
4. Cost-efficiency is increased. You might be wondering how design can affect the
cost of the software. To understand this, consider a situation in which you and your team
started building software based on some assumptions, but after developing 50% of the
software, you realize that you’ve met a dead-end and you can’t go ahead with those
assumptions. Now, you’ll have to start the software again which would definitely be
very costly. So, if you’ve focused on the design first, you could have figured out the
dead-end earlier and saved a lot of time, work-force and money. Always remember,
“Designing is far more cost-efficient than developing”
MODULARITY

There are many definitions of the term module. Range is from

i. Fortran subroutine
ii. Ada package
iii. Procedures & functions of PASCAL & C
iv. C++ / Java classes
v. Java packages
vi. Work assignment for an individual programmer

All these definitions are correct. A modular system consists of well-defined,
manageable units with well-defined interfaces among the units.

Properties :

i. Well defined subsystem


ii. Well defined purpose
iii. Can be separately compiled and stored in a library.
iv. Module can use other modules
v. Module should be easier to use than to build
vi. Simpler from outside than from the inside

Modularity is the single attribute of software that allows a program to be


intellectually manageable. It enhances design clarity, which in turn eases
implementation, debugging, testing, documenting, and maintenance of
software product.
Fig. 4 : Modularity and software cost

MODULE COUPLING

Coupling is the measure of the degree of interdependence between modules. Two


modules with high coupling are strongly interconnected and thus dependent on
each other. Two modules with low coupling are not dependent on one
another. "Loosely coupled" systems are made up of modules which are relatively
independent. "Highly coupled" systems share a great deal of dependence between
modules.
[Figure: (A) Uncoupled: no dependencies; (B) Loosely coupled: some dependencies.]
Fig. 5: Module coupling

This can be achieved as:

Controlling the number of parameters passed amongst modules.

Avoid passing undesired data to calling module.

Maintain parent / child relationship between calling & called modules.

Pass data, not the control information

Consider the example of editing a student record in a ‘student information system’.


Types of coupling

Different types of coupling are content, common, external, control, stamp and data.
The strength of coupling ranges from the lowest coupling (best) to the highest coupling (worst):

Data coupling (Best)


Stamp coupling
Control coupling
External coupling
Common coupling
Content coupling (Worst)

Data coupling

The dependency between module A and module B is said to be data coupling if they
communicate only by passing data. Other than communicating through data, the two
modules are independent. A good strategy is to ensure that no module communication
contains "tramp data".

Stamp coupling

Stamp coupling occurs between modules A and B when a complete data structure is
passed from one module to another. Since not all data making up the structure are
usually necessary in communication between the modules, stamp coupling
typically involves tramp data. If one procedure only needs a part of a data structure,
calling module should pass just that part, not the complete data structure.
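The difference can be seen in code: passing the whole record is stamp coupling carrying tramp data, while passing just the needed field is data coupling. The student record below is invented for the example:

```python
# An invented student record; only "marks" is needed by the average functions.
student = {"roll_no": 17, "name": "Ravi", "marks": [78, 82, 91], "address": "Pune"}

def average_stamp(record):
    # Stamp coupling: the whole structure is passed; roll_no, name and
    # address are tramp data that this function never uses.
    return sum(record["marks"]) / len(record["marks"])

def average_data(marks):
    # Data coupling: only the data actually needed is passed.
    return sum(marks) / len(marks)

print(average_stamp(student))
print(average_data(student["marks"]))  # same result, weaker coupling
```

The data-coupled version is also easier to test and reuse, since it does not depend on the shape of the whole record.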

Control coupling

Module A and B are said to be control coupled if they communicate by passing of


control information. This is usually accomplished by means of flags that are set by
one module and reacted upon by the dependent module.
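A minimal sketch of control coupling, and a refactoring that removes the flag; the report functions are invented for the example:

```python
# Control coupling: the caller passes a flag that steers the callee's logic.
def print_report(data, flag):
    if flag == "summary":
        return f"{len(data)} records"
    return "\n".join(str(d) for d in data)

# Refactored: two single-purpose functions, no control information passed.
def print_summary(data):
    return f"{len(data)} records"

def print_detail(data):
    return "\n".join(str(d) for d in data)

records = [10, 20, 30]
print(print_report(records, "summary"))  # control-coupled call
print(print_summary(records))            # same result without the flag
```

In the refactored form the caller chooses behavior by choosing which function to call, so only data crosses the module boundary.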

External coupling

A form of coupling in which a module depends on another module external to the
software being developed, or on a particular type of hardware. This is
basically related to communication with external tools and devices.

Common coupling

With common coupling, module A and module B have shared data. Global data
areas are commonly found in programming languages. Making a change to the
common data means tracing back to all the modules which access that data to
evaluate the effect of change. With common coupling, it can be difficult to
determine which module is responsible for having set a variable to a particular
value.
Content coupling

Content coupling occurs when module A changes data of module B or when


control is passed from one module to the middle of another.
Module Cohesion

Cohesion is a measure of the degree to which the elements of a module are


functionally related. A strongly cohesive module implements functionality that is
related to one feature of the solution and requires little or no interaction with other
modules.
Types of cohesion

Functional cohesion

Sequential cohesion

Procedural cohesion

Temporal cohesion

Logical cohesion

Coincidental cohesion

Fig. 11 : Types of module cohesion


Functional Cohesion

A and B are part of a single functional task. This is a very good reason for them to
be contained in the same procedure.

Sequential Cohesion

Module A outputs some data which forms the input to B. This is the reason for
them to be contained in the same procedure.

Procedural Cohesion

Procedural cohesion occurs in modules whose instructions, although they accomplish
different tasks, have been combined because there is a specific order in which
the tasks are to be completed.

Temporal Cohesion

Module exhibits temporal cohesion when it contains tasks that are related by the
fact that all tasks must be executed in the same time-span.

Logical Cohesion

Logical cohesion occurs in modules that contain instructions that appear to be


related because they fall into the same logical class of functions.

Coincidental Cohesion

Coincidental cohesion exists in modules that contain instructions that have little or
no relationship to one another.
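The contrast between logical and functional cohesion can be sketched as follows; the "math utility" module is an invented example:

```python
import math

# Logical cohesion: unrelated tasks grouped because they are all "math
# utilities", selected at run time by an operation code.
def math_util(op, x):
    if op == "sqrt":
        return math.sqrt(x)
    if op == "square":
        return x * x
    if op == "negate":
        return -x

# Functional cohesion: each module performs one well-defined task.
def square(x):
    return x * x

print(math_util("square", 4))  # 16
print(square(4))               # 16, from a functionally cohesive module
```

The functionally cohesive version needs no operation code, so callers are not control-coupled to it either: high cohesion and low coupling tend to go together.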

Relationship between Cohesion & Coupling

If the software is not properly modularized, a host of seemingly trivial
enhancements or changes can result in the death of the project. Therefore, a software
engineer must design the modules with the goal of high cohesion and low coupling.

A good example of a system that has high cohesion and low coupling is the
'plug and play' feature of the computer system. Various slots in the motherboard
of the system make it easy to add or remove various services/functionalities
without affecting the entire system. This is because the add-on components provide
the services in a highly cohesive manner. Fig. 12 provides a graphical review of
cohesion and coupling.

A module design with high cohesion and low coupling characterizes a module as a
black box when the entire structure of the system is described. Each module can be
dealt with separately when the module functionality is described.

STRATEGY OF DESIGN

A good system design strategy is to organize the program modules in such a way
that they are easy to develop and, later, to change. Structured design techniques help
developers to deal with the size and complexity of programs. Analysts create
instructions for the developers about how code should be written and how pieces of
code should fit together to form a program. It is important for two reasons:

First, even pre-existing code, if any, needs to be understood, organized and pieced
together.
Second, it is still common for the project team to have to write some code and
produce original programs that support the application logic of the system.

Bottom-up Design

A common approach is to identify modules that are required by many programs.


These modules are collected together in the form of a "library". These modules
may be for math functions, for input-output functions, for graphical functions, etc.
The set of these modules forms a hierarchy as shown in Fig. 13. This is a cross-linked
tree structure in which each module is subordinate to those in which it is used.

Since the design progresses from the bottom layer upwards, the method is
called bottom-up design. The main argument for this design is that if we
start coding a module soon after its design, the chances of recoding are high; but the
coded module can be tested and the design can be validated sooner than a module
whose submodules have not yet been designed.

This method has one terrible weakness: we need to use a lot of intuition to
decide exactly what functionality a module should provide.
If we get it wrong, then at a higher level we will find that it is not as per
requirements, and we have to redesign at a lower level. If a system is to be built
from an existing system, this approach is more suitable, as it starts from some
existing modules.

Top-Down Design

A top down design approach starts by identifying the major modules of the system,
decomposing them into their lower level modules and iterating until the desired
level of detail is achieved. This is stepwise refinement; starting from an abstract
design, in each step the design is refined to a more concrete level, until we reach a
level where no more refinement is needed and the design can be implemented
directly.

Hybrid Design

For top-down approach to be effective, some bottom-up approach is essential for


the following reasons:
To permit common sub modules.

Near the bottom of the hierarchy, where the intuition is simpler and the need for
bottom-up testing is greater, because there are more modules at low levels than at
high levels.

In the use of pre-written library modules, in particular, reuse of modules.


FUNCTION ORIENTED DESIGN

It is an approach to software design where the design is decomposed into a set


of interacting units where each unit has a clearly defined function.
Function Oriented Design Strategies are as follows:
1. Data flow diagram:
A data flow diagram (DFD) maps out the flow of information for any
process or system. It uses defined symbols like rectangles, circles and
arrows, plus short text labels, to show data inputs, outputs, storage
points and the routes between each destination.
2. Data Dictionaries:
Data dictionaries are simply repositories to store information about all
data items defined in DFDs. At the requirement stage, data dictionaries
contain data items. Data dictionaries include Name of the item, Aliases
(Other names for items), Description / purpose, Related data items,
Range of values, Data structure definition / form.
3. Structure Charts:
It is the hierarchical representation of a system which partitions the system
into black boxes (functionality is known to users but inner details are
unknown). Components are read from top to bottom and left to right. When
a module calls another, it views the called module as black box, passing
required parameters and receiving results.
Pseudo Code:
Pseudo Code is system description in short English like phrases describing
the function. It uses keywords and indentation. Pseudo codes are used as
replacement for flow charts. It decreases the amount of documentation
required.
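For example, pseudocode for computing an average mark, followed by a direct rendition in code; the example itself is invented:

```python
# Pseudocode (keywords and indentation, no language syntax):
#   READ list of marks
#   SET total TO 0
#   FOR each mark IN marks
#       ADD mark TO total
#   SET average TO total / count of marks
#   PRINT average

def average(marks):
    total = 0
    for mark in marks:          # FOR each mark IN marks
        total += mark           # ADD mark TO total
    return total / len(marks)   # SET average TO total / count of marks

print(average([70, 80, 90]))  # 80.0
```

Because the pseudocode maps line by line onto the implementation, it documents the function without a separate flowchart.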
OBJECT ORIENTED ANALYSIS (OOA):
Object Oriented Analysis (OOA) is the first technical activity performed as part of
object oriented software engineering. OOA introduces new concepts to
investigate a problem. It is based in a set of basic principles, which are as
follows-
1. The information domain is modeled.
2. Behavior is represented.
3. Function is described.
4. Data, functional, and behavioral models are divided to uncover greater
detail.
5. Early models represent the essence of the problem, while later ones
provide implementation
details.

OBJECT ORIENTED DESIGN

The object-oriented (OO) paradigm is widely used in modern software
development. Object-oriented design is the result of focusing attention not on the
function performed by the program, but instead on the data that are to be
manipulated by the program. Object-Oriented Design begins with an examination
of the real-world "things" that are part of the problem to be solved. These things
(which we will call objects) are characterized individually in terms of their
attributes and behavior. Object-Oriented Design is not dependent on any specific
implementation language. Problems are modeled using objects.
The different terms related to object design are:

1. Objects: All entities involved in the solution design are known as objects.
For example, person, banks, company, and users are considered as
objects. Every entity has some attributes associated with it and has some
methods to perform on the attributes.

2. Classes: A class is a generalized description of an object. An object is an


instance of a class. A class defines all the attributes, which an object can
have and methods, which represents the functionality of the object.

3. Messages: Objects communicate by message passing. Messages consist
of the identity of the target object, the name of the requested operation,
and any other information needed to perform the function. Messages are often
implemented as procedure or function calls.

4. Abstraction: In object-oriented design, complexity is handled using
abstraction. Abstraction is the removal of the irrelevant and the
amplification of the essentials.

5. Encapsulation: Encapsulation is also called an information hiding concept.


The data and operations are linked to a single unit. Encapsulation not only
bundles essential information of an object together but also restricts
access to the data and methods from the outside world.

6. Inheritance: OOD allows similar classes to stack up in a hierarchical
manner, where the lower or sub-classes can import, implement, and re-use
allowed variables and functions from their immediate super-classes. This
property of OOD is called inheritance. This makes it easier to define a
specific class and to create generalized classes from specific ones.

7. Polymorphism: OOD languages provide a mechanism where methods
performing similar tasks but varying in arguments can be assigned the same
name. This is known as polymorphism, which allows a single interface to
perform functions for different types. Depending upon how the service is
invoked, the respective portion of the code gets executed.
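The terms above can be illustrated together in one short sketch; the Account classes are invented for the example:

```python
class Account:                       # class: generalized description of an object
    def __init__(self, owner, balance):
        self.owner = owner
        self._balance = balance      # encapsulation: balance kept behind methods

    def deposit(self, amount):       # a message: target object, operation, argument
        self._balance += amount

    def describe(self):
        return f"{self.owner}: {self._balance}"

class SavingsAccount(Account):       # inheritance: re-uses Account's members
    def describe(self):              # polymorphism: same message, different behavior
        return "savings " + super().describe()

accounts = [Account("Asha", 100), SavingsAccount("Ravi", 200)]
for a in accounts:                   # one interface, two behaviors
    print(a.describe())
```

Each object here is an instance of a class, the method calls are the messages, and the overridden `describe` shows a single interface performing the function for different types.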

STEPS TO ANALYSE AND DESIGN OBJECT ORIENTED SYSTEM

i. Create use case model: The first step is to identify the actors interacting with
the system. We should then write the use cases and draw the use case
diagram.

ii. Draw activity diagram (if required): Activity diagrams illustrate the
dynamic nature of a system by modeling the flow of control from activity to
activity. An activity represents an operation on some class in the system
that results in a change in the state of the system.
iii. Draw the interaction diagram: An interaction diagram shows an
interaction, consisting of a set of objects and their relationships, including
the messages that may be dispatched among them. Interaction diagrams
address the dynamic view of a system.

Steps to draw interaction diagrams are:

1) First, we should identify the objects with respect to every use case.
2) We draw the sequence diagrams for every use case.

3) We draw the collaboration diagrams for every use case.

iv. Draw the class diagram: The class diagram shows the relationships
amongst classes.

There are four types of relationships in class diagrams.

a) Associations are semantic connections between classes.
When an association connects two classes, each class can send
messages to the other in a sequence or a collaboration diagram.
Associations can be bi-directional or unidirectional.

b) Dependencies connect two classes.
Dependencies are always unidirectional and show that one class
depends on the definitions in another class.

c) Aggregations are a stronger form of association.
An aggregation is a relationship between a whole and its parts.

d) Generalizations are used to show an inheritance relationship between


two classes.

v. Design of state chart diagrams

A state chart diagram is used to show the state space of a given class, the
events that cause a transition from one state to another, and the actions that
result from a state change. A state transition diagram for a "book" in the
library system is given below:
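The same state space can equally be written as a transition table; the states and events below are assumed for a typical library "book":

```python
# Assumed states and events for a library "book" object.
transitions = {
    ("available", "issue"):  "issued",
    ("issued", "return"):    "available",
    ("issued", "reserve"):   "reserved",
    ("reserved", "issue"):   "issued",
}

def next_state(state, event):
    """Return the new state; an event invalid in this state leaves it unchanged."""
    return transitions.get((state, event), state)

state = "available"
for event in ["issue", "reserve", "issue", "return"]:
    state = next_state(state, event)
print(state)  # the book ends up back in "available"
```

Each key of the table is one arrow of the state chart, so the diagram and the code can be checked against each other transition by transition.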
vi. Draw component and deployment diagrams

Component diagrams address the static implementation view of a system.
They are related to class diagrams in that a component typically maps to
one or more classes, interfaces, or collaborations. Deployment diagrams
capture the relationship between physical components and the hardware.
MODULE:4
SOFTWARE TESTING
A strategy for software testing provides a road map that describes the steps to be conducted as
part of testing, when these steps are planned and then undertaken, and how much effort, time,
and resources will be required. Therefore, any testing strategy must incorporate test planning,
test-case design, test execution, and resultant data collection and evaluation.

A software testing strategy should be flexible enough to promote a customized testing approach.
At the same time, it must be rigid enough to encourage reasonable planning and management
tracking as the project progresses.

Shooman [Sho83] discusses these issues:

In many ways, testing is an individualistic process, and the number of different types of tests
varies as much as the different development approaches. For many years, our only defense
against programming errors was careful design and the native intelligence of the programmer.
We are now in an era in which modern design techniques [and technical reviews] are helping us
to reduce the number of initial errors that are inherent in the code. Similarly, different test
methods are beginning to cluster themselves into several distinct approaches and philosophies.

These “approaches and philosophies” are what we call strategy.

A STRATEGIC APPROACH TO SOFTWARE TESTING

Testing is a set of activities that can be planned in advance and conducted systematically. For
this reason a template for software testing—a set of steps into which we can place specific test-
case design techniques and testing methods—should be defined for the software process.

• What is it? :

Software is tested to uncover errors that were made inadvertently as it was designed and
constructed. But how do you conduct the tests? Should you develop a formal plan for your tests?
Should you test the entire program as a whole or run tests only on a small part of it? Should you
rerun tests you’ve already conducted as you add new components to a large system? When
should you involve the customer? These and many other questions are answered when you
develop a software testing strategy.
• Who does it? :

A strategy for software testing is developed by the project manager, software engineers, and
testing specialists.

• Why is it important? :

Testing often accounts for more project effort than any other software engineering action. If it is
conducted haphazardly, time is wasted, unnecessary effort is expended, and even worse, errors
sneak through undetected. It would therefore seem reasonable to establish a systematic strategy
for testing software.

• What are the steps? :

Testing begins “in the small” and progresses “to the large.” By this we mean that early testing
focuses on a single component or on a small group of related components and applies tests to
uncover errors in the data and processing logic that have been encapsulated by the component(s).
After components are tested they must be integrated until the complete system is constructed. At
this point, a series of high-order tests are executed to uncover errors in meeting customer
requirements. As errors are uncovered, they must be diagnosed and corrected using a process
that is called debugging.

• What is the work product? :

A Test specification documents the software team’s approach to testing by defining a plan that
describes an overall strategy and a procedure that defines specific testing steps and the types of
tests that will be conducted.

• How do I ensure that I’ve done it right? :

By reviewing the Test Specification prior to testing, you can assess the completeness of test
cases and testing tasks. An effective test plan and procedure will lead to the orderly construction
of the software and the discovery of errors at each stage in the construction process.
GENERIC CHARACTERISTICS

A number of software testing strategies have been proposed in the literature. All provide you
with a template for testing and all have the following generic characteristics:

• To perform effective testing, you should conduct effective technical reviews. By doing this,
many errors will be eliminated before testing commences.

• Testing begins at the component level and works “outward” toward the integration of the
entire computer-based system.

• Different testing techniques are appropriate for different software engineering approaches and
at different points in time.

• Testing is conducted by the developer of the software and (for large projects) an independent
test group.

• Testing and debugging are different activities, but debugging must be accommodated in any
testing strategy.

NOTE: A strategy for software testing must accommodate low-level tests that are necessary to
verify that a small source code segment has been correctly implemented as well as high-level tests
that validate major system functions against customer requirements. A strategy should provide
guidance for the practitioner and a set of milestones for the manager. Because the steps of the
test strategy occur at a time when deadline pressure begins to rise, progress must be measurable
and problems should surface as early as possible.
Verification and Validation
Software testing is one element of a broader topic that is often referred to as verification and
validation (V&V). Verification refers to the set of tasks that ensure that software correctly
implements a specific function. Validation refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements. Boehm [Boe81] states this
another way:

Verification: “Are we building the product right?”

Validation: “Are we building the right product?”

Verification and validation includes a wide array of SQA activities: technical reviews, quality and
configuration audits, performance monitoring, simulation, feasibility study, documentation
review, database review, algorithm analysis, development testing, usability testing, qualification
testing, acceptance testing, and installation testing. Although testing plays an extremely
important role in V&V, many other activities are also necessary.

Testing does provide the last bastion from which quality can be assessed and, more
pragmatically, errors can be uncovered. But testing should not be viewed as a safety net. As they
say, “You can’t test in quality. If it’s not there before you begin testing, it won’t be there when
you’re finished testing.” Quality is incorporated into software throughout the process of software
engineering. Proper application of methods and tools, effective technical reviews, and solid
management and measurement all lead to quality that is confirmed during testing.

Miller [Mil77] relates software testing to quality assurance by stating that “the underlying
motivation of program testing is to affirm software quality with methods that can be
economically and effectively applied to both large-scale and small-scale systems.”
[Figure: levels of work products checked during verification and validation — CRS (Customer
Requirements Specification), SRS (Software Requirements Specification), HLD (High-Level
Design), LLD (Low-Level Design).]
DIFFERENCE BETWEEN VERIFICATION AND VALIDATION

• Verification includes checking documents, design, code, and programs; validation includes
testing and validating the actual product.
• Verification is static testing; validation is dynamic testing.
• Verification does not include the execution of the code; validation does.
• Methods used in verification are reviews, walkthroughs, inspections, and desk-checking;
methods used in validation are black-box testing, white-box testing, and non-functional
testing.
• Verification checks whether the software conforms to its specifications; validation checks
whether the software meets the requirements and expectations of the customer.
• Verification can find bugs early in development; validation can only find the bugs that the
verification process could not.
• The target of verification is the application and software architecture and specification; the
target of validation is the actual product.
• The quality assurance team performs verification; validation is executed on the software
code with the help of the testing team.
• Verification comes before validation.
• Verification consists of checking documents and files and is performed by humans;
validation consists of executing the program and is performed by the computer.
Software Testing Strategy—The Big Picture
The software process may be viewed as the spiral illustrated in Figure 1. Initially, system
engineering defines the role of software and leads to software requirements analysis, where the
information domain, function, behavior, performance, constraints, and validation criteria for
software are established. Moving inward along the spiral, you come to design and finally to
coding. To develop computer software, you spiral inward along streamlines that decrease the
level of abstraction on each turn.

Figure 1

A strategy for software testing may also be viewed in the context of the spiral (Figure 1). Unit
testing begins at the vortex of the spiral and concentrates on each unit (e.g., component, class, or
WebApp content object) of the software as implemented in source code. Testing progresses by
moving outward along the spiral to integration testing, where the focus is on design and the
construction of the software architecture. Taking another turn outward on the spiral, you
encounter validation testing, where requirements established as part of requirements modeling
are validated against the software that has been constructed. Finally, you arrive at system testing,
where the software and other system elements are tested as a whole. To test computer software,
you spiral out along streamlines that broaden the scope of testing with each turn.

Considering the process from a procedural point of view, testing within the context of software
engineering is actually a series of four steps that are implemented sequentially. The steps are
shown in Figure 2 . Initially, tests focus on each component individually, ensuring that it
functions properly as a unit. Hence, the name unit testing. Unit testing makes heavy use of
testing techniques that exercise specific paths in a component’s control structure to ensure
complete coverage and maximum error detection. Next, components must be assembled or
integrated to form the complete software package. Integration testing addresses the issues
associated with the dual problems of verification and program construction. Test-case design
techniques that focus on inputs and outputs are more prevalent during integration, although
techniques that exercise specific program paths may be used to ensure coverage of major control
paths. After the software has been integrated (constructed), a set of high-order tests is conducted.
Validation criteria (established during requirements analysis) must be evaluated. Validation
testing provides final assurance that software meets all functional, behavioral, and performance
requirements.

Figure 2

The last high-order testing step falls outside the boundary of software engineering and into the
broader context of computer system engineering. Software, once validated, must be combined
with other system elements (e.g., hardware, people, databases). System testing verifies that all
elements mesh properly and that overall system function/performance is achieved.
Criteria for Completion of Testing
A classic question arises every time software testing is discussed: “When are we done testing—
how do we know that we’ve tested enough?” Sadly, there is no definitive answer to this question,
but there are a few pragmatic responses and early attempts at empirical guidance.

One response to the question is: “You're never done testing; the burden simply shifts from you
(the software engineer) to the end user.” Every time the user executes a computer program, the
program is being tested. This sobering fact underlines the importance of other software quality
assurance activities. Another response (somewhat cynical but nonetheless accurate) is: “You’re
done testing when you run out of time or you run out of money.” Although few practitioners
would argue with these responses, you need more rigorous criteria for determining when
sufficient testing has been conducted. The cleanroom software engineering approach suggests
statistical use techniques [Kel00] that execute a series of tests derived from a statistical sample of
all possible program executions by all users from a targeted population. By collecting metrics
during software testing and making use of existing statistical models, it is possible to develop
meaningful guidelines for answering the question: “When are we done testing?”
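The statistical-use idea can be sketched in miniature. The example below is a hypothetical illustration (the function, its seeded defect, and the uniform usage profile are all invented for demonstration): test inputs are drawn from an assumed operational profile, and the observed failure rate serves as a crude, measurable signal for the "when are we done?" question. Real statistical models are considerably more sophisticated.

```python
import random

def system_under_test(x):
    # Hypothetical program with a single seeded defect at x == 13.
    return x * 2 if x != 13 else -1

def observed_failure_rate(trials, seed=0):
    # Draw inputs from an assumed usage profile (uniform here) and
    # count how often the output disagrees with the expected result.
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        x = rng.randint(0, 1000)
        if system_under_test(x) != x * 2:
            failures += 1
    return failures / trials

# A small observed rate over many profile-driven runs would feed a
# statistical reliability model rather than stand on its own.
rate = observed_failure_rate(10_000)
assert 0.0 <= rate < 0.01
```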

STRATEGIC ISSUES

Later in this, we present a systematic strategy for software testing. But even the best strategy will
fail if a series of overriding issues are not addressed. Tom Gilb [Gil95] argues that a software
testing strategy will succeed only when software testers:

(1) Specify product requirements in a quantifiable manner long before testing commences;

(2) State testing objectives explicitly;

(3) Understand the users of the software and develop a profile for each user category;

(4) Develop a testing plan that emphasizes “rapid cycle testing”;

(5) Build “robust” software that is designed to test itself;

(6) Use effective technical reviews as a filter prior to testing;

(7) Conduct technical reviews to assess the test strategy and the test cases themselves; and
(8) Develop a continuous improvement approach for the testing process.
Test strategies for conventional software
A testing strategy chosen by many software teams falls between two extremes: waiting until
the system is fully constructed and testing it as a whole, or exhaustively testing each unit in
isolation. It takes an incremental view of testing, beginning with the testing of individual
program units, moving to tests designed to facilitate the integration of the units (sometimes
on a daily basis), and culminating with tests that exercise the constructed system.
Unit Testing
The first thing to understand about unit testing is when it is performed. Unit testing is the
initial level of software testing, performed on the application source code, mainly by the
developer. The main motive of unit testing is to isolate a section of code and verify its
correctness.
Unit testing is the level of software testing at which the individual units or components of a
software or web application are tested by the developer. It is an important aspect of
software testing and a key component of test-driven development (TDD).
Benefits of Unit Testing:
1. Defects revealed by a unit test are easy to locate and relatively easy to repair. Unit testing
verifies the correctness of each unit.
2. In the unit testing procedure, the developer writes test cases for all functions and methods,
so that whenever a change is required it can be made quickly at a later date and the module
still works correctly.
3. Unit testing improves the quality of the code and helps the programmer write better code.
It identifies defects before the code is sent further for regression testing.
4. If a test fails, only the latest changes to the code need to be examined and debugged, so
unit testing helps to simplify the debugging process.
5. Code becomes more reusable: to make unit testing possible, code must be modular, and
modular code is easier to reuse.
Developers should make unit testing part of their regime to produce clean, reusable, and
bug-free code. Unit testing also improves the quality of the code and helps to reduce the
cost of bug fixes.
Unit Test Case Best Practices:
• Always follow proper naming conventions for your unit tests, i.e., clear and consistent
naming.
• If the requirements change, your unit test cases should not be affected; test cases should
be independent.
• Follow a “test as you code” approach: the more code you write without testing, the more
paths you have to check for errors.
• If changes need to be made to the code of a module, make sure a unit test case exists for
that module and that it passes before the change is implemented.
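As a concrete sketch of these practices, the following hypothetical Python example (the `apply_discount` function and its tests are invented for illustration) shows a single unit isolated and verified by small, independently and clearly named test cases:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test isolates one behavior and carries a descriptive name,
    # so a failure points directly at the broken behavior.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

A suite like this would typically be rerun with `python -m unittest` every time the module changes, keeping the tests independent of any particular requirement's wording.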

Integration testing
Integration testing is the level of software testing at which individual components or units of
code are combined and tested to validate the interactions among different modules of the
software system. In this process, the system components are either tested as a single group
or organized and tested iteratively.

Typically, system integration testing is taken up to validate the performance of the entire
software system as a whole. The main purpose of this testing method is to expand the process
and validate the integration of the modules with other groups. It is performed to verify that
all the units operate in accordance with their defined specifications.
Top-Down Integration.

Top-down integration testing is an incremental approach to construction of the software
architecture. Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module (main program). Modules subordinate (and
ultimately subordinate) to the main control module are incorporated into the structure in
either a depth-first or breadth-first manner.

Depth-first integration integrates all components on a major control path of the program
structure. Selection of a major path is somewhat arbitrary and depends on application-specific
characteristics. For example, selecting the left-hand path, components M1, M2, and M5 would
be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be
integrated. Then the central and right-hand control paths are built. Breadth-first integration
incorporates all components directly subordinate at each level, moving across the structure
horizontally. From the figure, components M2, M3, and M4 would be integrated first. The
next control level, M5, M6, and so on, follows. The integration process is performed in a
series of five steps:
1. The main control module is used as a test driver and stubs are substituted for all components
directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate
stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
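The five steps above can be sketched in miniature. The example below is hypothetical (the controller and sensor components are invented): the main control module is exercised first against a stub, the stub is then replaced with the real subordinate component, and the same tests are rerun as a regression check:

```python
def stub_read_sensor():
    # Step 1: a stub stands in for the subordinate component,
    # returning a canned value instead of touching real hardware.
    return 42

def real_read_sensor():
    # The actual subordinate component (simplified for illustration).
    return 40 + 2

def main_controller(read_sensor):
    # Main control module. The subordinate is passed in, so the stub
    # and the real component are interchangeable during integration.
    value = read_sensor()
    return "ALARM" if value > 100 else "OK"

# Step 3: tests are conducted with the stub in place.
assert main_controller(stub_read_sensor) == "OK"

# Steps 2 and 4: the stub is replaced with the real component;
# step 5: the same tests are rerun to catch newly introduced errors.
assert main_controller(real_read_sensor) == "OK"
```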
Bottom-Up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic
modules (i.e., components at the lowest levels in the program structure). Because components
are integrated from the bottom up, the functionality provided by components subordinate to a
given level is always available and the need for stubs is eliminated. A bottom-up integration
strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a
specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.

In this figure the components are combined to form clusters 1, 2, and 3. Each of the clusters
is tested using a driver (shown as a dashed block). Components in clusters 1 and 2 are
subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to
Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both
Ma and Mb will ultimately be integrated with component Mc, and so forth. As integration
moves upward, the need for separate test drivers lessens. In fact, if the top two levels of
program structure are integrated top down, the number of drivers can be reduced substantially
and integration of clusters is greatly simplified.
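The four bottom-up steps can be sketched as follows. All names here are invented for illustration: two atomic modules are combined into a cluster that performs one subfunction, and a throwaway driver coordinates the cluster's test-case input and output:

```python
def parse_record(line):
    # Atomic module 1: split a CSV line into fields.
    return line.strip().split(",")

def valid_record(fields):
    # Atomic module 2: accept only two-field "name,age" records.
    return len(fields) == 2 and fields[1].isdigit()

def cluster_load(lines):
    # Step 1: the atomic modules are combined into a cluster (build)
    # performing one software subfunction: loading valid records.
    return [f for f in map(parse_record, lines) if valid_record(f)]

def driver():
    # Step 2: a driver coordinates test-case input and output;
    # step 3: the cluster is tested through it. Once the cluster is
    # integrated upward (step 4), this driver is simply removed.
    sample = ["alice,30\n", "bogus-line\n", "bob,25\n"]
    result = cluster_load(sample)
    assert result == [["alice", "30"], ["bob", "25"]]
    return result

driver()
```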
Regression testing
Regression testing is a testing type performed to validate that existing functionalities work
correctly after code modification. It is one of the most common terms used in software
testing and quality assurance. Regression testing is the technique, or the testing type,
performed to ensure that the existing functionality of the software or application works as
expected when any new code is introduced, a new defect fix has been made, or any new
functionality is added to the application. It is the task of a quality assurance engineer to
recheck the already tested features after modifications and ensure that code changes have
not impacted the existing features.
The need for Regression Testing could arise when there are changes as mentioned below:

• To fix the defect.


• When a new feature is added.
• When changes are made to the existing feature.
• To maintain the software product bug-free.
• Change in configuration/environment (hardware, software, network)

It focuses on minimizing the risk of defects or dependencies arising from any changes to the
code. This testing is conducted after maintenance, changes to functionalities, or
enhancements are made to a product, to make sure that there are no unexpected
outcomes.
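A regression suite in miniature might look like the hypothetical example below (the `shipping_cost` function and its change are invented): the suite is the set of already-passing checks, rerun in full after a new feature is added, so any break in existing behavior surfaces immediately:

```python
def shipping_cost(weight_kg):
    # Modified version: a flat-rate tier for heavy parcels was just
    # added (the change that triggers regression testing).
    if weight_kg > 20:
        return 50.0
    return round(weight_kg * 2.5, 2)   # pre-existing behavior

REGRESSION_SUITE = [
    (1.0, 2.5),    # existing behavior: must still hold
    (10.0, 25.0),  # existing behavior: must still hold
    (25.0, 50.0),  # new case covering the added tier
]

def run_regression():
    # Returns the failing cases; an empty list means no regression
    # was detected by this suite.
    return [(w, expected, shipping_cost(w))
            for w, expected in REGRESSION_SUITE
            if shipping_cost(w) != expected]

assert run_regression() == []
```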

Smoke Testing

Smoke testing is an integration testing approach that is commonly used when product software
is developed. It is designed as a pacing mechanism for time-critical projects, allowing the
software team to assess the project on a frequent basis. In essence, the smoke-testing approach
encompasses the following activities:

1. Software components that have been translated into code are integrated into a build. A
build includes all data files, libraries, reusable modules, and engineered components that are
required to implement one or more product functions.

2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “show-stopper” errors that have the
highest likelihood of throwing the software project behind schedule.

3. The build is integrated with other builds, and the entire product (in its current form) is
smoke tested daily. The integration approach may be top down or bottom up.

Smoke testing provides a number of benefits when it is applied to complex, time-critical
software projects:

• Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and
other show-stopper errors are uncovered early, thereby reducing the likelihood of serious
schedule impact when errors are uncovered.

• The quality of the end product is improved. Because the approach is construction (integration)
oriented, smoke testing is likely to uncover functional errors as well as architectural and
component-level design errors. If these errors are corrected early, better product quality will
result.

• Error diagnosis and correction are simplified. Like all integration testing approaches, errors
uncovered during smoke testing are likely to be associated with “new software increments”—
that is, the software that has just been added to the build(s) is a probable cause of a newly
discovered error.

• Progress is easier to assess. With each passing day, more of the software has been integrated
and more has been demonstrated to work. This improves team morale and gives managers a
good indication that progress is being made.
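A daily smoke test can be pictured as a handful of fast checks aimed only at show-stopper errors in the current build. The sketch below is hypothetical (the build contents and checks are invented), with the build represented as a dictionary of callables:

```python
def smoke_test(build):
    # A few fast checks aimed only at show-stopper errors; anything
    # subtler is left to unit, integration, and validation testing.
    checks = {
        "product starts": lambda: build["start"]() == "ready",
        "core function responds": lambda: build["compute"](2, 3) == 5,
    }
    return {name: check() for name, check in checks.items()}

# Today's integrated build, represented as a dict of callables.
nightly_build = {"start": lambda: "ready",
                 "compute": lambda a, b: a + b}

# Run daily: a failing check is a show-stopper to fix immediately.
assert all(smoke_test(nightly_build).values())
```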
Validation Testing
Validation is the process of evaluating a system or component during or at the end of
development process to determine whether it satisfies the specified requirements.
The process of evaluating software during the development process or at the end of
the development process to determine whether it satisfies specified business
requirements. Validation Testing ensures that the product actually meets the client's
needs.
Validation testing begins at the culmination of integration testing, when individual
components have been exercised, the software is completely assembled as a
package, and interfacing errors have been uncovered and corrected. At the validation
or system level, the distinction between different software categories disappears.
Testing focuses on user-visible actions and user-recognizable output from the system.

• It refers to test the software as a complete product.


• This should be done after unit & integration testing.
• Alpha, beta & acceptance testing are nothing but the various ways of
involving customer during testing.
• Validation testing improves the quality of software product in terms of
functional capabilities and quality attributes.

Validation-Test Criteria
• Ensure that all functional requirements are satisfied
• All behavioural characteristics are achieved
• All content is accurate and properly presented
• All performance requirements are attained
• Documentation is correct, and usability and other requirements are met.

System Testing
System testing is a black-box testing technique performed to evaluate the complete
system's compliance against specified requirements. In system testing, the functionalities
of the system are tested from an end-to-end perspective.
A classic system-testing problem is “finger pointing.” This occurs when an error is
uncovered and each system element developer blames the others for the problem.
System Testing is usually carried out by a team that is independent of the development
team in order to measure the quality of the system unbiased. It includes both functional
and Non-Functional testing.
• Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety of
ways and verifies that recovery is properly performed. Recovery testing is a
type of non-functional testing technique performed in order to determine how
quickly the system can recover after it has gone through system crash or
hardware failure. Recovery testing is the forced failure of the software to verify
if the recovery is successful.
• Security Testing
Security Testing is a type of system testing that uncovers vulnerabilities of the
system and determines that the data and resources of the system are protected
from possible intruders. It ensures that the software system and application are
free from any threats or risks that can cause a loss. Security testing attempts
to verify that protection mechanisms built into a system will, in fact, protect it
from improper penetration.
• Stress Testing
Stress tests are designed to confront programs with abnormal situations.
Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume. During stress testing, the system is
monitored after subjecting the system to overload to ensure that the system
can sustain the stress.
• Performance Testing
Performance testing is designed to test the run-time performance of software
within the context of an integrated system. Performance testing occurs
throughout all steps in the testing process. Even at the unit level, the
performance of an individual module may be assessed as tests are conducted.
Performance testing is a non-functional testing technique performed to
determine the system parameters in terms of responsiveness and stability
under various workloads. Performance testing measures the quality attributes
of the system, such as scalability, reliability, and resource usage.
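A minimal performance check might time an operation under growing workloads and compare it against a responsiveness budget. The sketch below is a hypothetical micro-benchmark (the workload and budget are invented), not a substitute for real load or stress testing:

```python
import time

def operation(n):
    # Stand-in workload whose cost grows with n.
    return sum(i * i for i in range(n))

def response_time(n):
    # Measure wall-clock time for one execution of the operation.
    start = time.perf_counter()
    operation(n)
    return time.perf_counter() - start

# Check responsiveness under increasing workload against a generous
# budget; a real harness would also track trends across builds.
for workload in (1_000, 10_000, 100_000):
    assert response_time(workload) < 1.0
```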
SOFTWARE TESTING FUNDAMENTALS

The goal of testing is to find errors, and a good test is one that has a high probability of finding
an error. Therefore, you should design and implement a computer-based system or a product
with “testability” in mind. At the same time, the tests themselves must exhibit a set of
characteristics that achieve the goal of finding the most errors with a minimum of effort.

Testability: Software testability is simply how easily [a computer program] can be tested.

The following characteristics lead to testable software.

• Operability. “The better it works, the more efficiently it can be tested.” If a system is
designed and implemented with quality in mind, relatively few bugs will block the
execution of tests, allowing testing to progress without fits and starts.

• Observability. “What you see is what you test.” Inputs provided as part of testing produce
distinct outputs. System states and variables are visible or queriable during execution.
Incorrect output is easily identified. Internal errors are automatically detected and reported.
Source code is accessible.

• Controllability. “The better we can control the software, the more the testing can be
automated and optimized.” All possible outputs can be generated through some
combination of input, and I/O formats are consistent and structured. All code is executable
through some combination of input. Software and hardware states and variables can be
controlled directly by the test engineer. Tests can be conveniently specified, automated, and
reproduced.

• Decomposability. “By controlling the scope of testing, we can more quickly isolate
problems and perform smarter retesting.” The software system is built from independent
modules that can be tested independently.
• Simplicity. “The less there is to test, the more quickly we can test it.” The program should
exhibit functional simplicity (e.g., the feature set is the minimum necessary to meet
requirements); structural simplicity (e.g., architecture is modularized to limit the
propagation of faults), and code simplicity (e.g., a coding standard is adopted for ease of
inspection and maintenance).

• Stability. “The fewer the changes, the fewer the disruptions to testing.” Changes to the
software are infrequent, controlled when they do occur, and do not invalidate existing tests.
The software recovers well from failures.

• Understandability. “The more information we have, the smarter we will test.” The
architectural design and the dependencies between internal, external, and shared
components are well understood. Technical documentation is instantly accessible, well
organized, specific and detailed, and accurate. Changes to the design are communicated to
testers.
✓ What are the attributes of a good test?
A. Test Characteristics.
o A good test has a high probability of finding an error. To achieve this goal, the tester must
understand the software and attempt to develop a mental picture of how the software might
fail.
o A good test is not redundant. Testing time and resources are limited. There is no point in
conducting a test that has the same purpose as another test. Every test should have a
different purpose (even if it is subtly different).
o A good test should be “best of breed” [Kan93]. In a group of tests that have a similar
intent, time and resource limitations may dictate the execution of only those tests that have
the highest likelihood of uncovering a whole class of errors.
o A good test should be neither too simple nor too complex. Although it is sometimes
possible to combine a series of tests into one test case, the possible side effects associated
with this approach may mask errors. In general, each test should be executed separately.

B. INTERNAL AND EXTERNAL VIEWS OF TESTING


Any engineered product (and most other things) can be tested in one of two ways:
(1) Knowing the specified function that a product has been designed to perform, tests can
be conducted that demonstrate each function is fully operational while at the same time
searching for errors in each function.
(2) Knowing the internal workings of a product, tests can be conducted to ensure that “all
gears mesh,” that is, internal operations are performed according to specifications and all
internal components have been adequately exercised.

The first test approach takes an external view and is called black-box testing. The second
requires an internal view and is termed white-box testing

Black-box testing alludes to tests that are conducted at the software interface. A black-box
test examines some fundamental aspect of a system with little regard for the internal logical
structure of the software.

White-box testing of software is predicated on close examination of procedural detail.
Logical paths through the software and collaborations between components are tested by
exercising specific sets of conditions and/or loops.
C. WHITE-BOX TESTING

White-box testing, sometimes called glass-box testing or structural testing, is a test-case
design philosophy that uses the control structure described as part of component-level
design to derive test cases.
White-box testing techniques analyze the internal structures: the data structures used,
the internal design, the code structure, and the working of the software, rather than just
the functionality as in black-box testing. It is also called glass-box testing, clear-box
testing, or structural testing.

In this testing method, the design and structure of the code are known to the tester.
Programmers of the code conduct this test on the code.
The below are some White-box testing techniques:
• Control-flow testing - The purpose of control-flow testing is to set up test cases
that cover all statements and branch conditions. The branch conditions are tested
for both being true and false, so that all statements can be covered.
• Data-flow testing - This testing technique emphasizes covering all the data variables
included in the program. It tests where the variables were declared and defined and
where they were used or changed.


Using white-box testing methods, you can derive test cases that:
(1) guarantee that all independent paths within a module have been exercised at least
once,
(2) exercise all logical decisions on their true and false sides,
(3) execute all loops at their boundaries and within their operational bounds, and
(4) exercise internal data structures to ensure their validity.
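Criteria (2) and (3) can be illustrated on a tiny, hypothetical function (invented for this sketch): the three test cases below force the loop through zero, one, and many iterations, and take the decision on both its true and false sides:

```python
def max_positive(values):
    # Hypothetical unit under test: largest value, floored at zero.
    best = 0
    for v in values:          # loop: exercised for 0, 1, n iterations
        if v > best:          # decision: forced both true and false
            best = v
    return best

# Loop boundary: body never entered.
assert max_positive([]) == 0
# One iteration; the decision is taken on its true side.
assert max_positive([7]) == 7
# Many iterations; the decision is true (5 > 0) then false (3 > 5).
assert max_positive([5, 3]) == 5
```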

BASIS PATH TESTING


Basis path testing is a white-box testing technique first proposed by Tom McCabe
[McC76]. The basis path method enables the test-case designer to derive a logical
complexity measure of a procedural design and use this measure as a guide for defining a
basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to
execute every statement in the program at least one time.
1. FLOW GRAPH NOTATION [IMP]

Before the basis path method can be introduced, a simple notation for the representation of
control flow, called a flow graph (or program graph) must be introduced.

Consider the procedural design representation in Figure 23.2a. Here, a flowchart is used to depict
program control structure. Figure 23.2b maps the flowchart into a corresponding flow graph
(assuming that no compound conditions are contained in the decision diamonds of the
flowchart). Referring to Figure 23.2b, each circle, called a flow graph node, represents one or
more procedural statements. A sequence of process boxes and a decision diamond can map into a
single node. The arrows on the flow graph, called edges or links, represent flow of control and
are analogous to flowchart arrows. An edge must terminate at a node, even if the node does not
represent any procedural statements (e.g., see the flow graph symbol for the if-then-else
construct).
Areas bounded by edges and nodes are called regions. When counting regions, we include the
area outside the graph as a region. When compound conditions are encountered in a procedural
design, the generation of a flow graph becomes slightly more complicated. A compound
condition occurs when one or more Boolean operators (logical OR, AND, NAND, NOR) is
present in a conditional statement. Referring to Figure 23.3, the program design language (PDL)
segment translates into the flow graph shown. Note that a separate node is created for each of the
conditions a and b in the statement IF a OR b. Each node that contains a condition is called a
predicate node and is characterized by two or more edges emanating from it.

2. Independent Program Paths

An independent path is any path through the program that introduces at least one new set of
processing statements or a new condition. When stated in terms of a flow graph, an
independent path must move along at least one edge that has not been traversed before the
path is defined. For example, a set of independent paths for the flow graph illustrated in
Figure 23.2b is
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge.
The path 1-2-3-4-5-10-1-2-3-6-8-9-10-1-11
is not considered to be an independent path because it is simply a combination of already
specified paths and does not traverse any new edges. Paths 1 through 4 constitute a basis
set for the flow graph in Figure 23.2b . That is, if you can design tests to force execution of
these paths (a basis set), every statement in the program will have been guaranteed to be
executed at least one time and every condition will have been executed on its true and false
sides. It should be noted that the basis set is not unique. In fact, a number of different basis
sets can be derived for a given procedural design. How do you know how many paths to
look for? The computation of cyclomatic complexity provides the answer. Cyclomatic
complexity is a software metric that provides a quantitative measure of the logical
complexity of a program. When used in the context of the basis path testing method, the
value computed for cyclomatic complexity defines the number of independent paths in the
basis set of a program and provides you with an upper bound for the number of tests that
must be conducted to ensure that all statements have been executed at least once.

Cyclomatic complexity has a foundation in graph theory and provides you with an
extremely useful software metric.

Complexity is computed in one of three ways:


1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.

For example

Referring once more to the flow graph in Figure 23.2b , the cyclomatic complexity can be
computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.

Therefore, the cyclomatic complexity of the flow graph in Figure 23.2 b is 4.
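The computations can be checked mechanically. The sketch below uses a small hypothetical flow graph (a loop whose body contains an if-then-else), not Figure 23.2b, and confirms that V(G) = E - N + 2 and V(G) = P + 1 agree:

```python
# Hypothetical flow graph: node 2 is a loop condition whose body
# contains an if-then-else at node 3; nodes 4 and 5 are the two
# branch bodies, and node 6 is the exit node.
edges = [(1, 2), (2, 3), (2, 6), (3, 4), (3, 5), (4, 2), (5, 2)]
nodes = {n for edge in edges for n in edge}

# Method 2: V(G) = E - N + 2
v_edges = len(edges) - len(nodes) + 2

# Method 3: V(G) = P + 1, where a predicate node has 2+ outgoing edges
out_degree = {}
for src, _dst in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
predicates = sum(1 for d in out_degree.values() if d >= 2)
v_predicates = predicates + 1

assert v_edges == v_predicates == 3  # upper bound on basis-set size
```

Here V(G) = 7 - 6 + 2 = 3, and with two predicate nodes (2 and 3), P + 1 = 3 as well.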

✓ How do I compute cyclomatic complexity?

3. Deriving Test Cases


The basis path testing method can be applied to a procedural design or to source code. In
this section, we present basis path testing as a series of steps. The procedure average,
depicted in PDL in Figure 23.4 , will be used as an example to illustrate each step in the
test-case design method. Note that average, although an extremely simple algorithm,
contains compound conditions and loops. The following steps can be applied to derive the
basis set:

• Using the design or code as a foundation, draw a corresponding flow graph. A flow graph is created
using the symbols and construction rules presented in Section 23.4.1. Referring to the PDL for
average in Figure 23.4 , a flow graph is created by numbering those PDL statements that will be
mapped into corresponding flow graph nodes.
• Determine the cyclomatic complexity of the resultant flow graph. The cyclomatic complexity V(G)
is determined by applying the algorithms described in Section 23.4.2. It should be noted that V(G)
can also be determined without developing a flow graph, by counting all conditional statements in
the PDL (compound conditions count as two) and adding 1.
• Determine a basis set of linearly independent paths.
The value of V( G) provides the number of linearly independent paths through the program
control structure. In the case of procedure average, we expect to specify six paths:
Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2-. . .
Path 5: 1-2-3-4-5-6-8-9-2-. . .
Path 6: 1-2-3-4-5-6-7-8-9-2-. . .

The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path through the remainder
of the control structure is acceptable. It is often worthwhile to identify predicate nodes as
an aid in the derivation of test cases. In this case, nodes 2, 3, 5, 6, and 10 are predicate
nodes.

• Prepare test cases that will force execution of each path in the basis set.

Data should be chosen so that conditions at the predicate nodes are appropriately set as
each path is tested. Each test case is executed and compared to expected results. Once all
test cases have been completed, the tester can be sure that all statements in the program
have been executed at least once.
It is important to note that some independent paths (e.g., path 1 in our example) cannot be
tested in stand-alone fashion. That is, the combination of data required to traverse the path
cannot be achieved in the normal flow of the program. In such cases, these paths are tested
as part of another path test.

4. Graph Matrices

The procedure for deriving the flow graph and even determining a set of basis paths is
amenable to mechanization. A data structure, called a graph matrix, can be quite useful for
developing a software tool that assists in basis path testing. A graph matrix is a square
matrix whose size (i.e., number of rows and columns) is equal to the number of nodes on
the flow graph. Each row and column corresponds to an identified node, and matrix entries
correspond to connections (an edge) between nodes. A simple example of a flow graph and
its corresponding graph matrix [Bei90] is shown in Figure 23.6 . Referring to the figure,
each node on the flow graph is identified by numbers, while each edge is identified by
letters. A letter entry is made in the matrix to correspond to a connection between two
nodes. For example, node 3 is connected to node 4 by edge b. To this point, the graph
matrix is nothing more than a tabular representation of a flow graph. However, by adding a
link weight to each matrix entry, the graph
matrix can become a powerful tool for evaluating program control structure during testing.
The link weight provides additional information about control flow. In its simplest form,
the link weight is 1 (a connection exists) or 0 (a connection does not exist). But link
weights can be assigned other, more interesting properties:
• The probability that a link (edge) will be executed.
• The processing time expended during traversal of a link.
• The memory required during traversal of a link.
• The resources required during traversal of a link.
Beizer [Bei90] provides a thorough treatment of additional mathematical algorithms that can be
applied to graph matrices.
Using these techniques, the analysis required to design test cases can be partially or fully
automated.
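A minimal sketch of a graph matrix, using a hypothetical four-node flow graph with 1/0 link weights. The row-sum computation of cyclomatic complexity follows the treatment commonly attributed to Beizer:

```python
# Hypothetical four-node flow graph: rows and columns of the matrix
# are nodes, and an entry of 1 records an edge (the simplest link weight).
nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (2, 4), (3, 4)]

index = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    matrix[index[src]][index[dst]] = 1

# Each row sum minus 1 counts a node's "extra" outgoing connections;
# their total plus 1 gives the cyclomatic complexity.
connections = sum(max(sum(row) - 1, 0) for row in matrix)
assert connections + 1 == 2  # matches V(G) = E - N + 2 = 4 - 4 + 2
```

Replacing the 1/0 entries with probabilities or processing times turns the same structure into the weighted matrices described above.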

✓ What is a graph matrix and how do we extend it for use in testing?

D. BLACK-BOX TESTING
Black-box testing, also called behavioral testing or functional testing, focuses on the
functional requirements of the software. That is, black-box testing techniques enable you to
derive sets of input conditions that will fully exercise all functional requirements for a
program. Black-box testing is not an alternative to white-box techniques. Rather, it is a
complementary approach that is likely to uncover a different class of errors than white-box
methods.
Black box testing is a technique of software testing which examines the functionality of
software without peering into its internal structure or coding. The primary source of black
box testing is a specification of requirements that is stated by the customer.
In this method, the tester selects a function and gives input values to examine its functionality,
checking whether the function gives the expected output or not. If the function produces the
correct output, it passes the test; otherwise it fails. The test team reports the result to the
development team and then tests the next function. After testing of all functions is complete, if
there are severe problems, the software is given back to the development team for correction.

Black-box testing attempts to find errors in the following categories:


(1) Incorrect or missing functions,
(2) Interface errors,
(3) Errors in data structures or external database access,
(4) Behavior or performance errors, and
(5) Initialization and termination errors.
Unlike white-box testing, which is performed early in the testing process, black-box
testing tends to be applied during later stages of testing.
Because black-box testing purposely disregards control structure, attention is focused on
the information domain.
Tests are designed to answer the following questions:
• How is functional validity tested?
• How are system behavior and performance tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system operation?
By applying black-box techniques, you derive a set of test cases that satisfy the following
criteria [Mye79]: test cases that reduce, by a count that is greater than one, the number of
additional test cases that must be designed to achieve reasonable testing, and test cases that
tell you something about the presence or absence of classes of errors, rather than an error
associated only with the specific test at hand.

✓ What questions do black-box tests answer?

1. Graph-Based Testing Methods


The first step in black-box testing is to understand the objects that are modeled in software
and the relationships that connect these objects.
Once this has been accomplished, the next step is to define a series of tests that verify “all
objects have the expected relationship to one another” [Bei95]. Stated in another way, software
testing begins by creating a graph of important objects and their relationships and then
devising a series of tests that will cover the graph so that each object and relationship is
exercised and errors are uncovered.
To accomplish these steps, you begin by creating a graph—a collection of nodes
that represent objects, links that represent the relationships between objects, node weights
that describe the properties of a node (e.g., a specific data value or state behavior), and link
weights that describe some characteristic of a link.

The symbolic representation of a graph is shown in Figure 23.8a. Nodes are represented as
circles connected by links that take a number of different forms. A directed
link (represented by an arrow) indicates that a relationship moves in only one direction. A
bidirectional link, also called a symmetric link, implies that the relationship applies in both
directions. Parallel links are used when a number of different relationships are established
between graph nodes

As a simple example, consider a portion of a graph for a word-processing application (Figure
23.8b) where
Object #1 = newFile (menu selection)
Object #2 = documentWindow
Object #3 = documentText
Referring to the figure, a menu select on newFile generates a document window. The node
weight of documentWindow provides a list of the window attributes that are to be expected when
the window is generated. The link weight indicates that the window must be generated in less
than 1.0 second. An undirected link establishes a symmetric relationship between the newFile
menu selection and documentText, and parallel links indicate relationships between
documentWindow and documentText. In reality, a far more detailed graph would have to be
generated as a precursor to test-case design. You can then derive test cases by traversing the
graph and covering each of the relationships shown. These test cases are designed in an attempt
to find errors in any of the relationships. Beizer [Bei95] describes a number of behavioral testing
methods that can make use of graphs:

Transaction flow modeling. The nodes represent steps in some transaction (e.g., the steps
required to make an airline reservation using an online service), and the links represent the
logical connection between steps. For example, a data object flightInformationInput is followed
by the operation validationAvailabilityProcessing().

Finite state modeling. The nodes represent different user-observable states of the software
(e.g., each of the “screens” that appear as an order entry clerk takes a phone order), and the links
represent the transitions that occur to move from state to state (e.g., orderInformation is
verified during inventoryAvailabilityLook-up() and is followed by customerBillingInformation
input). The state diagram (Chapter 11) can be used to assist in creating graphs of this type.

Data flow modeling. The nodes are data objects, and the links are the transformations that occur
to translate one data object into another. For example, the node FICATaxWithheld ( FTW) is
computed from gross wages ( GW) using the relationship, FTW = 0.62 * GW.

Timing modeling: The nodes are program objects, and the links are the sequential connections
between those objects. Link weights are used to specify the required execution times as the
program executes.
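The graph-covering idea can be sketched as follows. The object graph is a simplified, hypothetical encoding of the word-processing example (undirected and parallel links are flattened into directed entries), and one test case is derived per relationship:

```python
# Hypothetical object graph: keys are objects, values are the objects
# they are related to (a directed encoding of the relationships).
graph = {
    "newFile": ["documentWindow", "documentText"],
    "documentWindow": ["documentText"],
    "documentText": ["documentWindow"],
}

# Derive one test case per link, so that every relationship in the
# graph is exercised at least once when the tests are run.
test_cases = [(src, dst) for src, dsts in graph.items() for dst in dsts]
assert len(test_cases) == 4
```

Each pair would then be turned into a concrete test (e.g., verifying that selecting newFile actually produces a documentWindow with the expected attributes).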

2. Equivalence Partitioning:[imp]
Equivalence partitioning is a black-box testing method that divides the input domain of a
program into classes of data from which test cases can be derived. An ideal test case single-
handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might
otherwise require many test cases to be executed before the general error is observed. Test-case
design for equivalence partitioning is based on an evaluation of equivalence classes for an input
condition. Using concepts introduced in the preceding section, if a set of objects can be linked
by relationships that are symmetric, transitive, and reflexive, an equivalence class is present
[Bei95]. An equivalence class represents a set of valid or invalid states for input conditions.
Typically, an input condition is either a specific numeric value, a range of values, a set of related
values, or a Boolean condition. Equivalence classes may be defined according to the following
guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are
defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes
are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class
are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined

By applying the guidelines for the derivation of equivalence classes, test cases for each input
domain data item can be developed and executed. Test cases are selected so that the largest
number of attributes of an equivalence class are exercised at once.
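Guideline 1 can be sketched concretely. The input condition and the function `accept_count` below are hypothetical: a count field that must lie in the range 1..100 yields one valid and two invalid equivalence classes, and one representative value is drawn from each:

```python
# Hypothetical input condition: a count field must lie in the range
# 1..100. Guideline 1 gives one valid and two invalid equivalence
# classes; one representative value is drawn from each class.
def accept_count(n):
    return 1 <= n <= 100

representatives = {
    "valid (1..100)": 50,
    "invalid (below range)": -5,
    "invalid (above range)": 250,
}

assert accept_count(representatives["valid (1..100)"])
assert not accept_count(representatives["invalid (below range)"])
assert not accept_count(representatives["invalid (above range)"])
```

Any value from a class is assumed to stand for the whole class, which is what lets three tests cover the entire input domain here.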

✓ How do I define equivalence classes for testing?

3. Boundary Value Analysis [just for understanding]


Boundary value analysis is a test-case design technique that complements equivalence
partitioning. Rather than selecting any element of an equivalence class, BVA leads to the
selection of test cases at the “edges” of the class.
Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise
the minimum and maximum numbers. Values just above and below minimum and maximum are
also tested.
3. Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature versus
pressure table is required as output from an engineering analysis program. Test cases should be
designed to create an output report that produces the maximum (and minimum) allowable
number of table entries.

4. If internal program data structures have prescribed boundaries (e.g., a table has a defined
limitof 100 entries), be certain to design a test case to exercise the data structure at its boundary.

Most software engineers intuitively perform BVA to some degree. By applying these guidelines,
boundary testing will be more complete, thereby having a higher likelihood for error detection
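Guideline 1 of BVA can be sketched as a small helper. The function `bva_values` is hypothetical, generating test values exactly at the range boundaries a and b and just above and just below each:

```python
# Hypothetical range-bounded input a..b (BVA guideline 1): test exactly
# at a and b, and just above and just below each boundary.
def bva_values(a, b, step=1):
    return [a - step, a, a + step, b - step, b, b + step]

print(bva_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

Combined with equivalence partitioning, these six values concentrate testing where off-by-one and comparison errors most often hide.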

4. Orthogonal Array Testing


Orthogonal array testing can be applied to problems in which the input domain is relatively small
but too large to accommodate exhaustive testing. The orthogonal array testing method is
particularly useful in finding region faults—an error category associated with faulty logic within
a software component.
To illustrate the use of the L9 orthogonal array, consider the send function for a fax
application. Four parameters, P1, P2, P3, and P4, are passed to the send function. Each
takes on three discrete values. For example, P1 takes on the values:
P1 = 1, send it now
P1 = 2, send it one hour later
P1 = 3, send it after midnight
P2, P3, and P4 would also take on values of 1, 2, and 3, signifying other send functions.

The orthogonal array testing approach enables you to provide good test coverage with far fewer
test cases than the exhaustive strategy. An L9 orthogonal array for the fax send function is
illustrated in Figure 23.10 .
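The savings can be shown with a standard L9 array (this particular row layout is one common form, used here as an illustration rather than a copy of Figure 23.10). The check confirms the defining orthogonality property: every pair of parameters covers all nine value combinations across just nine rows:

```python
from itertools import combinations, product

# A standard L9 orthogonal array: 9 test cases for four parameters
# (P1..P4), each taking values 1..3. Exhaustive testing would need
# 3**4 = 81 cases.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Orthogonality check: every pair of columns contains each of the
# nine value combinations exactly once across the nine rows.
for c1, c2 in combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert pairs == set(product((1, 2, 3), repeat=2))

print(len(L9), "test cases instead of", 3 ** 4)
```

Any fault triggered by a single parameter value, or by the interaction of two values, is therefore guaranteed to be exercised by at least one of the nine rows.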
MODULE:5
Object oriented modelling: use case: Actors, Scenarios & use cases,
Drawing use case diagrams

Object-oriented modeling (OOM) is an approach to modeling an application that is used at the beginning
of the software life cycle when using an object-oriented approach to software development.

The software life cycle is typically divided up into stages going from abstract descriptions of the problem to
designs then to code and testing and finally to deployment. Modeling is done at the beginning of the process.
Object-oriented modeling is typically done via use cases and abstract definitions of the most important
objects. The most common language used to do object-oriented modeling is the Object Management
Group's Unified Modeling Language (UML).

A use case diagram is a dynamic or behavior diagram in UML. Use case diagrams model the functionality
of a system using actors and use cases. Use cases are a set of actions, services, and functions that the system
needs to perform. In this context, a "system" is something being developed or operated, such as a web site.
The "actors" are people or entities operating under defined roles within the system.

Actors represent the role of the future users of the system. Actors model the user's perspective of the
system. Actors are located outside the system; therefore, in order to depict actors, it is important to define
the boundaries between actors and the system.

A Scenario is a formal description of the flow of events that occur during the execution of a use
case instance. It defines the specific sequence of events between the system and the external actors. It is
normally described in text and corresponds to the textual representation of the sequence diagram.

A use case is a written description of how users will perform tasks on your website. It outlines, from
a user's point of view, a system's behavior as it responds to a request. Each use case is represented as a
sequence of simple steps, beginning with a user's goal and ending when that goal is fulfilled.

Why Make Use Case Diagrams?

Use case diagrams are valuable for visualizing the functional requirements of a system that will translate
into design choices and development priorities.
They also help identify any internal or external factors that may influence the system and should be taken
into consideration.
They provide a good high level analysis from outside the system. Use case diagrams specify how the system
interacts with actors without worrying about the details of how that functionality is implemented.

Basic Use Case Diagram Symbols and Notations

System
Draw your system's boundaries using a rectangle that contains use cases. Place actors outside the system's
boundaries.

Use Case
Draw use cases using ovals. Label the ovals with verbs that represent the system's functions.

Actors
Actors are the users of a system. When one system is the actor of another system, label the actor system with
the actor stereotype.

Relationships
Illustrate relationships between an actor and a use case with a simple line. For relationships among use cases,
use arrows labeled either "uses" or "extends." A "uses" relationship indicates that one use case is needed by
another in order to perform a task. An "extends" relationship indicates alternative options under a certain use
case.
Purpose of Use Case Diagrams

The main purpose of a use case diagram is to portray the dynamic aspect of a system. It
accumulates the system's requirements, which include both internal as well as external
influences. It involves the persons, use cases, and other elements accountable for the
implementation of the diagram. It represents how an entity from the external environment can
interact with a part of the system.

Following are the purposes of a use case diagram given below:

1. It gathers the system's needs.


2. It depicts the external view of the system.
3. It recognizes the internal as well as external factors that influence the system.
4. It represents the interaction between the actors.

How to draw a Use Case diagram?

It is essential to analyze the whole system before starting with drawing a use case diagram, and then the
system's functionalities are found. And once every single functionality is identified, they are then
transformed into the use cases to be used in the use case diagram.

After that, we will enlist the actors that will interact with the system. An actor is a person or a
thing that invokes the functionality of a system. It may be another system or a private entity, and
it must be pertinent to the functionalities of the system with which it interacts.

Once both the actors and use cases are enlisted, the relation between the actor and the use
case/system is inspected. It identifies the number of times an actor communicates with the
system. Basically, an actor can interact multiple times with a use case or system at a particular
instance of time.

Following are some rules that must be followed while drawing a use case diagram:

1. A pertinent and meaningful name should be assigned to the actor or a use case of a system.
2. The communication of an actor with a use case must be defined in an understandable way.
3. Specified notations to be used as and when required.
4. The most significant interactions should be represented among the multiple interactions
between the use cases and actors.
Example of a Use Case Diagram

A use case diagram depicting the Online Shopping website is given below.

Here the Web Customer actor makes use of any online shopping website to purchase online. The
top-level use cases are as follows: View Items, Make Purchase, Checkout, and Client Register. The
View Items use case is used by the customer who searches for and views products. The Client
Register use case allows the customer to register with the website for availing gift vouchers,
coupons, or getting a private sale invitation. It is to be noted that Checkout is an included use
case, which is part of Make Purchase, and it is not available by itself.

The View Items use case is further extended by several use cases such as Search Items, Browse
Items, View Recommended Items, Add to Shopping Cart, and Add to Wish List. All of these
extended use cases provide some functions to customers, which allow them to search for an item.
Both View Recommended Items and Add to Wish List include the Customer Authentication use
case, as they require authenticated customers, whereas an item can be added to the shopping
cart without any user authentication.

Similarly, the Checkout use case also includes the following use cases, as shown below. It requires an
authenticated Web Customer, which can be done by login page, user authentication cookie ("Remember
me"), or Single Sign-On (SSO). SSO needs an external identity provider's participation, while Web site
authentication service is utilized in all these use cases.

The Checkout use case involves Payment use case that can be done either by the credit card and external
credit payment services or with PayPal.
Important tips for drawing a Use Case diagram

Following are some important tips that are to be kept in mind while drawing a use case diagram:

1. A simple and complete use case diagram should be articulated.


2. A use case diagram should represent the most significant interaction among the multiple interactions.
3. At least one module of a system should be represented by the use case diagram.
4. If the use case diagram is large or complex, it should be drawn in a more generalized form.

➢ Three Common Use Case Formats: -

• Brief - A one-paragraph summary, usually of the main success scenario, used during early
requirements analysis. Write two to four sentences per use case, capturing key activities and
key-extension handling. Expand the high-priority use cases by writing a two- to four-sentence
use case for each entry in the list. Briefly describe each use case's main scenario and most
important extensions, and include enough information to eliminate ambiguity for at least the
main scenario.

• Casual - Informal paragraph format: multiple paragraphs that cover various scenarios. A few
paragraphs (usually two) of text that is conversational in nature, written in terms of a generic
user role rather than specific people. The first paragraph starts with “An X wants to do Y to
achieve Z. They have already done W.” The second paragraph describes the interactions and
information flow, and ends with successful accomplishment of the goal.

• Fully dressed - All steps and variations are written in detail, and there are supporting sections,
such as preconditions and success guarantees. A carefully structured and detailed description
enabling a deep understanding of the goals, tasks, and requirements. Use case diagrams can be
embedded at any level.

Simple projects may only need a brief or casual use case; complex projects are likely to need a
fully dressed use case to define requirements. The format may also depend on the progress of
the project: the first use case may be brief, and become more detailed later when solution
owners need more specific and detailed guidance.

➢ The System Sequence Diagram (SSD): -

In software engineering, a system sequence diagram is a sequence diagram that shows, for a
particular scenario of a use case, the events that external actors generate, their order, and
possible inter-system events. System sequence diagrams are visual summaries of the individual
use cases.
 Use cases describe how external actors interact with the software system.
 A system sequence diagram is a picture that shows, for one particular scenario of a use
case, the events that external actors generate, their order, and inter-system events.
 During this interaction an actor generates system events to a system, usually requesting
some system operation to handle the event.

A system sequence diagram should specify and show the following:

• External actors
• Messages (methods) invoked by these actors
• Return values (if any) associated with previous messages
• Indication of any loops or iteration area

➢ Relationship between SSD and Use case

 An SSD shows system events for one scenario of a use case; therefore, it is generated
from inspection of a use case.

➢ How to name system events in SSD?
 System events should be expressed at the abstract level of
intention rather than in terms of the physical input device.

UML Interaction Diagram
As the name suggests, the interaction diagram portrays the interactions between distinct
entities present in the model. It amalgamates both the activity and sequence diagrams. An
interaction is a unit of the behavior of a classifier that provides context for
communication.

A set of messages that are interchanged between the entities to achieve certain specified
tasks in the system is termed as interaction. It may incorporate any feature of the classifier
of which it has access. In the interaction diagram, the critical component is the messages
and the lifeline.

In UML, the interaction overview diagram initiates the interaction between the objects
utilizing message passing. While drawing an interaction diagram, the entire focus is to
represent the relationship among different objects which are available within the system
boundary and the message exchanged by them to communicate with each other.

The message exchanged among objects is either to pass some information or to request
some information. And based on the information, the interaction diagram is categorized
into the sequence diagram, collaboration diagram, and timing diagram.

The sequence diagram envisions the order of the flow of messages inside the system by
depicting the communication between two lifelines, just like a time-ordered sequence of
events.

The collaboration diagram, which is also known as the communication diagram, represents
how lifelines connect within the system, whereas the timing diagram focuses on that instant
when a message is passed from one element to the other.

Purpose of an Interaction Diagram:


The interaction diagram helps to envision the interactive (dynamic) behavior of any system.
It portrays how objects residing in the system communicate and connect to each other. It
also provides us with a context for communication between the lifelines inside the system.

Following are the purposes of an interaction diagram given below:


1. To visualize the dynamic behavior of the system.
2. To envision the interaction and the message flow in the system.
3. To portray the structural aspects of the entities within the system.
4. To represent the order of the sequenced interaction in the system.
5. To visualize the real-time data and represent the architecture of an object-oriented
system.

How to draw an Interaction Diagram?


Since the main purpose of an interaction diagram is to visualize the dynamic behavior of the
system, it is important to understand what a dynamic aspect really is and how we can
visualize it. The dynamic aspect is nothing but a snapshot of the system at run time.
Before drawing an interaction diagram, the first step is to discover the scenario for which
the diagram will be made. Next, we identify the various lifelines that will be invoked in the
communication and classify each lifeline. After that, we investigate the connections and
how the lifelines are interrelated to each other.
Following are some things that are needed:
1. The total number of lifelines that will take part in the communication.
2. The sequence of the message flow among the several entities within the system.
3. The operators used to ease out the functionality of the diagram.
4. The several distinct messages that depict the interactions in a precise and clear way.
5. The organization and structure of the system.
6. The order of the sequence of the flow of messages.
7. The total number of time constructs of an object.
Use of an Interaction Diagram
The interaction diagram can be used for:
1. The sequence diagram is employed to investigate a new application.
2. The interaction diagram explores and compares the use of the collaboration diagram,
   the sequence diagram, and the timing diagram.
3. The interaction diagram represents the interactive (dynamic) behavior of the system.
4. The sequence diagram portrays the order of control flow from one element to the
   other elements inside the system, whereas the collaboration diagram is employed
   to get an overview of the object architecture of the system.
5. The interaction diagram models the system as a time-ordered sequence of interactions.
6. The interaction diagram systemizes the structure of the interactive elements.

1. Sequence Diagram
The sequence diagram represents the flow of messages in the system and is also termed an
event diagram. It helps in envisioning several dynamic scenarios. It portrays the
communication between any two lifelines as a time-ordered sequence of events, such that
these lifelines take part at run time. In UML, a lifeline is represented by a vertical dashed
line extending down the page, and the messages flowing between lifelines are shown as
horizontal arrows in time order from top to bottom. It incorporates iterations as well as
branching.
Purpose of a Sequence Diagram
1. To model high-level interaction among active objects within a system.
2. To model interaction among objects inside a collaboration realizing a use case.
3. It either models generic interactions or some certain instances of interaction.
Notations of a Sequence Diagram
a. Lifeline
An individual participant in the sequence diagram is represented by a lifeline. It is positioned
at the top of the diagram.
b. Actor
A role played by an entity that interacts with the subject is called an actor. It is outside the
scope of the system. It represents a role, which may involve human users and external
hardware or subjects. An actor may or may not represent a physical entity; it purely
depicts the role of an entity. Several distinct roles can be played by one actor, or vice versa.
c. Activation
It is represented by a thin rectangle on the lifeline. It describes the time period in which an
operation is performed by an element, such that the top and the bottom of the rectangle are
associated with the initiation and the completion time, respectively.

d. Messages
The messages depict the interaction between the objects and are represented by arrows.
They appear in sequential order on the lifeline. Messages and lifelines form the core of the
sequence diagram.
Following are the types of messages:

o Call Message: It defines a particular communication between the lifelines of an
  interaction, representing that the target lifeline has invoked an operation.
o Return Message: It defines a particular communication between the lifelines of an
  interaction that represents the flow of information from the receiver back to the
  caller of the corresponding call message.
o Self Message: It describes a communication between the lifelines of an interaction
  that represents a message invoked on the same lifeline.
o Recursive Message: A self message sent for a recursive purpose is called a recursive
  message. In other words, the recursive message is a special case of the self message,
  as it represents recursive calls.
o Create Message: It describes a communication between the lifelines of an
  interaction indicating that the target lifeline has been instantiated.
o Destroy Message: It describes a communication between the lifelines of an
  interaction that depicts a request to end the lifecycle of the target.
o Duration Message: It describes a communication between the lifelines of an
  interaction that portrays the time a message takes to pass while modeling a system.
e. Note
A note is the capability of attaching several remarks to the element. It basically carries
useful information for the modelers.
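The message types above can be illustrated in code. This is a hypothetical sketch showing how call, self, and return messages map onto ordinary method calls between two objects; the class and method names are assumptions made for illustration, not standard API.

```python
# Hypothetical sketch: sequence-diagram messages as method calls.

class OrderService:
    def place_order(self, item):
        # self message: the OrderService lifeline invokes its own operation
        total = self._compute_total(item)
        # return message: the reply flows back to the caller
        return f"ordered {item} for {total}"

    def _compute_total(self, item):
        return 10  # placeholder price for the sketch


class Customer:
    def __init__(self, service):
        self.service = service

    def buy(self, item):
        # call message: from the Customer lifeline to the OrderService lifeline
        return self.service.place_order(item)


customer = Customer(OrderService())
print(customer.buy("book"))  # -> ordered book for 10
```

In a sequence diagram, each of these three calls would appear as an arrow between (or onto) the lifelines, in top-to-bottom time order.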

Sequence Fragments:
1. Sequence fragments were introduced in UML 2.0, which makes it quite easy to create
   and maintain an accurate sequence diagram.
2. A fragment is represented by a box called a combined fragment, which encloses a part
   of the interaction inside a sequence diagram.
3. The type of fragment is shown by a fragment operator.

Types of fragments
Following are the types of fragments, enlisted below:

Operator – Fragment Type
alt – Alternative multiple fragments: only the fragment whose condition is true will execute.
opt – Optional: the fragment executes only if the supplied condition is true. It is similar to
      alt with only one trace.
par – Parallel: the fragments execute in parallel.
loop – Loop: the fragment runs multiple times, and the guard shows the basis of the
       iteration.
region – Critical region: only one thread can execute the fragment at a time.
neg – Negative: the fragment shows an invalid interaction.
ref – Reference: an interaction portrayed in another diagram. A frame is drawn to cover
      the lifelines involved in the communication; parameters and a return value can be
      specified.
sd – Sequence Diagram: used to surround an entire sequence diagram.

Example of a Sequence Diagram:


An example of a high-level sequence diagram for online bookshop is given below.
Any online customer can search for a book catalog, view a description of a particular book,
add a book to its shopping cart, and do checkout.
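The bookshop scenario above can be sketched as a time-ordered series of messages in code. The class and method names are illustrative assumptions, not part of any real shop API; each call corresponds to one message arrow in the sequence diagram, in order.

```python
# Illustrative sketch: the online-bookshop sequence as ordered calls.

class BookShop:
    def __init__(self):
        self.cart = []

    def search_catalog(self, keyword):
        # message 1: customer searches the book catalog
        return [f"{keyword} - volume 1"]

    def view_description(self, book):
        # message 2: customer views a particular book's description
        return f"description of {book}"

    def add_to_cart(self, book):
        # message 3: customer adds the book to the shopping cart
        self.cart.append(book)

    def checkout(self):
        # message 4: customer does checkout
        return f"checked out {len(self.cart)} book(s)"


shop = BookShop()
results = shop.search_catalog("UML")
shop.view_description(results[0])
shop.add_to_cart(results[0])
print(shop.checkout())  # -> checked out 1 book(s)
```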
Benefits of a Sequence Diagram
1. It explores real-time applications.
2. It depicts the message flow between different objects.
3. It is easy to maintain.
4. It is easy to generate.
5. It supports both forward and reverse engineering.
6. It can easily be updated as per new changes in the system.

The drawback of a Sequence Diagram


1. The sequence diagram can become complex when too many lifelines are involved.
2. An incorrect result may be produced if the order of the flow of messages changes.
3. Since each sequence needs distinct notations for its representation, the diagram may
   become more complex.
4. The type of sequence is decided by the type of message.

2. Communication Diagram
The communication diagram is used to show the relationship between the objects in a
system. Both the sequence and the communication diagrams represent the same
information but differently. Instead of showing the flow of messages, it depicts the
architecture of the object residing in the system as it is based on object-oriented
programming. An object consists of several features. Multiple objects present in the system
are connected to each other. The communication diagram, which is also known as a
collaboration diagram, is used to portray the object's architecture in the system.
Notations of a Communication Diagram
Following are the components of a communication diagram, enlisted below:
1. Objects: The representation of an object is done by an object symbol with its name
and class underlined, separated by a colon.
In the communication diagram, objects are utilized in the following ways:

o The object is represented by specifying its name and class.
o It is not mandatory for every class to appear.
o A class may constitute more than one object.
o In the communication diagram, first the object is created, and then its class is
  specified.
o To differentiate one object from another, it is necessary to name them.
2. Actors: In the communication diagram, the actor plays the main role as it invokes the
interaction. Each actor has its respective role and name. In this, one actor initiates
the use case.
3. Links: The link is an instance of association, which associates the objects and actors.
It portrays a relationship between the objects through which the messages are sent.
It is represented by a solid line. The link helps an object to connect with or navigate
to another object, such that the message flows are attached to links.
4. Messages: It is a communication between objects which carries information and
includes a sequence number, so that the activity may take place. It is represented by
a labeled arrow, which is placed near a link. The messages are sent from the sender
to the receiver, and the direction must be navigable in that particular direction. The
receiver must understand the message.
When to use a Communication Diagram?

The collaboration diagram is used when it is essential to depict the relationship between
objects. Both the sequence and communication diagrams represent the same information,
but the way of portraying it is quite different. Communication diagrams are best suited for
analyzing use cases.
Following are some of the use cases enlisted below for which the communication diagram is
implemented:

1. To model collaboration among the objects or roles that carry the functionalities of
use cases and operations.
2. To model the mechanism inside the architectural design of the system.

3. To capture the interactions that represent the flow of messages between the objects
and the roles inside the collaboration.
4. To model different scenarios within the use case or operation, involving a
collaboration of several objects and interactions.
5. To support the identification of objects participating in the use case.
6. In the communication diagram, each message carries a sequence number, such that
   the top-level message is numbered 1, and so on. Messages sent during the same call
   are denoted with the same decimal prefix but with different suffixes (1, 2, etc.) as
   per their occurrence.
Steps for creating a Communication Diagram
1. Determine the behavior for which the realization and implementation are specified.
2. Discover the structural elements that are class roles, objects, and subsystems for
performing the functionality of collaboration.
o Choose the context of an interaction: system, subsystem, use case, and
operation.
3. Think through alternative situations that may be involved.
o Implementation of a communication diagram at an instance level, if needed.
o A specification level diagram may be made in the instance level sequence
diagram for summarizing alternative situations.
Example of a Communication Diagram

Benefits of a Communication Diagram


1. The communication diagram is also known as the collaboration diagram.
2. It mainly emphasizes the structural aspect of an interaction diagram, i.e., how the
   lifelines are connected.
3. The syntax of a communication diagram is similar to that of the sequence diagram;
   the difference is that the lifelines do not have tails.
4. The sequencing of the messages is represented by numbering each individual
   message.
5. The communication diagram is semantically weaker than the sequence diagram.
6. The object diagram is a special case of the communication diagram.
7. It focuses on the elements, not on the message flow as sequence diagrams do.
8. Since communication diagrams are not that expensive to produce, a sequence
   diagram can be directly converted into a communication diagram.
9. Some information may be lost when deriving a communication diagram from a
   sequence diagram.
Drawback of a Communication Diagram
1. Multiple objects residing in the system can make a communication diagram complex,
   as it becomes quite hard to explore the objects.
2. It is a time-consuming diagram.
3. After the program terminates, the objects are destroyed.
4. As the object state changes momentarily, it becomes difficult to keep an eye on
   every single change that has occurred inside an object of the system.

Similarities Between Sequence and Communication Diagram


1. In the Unified Modelling Language, both the sequence diagram and the
   communication diagram are used as interaction diagrams.
2. Both diagrams give details about the behavioural aspects of the system.

Differences Between Sequence and Communication diagram:

1. The sequence diagram is the UML representation used to visualize the sequence of
   calls in a system that perform a specific functionality, whereas the collaboration
   diagram is the UML representation used to visualize the organization of the objects
   and their interactions.
2. Sequence diagrams are used to represent the sequence of messages flowing from
   one object to another, whereas collaboration diagrams are used to represent the
   structural organization of the system and the messages that are sent and received.
3. The sequence diagram is used when the time sequence is the main focus, whereas
   the collaboration diagram is used when the object organization is the main focus.
4. Sequence diagrams are better suited for analysis activities, whereas collaboration
   diagrams are better suited for depicting simpler interactions among a smaller
   number of objects.
Class Diagrams
What is class diagrams?
In software engineering, a class diagram in the Unified Modeling Language (UML)
is a type of static structure diagram that describes the structure of a system by
showing the system's classes, their attributes, operations (or methods), and the
relationships among objects.
Purpose of Class Diagrams
The purpose of class diagram is to model the static view of an application. Class
diagrams are the only diagrams which can be directly mapped with object-
oriented languages and thus widely used at the time of construction.

UML diagrams like the activity diagram and sequence diagram can only give the sequence
flow of the application; the class diagram is a bit different. It is the most popular UML
diagram in the coder community.

The purpose of the class diagram can be summarized as −

• Analysis and design of the static view of an application.
• Describe the responsibilities of a system.
• Base for component and deployment diagrams.
• Forward and reverse engineering.

Simple class diagram :


In the example, a class called “loan account” is depicted. Classes in class diagrams
are represented by boxes that are partitioned into three:

1. The top partition contains the name of the class.
2. The middle partition contains the class’s attributes.
3. The bottom partition shows the possible operations that are associated with the
   class.
The example shows how a class can encapsulate all the relevant data of a
particular object in a very systematic and clear way. A class diagram is a collection
of classes similar to the one above.
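The three partitions map directly onto a class definition in code. Below is a minimal sketch of the "loan account" example; the attribute and method names are assumptions made for illustration, since the figure itself is not reproduced here.

```python
# Sketch: the three partitions of a UML class box in code.

class LoanAccount:                      # top partition: the class name
    def __init__(self, account_id, balance):
        # middle partition: the class's attributes
        self.account_id = account_id
        self.balance = balance

    # bottom partition: the operations associated with the class
    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount


acct = LoanAccount("L-001", 100)
acct.deposit(50)
acct.withdraw(30)
print(acct.balance)  # -> 120
```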

Common class diagram notations and Relationships

Classes are interrelated to each other in specific ways. In particular, relationships


in class diagrams include different types of logical connections. The following are
such types of logical connections that are possible in UML:
• Association
• Directed Association
• Reflexive Association
• Multiplicity
• Aggregation
• Composition
• Inheritance/Generalization
• Realization
Association

Association is a broad term that encompasses just about any logical connection or
relationship between classes. For example, passenger and airline may be linked as
above:

Directed Association

Directed association refers to a directional relationship represented by a line with an
arrowhead. The arrowhead depicts a container-contained directional flow.

Multiplicity

Multiplicity is the active logical association when the cardinality of a class in relation
to another is being depicted. For example, one fleet may include multiple airplanes,
while one commercial airplane may contain zero to many passengers. The notation
0..* in the diagram means “zero to many”.
Reflexive Association

This occurs when a class may have multiple functions or responsibilities. For
example, a staff member working in an airport may be a pilot, aviation
engineer, a ticket dispatcher, a guard, or a maintenance crew member. If the
maintenance crew member is managed by the aviation engineer there could
be a managed by relationship in two instances of the same class.

Aggregation

Aggregation refers to the formation of a particular class as a result of one class being
aggregated or built as a collection. For example, the class “library” is made up of one
or more books, among other materials. In aggregation, the contained classes are not
strongly dependent on the lifecycle of the container: in the same example, books will
remain books even when the library is dissolved. To show aggregation in a diagram,
draw a line from the parent class to the child class with a diamond shape near the
parent class.
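The independent lifecycles of aggregation can be shown in code. In this sketch (class names are illustrative), the books are created outside the library and merely referenced by it, so they survive when the library is discarded.

```python
# Sketch: aggregation — the container references parts it does not own.

class Book:
    def __init__(self, title):
        self.title = title


class Library:
    def __init__(self, books):
        # aggregation: the library holds references to externally created books
        self.books = books


books = [Book("UML Distilled"), Book("Clean Code")]
library = Library(books)
del library              # the library is "dissolved" ...
print(books[0].title)    # ... but the books still exist -> UML Distilled
```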
Composition

The composition relationship is very similar to the aggregation relationship, with the
only difference being its key purpose of emphasizing the dependence of the contained
class on the life cycle of the container class. That is, the contained class will be
obliterated when the container class is destroyed. For example, a shoulder bag’s side
pocket will also cease to exist once the shoulder bag is destroyed.

To show a composition relationship in a UML diagram, use a directional line
connecting the two classes, with a filled diamond shape adjacent to the container
class and the directional arrow pointing to the contained class.
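In code, composition means the container creates and owns its part. In this sketch (the ShoulderBag and SidePocket names mirror the example but are assumptions), the pocket exists only inside the bag, so destroying the bag destroys the pocket with it.

```python
# Sketch: composition — the container creates and owns the part.

class SidePocket:
    def __init__(self, capacity):
        self.capacity = capacity


class ShoulderBag:
    def __init__(self):
        # composition: the pocket is created inside the bag and is not
        # reachable from anywhere else, so it shares the bag's lifecycle
        self.pocket = SidePocket(capacity=3)


bag = ShoulderBag()
print(bag.pocket.capacity)  # -> 3
# With no outside reference to the pocket, discarding the bag
# discards the pocket along with it.
```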

Inheritance / Generalization

Inheritance refers to a type of relationship wherein one associated class is a child of
another by virtue of assuming the same functionalities of the parent class. In other
words, the child class is a specific type of the parent class. To show inheritance in a
UML diagram, a solid line from the child class to the parent class is drawn with an
unfilled arrowhead.
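A minimal sketch of inheritance, with illustrative class names: the child assumes the parent's functionality and adds its own.

```python
# Sketch: inheritance — the child is a specific type of the parent.

class Vehicle:
    def describe(self):
        return "a vehicle"


class Car(Vehicle):          # Car is a specific type of Vehicle
    def honk(self):
        return "beep"


car = Car()
print(car.describe())  # inherited from the parent -> a vehicle
print(car.honk())      # defined by the child -> beep
```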
Realization

Realization denotes the implementation of the functionality defined in one class by
another class. To show the relationship in UML, a dashed line with an unfilled
(hollow) arrowhead is drawn from the class that implements the functionality to the
class that defines it. In the example, the printing preferences that are set using the
printer setup interface are implemented by the printer.
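Realization can be sketched with Python's abc module standing in for a UML interface; the PrinterSetup and Printer names mirror the example above but the method name is an assumption.

```python
# Sketch: realization — one class defines the operation, another implements it.

from abc import ABC, abstractmethod


class PrinterSetup(ABC):              # defines the required functionality
    @abstractmethod
    def print_document(self, doc):
        ...


class Printer(PrinterSetup):          # realizes (implements) the functionality
    def print_document(self, doc):
        return f"printing {doc}"


p = Printer()
print(p.print_document("report"))  # -> printing report
```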

How to Draw a Class Diagram?


Class diagrams are the most popular UML diagrams used for construction
of software applications. It is very important to learn the drawing procedure
of class diagram.
Class diagrams have a lot of properties to consider while drawing but here
the diagram will be considered from a top level view.
Class diagram is basically a graphical representation of the static view of
the system and represents different aspects of the application. A collection
of class diagrams represent the whole system.
The following points should be remembered while drawing a class diagram

• The name of the class diagram should be meaningful to describe the
aspect of the system.
• Each element and their relationships should be identified in advance.
• Responsibility (attributes and methods) of each class should be clearly
identified
• For each class, minimum number of properties should be specified, as
unnecessary properties will make the diagram complicated.
• Use notes whenever required to describe some aspect of the diagram.
At the end of the drawing it should be understandable to the
developer/coder.
• Finally, before making the final version, the diagram should be drawn
on plain paper and reworked as many times as possible to make it
correct.
The following diagram is an example of an Order System of an application.
It describes a particular aspect of the entire application.
• First of all, Order and Customer are identified as the two elements of
the system. They have a one-to-many relationship because a
customer can have multiple orders.
• Order class is an abstract class and it has two concrete classes
(inheritance relationship) SpecialOrder and NormalOrder.
• The two inherited classes have all the properties as the Order class.
In addition, they have additional functions like dispatch () and receive
().
The following class diagram has been drawn considering all the points
mentioned above.
Where to use?
Class diagram is a static diagram and it is used to model the static view of a system. The
static view describes the vocabulary of the system.
Class diagram is also considered as the foundation for component and deployment
diagrams. Class diagrams are not only used to visualize the static view of the system
but they are also used to construct the executable code for forward and reverse
engineering of any system.
Generally, UML diagrams are not directly mapped with any object-oriented programming
languages but the class diagram is an exception.
Class diagram clearly shows the mapping with object-oriented languages such as Java,
C++, etc. From practical experience, class diagram is generally used for construction
purpose.
In a nutshell it can be said, class diagrams are used for −
• Describing the static view of the system.
• Showing the collaboration among the elements of the static view.
• Describing the functionalities performed by the system.
• Construction of software applications using object oriented languages.
Activity Diagrams :-

We use Activity Diagrams to illustrate the flow of control in a system and


refer to the steps involved in the execution of a use case. We model
sequential and concurrent activities using activity diagrams. So, we basically
depict workflows visually using an activity diagram. An activity diagram
focuses on condition of flow and the sequence in which it happens. We
describe or depict what causes a particular event using an activity diagram.

An activity diagram is very similar to a flowchart.

Difference between an Activity diagram and a Flowchart –

Flowcharts were invented earlier than activity diagrams, and non-programmers use
flowcharts to model workflows. Programmers use activity diagrams (an advanced
version of the flowchart) to depict workflows; a developer uses an activity diagram to
understand the flow of a program at a high level.

Difference between a Use case diagram and an Activity diagram

An activity diagram is used to model the workflow depicting conditions, constraints,
and sequential and concurrent activities. On the other hand, the purpose of a use
case is to depict the functionality, i.e., what the system does and not how it is done.
So in simple terms, an activity diagram shows ‘How’ while a use case shows ‘What’
for a particular system.
The levels of abstraction also vary between them. An activity diagram can illustrate
anything from a business process (high-level) down to a stand-alone algorithm
(ground-level). Use cases, in contrast, stay at a high level of abstraction: they show
only what the system does, not the details of its implementation.
Figure – an activity diagram for an emotion based music player
The above figure depicts an activity diagram for an emotion based music
player which can also be used to change the wallpaper.
Activity Diagram Notations –

1. Initial State – The starting state before an activity takes place is


depicted using the initial state.

Figure – notation for initial state or start state


A process can have only one initial state unless we are depicting
nested activities. We use a black filled circle to depict the initial
state of a system. For objects, this is the state when they are
instantiated. The Initial State from the UML Activity Diagram marks
the entry point and the initial Activity State.
For example – Here the initial state is the state of the system before
the application is opened.

Figure – initial state symbol being used

2. Action or Activity State – An activity represents execution of an


action on objects or by objects. We represent an activity using a
rectangle with rounded corners. Basically any action or event that
takes place is represented using an activity.

Figure – notation for an activity state


For example – Consider the previous example of opening an application; opening
the application is an activity state in the activity diagram.

Figure – activity state symbol being used


3. Action Flow or Control flows – Action flows or Control flows are
also referred to as paths and edges. They are used to show the
transition from one activity state to another.

Figure – notation for control Flow


An activity state can have multiple incoming and outgoing action
flows. We use a line with an arrow head to depict a Control Flow. If
there is a constraint to be adhered to while making the transition it
is mentioned on the arrow.
Consider the example – Here both the states transit into one final
state using action flow symbols i.e. arrows.

Figure – using action flows for transitions

4. Decision node and Branching – When we need to make a


decision before deciding the flow of control, we use the decision
node.

Figure – notation for decision node


The outgoing arrows from the decision node can be labelled with conditions or
guard expressions. A decision node always includes two or more output arrows.
Figure – an activity diagram using decision node

5. Guards – A Guard refers to a statement written next to a decision


node on an arrow sometimes within square brackets.

Figure – guards being used next to a decision node


The statement must be true for the control to shift along a particular
direction. Guards help us know the constraints and conditions
which determine the flow of a process.
6. Fork – Fork nodes are used to support concurrent activities.

Figure – fork notation


We use a fork node when both activities get executed concurrently, i.e., no
decision is made before splitting the activity into two parts. Both parts need
to be executed in the case of a fork.

We use a rounded solid rectangular bar to represent a Fork


notation with incoming arrow from the parent activity state and
outgoing arrows towards the newly created activities.
For example: In the example below, the activity of making coffee
can be split into two concurrent activities and hence we use the fork
notation.

Figure – a diagram using fork


7. Join – Join nodes are used to support concurrent activities
converging into one. For join notations we have two or more
incoming edges and one outgoing edge.

Figure – join notation

For example – When both activities i.e. steaming the milk and
adding coffee get completed, we converge them into one final
activity.

Figure – a diagram using join notation
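The fork and join above can be sketched with threads: the two coffee-making activities run concurrently (fork) and the flow waits for both before the final activity (join). The function names follow the example; using threading is just one way to illustrate the idea.

```python
# Sketch: fork/join — two concurrent activities converging into one.

import threading

results = []

def steam_milk():
    results.append("milk steamed")

def add_coffee():
    results.append("coffee added")

# fork: both activities start concurrently, with no prior decision
t1 = threading.Thread(target=steam_milk)
t2 = threading.Thread(target=add_coffee)
t1.start()
t2.start()

# join: wait for both flows to complete before continuing
t1.join()
t2.join()
print(sorted(results))  # -> ['coffee added', 'milk steamed']
```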


8. Merge or Merge Event – Scenarios arise when activities which are
not being executed concurrently have to be merged. We use the
merge notation for such scenarios. We can merge two or more
activities into one if the control proceeds onto the next activity
irrespective of the path chosen.

Figure – merge notation


For example – In the diagram below: we can’t have both sides
executing concurrently, but they finally merge into one. A number
can’t be both odd and even at the same time.

Figure – an activity diagram using merge notation


9. Swimlanes – We use swimlanes to group related activities into one column
or one row. Swimlanes can be vertical or horizontal, and they add modularity
to the activity diagram. It is not mandatory to use swimlanes, but they
usually give more clarity to the diagram. It’s similar to creating a function in
a program: not mandatory, but a recommended practice.

Figure – swimlanes notation


We use a rectangular column to represent a swimlane as shown in
the figure above.
For example – Here different set of activities are executed based on
if the number is odd or even. These activities are grouped into a
swimlane.

Figure – an activity diagram making use of swimlanes


10. Time Event –

Figure – time event notation


We can have a scenario where an event takes some time to
complete. We use an hourglass to represent a time event.
For example – Let us assume that the processing of an image
takes a lot of time. Then it can be represented as shown
below.

Figure – an activity diagram using time event

11. Final State or End State – The state which the system reaches
when a particular process or activity ends is known as a final state
or end state. We use a filled circle within a circle to represent the
final state. A system or a process can have multiple final states.

Figure – notation for final state


How to Draw an activity diagram –

1. Identify the initial state and the final states.


2. Identify the intermediate activities needed to reach the final state
from the initial state.
3. Identify the conditions or constraints which cause the system to
change control flow.
4. Draw the diagram with appropriate notations.

Figure – an activity diagram


The above diagram prints the number if it is odd otherwise it subtracts one
from the number and displays it.
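The activity diagram just described can be written as a short function: the decision node checks the guard [number is odd]; if it holds, the number is displayed, otherwise one is subtracted first.

```python
# Sketch: the odd/even activity diagram as code.

def process(number):
    if number % 2 == 1:      # decision node with guard [odd]
        return number        # activity: display the number
    return number - 1        # activity: subtract one, then display


print(process(7))   # -> 7
print(process(8))   # -> 7
```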

Uses of an Activity Diagram –

• Dynamic modelling of the system or a process.

• Illustrate the various steps involved in a UML use case.

• Model software elements like methods, operations and functions.

• We can use Activity diagrams to depict concurrent activities easily.

• Show the constraints, conditions and logic behind algorithms.
