
MODULE 1

INTRODUCTION TO SOFTWARE ENGINEERING


1.1 Introduction
* As the importance of software has risen, millions of computer programmes have been
developed, and most of them need to be corrected, adapted, and enhanced over time.
Nobody could have predicted that these maintenance tasks would consume more time and
effort than developing brand-new software. People in the software industry have therefore
worked hard to create technologies that make it easier, faster, and cheaper to build and
maintain high-quality computer programmes. However, the software community has not yet
succeeded in developing a single technology that accomplishes all of these goals. To
accomplish this, we require a framework that encompasses a process, a set of methods, and
an array of tools. The term "Software Engineering" refers to this particular framework.

1.2 The Evolving Role of Software


* Today, software takes on a dual role:
=> As a Product
=> A vehicle for delivering a product
(1) As a Product:
 It delivers the computing potential embodied by computer hardware or, more
broadly, by a network of computers made accessible through local hardware.
 A programme is an information transformer if it produces, manages, acquires,
modifies, or transmits information; this is true whether the programme resides in a
mobile phone or a supercomputer.

(2) A vehicle for delivering a product:


* Software provides one of the most important products of our time, information,
by doing things like:
=> It transforms personal data [for example, an individual's financial transactions]
=> It manages business information to improve competitiveness
=> It serves as a gateway to global information networks [for example, the
internet]
=> It provides a means of acquiring information in all of its forms

1.3 Adaptation of Software Engineering Practices:


* Extraordinary advances in hardware performance,
=> deep changes in computing design,
=> vast increases in memory and storage capacity,
=> exotic input and output possibilities

 All of these factors have contributed to the evolution of increasingly complex and
sophisticated computer-based systems.
 Sophistication and complexity yield desirable outcomes if the system works as
intended, but can pose serious challenges if the opposite is true.
 Large software companies now employ entire groups of specialists, each of whom
works on a specific aspect of the technology needed to complete a single application.
 The questions once asked of the lone programmer are the same questions asked when
contemporary computer-based systems are built.

They are,
(1) What factors contribute to the protracted time required to finish software?
(2) What aspects of development add to the staggering costs?
(3) Why is it that we are unable to detect all of the faults in the software before releasing
it to our clients?
(4) Why do we spend so much time and effort maintaining the programmes that are
currently in place?

(5) Why do we still struggle to accurately measure the amount of progress made when
the software is being built and maintained?

 The fact that these inquiries are being made demonstrates that businesses are
worried about software and its creation process.
 This worry has prompted the growth of software engineering practices.

1.4 Software:
Definition:
“It is a set of instructions that, when executed, provide desired features, functions,
and performance”
(Or)
“It is the data structures that enable the programs to adequately manipulate
information”
(Or)
“It is the documents that describe the operation and use of the programs”

1.5 Software Characteristics:


The following are some ways in which software features differ from hardware ones:
(1) Software is not produced in the conventional sense; rather, it is developed (or
engineered).
(2) In both software and hardware production, high quality is accomplished through
good design.
(3) Whereas software rarely has quality issues, hardware often does throughout the
production phase.
(4) Although human beings are essential to the success of both endeavours, the
correlation between manpower and output is very different.
(5) In both cases, something must be built.
(6) There is an investment of time and energy in both

(2) Software doesn’t “wear out”


Once problems are addressed, the failure rate stabilises for a while.
Dust, Vibration, Abuse, Extreme Temperatures, and Time all add up to shorten the
lifespan of hardware components.

This leads to an increase in the failure rate; alternatively, the hardware "wears out." The
"Bathtub Curve" below illustrates this point.

[Figure: the "bathtub curve" for hardware - failure rate versus time, with an "infant
mortality" region at the start and a "wear out" region at the end]

* The failure rate curve for software should take the form of an "idealised curve,"
since software is not susceptible to the environmental conditions that cause hardware to wear out.

[Figure: idealised versus actual failure curves for software - failure rate versus time;
each change produces a spike in the actual curve (increased failure rate due to side
effects) before it settles again]
* In the beginning stages of a program's life, significant failure rates are caused by
mistakes that have not yet been found.
* The curve will become flatter once these faults have been rectified (provided that no
new flaws are introduced).
Therefore, it is abundantly evident that:

 Software undergoes deterioration rather than complete obsolescence;


modifications occur throughout its lifespan; and the implementation of changes
introduces errors.
 Before the curve can return to its initial steady-state failure rate, another
modification is requested, inducing another surge in the curve and escalating the
failure rate. In this way, repeated change gradually degrades the software's
quality.
 Hardware components are replaceable with spare parts when they fail; however,
software components are not replaceable. This distinction renders software
maintenance a more challenging endeavour.
 Hardware maintenance is less complex in comparison to software maintenance.

(3) Although the industry is moving toward component – based construction, most
software continues to be custom built
In the world of hardware, component reuse is a natural element of the engineering
process; in the world of software, reuse on a broad scale is only beginning to be
achieved.
* It is recommended to build and develop a reusable software component that may be
utilized in a variety of different programmes.
Example:
Present-day user interfaces are constructed using reusable components, which facilitate
the following:
=> Window creation;
=> Pull-down menus; and
=> An extensive range of interaction mechanisms.

1.6 The Changing nature of Software


* There are seven primary classifications of computer software, each of which presents
its own unique set of obstacles to software developers.
(1) System Software: It is a collection of programmes that were written in order to
provide support for other programmes.

=> Some system software processes information structures that are complex but
determinate, whereas other system software processes data that is largely
indeterminate.
As an example: Compilers, editors, and utilities for managing and organizing files

Characteristics:
=> Extensive connection with computer hardware
=> Prolonged and intensive use by a number of individuals
=> Structures of data that are complex
=> Multiple connections to the outside world
=> Concurrent operation (working in parallel at the same time)

(2) Application Software:


* It is a self-contained application that caters to a particular organizational requirement.
* This piece of software can be used for either business or technical decision making. In
addition to this, it can be used to exercise real-time control over the operations of a
corporation.
Here's an example:
=> Processing of sales transactions at the point of sale
=> Real-time monitoring and management of the production process

(3) Engineering / Scientific Software:


Several applications utilise this software, including but not limited to
 astronomy,
 molecular biology, and
 automated manufacturing.

Contemporary scientific and engineering applications are increasingly diverging from


conventional numerical algorithms.
Aspects of system simulation, computer-aided design, and other interactive applications
have begun to incorporate real-time functionality.

(4) Embedded Software:



* This software is incorporated into a product or system and is employed to execute
and regulate functionalities and features for both the system and the end user. For
example:

=> The keypad control of a microwave oven
=> A variety of digital functions in an automobile, including fuel control,
dashboard displays, and braking systems.

(5) Product-line Software:


* It is intended to give a certain competence, which may then be utilised by a wide
variety of consumers.
* This software addresses both the general consumer market as well as the more
specialized niche markets.
Here's an example:
=> Word processing
=> Spreadsheets
=> Presentation software
=> Computer graphics
=> Multimedia, entertainment, and others, etc.

(6) Web Applications:


* The software's uses are extremely varied.
* Web applications are evolving into complex computing environments that
incorporate corporate databases and business applications, in addition to providing
end users with stand-alone features, computing capabilities, and content.

(7) Artificial Intelligence Software:


* This software employs methods that are not numerical in nature in order to
solve complex problems. Applications that fall under this category include:
1) Robotics;
2) Expert systems;
3) Pattern recognition; and
4) Theorem proving and game playing.

************************************************************************
Note:
Legacy Software:
* Decades have passed since the development of this software system [i.e., older
programmes], and it has been continuously updated to accommodate shifting business
requirements and computing platforms.
* Such system proliferation causes headaches for large organisations due to the high cost
of maintenance and the inherent risk associated with their evolution.

************************************************************************
1.7 Software Myths:
* Software myths are beliefs about software and the process used to build it; they can
be traced back to the earliest days of computing.
* These myths have a number of characteristics that have made them
insidious [i.e. proceeding inconspicuously but harmfully].
* For example, myths give the impression of being factual claims that are rational [and
sometimes do contain aspects of truth].

(1) Management Myths:


* Managers in most fields are frequently under pressure to keep budgets in check, prevent
schedules from falling behind, and enhance quality
* Software managers frequently cling to the assumption that certain software myths are
true in the hopes that this will reduce the amount of pressure they are under.

Myth 1:
There is already a book in our possession that is loaded with guidelines and processes
for the construction of software....Won't that provide the information that my people
require to make informed decisions?
Reality:
=> The book of standards might very well exist......However, is it put to use?
=> Are those who work in the software industry aware of its existence?

=> Does it adhere to the practices of contemporary software engineering?


=> Does it have everything? Is it amenable to change?
=> Is it possible to streamline it in order to shorten the amount of time it takes to provide
while still concentrating on the quality?
* The answer to each of these questions will most likely be "No" in the majority of
instances.

Myth 2:
If we fall behind schedule, we have the ability to add more programmers and make up
the time [this strategy is sometimes referred to as the "Mongolian Horde Concept"].
Reality:
* Adding personnel to a software project that has already been delayed results in the
project being further behind schedule.
* Time must be spent educating the newly added members, which reduces the time spent
on productive development.
* While it is possible to add personnel, it is crucial that such additions are conducted in a
methodical and coordinated fashion.
Myth 3: By delegating the software development to a third-party firm, I can simply
sit back and unwind while the project is executed.
Reality:
Failure to comprehend internal software project management and control will inevitably
result in difficulties for an organisation when it attempts to outsource software projects.

(2) Customer Myths:

The clientele requesting computer software might consist of:
=> A person
=> A technical crew
=> A sales or marketing department (Or)
=> An external provider
Customer misconceptions regarding software are common.
* Myths result in erroneous anticipations (on the part of the clientele) and, ultimately,
developer discontentment

Myth 1:
It suffices to commence programme writing with a broad statement of objectives.
The details can be completed later.
Reality:
 A statement that is equivocal, or has two meanings, gives rise to a multitude of
complications.
 However, unambiguous statements can only be generated via consistent and
efficient communication between the client and the developer.
 Consequently, it is not always feasible to formulate statements of requirements
that are exhaustive and consistent.

Myth 2:

Project requirements are subject to constant evolution; however, modifications can


be readily integrated due to the adaptable nature of software.
Reality:
 Early requests for requirements modifications, prior to the commencement of
design or code, result in a comparatively minimal cost impact.
 Requests for requirement modifications made after design (or code) has
commenced have an excessively high cost impact.
 The proposed change has the potential to induce disruption and upheaval, which
may necessitate the allocation of supplementary resources and substantial
modifications to the design.

(3) Practitioners Myths


Myth 1:
After completing the program's development and testing, our work is complete.
Reality:
* According to industry data, between sixty and eighty percent of all effort expended
on software occurs after the initial delivery of the product to the client.

Myth 2:
Before the programme is operational, it is impossible for me to evaluate its quality.
Reality:
* One of the most effective software quality mechanisms, the formal technical review,
can be applied from the project's inception.
* Software quality reviews exhibit greater efficacy than testing in identifying certain
categories of software errors.

Myth 3:
The working programme is the sole deliverable work product that ensures a
successful undertaking.

Reality:
A working program is only one component of a software configuration, which comprises a
multitude of elements.
Documentation provides a foundation for effective software engineering and guidance
for software support.

Myth 4:
Inevitably, software engineering will force us to generate copious amounts of
superfluous documentation, which will impede our progress.

Reality:
Software engineering focuses on producing high-quality software rather than mere
document creation.
Thus, improved quality results in decreased rework.
* Decreased rework results in expedited delivery
In closing,
Wherever software myths persist among software professionals, they indirectly
contribute to the propagation of ineffective management and technical practices.

However, acknowledging the realities of software is the initial stride in developing


pragmatic resolutions for software engineering.

A GENERIC VIEW OF PROCESS

1.8 Software Engineering – A Layered Technology:

(i) Quality Focus:


* Every engineering approach, software engineering included, must be founded upon an
organisational dedication to quality. Total quality management facilitates ongoing
process improvement, which in turn fosters the creation of efficient software engineering
methodologies. A quality-oriented approach serves as the foundation that bolsters
software engineering.
(ii) Process:
The process layer serves as the cornerstone of software engineering, facilitating the
logical and expeditious development of computer software by acting as a cohesive agent
between the technology layers.
It establishes a framework that is necessary for the following:
=> Effective delivery of software engineering technology

* The software process serves as the foundation for management control.


It provides the framework within which

=> Technical methods are applied


=> Work products are produced [i.e. models, documents, data,
reports, forms etc..]
=> Milestones are established
=> Quality is ensured and change is properly managed
(iii) Methods:
Software engineering methods provide the technical "how-to" for building software. They encompass a wide range of tasks, which comprise:
=> Communication
=> Requirement analysis
=> Design modeling
=> Program Construction
=> Testing and support
* It depends on a set of basic principles that
=> Govern each area of the technology
=> Include modeling activities and other descriptive techniques
(iv) Tools:
* It offers automated or partially automated assistance for the procedures and processes.
* When tools are incorporated, data generated by one tool becomes accessible to another.

1.9 A Process Framework:


* It identifies a small set of framework activities that can be used on all software
projects, no matter how big or complicated they are.
* The process framework also encompasses a set of umbrella activities that are
applicable across the entire software development process.

Framework Activity:
* It has a set of software engineering actions [a group of related jobs that come together
to make a big piece of software engineering work].
Design is an action in software engineering.

Each action has its own set of tasks that need to be done. These tasks do some of the
work that the action implies.
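To make this hierarchy concrete, here is a rough sketch (in Python, with invented activity, action, and task names) of how a framework activity contains software engineering actions, each of which contains work tasks; it is purely an illustration of the structure described above.

    # Illustrative only: framework activities -> actions -> work tasks.
    process_framework = {
        "communication": {                # framework activity
            "requirements gathering": [   # software engineering action
                "identify stakeholders",  # work tasks
                "conduct stakeholder interviews",
                "record requirements",
            ],
        },
        "modeling": {
            "design": [
                "architectural design",
                "interface design",
            ],
        },
    }

    # Walk the hierarchy, listing every task of every action.
    for activity, actions in process_framework.items():
        for action, tasks in actions.items():
            print(activity, "->", action, "->", ", ".join(tasks))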
[Figure: A Process Framework]

1.10 Generic Framework Activities:


(1) Communication:
Constant interaction and cooperation with the client is required, as well as the gathering
of requirements and other relevant tasks.
(2) Planning:

* It describes the
=> Technical tasks to be conducted
=> The risks that are expected
=> Resources that will be required
=> Work products to be produced
=> Work Schedule
(3) Modeling:
It explains how to build models that help both developers and clients visualise and
discuss software's desired features and functionality.
(4) Construction:
* It combines
=> Code generation [either manual (Or) automated]
=> Testing [required to uncover errors in the code]
(5) Deployment:
* The customer receives the software and evaluates it.
The customer then gives feedback based on the evaluation.
These general framework tasks can be used during the

=> Development of small programs


=> Creation of large web applications
=> Engineering of large complex computer based systems
* The way the software is made differs each time, but the tasks that make up the
framework stay the same.
1.11 Umbrella Activities:
(1) Software project tracking and control:
* It enables the software team to assess progress against the project plan and to take
action, when necessary, to stay on schedule.
(2) Risk Management:
* It assesses risks that may affect the outcome of the project or the quality of the product.
(3) Software quality assurance:
* It defines and carries out the tasks needed to ensure software quality.
(4) Formal Technical Reviews:
* Get rid of any mistakes that you find before moving on to the next action (Or) activity.

(5) Measurements:
* It defines and collects process, project, and product measures that assist the team
in delivering software that meets the customer's needs; it can be used in conjunction
with all other framework and umbrella activities.
(6) Software configuration management:
Effectively handles change impact management for software development.
(7) Reusability management:
* It sets up a way to make parts that can be used again and again.
* It sets rules for reusing work products.
(8) Work product preparation and production:
* It includes the activities needed to create work products, such as
=> Models
=> Documents
=> Logs, forms and lists

PROCESS MODELS
2.0 Process Models – Definition
* It is a distinct collection of activities, actions, tasks, and work products that are
required to engineer high-quality software.
* While not flawless, these process models do provide a helpful framework for software
engineering projects.

2.1 THE WATERFALL MODEL



* The term "classic life cycle" is occasionally used to describe it.


It recommends a methodical and sequential approach to software development, starting
with a detailed specification of the needs of the end user and continuing all the way
through

=> Planning
=> Modeling
=> Construction and
=> Deployment
Problems encountered in the waterfall model:
(1) In practice, projects rarely progress in a linear fashion; consequently, the team's
progress is muddled by the constant stream of changes.
(2) The customer often has trouble articulating all requirements explicitly and in detail;
(3) The customer must be patient, since a working version of the program is not
available until late in the project time-span.

* Because of the sequential structure of the waterfall model, a "blocking state" occurs
when certain members of the project team must wait for others to finish dependent tasks.
* In certain contexts, the waterfall model can serve effectively as a process model:
=> when requirements are fixed and work is to proceed to completion in a
linear manner

2.2 INCREMENTAL PROCESS MODELS


* This model applies linear sequences in a staggered fashion as calendar time progresses.
* It combines elements of the waterfall model, applied in an iterative fashion.
* Each linear sequence results in the production of software "increments" that can be
delivered.
Main Point:
* The initial iteration of an incremental model is referred to as the "CORE PRODUCT"
when the model is used.

* That is, the fundamental requirements are addressed, but many supplementary
features are not yet delivered.
* The core product is used by the customer, or it undergoes detailed evaluation by
the customer.
* As a direct outcome of the evaluation, a strategy for the subsequent increment is
prepared.

=> Communication

=> Planning

=> Modeling {Analysis, Design}

=> Construction {Code, Test}

=> Deployment {Delivery, Feedback}

Unlike prototyping, the incremental model focuses on the delivery of an operational
product with each increment. Early increments are stripped-down versions of the final
product, but they provide capability that serves the user and a platform for
evaluation by the user.

* This strategy is particularly effective when staffing is unavailable for a complete
implementation by the business deadline that has been imposed for the project.
* It is possible to implement early increments with a smaller number of individuals. In
the event that the core product is favourably received, extra personnel may be added in
order to implement the subsequent increase. It is possible to plan for increments in order
to mitigate technological risks.
* For instance, a system may require new hardware that is still under development and
whose exact delivery date is unknown.
* Early increments can then be planned in a way that avoids the use of this hardware,
enabling end-users to receive partial functionality without an excessive amount of
delay.

AN AGILE VIEW OF PROCESS


Agile software engineering is a methodology that fosters customer satisfaction and the
early incremental delivery of software.
Software engineers and other project stakeholders, such as managers, end-users, and
consumers, collaborate as part of an agile team. This means that the team is responsible
for its own organisation and is in charge of its own fate.
The members of an agile team are better able to communicate with one another and work
together more quickly.
• Agile software engineering is an acceptable alternative to conventional software
engineering for certain categories of software and certain types of software projects. This
is because agile software engineering prioritises collaboration and iterative development.
• Customers and Software Engineers that have accepted the agile concept share the same
point of view, which is that the only really important work product is an Operational
"Software increment" that is provided to the customer on or before the appropriate
commitment date.
• The process is deemed successful when the agile team is able to produce increments
of software that the customer finds satisfactory.

What Is Agility ?
• Modifications to the software under development, adjustments to team members,
modifications brought about by new technology, and modifications of any kind that could
affect the product or the project that creates it are all changes to which an agile
team can adapt in an appropriate way.
• An agile team is aware that software is created by people working in groups, and that
the success of the project depends on the abilities and talents of these people cooperating.
• Agility encompasses more than just the ability to adapt quickly to change. In addition to
that, it incorporates the agile way of thinking.

What Is An Agile Process?


The bulk of software development projects are based on three key assumptions, and
an AGILE SOFTWARE PROCESS is characterised by the way it handles these
assumptions.
1. It is impossible to determine in advance which software requirements will continue
to be necessary and which will be replaced by new ones. It is similarly challenging to
anticipate how the priorities of a customer will shift as a project moves forward.
2. The phases of design and production are frequently combined in the creation of
different kinds of software. i.e. It is recommended that both processes be carried out
simultaneously so that design models can be validated as they are being developed. It
is challenging to make an accurate estimate of the amount of design work that must
be completed before construction can be used to validate the design.
3. The phases of analysis, design, construction, and testing are not as predicable as we
would like them to be (based on the planning).

• Based on these three presumptions, we are able to assert that the process's success
resides in its adaptability (to rapidly shifting technical conditions and project
parameters).
• Flexibility is an absolute requirement for an agile process.
• An agile software process must therefore adapt incrementally.
• To accomplish incremental adaptation, the agile team requires customer
feedback.

• The iterative methodology enables the customer to frequently evaluate the software
increment, provide the software team with any necessary input, and have some say in
the process changes that are made to meet the feedback provided by the customer.

Those who wish to attain agility are required to adhere to the following 12 principles,
as defined by the Agile Alliance:

1. The earliest and most consistent delivery of useful software is our first and
foremost concern in order to fulfil the requirements of our customers.
2. Be prepared to modify plans in response to changing needs, particularly as
development progresses. Agile processes give the client a competitive edge by
allowing them to adapt to change.
3. Regularly deliver functional software, giving attention to completion in the least
amount of time. A few weeks or a few months could pass in this case.
4. Business experts and developers work together every day for the duration of the
project.
5. Focus on individuals with a strong sense of motivation. Have faith in their abilities
to do the task and give them the environment and assistance they need.
6. Direct, in-person communication is the most effective and beneficial means of
sharing information with other team members and members of a development team.
7. The best measure of success is having software that functions as intended.
8. Agile processes promote sustainable development. Sponsors, developers, and
users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and sound design enhances agility.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
11. The best architectures, specifications, and designs are created by self-organizing
teams.
12. On a regular basis, the team reflects on how it may become more efficient and
then adapts and changes its behavior to take those ideas into account.

More generally, an agile approach can be advantageous for any software development
process. The method places emphasis on incremental delivery, with the goal of
providing clients with functional software as soon as feasible while considering the
nature of the product and the operational context.

AGILE PROCESS MODELS


• The history of software engineering is littered with defunct process descriptions
and techniques, modelling approaches and notations, tools, and technologies.
• A wide variety of agile process models have since been introduced, each
competing for acceptance within the community of software developers.
• The term "agile" can refer to a variety of different process models. These
approaches share many parallels, yet each has aspects that set it apart from the
others.
• The Manifesto for Agile Software Development and the concepts of agility are
adhered to by each and every one of the Agile models.

• EXTREME PROGRAMMING(XP)
• XP, short for Extreme Programming, is the most widely adopted agile development
methodology; it follows an object-oriented approach. It consists of a collection of
rules and practices that are carried out in the context of four distinct framework
activities: planning, design, coding, and testing.

PLANNING
• The process of planning begins with the preparation of a set of stories (also
referred to as user stories) that describe the features and functionality required
of the software to be produced.
• Each story (much like a "use-case") is written by the customer and placed on an
index card.
• The customer assigns the story a VALUE (i.e. a priority) based on the overall
business value of the feature or function being described.
• After that, members of the XP team evaluate each story and assign a cost for it,
measured in terms of the number of weeks required for its creation.

• If a story is estimated to require more than three development weeks, the
customer is asked to split it into smaller stories, and the assignment of value and
cost is repeated for each of them.
• New stories can be written at any time.
• The XP team and its customers collaborate to decide how to group individual
user stories into the next release (the next software increment).
• Once a commitment is made for a release, the XP team orders the
stories that will be developed in one of the following three ways:

1. All of the stories will be implemented right away (within a few weeks).
2. The stories with the highest value will be moved up in the schedule and
implemented first.
3. The riskiest stories will be moved up in the schedule and implemented first.
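As a rough illustration of the planning mechanics just described, the sketch below (all story names and numbers are invented) models stories with a customer-assigned value and a team-estimated cost, flags stories that exceed three development weeks for splitting, and orders a committed release by value:

    # Hypothetical sketch of XP release planning; not from the original text.
    from dataclasses import dataclass

    @dataclass
    class Story:
        title: str
        value: int         # customer-assigned business value (priority)
        cost_weeks: float  # team-estimated development cost, in weeks

    MAX_COST_WEEKS = 3     # stories above this are returned to the customer

    def needs_split(story: Story) -> bool:
        """A story estimated above three weeks must be split into smaller ones."""
        return story.cost_weeks > MAX_COST_WEEKS

    def order_release_by_value(stories):
        """Order a committed release so the highest-value stories come first."""
        return sorted(stories, key=lambda s: s.value, reverse=True)

    stories = [
        Story("print documents", value=8, cost_weeks=2),
        Story("full-text search", value=9, cost_weeks=5),  # must be split
        Story("spell checking", value=5, cost_weeks=1),
    ]
    release = order_release_by_value(s for s in stories if not needs_split(s))
    print([s.title for s in release])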

DESIGN
• The design of XP adheres to the "Keep it Simple" (KIS) approach. A less complicated
representation is preferable over a more complicated design.
• The design offers implementation assistance for a story exactly as it is stated; nothing
less and nothing more than that.
• XP promotes the use of CRC ("Class-Responsibility-Collaborator") cards, which
identify and organize the object-oriented classes that are pertinent to the currently
active software increment.
• CRC cards are the only design work products generated as part of the XP process.
• XP encourages REFACTORING, a construction technique that doubles as a design
technique.
• The term "REFACTORING" refers to a design process that is ongoing during the
construction of the system.
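As a small, hypothetical illustration of refactoring, the two functions below behave identically; the second has had a magic number named and the discount logic extracted, improving the internal design without changing external behaviour.

    import math

    # Before refactoring: a magic number and inlined discount logic.
    def total_before(prices, is_member):
        t = 0.0
        for p in prices:
            t += p
        if is_member:
            t = t - t * 0.1
        return t

    # After refactoring: the same behaviour, expressed through named pieces.
    MEMBER_DISCOUNT = 0.1

    def apply_member_discount(amount):
        return amount * (1 - MEMBER_DISCOUNT)

    def total_after(prices, is_member):
        subtotal = sum(prices)
        return apply_member_discount(subtotal) if is_member else subtotal

    # Behaviour is preserved (allowing for floating-point rounding).
    assert math.isclose(total_before([10, 20], True), total_after([10, 20], True))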

CODING
• According to the XP approach, once the stories and preliminary design work are
finished, the team should not move directly to coding; it should first develop a set
of unit tests that will exercise the stories included in the software increment that
is presently being worked on.
• Once a unit test has been constructed, the developer can focus on exactly what
must be implemented for the test to pass.
• Once the coding is complete, the code can be unit-tested immediately, providing
the developers with instantaneous feedback.
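A minimal, hypothetical sketch of this test-first idea using Python's built-in unittest module: the tests are written first, directly from the story, and the function is then implemented just far enough to make them pass.

    import unittest

    def word_count(text):
        """Implemented only after the tests below were written."""
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        # Written first, from the story's acceptance criteria.
        def test_counts_words_separated_by_whitespace(self):
            self.assertEqual(word_count("extreme programming works"), 3)

        def test_empty_text_has_no_words(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()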

• PAIR PROGRAMMING: XP recommends that two people work together at the same
computer workstation to create the code for a story. This provides a mechanism for
real-time problem solving and real-time quality review while the work is being
performed.

EX: It's possible that one person will focus on the specifics of the coding for a segment of
the design while another looks over their shoulder to make sure the coding standards are
being adhered to.
• The code created will "fit" into the larger context of the story.
• The pair programmers are responsible for integration. This continuous-integration
technique helps avoid compatibility and interfacing problems, and it creates a
"SMOKE TESTING" environment that helps expose defects at an earlier stage.

TESTING
• The unit tests that are created should be implemented using a framework that
enables them to be automated. This encourages a regression testing strategy
whenever code is modified in any way.
• Because the individual unit tests are organised into a "universal testing suite",
integration and validation testing of the system can be performed on a daily basis.
This gives the XP team a continuous indication of progress and can raise early
warning signs if things start to deteriorate.
• XP acceptance tests, also known as customer tests, are specified by the customer
and focus on the overall system features and functionality that are visible to and
reviewable by the customer.
• Acceptance tests are created from user stories after they have been incorporated
into a product release.
• Once the XP team has completed the delivery of the first release of the project,
it computes PROJECT VELOCITY, i.e. the number of customer stories implemented
during the first release. Project velocity can then be used to:
1. Help estimate delivery dates and the release schedule for subsequent versions,
and
2. Determine whether an over-commitment has been made across all of the stories in
the overall development project. If an over-commitment occurs, either the content
of releases or the end-delivery dates are adjusted.
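A toy sketch of how project velocity might feed such an estimate; the numbers and variable names are invented for illustration only.

    # Hypothetical project-velocity calculation (illustrative only).
    stories_done_in_first_release = 12
    weeks_in_first_release = 4
    velocity = stories_done_in_first_release / weeks_in_first_release

    remaining_stories = 30
    estimated_weeks_remaining = remaining_stories / velocity
    print(f"velocity = {velocity:.1f} stories/week; "
          f"about {estimated_weeks_remaining:.0f} weeks of work remain")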

• As the development process progresses, the customer may add new stories, change
the value of an existing story, split stories, or remove them. The XP team then
reconsiders all remaining releases and modifies its plans accordingly.

SCRUM
SCRUM principles are consistent with the agile manifesto:
• Small working teams are organised to make the most of their communication,
minimise their overhead costs, and make the most of their opportunity to share
their knowledge.
• In order to "ensure the best product is produced," the process needs to be flexible
enough to accommodate both changes in technology and in business.
• The procedure results in frequent software increments "that can be inspected,
adjusted, tested, documented, and built upon."
• The work of development and the individuals who carry it out are separated "into
clean, low coupling partitions, or packets"
• As the product is being assembled, testing and documentation are carried out in a
continuous manner.
• The SCRUM methodology affords users the "flexibility to declare a product
'done' whenever it is necessary to do so."
• The SCRUM principles are utilised to direct the development activities that take
place within a process that also includes the activities of the following framework:
requirements, analysis, design, evolution, and delivery.
• Within each framework activity, work tasks take place according to a process
pattern known as a Sprint.
• The work that is completed during a Sprint (the number of sprints necessary for
each framework activity will vary based on the complexity and scale of the
product) is suited to the problem that is currently being worked on. Additionally,
the SCRUM team defines the work and frequently modifies it in real time. The
following diagram outlines the general steps involved in the SCRUM process.
• SCRUM places an emphasis on the utilisation of a collection of "Software process
patterns," which have been demonstrated to be effective for projects that have
tight timeframes, fluctuating requirements, and high business criticality.

• A group of different kinds of development work is outlined by each of these


process patterns.
• A prioritised list of project requirements or features that generate business value
for the customer is referred to as a backlog. At any time, additional items may be
added to the Backlog. The Backlog is evaluated by the product manager, who
then updates the priorities as necessary.
• Sprints are comprised of work units that must be completed within a specified
amount of time in order to fulfil a need that has been outlined in the Backlog.
During a Sprint, the items in the Backlog are frozen, which enables the members
of the team to operate in an environment that is both short-term and consistent.
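A small, hypothetical sketch of the backlog and sprint mechanics just described: the backlog stays open to new items at any time, while the subset chosen for a sprint is frozen for the sprint's duration.

    # Hypothetical backlog/sprint sketch (illustrative only).
    backlog = [            # kept in priority order, highest value first
        "checkout flow",
        "order history",
        "gift cards",
    ]

    def start_sprint(backlog, capacity):
        """Freeze the top `capacity` items for the sprint; return the rest."""
        sprint_items = tuple(backlog[:capacity])   # a tuple cannot be modified
        remaining = backlog[capacity:]
        return sprint_items, remaining

    sprint, backlog = start_sprint(backlog, capacity=2)
    backlog.append("wish lists")   # new items may be added to the Backlog at any time
    print("frozen sprint items:", list(sprint))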

• Scrum meetings are brief meetings that are held every day by the scrum team. During
the meeting, there are three important issues that are discussed, and each member of the
team provides an answer.
o Since our previous gathering, what have you been up to?
o What kinds of challenges are you facing right now?
o What do you hope to have accomplished by the time we get together again as a team?

• The gathering is directed by a group leader known as a "Scrum master," who also
evaluates the contributions made by each individual. The team is able to identify potential
difficulties at the earliest possible stage thanks to the Scrum sessions.
• The regular meetings facilitate "knowledge socialization," which in turn contributes to
the development of a structure that allows the team to organise itself.
• Demos - The software increment is delivered to the customer so that the
functionality that has been implemented can be demonstrated and evaluated, allowing
the customer to provide feedback.
• The demonstration might not have all of the functionality that was anticipated, but it
should be possible to implement these features within the time constraint that was set.

DYNAMIC SYSTEMS DEVELOPMENT METHOD(DSDM)


• The DYNAMIC SYSTEMS DEVELOPMENT METHOD (DSDM) is an agile
software development approach that offers a structured framework for creating
and maintaining systems that must adhere to strict time restrictions. This is
achieved through the use of incremental prototyping inside a controlled project
environment.
• The DSDM philosophy derives from the Pareto principle: 80% of an application can
be delivered in 20% of the time it would take to produce the complete (100%)
product.
• Much like XP, DSDM employs an iterative software development process. In each
iteration, the DSDM methodology adheres to the 80% rule: only enough work is done
to meet the requirements of the current increment, and the remaining work is
finished at a later time, when more business requirements are discovered or
modifications need to be accommodated.
• The DSDM Consortium, a worldwide association of member companies, maintains the
method and has defined an iterative process model referred to as the DSDM life
cycle.
• The DSDM life cycle defines three distinct iterative cycles, preceded
by two additional life-cycle activities:

Feasibility study – In order to determine if an application is a good fit for the DSDM
process, it is necessary to first establish the fundamental business requirements and
constraints connected with the application.

Business Study – Establishes the functional and information requirements that must be
met in order for the application to be able to give value to the company, as well as
specifies the fundamental architecture of the application and outlines the needs that must
be met for the application to be maintainable.
Functional model iteration – Produces a set of incremental prototypes that demonstrate
functionality for the customer, with the intent of gathering additional requirements
by eliciting feedback from users as they exercise each prototype.
Design and build iteration – Revisits the prototypes built during functional model
iteration to ensure that each has been engineered in a way that enables it to provide
operational business value for end users.
Implementation – Places the latest software increment into the operational
environment. It is essential to keep in mind that:
1) the increment might not be 100% complete, and
2) changes might be requested while the increment is being put into place.
• DSDM and XP can be combined to provide an approach in which DSDM defines a stable
process model and the nuts-and-bolts practices of XP are used to construct the
software increments.

AGILE MODELING(AM)
Software engineers often find themselves in the position of having to construct
massive, mission-critical systems for businesses.
Modelling the scope and complexity of such systems is necessary in order to achieve
the following goals:
1. Ensuring that all stakeholders have a better understanding of what has to be
achieved;
2. Partitioning the problem effectively among the people who must solve it; and
3. Allowing quality to be assessed at each stage as the system is engineered and
built.

• Many other software engineering modelling methods and notations have been
suggested for use in the process of analysis and design; however, despite the major
virtues of these methods, it has been found that they are difficult to implement and
demanding to maintain.

• Part of the problem is the "weight" of these modelling methodologies: the amount
of notation required, the degree of formalism advised, the size of the models for
larger projects, and the difficulty of maintaining the models as change occurs.

• Yet analysis and design modelling provide a sustained benefit for larger
projects.

• AGILE MODELLING was introduced to make such projects intellectually
more manageable.

AGILE MODELLING (AM) is described as follows: AM is a practice-based
methodology for the effective modelling and documentation of software-based
systems. It is a collection of values, principles, and practices for modelling
software that can be applied on a software development project in an effective and
light-weight manner.

AM proposes a wide variety of "CORE" and "SUPPLEMENTARY" modelling
principles, which are what set AM apart from other modelling approaches.
Design with a clear end in mind: A developer using AM should have a goal in mind
(such as "to communicate information to the customer") before commencing work on
the model. Once the purpose of the model has been established, both the notation
to be applied and the level of detail required become clearer.
Make use of a variety of models: There is a wide variety of models and notations
that can be used to describe software. Only a small subset is required for the
majority of projects.
Don't pack too much: As the software engineering work proceeds, keep only those
models that will provide value over the long term. Every work product that is
preserved must be brought up to date whenever there is a change.
The actual content is more essential than its representation: Modelling should
convey information to its intended audience. A model with flawless syntax and
notation that conveys little useful information is less valuable than a model with
flawed notation that nevertheless provides valuable content for its audience.
Be familiar with the models you generate and the tools you use to create them:
Understand the techniques used to develop each model, as well as each model's
strengths and weaknesses.
Adapt on a local level. The modelling strategy ought to be modified so that it caters to
the requirements of the agile team.

AGILE UNIFIED PROCESS


• Agile Unified Process follows a serial "in the large" and an iterative "in the small"
methodology.
• It utilises the classic UP phase activities: "Inception", "Elaboration",
"Construction", and "Transition".
• Within each of these activities, the team iterates in order to achieve agility
and to deliver meaningful software increments to end users as quickly as
feasible.
• This version of the AUP addresses the following activities:

• Modeling
• Implementation
• Testing
• Deployment
• Configuration and Project Management
• Environment Management

SPECIALIZED PROCESS MODELS :


• These models are typically utilised if a specialised or narrowly defined method of
software engineering is selected as the strategy of choice.

Component-Based Development
1. Research and analysis are conducted on the component-based products that are
currently available on the market for the application domain in question.
2. Problems with component integration are taken into consideration.
3. A software architecture that can accommodate the components is built.
4. The architecture incorporates the components into its structure.
5. Comprehensive testing is carried out to validate the correct operation of the
components.
The component-based development model leads to software reuse, and reusability
provides software engineers with measurable benefits.
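A trivial, hypothetical illustration of the idea: the application reuses an existing component (here, Python's standard csv module) behind a small adapter class, rather than writing custom parsing code of its own.

    # Component reuse sketch: wrap an off-the-shelf component in an adapter.
    import csv

    class TableReader:
        """Adapter giving the application a stable interface to the component."""

        def __init__(self, path):
            self.path = path

        def rows(self):
            # Delegates the real work to the reused csv component.
            with open(self.path, newline="") as f:
                yield from csv.reader(f)

    # Usage (assumes a file named "orders.csv" exists):
    # for row in TableReader("orders.csv").rows():
    #     print(row)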
The Formal Methods Model
• Formal methods enable a computer-based system to be specified, developed, and
verified through the use of a rigorous mathematical notation.
• The development of formal models is currently quite time consuming and expensive.
• Extensive training is necessary since only a small percentage of software
engineers have the appropriate experience to apply formal approaches.
• Customers who are not technically savvy will have a tough time understanding
how to use the models as a communication channel.
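For flavour only: real formal methods use full mathematical notation (for example Z or VDM), but the contract-style sketch below hints at the underlying idea of precisely stated pre- and postconditions.

    # Contract-style sketch, loosely in the spirit of formal specification.
    def withdraw(balance, amount):
        # Precondition: 0 < amount <= balance
        assert 0 < amount <= balance, "precondition violated"
        new_balance = balance - amount
        # Postcondition: new_balance == balance - amount and new_balance >= 0
        assert new_balance >= 0, "postcondition violated"
        return new_balance

    print(withdraw(100.0, 30.0))   # 70.0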

Aspect-Oriented Software Development


• Aspectual requirements are used to define the concerns that span across multiple
layers of the software architecture and have an effect on each.
• Aspect-oriented software development (AOSD) is a modern approach to software
engineering that offers a systematic and methodical approach for defining,
specifying, designing, and constructing aspects. It is alternatively referred to as
aspect-oriented programming (AOP) and aspect-oriented component engineering
(AOCE).
• The process will incorporate features from both evolutionary and concurrent
process models.
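As a loose analogy only (not the full AOSD machinery), a crosscutting concern such as logging can be kept out of the core business logic with a Python decorator:

    # Logging as a crosscutting "aspect", separated from the core function.
    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def logged(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            logging.info("calling %s", fn.__name__)
            return fn(*args, **kwargs)
        return wrapper

    @logged                        # the concern is applied, not written inline
    def place_order(item):
        return f"ordered {item}"

    place_order("book")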

THE UNIFIED PROCESS


It's an effort to incorporate many of the finest practises of agile software development
while drawing on the strengths of traditional software process models.
The Unified Process highlights the significance of software architecture and directs
the architect's attention to where it is needed most:
=> Appropriate objectives
=> Clarity
=> Adaptability
=> Reusability
* It suggests a process flow that is iterative and incremental, conveying a sense
of evolution.

Phases of the Unified Process:



(1) Inception Phase:


1) This phase encompasses both customer communication and planning activities; by
combining the two, the business requirements for the software are identified;
2) a rough architecture for the system is proposed; and
3) a plan is produced that reflects the iterative and incremental nature of the
project.
A use-case describes a sequence of actions that are performed by an actor (whether
human or machine).
* Use-cases lay the groundwork for project planning and help define the scope of the
project.

(2) Elaboration Phase:


It refines and expands the initial use cases created during the inception phase.
It expands the architectural representation to include five views of the software:
=> Use-case model
=> Analysis Model
=> Design Model
=> Implementation Model
=> Deployment Model
* The plan may be modified at this time.
(3) Construction Phase:
* The software components that will make each use-case operational for end users are
created (or purchased) here.
* The code for the software contains all of its necessary and required features and
functions. Unit tests are developed and executed for individual components after their
implementation.

(4) Transition Phase:


During this phase, the software team creates the support information that will be
needed for the release [e.g., user manuals, trouble-shooting guides, and installation
procedures].
* In the beta-testing phase, the programme is made available to actual end users.
Feedback on reported issues and necessary changes comes directly from customers.

(5) Production Phase:


* During this phase, the ongoing use of the software is monitored, and the
infrastructure of the operating environment is supported.
• Problem and enhancement reports are reviewed as they are submitted.

MODULE-2
REQUIREMENTS ENGINEERING
What is a Requirement?
Requirements engineering is the systematic procedure of determining the specific
services that a client demands from a system, as well as the limitations and conditions
that govern its operation and development. Requirements are the explicit descriptions of
the services and limitations of a system that are developed during the process of
requirements engineering.
Different types of Requirement Specification

Functional and non-functional requirements


• Functional requirements
Specifications describing the functions the system is expected to carry out, as well as
the expected responses to and actions in certain scenarios.
Non-functional requirements
• Constraints on the services or functions offered by the system, such as timing
constraints, constraints on the development process, standards to be followed, etc.

Domain requirements

Domain-specific requirements are system requirements that are derived from the
application domain and reflect the unique characteristics and functionalities of that
domain.
Functional requirements
• Specify the features and services of the system.
• Be contingent upon the nature of the software, its intended audience, and the nature of
the system on which it runs.
• Functional user requirements may be high-level statements of what the system should
do, whereas functional system requirements should describe the system's services in
detail. For example, the general user requirement "the system shall provide a search
facility" might be refined into the system requirement "the system shall allow the
user to search the catalogue by title or by author".
Requirements completeness and consistency
It is generally accepted that requirements should be exhaustive and uniform.
Complete
– All services required by the user should be defined.
Consistent
– There should be no conflicts or contradictions in the descriptions of the system
facilities.
In practice, it is impossible to produce a fully complete and consistent requirements
document, although this remains the ideal.

Non-functional requirements
• These define system properties (e.g., reliability, response time, storage
requirements) and constraints (e.g., I/O device capability, system
representations).
• Non-functional requirements can be more critical than functional requirements:
if they are not fulfilled, the system may be useless.
• The implementation of these needs may be spread out across the system. There
are two factors contributing to this:
• The non-functional requirements of a system may have more of an impact on the
system's architecture as a whole than on its individual components. To fulfil performance
criteria, for instance, you might have to organise the system in such a way as to reduce
the amount of communication that occurs between its various components.
• A single non-functional requirement, such as a requirement for system security,
may spawn a number of related functional requirements that describe new system
services that must be provided. It may also generate requirements that constrain
requirements that are already in place.

Non-functional classifications
• Product requirements
– Requirements that specify how the delivered product must behave, e.g.,
execution speed, reliability, etc.
• Organisational requirements
– Company-specific requirements, such as those for meeting process standards,
meeting delivery deadlines, etc., that are a direct outcome of the company's
policies and procedures.
• External requirements
- Requirements that are imposed on the system and its development process by elements
that are not directly related to the system itself, such as interoperability requirements,
legislative requirements, and so on.

Metrics for specifying Non functional Requirements
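Such requirements should be stated quantitatively wherever possible. Metrics of the following kind are commonly used for this purpose (a representative sample, after Sommerville):
=> Speed: processed transactions per second; user/event response time; screen refresh time
=> Size: megabytes of memory; number of ROM chips
=> Ease of use: training time; number of help frames
=> Reliability: mean time to failure; probability of unavailability; rate of failure occurrence
=> Robustness: time to restart after failure; percentage of events causing failure
=> Portability: percentage of target-dependent statements; number of target systems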



The Software Requirements Document


The software requirements document (also known as the software requirements
specification or SRS) is an official statement of what the system developers should
implement.
It should include both the user requirements and a detailed specification of the
system requirements.
• The precise system requirements may be presented in a different document if there are
several of them.
• Requirements documents are essential when the software is being developed by a
third-party firm.
• In Agile development where the requirements change fast, SRS gets out of date.
Therefore, Extreme Programming-style models gradually gather user needs.

Users of Requirements Document
[Figure omitted: the users of the requirements document]

The structure of a requirements document
[Table omitted: the structure of a requirements document]

Requirements specification
• Ideally, the user and system requirements should be clear, unambiguous, easy
to understand, complete, and consistent, and should be organised into a
well-structured requirements document.
• System users without extensive technical knowledge should be able to grasp
the user requirements for a system if they accurately represent the system's
functional and nonfunctional requirements.
• The system's external behaviour and operational restrictions should be simply
described in the requirements.
• In principle, they should not be concerned with the system's design or
implementation; in practice, however, it is impossible to exclude all design
information. This is due to a number of factors:
1. To better organise the requirements specification, you may need to create an
initial architecture of the system. Requirements are categorised by the several
subsystems that make up the whole.
2. Most systems need to communicate with one another, which might place
limitations on the design and additional demands on the new system.

3. The use of a specific architecture may be necessary to satisfy non-functional
requirements; for instance, an outside authority that has to certify the system's
safety or security may insist on an architecture that has already been certified.

Ways of writing a system requirements specification
A structured specification of a requirement for an insulin pump
Requirements Engineering Processes
Definition:
* The requirements definition and requirements specification are the end results of the
requirements engineering process.
The software specification and reports on the system's viability are also included.

There are four principal stages in this process:
Feasibility Study:
* It is estimated whether the identified user need can be met with current software and
hardware technology.
The study will also decide if the suggested system is cost-effective and if it can be built
within the current budget.
The result should help decide if a more in-depth study should be done or not.
A feasibility study should be relatively cheap and quick to carry out.

Requirements Analysis:
Obtaining the system requirements through activities such as analysing tasks, talking to
potential users, evaluating existing systems, and so on. As a result, the analyst gains a
better understanding of the system to be specified. System prototypes are often developed
to acquire a clearer picture of the requirements.
Requirements Definition:
The activity of transforming the information acquired during the analysis activity into a
document that describes a set of requirements is known as the requirements definition
activity.
* This document needs to be created in such a way that it can be comprehended by the
end-user as well as the system customer.

Requirements Specification:
The development of this document proceeds concurrently with high-level design.
Deficiencies in the requirements specification are identified during document
development, and the document is modified to rectify them.
* A comprehensive and accurate specification of the system requirements is established;
this statement of requirements serves as the foundation for the contract between the
client and the software developer.

Requirement Elicitation:
* It is a procedure that involves asking questions of the client, the user, and other
stakeholders to establish:
=> What the goal of the system is
=> What exactly has to be done
=> How the system accommodates the requirements of the organization
=> How the product or system is to be used on a day-to-day basis
The requirements elicitation and analysis process
The process activities are:
1. Requirements discovery This is the process of discovering the requirements of the
stakeholders of the system through interaction with those stakeholders. During the course
of this process, domain needs from the stakeholders and the documentation are also
found.
2. Requirements classification and organization This action takes the unstructured
collection of requirements and organises them into coherent clusters by grouping together
requirements that are connected to one another. Using a model of the system architecture
to identify sub-systems and to associate requirements with each sub-system is the most
popular method for grouping requirements. This is also the most effective method. In
actual practise, the processes of requirements engineering and architectural design simply
cannot be kept entirely apart from one another.
3. Requirements prioritization and negotiation When there are several stakeholders
involved, it is inevitable that their requirements may conflict with one another. Within the
scope of this activity are the processes of prioritising requirements, discovering conflicts
between requirements, and resolving those conflicts through negotiation. In most cases,
the parties involved are required to get together in order to reconcile their differences and
reach an agreement on the necessary concessions.
4. Requirements specification The requirements are documented and input into the next
round of the spiral. Formal or informal requirements documents may be produced.
Viewpoints
• Viewpoints serve as a method for organizing requirements in order to accurately
represent the many perspectives of different stakeholders. Stakeholders can be
categorized based on several perspectives.
• A multi-perspective study is crucial because there is no definitive method for
analyzing system requirements.
Types of viewpoint
• Interactor viewpoints
• People or other systems that interact directly with the system. In an
ATM, the customers and the account database are interactor viewpoints.
• Indirect viewpoints
• Stakeholders who do not use the system themselves but who influence its
requirements. In an ATM, management and security staff are indirect
viewpoints.
• Domain viewpoints
• The needs are influenced by the characteristics and constraints of the
domain. In the context of an Automated Teller Machine (ATM), an
illustration would be the protocols and guidelines governing the exchange
of information between different banks.
Interviewing
• During either a formal or a casual interview, the RE team will ask stakeholders
questions about the system that they currently use as well as the system that will be
constructed.
• There are two kinds of interviews: closed interviews, in which the stakeholders answer
a predefined set of questions, and open interviews, which have no set agenda and in
which a wide range of issues is explored with the stakeholders.
Scenarios
• Scenarios are real-life examples of how a system might be used.
• Scenarios should include:
- A description of the starting situation;
- A description of the goal of the scenario;
- A description of the normal sequence of events;
- A discussion of the things that could go wrong;
- Information about other activities taking place at the same time;
- A description of the situation once the scenario has been completed.
Requirements checking
• Validity. Does the system provide the functions that best support the
customer's needs?
• Consistency. Are there any conflicts arising from requirements?
• Completeness. Does the customer's requirements encompass all necessary
functions?
• Realism. Is it feasible to achieve the requirements within the constraints of the
existing budget and technology?
• Verifiability. Could you verify the requirements?
Requirements validation techniques
• Requirements reviews
– A methodical manual analysis of the specifications.
• Prototyping
– Verifying requirements with an executable model of the system.
• Test-case generation
– Creating tests to verify the testability of requirements.
Requirements reviews
• While the requirements definition is being developed, regular reviews ought to be
conducted.
• Staff from the contractor and the client should participate in reviews.
• Reviews may be formal (with completed documents) or informal. Good
communication between developers, customers, and users can resolve problems
at an early stage.

Review checks
• Verifiability. Can the requirement be tested in a practical way?
• Comprehensibility. Does the requirement make sense to you?
• Traceability. Does the requirement clearly identify where it came from?
• Adaptability. Is it possible to modify the requirement without significantly
affecting other requirements?

Requirements Management
• Managing evolving needs during requirements engineering and system
development is known as requirements management.
• There will always be incomplete and inconsistent requirements;
– As business needs evolve and a deeper understanding of the system is
created, new requirements will always arise;
– Diverse perspectives will result in diverse requirements, many of which
are contradictory.

Requirements evolution
Requirements classification

Requirements Management Planning

• During the process of requirements engineering, you must plan for:
– Requirements identification
• The method by which individual requirements are identified;
– An approach to the management of change
• The procedure that is followed when assessing a change to the requirements;
– Policies regarding traceability
• The amount of information that is retained about the links between requirements;
– Support for CASE tools
• The tool support necessary to assist with the management of changing requirements.
Requirements change management
• Must be applicable to any suggested modifications to the requirements.
• Principal stages
– Problem analysis. Examine the requirements issue and suggest a solution;
– Change analysis and costing. Analyze how a modification will affect other
requirements;
– Change implementation. Changes should be reflected in the requirements
document and other documents.
Change management
Requirements Modeling
Requirements Analysis
• Models of the following kinds are produced as a result of the requirements
modelling process:
• Requirements models based on scenarios, including input from a wide range of
system "actors."
• Class-oriented models capture the interplay between classes in an object-oriented
framework and the properties and operations they use to fulfil system needs.
• Pattern- and behavior-based models that show how the programme responds to
outside "events."
• Data models representing the problem's information space.
• Models focused on data flow that depict the system's functional components and
the transformations they enact on data as it moves through the system.
• There are three main goals that the requirements model has to accomplish:
(1) describing the needs of the customer;
(2) providing a foundation for the software architecture; and
(3) defining a set of requirements that can be verified once the software is developed.

Analysis Rules of Thumb


• The model should be centred on the obvious requirements of the problem or business
domain. It is advised that a relatively high level of abstraction be used.
• Each element of the requirements model should add to the overall understanding of the
requirements and provide insight into the information domain, function, and behaviour of
the system.
• Consideration of infrastructure and other non-functional models should be delayed until
design. That is, a database may be required, but the classes necessary to implement it, the
functions required to access it, and the behaviour exhibited as it is used should not be
considered until problem domain analysis has been completed.
• Minimize coupling throughout the system. It is important to represent relationships
between classes and functions; however, if the level of "interconnectedness" is
excessively high, effort should be made to reduce it.
• You should make sure that the requirements model offers something of value to all of
the stakeholders. The model can be put to a variety of different uses depending on the
constituency. For instance, business stakeholders should use the model to validate the
requirements; designers should use the model as a basis for design; and quality assurance
experts should use the model to assist in the planning of acceptance tests.

• Keep the model as simple as possible. Do not add additional diagrams when they
provide no new information, and avoid complex notational forms when a simple list will
do.

Elements of the analysis model

DATA MODELING CONCEPTS
• Data Objects
• Data Attributes
• Relationships
Data Objects
• A data object is a representation of composite information that must be understood
by software.
• A data object can be:
• an external entity (e.g., anything that produces or consumes information),
• a thing (e.g., a report or a display),
• an occurrence (e.g., a telephone call) or an event (e.g., an alarm),
• a role (e.g., a salesperson),
• an organisational unit (e.g., an accounting department), a place (e.g., a
warehouse), or a structure (e.g., a file).

Figure. Tabular representation of data objects


Data Attributes
Data attributes define the properties of a data object and serve one of three functions:
(1) naming an instance of the data object;
(2) describing the instance; or
(3) making a reference to another instance in another table.
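As a minimal sketch of these three attribute roles (the Customer and Account names below are invented for illustration, not taken from the source), consider the following Python fragment:

    from dataclasses import dataclass

    @dataclass
    class Account:
        account_id: str      # naming attribute of the Account object
        balance: float       # descriptive attribute

    @dataclass
    class Customer:
        customer_id: str     # (1) names this instance of the data object
        name: str            # (2) describes the instance
        address: str         # (2) describes the instance
        account_id: str      # (3) references an Account instance in another table

Here account_id on Customer plays the referential role, much like a foreign key in a relational table.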

Relationships
• There are various methods in which data objects are related to one another.
Figure. Relationships between data objects


Flow Oriented Modelling
• It illustrates the transformations that occur to data objects as they pass through the
system.
• DFDs explain how information enters and exits a system and what the system
does with the data.
• A data flow diagram (DFD) shows how information moves through a system.
DFD for SafeHome system

SCENARIO-BASED MODELING
In developing use cases, four questions must be answered:
• Firstly, what topic should you write about?
• Secondly, what is the appropriate amount to write about it?
• Thirdly, what level of detail is appropriate for your description?
• And finally, in what order should the description be organized?

These are the questions that need to be addressed in order for use cases to be a useful
tool for requirements modeling.
The SafeHome home surveillance functions performed by the homeowner actor:
• Choose the camera you want to view.
• Make sure thumbnails are requested from all of the cameras.
• Views of the camera can be seen in a window on your computer.
• Manage the camera's pan and zoom settings individually.
• Record the output of the camera in a selectable manner.
• Play back the output from the camera.
• Use the Internet to access the video surveillance cameras.

Use case: Access camera surveillance via the Internet—display camera views
Actor: homeowner
1. The homeowner visits the SafeHome Products website and checks in to their account.
2. The user ID of the homeowner is entered into the system.
3. The homeowner is required to enter two passwords, each of which must be at least
eight characters long.
4. The system presents buttons for all of the primary functions.
5. The homeowner presses the button labelled "surveillance" to access the system's
primary functions.
6. The homeowner then chooses the option to "pick a camera."
7. The system will show you the layout of the house's floor plan.
8. The homeowner chooses an icon for a camera from the floor layout.
9. The homeowner clicks the "view" button on their computer screen.
Refining a Preliminary Use Case

Each step of the primary scenario is evaluated by asking the following questions:
• Is there any other course of action that the actor can take at this point?
• Is it feasible that the actor will experience some kind of error circumstance at this
particular juncture? If that's the case, what could it be?
• Is it likely that the actor will come across another behaviour at this point, such as a
behaviour that is triggered by an event that is not under the actor's control? If that's the
case, what could it be?
Preliminary use case diagram for the SafeHome system

UML MODELS THAT SUPPLEMENT THE USE CASE
Even if it's something as straightforward as a use case, there are a lot of requirements
modelling scenarios in which a text-based model might not be able to convey information
in a way that's both clear and succinct. In situations like this, you have access to a vast
library of UML graphical models to select from.
• Developing an Activity Diagram
• Swimlane Diagrams
Activity diagram for Access camera surveillance via the Internet— display camera
views function.
Swimlane diagram for Access camera surveillance via the Internet—display camera
views function.

REQUIREMENTS MODELING FOR WEB AND MOBILE APPS

The following size-related variables dictate how much focus is placed on requirements
modeling for Web and mobile applications:
(1) The scope and intricacy of the application increment;
(2) The quantity of stakeholders (analysis can assist in identifying conflicting
requirements originating from various sources);
(3) The size of the app development team;
(4) The extent to which team members have collaborated previously (analysis can aid in
creating a shared understanding of the project); and
(5) The duration elapsed since the team's last collaboration.
MODULE-4
TESTING STRATEGIES

A Strategic Approach to Software Testing:

* Testing is a set of activities that can be planned in advance and conducted
systematically.
* A set of testing techniques and test-case design methods is defined for the software
process.
* A variety of testing strategies have been proposed.
Testing Strategies – Generic Characteristics:
* Effective formal technical reviews must be conducted by the software team before
testing begins.
* Testing begins at the component level and works outward toward the integration of the
entire computer-based system.
* Testing is conducted by the software developer and, for large projects, by an
independent test group.
************************************************************************
Note:
Testing VS Debugging:
While testing and debugging are two different processes, every testing approach must
include debugging.
* A testing strategy should include guidelines for the tester as well as a management
checklist of benchmarks.
************************************************************************
(1) Verification and Validation:
* Verification refers to a series of steps taken to ensure that a programme correctly
accomplishes a specified task.
Example:
Verification: Are we building the product right?
* Various procedures are carried out under the umbrella term of "validation" to guarantee
that the final product of software development is tied to the requirements of the customer.
Example:
Validation: Are we building the right product?
* The processes of verification and validation involve a wide range of SQA operations
that include the following:
=> Formal Technical Reviews
=> Quality and Configuration audits
=> Performance Monitoring
=> Simulation
=> Feasibility Study
=> Documentation Review
=> Database Review
=> Algorithm Analysis
=> Development Testing
=> Usability Testing
=> Qualification Testing
=> Installation Testing
(2) Organizing for Software Testing:
* The developer often also carries out integration testing, which is a phase of testing that
comes before the complete software architecture is constructed.
* Testing the various units (components) of the program is always the responsibility of
the software developer.
* Once the software architecture is complete, an independent test group (ITG) becomes
involved. The purpose of an ITG is to remove the inherent problems associated with
letting the builder test the thing that has been built. Throughout a software project, the
developer and the ITG work closely together to ensure that thorough testing is
performed.
* The developer needs to be available when testing is being done so that he or she may
fix any mistakes that are found.
(3) Software testing strategy for conventional software architecture:
* A testing method for software can be understood in the context of the spiral, as depicted
in the following diagram:
* Unit testing begins at the centre of the spiral and concentrates on each unit of the
software as implemented in source code.
* Moving outward along the spiral, we encounter integration testing, where the focus is
on design and the construction of the software architecture.
* Taking another turn outward, we encounter validation testing, where requirements
established during software requirements analysis are validated against the software that
has been constructed.
* Finally, we arrive at system testing, where the software is tested together with the other
elements of the system as a whole.
Software Testing Steps:
(i) Unit Testing:
* The initial phase of testing involves the examination of each component in isolation to
verify its proper functioning as an independent unit.
* Unit testing employs rigorous testing techniques to achieve comprehensive coverage
and optimal error detection within the control structure of the components.
* Subsequently, the components are integrated to form cohesive software packages.
(ii) Integration Testing:

* It addresses the issues associated with the dual problems of verification and program
construction.
* After the software has been integrated (constructed), a set of high-order tests is
executed.
* Validation criteria established during requirements analysis must then be tested;
validation testing provides final assurance that the software performs as expected in
every way.

(iii) High Order Testing:

* Strictly speaking, it falls outside the boundary of software engineering alone.
* Once the software has been validated, it must be combined with the other elements of
the system (for example, the hardware, the people, and the databases).
* System testing verifies that all elements mesh properly and that overall system function
and performance are achieved.

6.4 Strategic Issues:
* If you want your software testing strategy to be successful, you need to address the
following issues:
(1) Specify product requirements in a quantifiable manner well in advance of testing
beginning;
(2) State testing objectives explicitly.
(3) Understand who will be using the software and create a profile for each user category.
(4) Formulate a strategy for testing that places an emphasis on "Rapid Cycle Testing."
(5) Construct reliable software that is intended to perform its own testing.
(6) Employ efficient formal technical reviews as a filter prior to testing.
(7) Carry out formal technical reviews to evaluate the test strategy and test cases.
(8) Construct an approach for the testing process that emphasises continual improvement.

Test Strategy for Conventional Software:
* There are many different ways to test software, thus some of the available options are
as follows:
(i) A software team could wait until the system is completely built, and then they could
run tests on the system to look for faults. * This strategy is not effective in the majority of
situations.
(ii) A software engineer might be able to run tests every day anytime any component of
the system is being built
* This strategy has the potential to yield excellent results. But the majority of software
engineers are reluctant to make use of it
(iii) The majority of software teams opt for a testing technique that lies somewhere in the
middle of the two extremes
It adopts an incremental approach to testing, beginning with the testing of individual
programme units, then moving on to tests meant to aid the integration of the modules,
and finally culminating with tests that exercise the created system as a whole.

6.6 Unit Testing:
* It concentrates on the verification of the smallest unit of software design (also known as
a software component or module);
* It concentrates on the internal processing logic and data structures that are contained
inside the bounds of the component;

Unit Test Considerations:

* The tests performed as part of unit testing are described below.
Interface:
* It is examined to ensure that information flows correctly into and out of the program
unit under test.

Local Data Structures:
* These are examined to ensure that data stored temporarily maintains its integrity
during all steps of an algorithm's execution.

Independent Paths:
* Each and every basis path that passes through the control structures is investigated to
guarantee that
=> Each and every statement contained within a module has been run at least once.

Boundary Conditions:
* These are tested to ensure that the module operates properly at boundaries established
to limit or restrict processing.
* Finally, all error-handling paths are exercised.
* Data flow across a component interface is tested before any other testing is initiated;
if data do not enter and exit properly, all other tests are moot.
During the unit testing process, one of the most important tasks is to perform selective
testing of the execution path.

* Test cases should be designed to uncover faults resulting from:
=> Erroneous computations
=> Incorrect comparisons
=> Improper control flow
Common errors in computations are:
(1) Mixed-mode operations
(2) Misunderstood or incorrect arithmetic precedence
(3) Incorrect initialization
(4) Precision inaccuracy
(5) Incorrect symbolic representation of an expression

Test cases should uncover errors such as:
(1) Comparison of different data types
(2) Incorrect logical operators or precedence
(3) Expectation of equality when precision error makes equality unlikely
(4) Incorrect comparison of variables
(5) Improper or nonexistent loop termination
(6) Failure to exit when divergent iteration is encountered
(7) Improperly modified loop variables

Boundary Testing:
* This is one of the most significant responsibilities involved in unit testing
* A common cause of software failure is when it reaches one of its limits (for example,
an error frequently happens when the nth element of an n-dimensional array is handled).
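A hedged sketch of a boundary test in Python (the RingBuffer class and its limit are invented for this illustration): the test exercises a fixed-capacity buffer exactly at its boundary, where off-by-one errors typically hide.

    import pytest

    class RingBuffer:
        # Hypothetical fixed-capacity buffer, used only to illustrate boundary testing.
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []

        def push(self, x):
            if len(self.items) >= self.capacity:
                raise OverflowError("buffer full")
            self.items.append(x)

    def test_push_at_exact_capacity():
        buf = RingBuffer(capacity=3)
        for i in range(3):              # filling to the boundary must succeed
            buf.push(i)
        with pytest.raises(OverflowError):
            buf.push(99)                # one element past the boundary must fail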
When evaluating error handling, potential errors that should be tested include:
(1) The error description is unintelligible.
(2) The error noted does not correspond to the error encountered.
(3) The error condition causes operating-system intervention before error handling
occurs.
(4) Exception-condition processing is incorrect.
(5) The error description does not provide enough information to assist in locating the
cause of the error.

Unit Test Procedures:
* Unit test design can be done either before or after code is generated.
Driver:
* In most cases, a driver is nothing more than a "main program" that:
=> accepts test-case data,
=> passes these data to the component being tested, and
=> prints the relevant results.

Stub (Or) Dummy Programs:

* A stub replaces modules that are called by the component being tested.
* A stub uses the subordinate module's interface, may do minimal data manipulation,
verifies the validity of the entry, and returns control to the module under test.

* Drivers and stubs are two different types of software that need to be built, but they are
not included in the final software product.
* The real overhead is reasonably modest if the drivers and stubs are kept simple;
otherwise, it is substantial.
* When a component that has a high cohesiveness is designed, it simplifies the unit
testing process.
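A minimal sketch of a driver and a stub in Python, assuming a component compute_discount that calls a subordinate pricing module (all names here are illustrative, not from the source):

    # Component under test: calls a subordinate module to fetch the base price.
    def compute_discount(item_id, price_lookup):
        price = price_lookup(item_id)       # call to the subordinate module
        return price * 0.9 if price > 100 else price

    # Stub: stands in for the real pricing module; minimal logic, fixed data.
    def price_lookup_stub(item_id):
        return {"A": 50.0, "B": 200.0}[item_id]

    # Driver: the "main program" that feeds test-case data to the component
    # and prints the relevant results.
    if __name__ == "__main__":
        for item, expected in [("A", 50.0), ("B", 180.0)]:
            actual = compute_discount(item, price_lookup_stub)
            print(item, actual, "OK" if actual == expected else "FAIL")

Neither the driver nor the stub ships with the final product; they exist only to let the component run in isolation.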

Integration Testing:
* Once all of the modules have passed their unit tests, each module works properly in
isolation. But will the modules work correctly when we put them together?
* Integration testing is going to be the solution for this problem.
Interfacing:
It is the mechanism that brings together all of the individual modules.
The following issues can arise during interfacing:
=> Data can be lost across an interface.
=> One module can have an inadvertent, adverse effect on another module.
=> Subfunctions, when combined, may not produce the desired major function.
=> Individually acceptable imprecision may be magnified to unacceptable levels.
=> Global data structures can present problems.
Integration Testing – Definition:
* It is a methodical approach to building the software architecture, while at the same time
running tests to find faults linked with the software's interface.
* The goal is to use components that have undergone unit testing and construct a
programme structure according to the specifications set out by the design.

Incremental Integration:
* In this scenario, the programme is built and tested in small steps
* Errors are easy to localise and rectify
* Interfaces are tested in their entirety and a systematic testing strategy may be used
* There are several variants of the incremental integration method to choose from.

Incremental Integration – Types:


(1) Top – Down Integration:
* It is an incremental approach to building the software architecture.
* Modules are integrated by starting with the main control module (the main program)
and moving downward through the control hierarchy.
* The modules subordinate to the main control module can be incorporated in either a
depth-first or a breadth-first manner.

Depth First Integration:

* All components on a major control path of the program structure are integrated.
* The primary route chosen is determined by the features unique to the application.
* For instance, if the left-hand path components were chosen, integration would begin
with M1, M2, and M5, and finish with M8.
* Subsequently, the right hand and center control routes are built.

Breadth First Integration:

* All components directly subordinate at each level are incorporated, moving across the
structure horizontally.
The subassemblies M2, M3, and M4 in the picture on the right would be integrated first,
then M5, M6, and so on.

The following are the steps that make up the integration process:
Step 1: the main control module serves as the test driver, and stubs are installed in place
of all components that are immediately subordinate to the main control module.
Step 2: Subordinate stubs are replaced one at a time with genuine components, and this
process is carried out in one of two ways, determined by the integration strategy that was
chosen: breadth first or depth first.
Step 3: Tests are carried out as each component is integrated.
Step 4: On completion of each set of tests, another stub is removed and replaced with the
real component.
Step 5: You may want to perform regression testing in order to make certain that no new
mistakes have been introduced.
* Beginning with Step 2, the procedure will continue until the entirety of the programme
structure is developed.
Advantages:
(1) Top-down integration ensures that significant control or decision points are validated
at an earlier stage in the testing process, which is beneficial in a number of ways.
(2) An early demonstration of the product's functional capability builds confidence for
both the customer and the developer, since it shows that the system is behaving as
planned.
(3) Although this approach is not overly complicated, in practise it may lead to a variety
of practical issues.

(2) Bottom – Up Integration:

* Construction and testing begin with the lowest-level components of the program.
* Because components are integrated from the bottom up, the processing required for
components subordinate to a given level is always available.
* In this case, stubs are not required.

Steps followed in the Bottom – Up Integration:


Step 1: Low-level components are assembled to create builds, or clusters, that carry out
particular software tasks.
Step 2: Involves writing a driver, or control program for testing, to synchronize the input
and output of test cases.
Step 3: Testing the cluster
Step 4: Drivers are removed and clusters are combined, moving upward in the program
structure.

Example:
* Components are combined to form Clusters 1, 2, and 3, as shown in the figure below.
* Each cluster is tested using a driver.
* The components in Clusters 1 and 2 are subordinate to Ma.
* Drivers D1 and D2 are removed, and the clusters are interfaced directly to Ma.
* Similarly, driver D3 for Cluster 3 is removed before integration with Mb.
* Both Ma and Mb are ultimately integrated with component Mc.
Bottom up Integration

6.9 Regression Testing:
* It involves rerunning a portion of previously completed tests to make sure that
modifications have not had any unanticipated negative effects.
* i.e. whenever software is updated, a portion of the software configuration is altered,
including the program, its documentation, and the data that supports it.
* Regression testing is the process used to make sure that modifications made [for testing
purposes or for other reasons] don't result in the introduction of new errors.
* Regression testing can be carried out either manually by running a subset of all test
cases again or automatically by employing tools for capture and playback.
* Software engineers can record test cases and outcomes with capture and playback tools
for later comparison and playback.
* There are three types of test cases in the regression test suite:
(i) A sample set of tests that will run through every feature of the
software
(ii) Further testing concentrating on software features that are probably
going to be impacted by the modification
(iii) Tests concentrating on the modified software components
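As a hedged illustration of how such a suite might be organized (the pytest marker names below are invented; a real project would also register them in pytest.ini), subsets of the regression suite can be tagged and re-run selectively:

    import pytest

    def add_to_cart(cart, item):
        cart.append(item)
        return cart

    @pytest.mark.representative         # (i) broad sample exercising a major feature
    def test_add_single_item():
        assert add_to_cart([], "book") == ["book"]

    @pytest.mark.affected_by_change     # (ii) focused on behaviour a change may impact
    def test_add_preserves_existing_items():
        assert add_to_cart(["pen"], "book") == ["pen", "book"]

    # Re-run only the representative sample:    pytest -m representative
    # Re-run tests around the changed feature:  pytest -m affected_by_change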

Smoke Testing:
* When developing software products, this approach to integration testing is frequently
employed.
* It is designed as a pacing mechanism for time-critical projects, allowing the software
team to assess its project on a frequent basis.

Activities included in the smoke testing:
(1) A "Cluster" is assembled from software components that have been converted into
code. Every data file, library, reusable module, and engineering component needed to
carry out one or more product functions is included in a cluster.
(2) A battery of tests is intended to identify any problems that prevent the cluster from
operating as intended.
(3) Clusters are integrated with other clusters, and the entire product is smoke tested
daily.
* The integration strategy might be either bottom-up or top-down.

Critical Module:
* It is a module that has one (or) more of the following characteristics:
(i) Addresses several software requirements
(ii) Has a high level of control [resides relatively high in program
structure]
(iii) is complex (Or) error prone
(iv) Has definite performance requirements
* Testing the crucial module as soon as feasible is recommended.
* Regression tests typically need to concentrate on important module functionalities.

Integration Test Documentation:
* A test specification includes a detailed task description and an overarching plan for
software integration.
* This document contains a
=> Test Plan
=> Test Procedure
=> Work product of the software process
* Here, the testing process is broken down into phases and clusters that focus on
particular functional and behavioral aspects of the program.

Validation Testing:
* The validation process at the system level concentrates on the following:
=> User – visible actions
=> User recognizable output from the system
* Validation testing is only successful when the program operates as the customer would
reasonably expect it to.
* The Software Requirements Specifications define reasonable expectations
* A section of the specification called validation criteria serves as the foundation for a
validation testing approach

Validation Test Criteria:
* A test plan specifies the classes of tests to be conducted;
* Test procedures identify individual test cases;
* Software validation is accomplished by a sequence of tests;
* The purpose of the procedures and test plan is to guarantee:
=> Every functional requirement has been met
=> all behavioral characteristics are achieved
=> all performance requirements are attained
=> documentation is correct
=> usability and other requirements are met
* One of two possible conditions exists after the validation test:
(i) The function or performance characteristic conforms to the specification
and is accepted.
(ii) A deviation from the specification is uncovered and a deficiency list is
created.
* Deviations or errors discovered at this stage can rarely be corrected before the
scheduled delivery date.

Configuration Review (Or) Audit:
* It is a crucial component of the validation procedure.
* The purpose of the review is to make sure that
=> all software configuration elements have been generated or cataloged
correctly and
=> contain all the information needed to support each stage of the
software life cycle.

Alpha and Beta testing:
* To enable the customer to verify all requirements, a series of acceptance tests are
carried out when custom software is developed for a single customer.
* Acceptance testing is conducted by the end user rather than by software engineers.
* Since it is difficult to conduct acceptance tests with every client while developing
software for mass use, most software product developers employ a procedure known as
alpha (Or) beta testing.

(i) Alpha testing:

* It is conducted by end users at the developer's site.
* The program is used in a realistic environment;
* The developer is present at all times, recording errors and usage problems;
* Alpha tests are conducted in a closely supervised environment.
(ii) Beta testing:

* The beta test is a "live" application of the software in an environment that cannot be
controlled by the developer.
* It is carried out at end-user sites; the developer is generally not present.
* The end user records any problems encountered during beta testing;
* These error reports are forwarded to the developer at regular intervals;
* Based on the error reports, the software engineer modifies the product and then
prepares to release it to the entire user base.

System Testing:
* It is a series of different tests whose primary purpose is to fully exercise the
computer-based system.
* Although each test has a different purpose, all work to verify that the system elements
have been properly integrated and perform their allocated functions.

Types:
(i) Recovery Testing
(ii) Security Testing
(iii) Stress Testing
(iv) Performance Testing
(v) Sensitivity Testing
(1) Recovery Testing:
* It is a system test that forces the software to fail in a variety of ways and verifies that
recovery is properly performed.
* If recovery is automatic (performed by the system itself), reinitialization,
checkpointing mechanisms, data recovery, and restart are each evaluated for correctness.
* If recovery requires human intervention, the mean time to repair (MTTR) is evaluated
to determine whether it is within acceptable limits.
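As a simple worked example of the MTTR metric (the numbers are invented for illustration): if four induced failures required 1.0, 0.5, 2.0, and 0.5 hours of manual recovery, then MTTR = (1.0 + 0.5 + 2.0 + 0.5) / 4 = 1.0 hour, which is then compared against the acceptable limit stated in the requirements.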

(2) Security Testing:

* It verifies that the protection mechanisms built into a system will, in fact, protect it
from improper penetration.
* During security testing, the tester plays the role of the individual who desires to
penetrate the system.
* Given enough time and resources, good security testing will ultimately penetrate a
system; the role of the system designer is to make the cost of penetration greater than
the value of the information that would be obtained.
(3) Stress Testing:


* Stress testing is the process of putting a system through its paces in a manner that
places an abnormal demand on its resources in terms of quality, frequency, or volume.
Example:
(i) Special tests may be designed that generate ten interrupts per second, when one or
two is the average rate.
(ii) Input data rates may be increased by an order of magnitude to determine how input
functions respond.
(iii) Test cases that require the maximum amount of memory or other resources are
executed.
(iv) Test cases that may cause memory-management problems are created.
(4) Performance Testing:

* It is designed to test the run-time performance of software within the context of an
integrated system.
* Performance testing occurs throughout all steps of the testing process.
* Performance tests are often coupled with stress testing, and usually require both
hardware and software instrumentation.

(5) Sensitivity Testing:

* It is a variation of stress testing.
* For example, in most mathematical algorithms a very small range of data within the
bounds of valid input may cause extreme, or even erroneous, processing.
* Sensitivity testing attempts to uncover data combinations within valid input classes
that may cause instability or improper processing.

The Art of Debugging:
* Debugging is not testing, but it always occurs as a consequence of testing: when a test
case uncovers an error, debugging is the activity that results in the removal of the error.
The Debugging Process

* The execution of a test case is the first step in the debugging process.
Next, the results are analysed, and a discrepancy between expected and actual
performance is discovered.
Next, debugging attempts to match symptoms with the underlying causes of errors, which
ultimately leads to error correction.
Finally, debugging always has one of two outcomes:
(i) the cause is found and corrected, or
(ii) the cause is not found.

Why is debugging so difficult?


(1) The symptom and the cause may be geographically remote; that is, the symptom may
appear in one part of the program while the cause is actually located elsewhere.
(2) The symptom might go away (temporarily) if another error is fixed.
(3) The symptom might really be caused by something that isn't an error at all (such as
rounding off inaccuracy).
(4) The symptom could be the result of an error made by a person that is difficult to
pinpoint.
(5) The symptom could be the result of timing problems rather than processing problems.
(6) It may be challenging to correctly duplicate the conditions of the input [for example,
in a real-time application when the ordering of the data is unpredictable].
(7) The symptom may be intermittent. This is especially common in embedded systems,
which couple hardware and software inextricably.
(8) The symptom may be the result of a lot of reasons that are spread across a variety of
jobs that are being executed on various processors.

* The greater the consequences of an error, the greater the pressure to find its cause.
* Often, this pressure forces the software developer to fix one problem while
simultaneously introducing two more.

6.15 Debugging Strategies:
* In general three debugging strategies have been proposed
(i) Brute Force
(ii) Back Tracking
(iii) Cause Elimination

(1) Brute Force:

* Memory dumps, run-time traces, and output statements in the code are used; the mass
of information produced may eventually lead to success, but time and effort are often
wasted.
* The philosophy here is "let the computer find the error."
* This method is applied only when all other methods fail.
(2) Back Tracking:

* This is a common approach that can be used successfully in relatively small programs.
* Beginning at the point where a symptom has been uncovered, the source code is traced
backward (manually) until the cause is found.
* As the number of source lines grows, the number of potential backward paths may
become unmanageably large.

(3) Cause Elimination:

* It is based on induction or deduction and introduces the concept of binary partitioning.
* A "cause hypothesis" is devised, and the data related to the error occurrence are used
to prove or disprove the hypothesis.
* Alternatively, a list of all possible causes is developed, and tests are conducted to
eliminate each one.
* If initial tests indicate that a particular cause hypothesis shows promise, the data are
refined in an attempt to isolate the bug.
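A minimal sketch of binary partitioning in Python, under the assumption that the failure is provoked by a single record in a batch and is reproducible on any subset containing it (all names are invented for illustration):

    def triggers_failure(records):
        # Stand-in oracle: here the "bug" is tripped by any negative value.
        return any(r < 0 for r in records)

    def isolate(records):
        # Repeatedly halve the batch, keeping the half that still fails.
        while len(records) > 1:
            mid = len(records) // 2
            left = records[:mid]
            records = left if triggers_failure(left) else records[mid:]
        return records[0]

    batch = [4, 8, 15, -16, 23, 42]
    print(isolate(batch))   # -> -16, the record that provokes the failure

Each split eliminates half of the remaining candidate causes, so the bug is localized in a logarithmic number of test runs.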

6.16 White Box Testing:
* Another name for this test is the Glass-Box Test
White-box testing gives the software developer the ability to design test cases that:
(1) guarantee that all independent paths within a module have been exercised at least
once;
(2) exercise all logical decisions on both their true and false sides;
(3) execute all loops at their boundaries and within their operational bounds; and
(4) exercise internal data structures to ensure their validity.
White box testing
• To test logical paths across the software, test cases with particular sets of conditions
and/or loops are provided.
• Testing of the software is referred to as white-box testing when it is dependent on a
detailed investigation of the product's procedures.
• The "status of the programme" can be evaluated at a number of different periods in
time.

• White-box testing, also known as glass-box testing, is a method for designing test cases
that derives test cases by using the control structure of the procedural design. This
method is frequently referred to as "glass-box testing."
Basis path testing

• Test cases that exercise specific sets of conditions and/or loops are derived to test
logical paths through the program.
• Test cases derived to exercise the basis set are guaranteed to execute every statement
in the program at least once.
Methods:
1. Flow graph notation
2. Independent program paths or Cyclomatic complexity
3. Deriving test cases
4. Graph Matrices
Each circle in figure B, which is referred to as a node on a flow graph, represents one or
more procedural statements.
• A mapping into a single node is possible for a series of process boxes and a decision
diamond.
• Similar to the arrows on flowcharts, the arrows on the flow graph, sometimes referred to
as edges or links, show the flow of control.
• Even in cases when a node does not reflect a procedural statement, it still needs an edge
to finish at that node.
• Any area bounded by a network of edges and nodes is referred to as a region. The area
outside the graph is counted together with the regions when we calculate the overall
count.
• In the event that a compound condition arises during the process of procedural design,
the flow graph will become marginally more convoluted.

Independent program paths or Cyclomatic complexity


• An independent path is any path through the program that introduces at least one new
set of processing statements or a new condition.
• For example, a set of independent paths for the flow graph:
– Path 1: 1-11
– Path 2: 1-2-3-4-5-10-1-11
– Path 3: 1-2-3-6-8-9-1-11
– Path 4: 1-2-3-6-7-9-1-11
• Note that each new path introduces a new edge.
• The path 1-2-3-4-5-10-1-2-3-6-8-9-1-11 is not an independent path, because it is merely
a combination of paths that have already been specified and does not traverse any new
edges.
• Test cases should be built in such a way that they are forced to follow these basic set
paths.
• Each and every line in the programme should have at least one opportunity to be run,
and each and every condition should have been tested both ways (true and false).
How can we determine the total number of possible routes to investigate?
• Cyclomatic complexity is a software metric that provides a quantitative measure of the
logical difficulty of a programme. It does this by counting the number of paths through a
programme.
•It offers the number of tests that need to be carried out in addition to defining the total
number of independent pathways in the basis set.
• Cyclomatic complexity can be computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. The cyclomatic complexity V(G) of a flow graph G is defined as V(G) = E - N + 2,
where E is the number of edges and N is the number of nodes in the flow graph.
3. The cyclomatic complexity V(G) of a flow graph G is also defined as V(G) = P + 1,
where P is the number of predicate nodes in the flow graph.
As a result, we have an upper bound on the total number of tests based on the value of
V(G).
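A small sketch (assumed, not from the source) that computes V(G) by the second and third definitions and cross-checks them against the edge, node, and predicate counts used for the procedure average below:

    def cyclomatic_from_graph(num_edges, num_nodes):
        return num_edges - num_nodes + 2        # V(G) = E - N + 2

    def cyclomatic_from_predicates(num_predicate_nodes):
        return num_predicate_nodes + 1          # V(G) = P + 1

    # Counts for the flow graph of the procedure average
    # (17 edges, 13 nodes, 5 predicate nodes), as computed later in this section:
    assert cyclomatic_from_graph(17, 13) == 6
    assert cyclomatic_from_predicates(5) == 6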
Deriving Test Cases
• This method can be applied as a series of steps.
• The procedure average, shown in PDL below, will be used as an example.
• Although average is a very simple algorithm, it contains compound conditions and
loops.

To derive the basis set, follow these steps:

1. Using the design or code as a foundation, draw a corresponding flow graph. The PDL
statements are numbered, and these numbers become the nodes of the flow graph.
Flow graph for the procedure average


2. Determine the cyclomatic complexity of the resultant flow graph:
– V(G) can be determined without drawing a flow graph by counting the
conditional statements in the PDL (for the procedure average, compound
conditions count as two) and adding 1.
– V(G) = 6 regions
– V(G) = 17 edges - 13 nodes + 2 = 6
– V(G) = 5 predicate nodes + 1 = 6
3. Determine a basis set of linearly independent paths.
a. The value of V(G) provides the number of linearly independent paths
through the program control structure.
b. path 1: 1-2-10-11-13
c. path 2: 1-2-10-12-13
d. path 3: 1-2-3-10-11-13
e. path 4: 1-2-3-4-5-8-9-2-. . .
f. path 5: 1-2-3-4-5-6-8-9-2-. . .
g. path 6: 1-2-3-4-5-6-7-8-9-2-. . .
h. The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path
through the remainder of the control structure is acceptable.
4. Prepare test cases that will force execution of each path in the basis set.
a. Data should be chosen so that conditions at the predicate nodes are
appropriately set as each path is tested.
b. Each test case is executed and compared to expected results.
c. Once all test cases have been completed, the tester can be sure that all
statements in the program have been executed at least once.
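A hedged sketch of this final step for a much simpler function than average (the classify function and its paths are invented for illustration): one test case per basis path, with input data chosen to force each predicate node the required way.

    def classify(x):
        if x < 0:                # predicate node 1
            return "negative"
        if x == 0:               # predicate node 2
            return "zero"
        return "positive"

    # V(G) = 2 predicate nodes + 1 = 3, so three basis paths and three test cases.
    def test_path_negative():    # predicate 1 true
        assert classify(-5) == "negative"

    def test_path_zero():        # predicate 1 false, predicate 2 true
        assert classify(0) == "zero"

    def test_path_positive():    # both predicates false
        assert classify(7) == "positive"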
Black – Box Testing:
* Black-box testing, also known as behavioural testing, is a complementary approach
that is likely to uncover a different class of errors than white-box methods; it is not an
alternative to white-box testing.
* Black-box testing attempts to find errors in the following categories:
(1) Incorrect or missing functions
(2) Errors in the interface
(3) Errors in the data structures (Or) unauthorised access to external data bases
(4) Inappropriate behaviours (Or) Poor performances
(5) Errors during the initialization and termination processes

* By applying black-box techniques, we derive a set of test cases that satisfy the
following criteria:
(i) test cases that reduce, by a count greater than one, the number of additional test
cases that must be designed to achieve reasonable testing; and
(ii) test cases that tell us something about the presence or absence of classes of errors,
rather than an error associated only with the specific test at hand.

• Black-box testing deliberately disregards control structure and focuses attention on the
information domain. Tests are designed to answer the following questions:
– How is functional validity tested?
– How are system behaviour and performance tested?
– What classes of input will make good test cases?
 Black-box testing methods:
 Graph-Based Testing Methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Orthogonal Array Testing
Graph-Based Testing Methods
• First, understand the objects that are modelled in software and the relationships that
connect them.
• Next, define a series of tests that verify that "all objects have the expected relationship
to one another."
• Stated another way:
– Create a graph of important objects and their relationships.
– Devise a series of tests that cover the graph, so that each object and relationship is
exercised and errors are uncovered.
• To begin, create a graph: a collection of nodes that represent objects, and links that
depict the relationships between those objects;
– link weights describe some characteristic of a link, and node weights describe the
properties of a node.
• Nodes are represented as circles, and links can take a number of different forms.
• A directed link (represented by an arrow) indicates that the relationship moves in only one direction.
• A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions.
• Parallel links are used when a number of different relationships are established between two graph nodes.
Example
• Object #1 is the new file menu selection
• Object #2 is a document window
• Object #3 is the document text
• Selecting new file from the menu generates a document window.
• The link weight indicates that the window must be generated in less than 165 milliseconds.
• The node weight of the document window provides a list of the window attributes that are expected when the window is generated.
• A parallel link indicates the relationships between the document window and the document text.
• An undirected link establishes a symmetric relationship between the new file menu select and the document text.
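The graph in this example can be modelled directly. Below is a minimal sketch (the Link class and the relationship names are illustrative, not part of the original notes) that derives one test obligation per link, so that every object and relationship is exercised at least once:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Link:
    source: str
    target: str
    relationship: str
    directed: bool = True
    weight: Optional[str] = None   # e.g., a timing requirement on the link

links = [
    Link("new file menu select", "document window", "generates", weight="< 165 ms"),
    Link("document window", "document text", "contains"),       # parallel link 1
    Link("document window", "document text", "displays"),       # parallel link 2
    Link("new file menu select", "document text", "establishes", directed=False),
]

# One test obligation per link covers every object and relationship in the graph.
for link in links:
    arrow = "->" if link.directed else "<->"
    constraint = f" [{link.weight}]" if link.weight else ""
    print(f"TEST: verify {link.source} {arrow} {link.target} ({link.relationship}){constraint}")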
To define equivalence classes, follow these guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
Example: a banking application that processes phone-number commands. The data elements are:
• area code: blank or a three-digit number
• prefix: a three-digit number not beginning with 0 or 1
• suffix: a four-digit number
• password: a six-character alphanumeric string
• commands: check, deposit, bill pay, and the like

The input conditions associated with each data element are:
• area code: input condition, Boolean - the area code may or may not be present; input condition, value - a three-digit number
• prefix: input condition, range - values defined between 200 and 999, with specific exceptions
• suffix: input condition, value - four-digit length
• password: input condition, Boolean - a password may or may not be present; input condition, value - a six-character string
• command: input condition, set - check, deposit, bill pay
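A minimal sketch of how these classes translate into concrete test inputs (the specific values chosen here are illustrative assumptions; any representative of a class would do):

equivalence_classes = {
    "area code": {"valid": ["", "415"],            # blank or three digits
                  "invalid": ["41", "4155"]},      # too short, too long
    "prefix":    {"valid": ["555"],                # 200-999, not starting 0 or 1
                  "invalid": ["155", "055"]},      # out of range, starts with 0
    "suffix":    {"valid": ["1234"],
                  "invalid": ["123"]},
    "password":  {"valid": ["ab12cd"],             # six-character alphanumeric
                  "invalid": ["abc"]},
    "command":   {"valid": ["check", "deposit", "bill pay"],
                  "invalid": ["withdraw-all"]},    # not a member of the set
}

# One test per class member tells us about a whole class of errors,
# not just about a single input.
for field, classes in equivalence_classes.items():
    for kind, values in classes.items():
        for value in values:
            print(f"TEST {field}: {kind} input {value!r}")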
Boundary Value Analysis (BVA)

• Boundary value analysis is a test-case design technique that complements equivalence partitioning.
• Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class.
• In other words, rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.
Guidelines for BVA
1. If an input condition specifies a range that is bounded by the values a and b, test cases
should be built using the values a and b, as well as the values immediately above and
immediately below a and b.
2. If an input condition requires a certain number of values, test cases should be designed
that exercise both the lowest possible value and the highest possible value. Additionally
tested are values that lie just above and below the minimum and maximum, respectively.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries, be sure to design a test case that exercises the data structure at its boundary.
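A minimal sketch of guideline 1 (the step of 1 used for "just above" and "just below" assumes integer-valued input):

def boundary_values(a, b):
    """BVA test inputs for an input condition bounded by the range [a, b]."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# For the prefix field above (range 200-999):
print(boundary_values(200, 999))   # [199, 200, 201, 998, 999, 1000]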
Orthogonal Array Testing

• Orthogonal array testing applies when the number of input parameters is small and the values that each parameter may take are clearly bounded.
• When these numbers are very small (for example, three input parameters, each taking three discrete values), it is possible to consider every input permutation and test the input domain exhaustively.
• However, as the number of input values and the number of discrete values for each data item grow, exhaustive testing becomes impractical.
• Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing.
• It reduces the number of input permutations needed while still providing broad coverage, so that fewer tests are conducted overall.
Example
• Consider the send function of a fax application.
• Four parameters, P1, P2, P3, and P4, are passed to the send function. Each takes on three discrete values.
• P1 takes on the values:
P1 = 1, send now; P1 = 2, send one hour later; P1 = 3, send after midnight.
• P2, P3, and P4 would likewise take on the values 1, 2, and 3, signifying other send functions.
• An orthogonal array is an array of values in which each column represents a parameter, and each parameter can take one of a fixed set of values called levels.
• Each row of the array represents a test case.
• Rather than enumerating all possible combinations of parameters and levels, the parameters are combined pairwise, which keeps the number of test cases small.
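A minimal sketch using the standard L9(3^4) orthogonal array for this example (the array itself is standard; mapping its row entries onto P1..P4 follows the description above). Every pair of parameters sees all nine level combinations across the rows:

L9 = [
    (1, 1, 1, 1),
    (1, 2, 2, 2),
    (1, 3, 3, 3),
    (2, 1, 2, 3),
    (2, 2, 3, 1),
    (2, 3, 1, 2),
    (3, 1, 3, 2),
    (3, 2, 1, 3),
    (3, 3, 2, 1),
]

# Nine test cases instead of 3**4 = 81 exhaustive combinations.
for case, (p1, p2, p3, p4) in enumerate(L9, start=1):
    print(f"Test {case}: P1={p1} P2={p2} P3={p3} P4={p4}")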
MODULE 5
RISK, QUALITY MANAGEMENT AND REENGINEERING
RISK MANAGEMENT
Introduction:
* A software development team can learn to deal with uncertainty through a process
known as risk analysis and management.
* A risk is a prospective problem; it may or may not materialize.
* However, it is a good idea to identify risks, evaluate their likelihood of occurring, and
calculate their potential impact.
Reactive vs Proactive Risk Strategies:
(i) Reactive Risk Strategies:
 Until something goes wrong, the software team does nothing about potential
dangers.
 When an issue arises, the team springs into action to try to find a solution as soon
as possible.
 In this state the team is in "fire-fighting mode", often known as the "Indiana Jones school of risk management" after the hero's famous line when faced with a seemingly insurmountable problem: "Don't worry, I'll think of something!"
(ii) Proactive Risk Strategies:
* This strategy gets under way long before any technical work begins: potential risks are identified, their probability and impact are assessed, and they are ranked by importance.
* The software team then devises a plan for mitigating the risks it faces.
* The primary objective is to avoid risk; but because not all risks can be avoided, the team also works to establish a contingency plan that will enable it to respond in a controlled and effective manner should a risk become reality.
Software Risks: Risk Characteristics:
(1) Uncertainty: a risk may or may not happen; that is, there are no risks that are one hundred percent certain to occur.
(2) Loss: if a risk becomes a reality, unwanted consequences or losses will follow.
* When risks are analyzed, it is important to quantify both the degree of uncertainty and the degree of loss associated with each risk.
Risk Categories:

(i) Project Risks:
* Project risks threaten the project plan.
* If project risks become real, it is likely that:
=> the project schedule will slip
=> costs will increase.
* Project risks identify potential problems with:
=> budget
=> schedule
=> personnel (staffing and organization)
=> resources
=> stakeholders/customer
=> requirements.
(ii) Technical Risks:
* Technical risks threaten the quality and timeliness of the software to be produced.
* If a technical risk becomes a reality, implementation may become difficult or impossible.
* Technical risks identify potential problems with:
=> design
=> implementation
=> interfacing
=> verification
=> maintenance.
* Technical risks occur because the problem turns out to be harder to solve than we anticipated.
(iii) Business Risks:
* Business risks threaten the viability of the software to be built.
* The top five business risks are:
(i) building an excellent product or system that no one really wants [market risk]
(ii) building a product that no longer fits into the overall business strategy of the company [strategic risk]
(iii) building a product that the sales force does not understand how to sell [sales risk]
(iv) losing the support of senior management due to a change in focus or a change in people [management risk]
(v) losing budgetary or personnel commitment [budget risk].
(iv) Known Risks:

* These risks can be uncovered after careful evaluation of the project plan and of the business and technical environment in which the project is being developed.
* Other reliable information sources include:
=> an unrealistic delivery date
=> a lack of documented requirements
=> a poor development environment.
(v) Predictable Risks:

* These risks are extrapolated from past project experience.

Examples:
=> staff turnover
=> poor communication with the customer
=> dilution of staff effort as ongoing maintenance requests are serviced.
(vi) Unpredictable Risks:

* These risks are the wild card in the deck: they can and do occur, but they are extremely difficult to identify in advance.
Risk Identification:

* Risk identification is a systematic attempt to specify threats to the project plan, such as estimates, schedule, and resource loading.
* By identifying known and predictable risks, the project manager takes a first step toward avoiding them when possible and controlling them when necessary.
There are two distinct types of risks:

(i) Generic risks: a potential threat to every software project.
(ii) Product-specific risks: these can be identified only by those with a clear understanding of the technology, the people, and the environment that is specific to the software to be built.
* One method for identifying risks is to create a risk item checklist.
* The checklist can be used for risk identification; it focuses on a subset of known and predictable risks in the following generic subcategories:
Product size = risks associated with the overall size of the software to be built or modified.
Business impact = risks associated with constraints imposed by management or the marketplace.
Customer characteristics = risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner.
Process definition = risks associated with the degree to which the software process has been defined and is followed by the development organization.
Development environment = risks associated with the availability and quality of the tools to be used to build the product.
Technology to be built = risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system.
Staff size and experience = risks associated with the overall technical and project experience of the software engineers who will do the work.
Assessing Overall Project Risk:
* The following questions are based on risk information gathered from experienced software project managers.
* The questions are ordered by their relative importance to the success of a project.

Questions:
(1) Do the highest-level software and customer managers have an official commitment to
back the project?
(2) Do the people who will be using the finished result have a passionate commitment to
the project and the system or product that will be built?
(3) Is there a mutual understanding between the software engineering team and its
customers regarding the requirements?
(4) To what extent have customers been involved in the process of defining the
requirements?
(5) Do end-users have expectations that are grounded in reality?
(6) Is the project scope stable?
(7) Does the team working on the software engineering have the appropriate variety of
skill sets?
(8) Can you guarantee that the project requirements won't change?
(9) Does the team working on the project have previous experience working with the
technology that will be implemented?
(10) Does the project team have a sufficient number of members to complete the task at
hand?
(11) Do all of the customer and user constituencies have the same opinion regarding the
significance of the project as well as the needs for the system or product that will be
constructed?
* The degree to which the project is at risk is directly proportional to the number of negative responses to these questions.
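A minimal sketch of that proportionality (the answers recorded here are purely illustrative; in practice the responses come from the project team and its stakeholders):

answers = {                      # True = "yes" (favourable), False = "no"
    "official management commitment": True,
    "enthusiastic end users": True,
    "requirements understood": False,
    "customers involved in requirements": True,
    "realistic end-user expectations": False,
    "stable project scope": False,
    "right mix of skills": True,
    "stable requirements": False,
    "experience with the technology": True,
    "enough people": True,
    "agreement on importance": True,
}

negatives = sum(1 for favourable in answers.values() if not favourable)
print(f"{negatives} negative responses out of {len(answers)}: "
      f"project risk rises with this count")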
Risk Components and Drivers:

* The risk components are defined as follows:
(1) Performance risk = the degree of uncertainty that the product will meet its requirements and be fit for its intended use.
(2) Cost risk = the degree of uncertainty that the project budget will be maintained.
(3) Support risk = the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance.
(4) Schedule risk = the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.
* The impact of each risk driver on a risk component is divided into one of four categories:
=> negligible
=> marginal
=> critical
=> catastrophic.
Risk Projection:
* Also called risk estimation.
* It attempts to rate each risk in two ways:
(i) the likelihood or probability that the risk is real
(ii) the consequences of the problems associated with the risk, should it occur.
* The project planner, together with technical staff, performs four risk projection steps:
(1) Establish a scale that reflects the perceived likelihood of each risk.
(2) Delineate the consequences of the risk.
(3) Estimate the impact of the risk on the project and on the product.
(4) Note the overall accuracy of the risk projection so that there will be no misunderstandings.
Risk Table:
* A risk table provides a project manager with a simple technique for risk projection.

Risk                                      Category  Probability  Impact  RMMM
Size estimate may be significantly low    PS        60%          2
Larger number of users than planned       PS        30%          3
Less reuse than planned                   PS        70%          2
Delivery deadline will be tightened       BU        50%          2
Customer will change requirements         PS        80%          2
Staff inexperienced                       ST        30%          2
Impact values:
1. Catastrophic
2. Critical
3. Marginal
4. Negligible
* All risks identified by the project team are listed in the first column of the table.
* Second column == category of each risk (PS = project size risk, BU = business risk, ST = staff risk).
* Third column == probability of occurrence.
* Fourth column == impact.
* Once the first four columns have been completed, the table is sorted by probability and by impact.
* High probability and high impact risks move to the top of the table
* Low probability risks move to the bottom of the table
* A "Cut Off Line" is established when the project manager analyses the sorted table.
*All risks above the Cut Off Line must be managed, as indicated by the horizontal line
drawn at some point on the table, which suggests that only those risks above the line will
receive further attention.
* Risk mitigation, monitoring, and management information can be found at the link
given in the fifth column labelled RMMM.
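A minimal sketch of the sort-and-cut-off step, using the rows of the table above (the 50% cut-off threshold is an illustrative assumption, not a fixed rule):

# (description, category, probability, impact) - impact: 1 = catastrophic .. 4 = negligible
risks = [
    ("Size estimate may be significantly low", "PS", 0.60, 2),
    ("Larger number of users than planned",    "PS", 0.30, 3),
    ("Less reuse than planned",                "PS", 0.70, 2),
    ("Delivery deadline will be tightened",    "BU", 0.50, 2),
    ("Customer will change requirements",      "PS", 0.80, 2),
    ("Staff inexperienced",                    "ST", 0.30, 2),
]

# Sort by probability (descending), then by impact (ascending: 1 is worst).
risks.sort(key=lambda r: (-r[2], r[3]))

CUT_OFF = 0.50                     # assumed management threshold
for desc, cat, prob, impact in risks:
    status = "MANAGE" if prob >= CUT_OFF else "below cut-off line"
    print(f"{prob:4.0%}  impact={impact}  [{cat}]  {desc}  -> {status}")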
Example:

[Figure: impact (very low to very high) plotted against probability of occurrence (0 to 1.0). Risks in the high-impact, high-probability region are a management concern; risks in the low-impact, low-probability region can be disregarded.]
* A risk factor that has a high impact but a very low probability of occurrence, as in the figure above, should not absorb a significant amount of management time.
* However, high-impact risks with moderate to high probability, and low-impact risks with high probability, should be carried forward into the risk analysis steps that follow.
Risk Refinement:
* As time passes and more is learned about the project and the risk, it may be possible to refine the risk into a set of more detailed risks.
* One way to do this is to represent the risk in Condition-Transition-Consequence [CTC] format; that is, the risk is stated in the form:

Given that <condition> then there is concern that (possibly) <consequence>.

For example: given that all reusable software components must conform to specific design standards and that some do not, there is concern that (possibly) only 70 percent of the planned reusable modules may actually be integrated into the as-built system.
Risk Mitigation, Monitoring and Management [RMMM]:

* A software team that takes a preventative, proactive approach to risk will always come out ahead.
* To illustrate, assume that high staff turnover is noted as a project risk. The following preventative steps could be taken:
Mitigation Strategy:
=> Meet with current staff to determine the causes of turnover.
Example: poor working conditions, low pay, and a competitive job market.
=> Mitigate those causes that are under our control before the project starts.
=> Once the project commences, assume turnover will occur and develop techniques to ensure continuity when people leave.
=> Organize the project team so that information about each development activity is widely dispersed.
=> Define documentation standards and establish mechanisms to ensure that documents are developed in a timely manner.
=> Conduct peer reviews of all work [so that more than one person is familiar with it].
=> Assign a backup staff member for every critical technologist.
* Risk monitoring activities commence once the project is under way. The following factors can be monitored:
Monitoring Strategy:
=> The general attitude of team members based on project pressures.
=> The degree to which the team has jelled.
=> Interpersonal relationships among team members.
=> Potential problems with compensation and benefits.
=> The availability of jobs both within and outside the organization.
Risk Management and Contingency Planning:

* Assume that the project is well under way and a number of people announce that they will be leaving.
* If the mitigation strategy has been followed:
=> backup staff are available,
=> information is documented,
=> knowledge has been dispersed across the team.
* The project manager may also temporarily refocus resources and readjust the project schedule for a short period while replacements get up to speed.
* Note that the RMMM steps incur additional project cost.
RMMM Plan:
* The RMMM plan documents all work performed as part of risk analysis.
* The project manager incorporates it into the overall project plan.
* Some software teams do not develop a formal RMMM document; instead, each risk is documented individually using a Risk Information Sheet (RIS).
RISK INFORMATION SHEET
Risk Id: CSE -06 Date: 11/12/2006 Probability: 80% Impact: High
Description: In actuality, only 70% of the software components slated for reuse will
be incorporated into the program.
Refinement / Context:
Sub-condition 1: Certain reusable components were built by a third party with no knowledge of internal design standards.
Sub-condition 2: The design standards for component interfaces have not been solidified.
Sub-condition 3: A few reusable components have been implemented in a language that is not supported in the target environment.
Mitigation / Monitoring:
(1) Contact the third party to determine conformance with design standards.
(2) Press for completion of the interface standards.
Management / Contingency Plan / Trigger:
Risk exposure recomputed to be 30,300; allocate this amount within the project contingency budget. Develop a revised schedule and assign staff accordingly.
Trigger: mitigation steps judged unproductive as of 10/3/2006.
Current status: 15/3/2007: mitigation steps initiated.
Originator: Arivu Assigned: Selvan
Software Quality Assurance
1. Quality
• Quality is defined as "a characteristic or attribute of something."
• Two kinds of quality may be encountered:
– Quality of design refers to the characteristics that designers specify for an item, i.e., the requirements to which the product is to be built;
– Quality of conformance is the degree to which the design specifications are followed during manufacturing.
• In software development,
– quality of design encompasses requirements, specifications, and the design of the system;
– quality of conformance focuses chiefly on implementation.
User satisfaction = compliant product + good quality + delivery within budget and
schedule
2. Quality Control
• Quality control refers to the various tests, evaluations, and inspections that are
conducted during the software development process.
• The procedure has a feedback loop as part of quality control.
• The idea that every work product has measurable, established requirements to
which we can compare the results of every operation is fundamental to quality
control.
• The feedback loop is necessary to reduce the amount of flaws that are created.
3. Quality Assurance
• Management's auditing and reporting duties comprise quality assurance.
• Should the data obtained from quality assurance reveal flaws, management must
address the issues and deploy the required resources to rectify quality concerns.
4. Cost of Quality
• All expenses incurred in pursuing quality or carrying out quality-related tasks are
included in the cost of quality.
• Quality costs may be divided into three categories:
– prevention
– appraisal
– failure
Software Quality Assurance (SQA)
Software quality is defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
• The definition serves to emphasise three important points:
– Software requirements are the foundation from which quality is measured; lack of conformance to requirements is lack of quality.
– Specified standards define a set of development criteria; if the criteria are not followed, lack of quality will almost surely result.
– A set of implicit requirements often goes unmentioned; if software conforms to its explicit requirements but fails to meet its implicit requirements, software quality is suspect.
SQA Group Activities

The software quality assurance (SQA) group involves software engineers, project managers, customers, salespeople, and the individuals who serve within the SQA group itself.
• Two distinct constituencies participate in ensuring software quality:
– the software engineers who do technical work, and
– the SQA group, which has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
• Software engineers address quality activities through the use of technical protocols and
metrics, formal technical reviews, and carefully planned software testing.
• The mission of the SQA group is to provide support to the software development team
so that they may produce a product of high quality.
Role of an SQA Group
1. Prepares an SQA plan for a project.
• The plan is developed during project planning and is reviewed by all stakeholders.
• The plan identifies the following:
– Audits and reviews that are to be undertaken
– Evaluations that are to be performed
- Specifications for the project that relate to the appropriate standards
- Protocols for the recording and investigation of errors
- Documents that are going to be created by the SQA group
– The amount of feedback provided to the software project team
2. Participates in the development of a project's software process description. • The SQA
group reviews the process description to ensure that it conforms to externally enforced
standards (like ISO-9001), organizational policy, internal software standards, and other
project plan components.
3. Reviews software engineering activities to verify that the defined software process is being followed. The SQA group identifies, documents, and tracks deviations from the process and verifies that any required corrections have been made.
4. Performs audits on certain software work products to check for conformity with
standards that have been established as part of the software development process.
• The SQA group examines certain work items, locates, documents, and keeps track of
deviations; ensures that repairs have been made; and provides the project manager with
periodic updates on the results of its work.
5. Ensures that any variations in the software work and work products are properly
documented and dealt with in accordance with a procedure that has been documented.
• There is a possibility that the project plan, the process description, the applicable
standards, or the technical work products will contain deviations.
6. Records any noncompliance and reports it to senior management.
• Noncompliance items are tracked until they are resolved.
FORMAL TECHNICAL REVIEWS
• It is a software quality assurance activity performed by software engineers
Objectives of the FTR are
– To find mistakes in the logic, function, or implementation of any software
representation;
– To confirm that the software under review satisfies its requirements;
– To make sure the software has been represented in accordance with
established standards;
– To produce software that is developed uniformly;
– To make projects more manageable.
• Walkthroughs, inspections, round-robin reviews, and other small-group technical
evaluations of software are really included in the FTR class of reviews.
FTR – Review Meeting
• Every review meeting should abide by the following constraints:
– Between three and five people should be involved in the review.
– Advance preparation should occur, but it should require no more than two hours of work for each person.
– The duration of the review meeting should be less than two hours.
• The FTR focuses on a specific (and small) part of the overall software. For example, rather than attempting to review an entire design, FTRs are conducted for each component or for a small group of components.
Review Reporting and Record Keeping
• During the FTR, a reviewer acts as the recorder and takes active notes on all of the
concerns that have been brought up. • At the conclusion of the review meeting, these
notes are summarised, and a review issues list is produced.
• A review summary report answers three questions:
1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
• The review summary report is incorporated into the historical record of the project and
may be provided to the project leader as well as any other parties that have an interest in
the work.
• The review issues list serves two purposes:
1. to identify problem areas within the product, and
2. to serve as an action-item checklist that guides the producer as corrections are made.
• It is important to establish a follow-up procedure to ensure that items on the issues list have been properly corrected; the issues list is normally attached to the summary report. If this is not done, it is possible that issues raised will "fall between the cracks."
• One approach is to assign responsibility for follow-up to the review leader.
SOFTWARE RELIABILITY
• The statistical definition of software reliability is "the probability of failure-free
operation of a computer programme in a specified environment for a specified amount of
time."
•What exactly does it mean to fail at something?
–Failure is defined as nonconformance to software requirements in the context of any
conversation pertaining to the quality and dependability of software.
It's possible that fixing one mistake will lead to the creation of new ones, which will then
lead to new mistakes, which will then lead to new failures.
• The reliability of software can be monitored, guided, and estimated by using historical
data in conjunction with development data.
Measures of Reliability and Availability
• A simple measure of reliability is mean-time-between-failure (MTBF), where
MTBF = MTTF + MTTR,
MTTF being the mean-time-to-failure and MTTR the mean-time-to-repair.
• The mean time between failures, or MTBF, is a far more meaningful measurement than
defects per KLOC or problems per FP.
To put it another way, an end-user is only concerned with the number of failures, not the
total number of errors. The total error count is not a particularly reliable indicator of the
dependability of a system because the failure rate of each individual error found within a
programme is not the same.
•We need to establish a measure of availability in addition to the dependability metric
that we already have.
• Software availability is the probability that a program is operating according to requirements at a given point in time. It is defined as:

Availability = [MTTF / (MTTF + MTTR)] x 100%

• The MTBF reliability measure is equally sensitive to MTTF and MTTR. The availability measure is somewhat more sensitive to MTTR, an indirect measure of the maintainability of software.
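A minimal sketch of these two measures (the hour values are illustrative assumptions, not data from the notes):

def mtbf(mttf_hours, mttr_hours):
    """Mean time between failures: MTBF = MTTF + MTTR."""
    return mttf_hours + mttr_hours

def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR), expressed as a percentage."""
    return mttf_hours / (mttf_hours + mttr_hours) * 100.0

mttf, mttr = 68.0, 2.0                      # assumed: 68 h to failure, 2 h to repair
print(f"MTBF = {mtbf(mttf, mttr):.1f} hours")              # 70.0 hours
print(f"Availability = {availability(mttf, mttr):.1f}%")   # 97.1%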
Reengineering
Introduction
• Modification occurs regardless of an application's size, complexity, or domain:
1. because customers demand new features,
2. because of errors,
3. because of new technology.
• We must maintain software when it is necessary and re-engineer it when that is the right course.
• What is it?
• Who does it?
• Why is it important?
• What are the steps?
 Maintenance corrects defects, adapts the software to new functionality demanded by users, and accommodates a changing environment.
 At a strategic level, business process re-engineering (BPR) identifies and evaluates existing business processes and creates revised processes that better meet current business goals.
• What is the work product?
 A variety of maintenance and re-engineering work products are produced, e.g., use cases, analysis and design models, and test procedures.
 The final output is the upgraded software.
• How do you ensure that you have done it right?
 Use the SQA practices that are applied to every software engineering process:
 technical reviews assess the analysis and design models;
 specialized reviews consider business applicability and compatibility;
 testing is applied to uncover errors in content, functionality, and the like.
Re-Engineering Advantages
 Reduced risk
 There is a high risk in new software development. There may be development
problems, staffing problems and specification problems.
 Reduced cost
 The cost of re-engineering is often significantly less than the costs of developing
new software.
Business Process Re-Engineering
• BPR extends far beyond the scope of IT and software engineering.
• It is concerned with re-designing business processes to make them more responsive and more efficient.
Business Process:
• A business process (BP) is a set of logically related tasks performed to achieve a defined business outcome.
• Within the business process, people, equipment, material resources, and business procedures are combined to produce a specified result.
• The overall business can be segmented as follows:
Business -> Business System -> Business Process -> Business Sub-process.
• BPR can be applied at any level of this hierarchy, but as the scope of BPR broadens, the risks associated with it grow dramatically.
BPR Model:
• BPR is iterative model
• BG and process that achieve them must be adapted to changing env.
• For this reason there is no start and end to BPR it is evolutionary process.
• The BPR Model consisting of 6 activities
1. Business Definition:
• Business goals are identified within the context of 4 key drivers
 Cost reduction
 Time reduction
 Quality improvement
 Personnel development and empowerment
• Goals can be identified at the business level or for a specific component of the business.
2. Process Identification:
• Processes that achieve the business goals are identified.
• They can then be ranked by importance, by need for change, or in any other way that is appropriate for the re-engineering activity.
3. Process Evaluation:
• The existing process is thoroughly analyzed and measured.
• Process tasks are identified.
• The cost and time consumed by process tasks can be noted and
quality/performance problems are isolated.
4. Process Specification and design:
• Based on the information obtained in the first three BPR activities, use cases are prepared for each process that is to be redesigned.
• Each BPR use case identifies a scenario that delivers some outcome to a customer.
• With the use case as the specification of the process, a new set of tasks is designed for the process.
5. Prototyping:
• A redesigned business process must be prototyped before it is fully integrated into the business.
• This activity tests the process so that refinements can be made.
6. Refinement and Instantiation:
• Based on the prototype, the business process is refined and then instantiated within a business system.
 BPR activities are sometimes used in conjunction with workflow analysis tools.
 The intent of these tools is to build a model of the existing workflow so that the existing process can be better analyzed.
Software Re-Engineering
• The scenario is all too common.
• Software re-engineering reorganizes and modifies existing software systems to make them more maintainable.
Objectives:
• To explain why software re-engineering is a cost effective option for system
evolution
• To describe the activities involved in the software re-engineering process
• To distinguish between software and data reengineering and to explain the
problems of data re-engineering
Software Re-Engineering Process Model:
• Re-engineering takes time, costs significant amounts of money, and absorbs resources.
• For all of these reasons, re-engineering is not accomplished in a few months or even a few years.
• Re-engineering of information system is an activity that will absorb IT resources
for many years. That’s why every organization needs a pragmatic strategy for
software re-engineering.
• A workable strategy is encompassed in re-engineering process model.
• Re-engineering is a rebuilding activity.
Eg: Rebuilding of a house
 Before you can start rebuilding, it would seem reasonable to inspect the house.
 Before you tear down and rebuild the entire house, be sure that structure is weak.
 Before you start rebuilding be sure you understand how the original was built.
 If you begin to rebuild, use only the most modern, long lasting materials.
 If you decide to rebuild be disciplined about it.
Software Re-engineering Activities:
• The scenario is all too common: an application has served the business needs of a company for 10 or 15 years.
• During that time it has been corrected, adapted, and enhanced many times.
• The re-engineering paradigm is a cyclical model, meaning that each activity presented as part of the paradigm may be revisited.
• In total there are six software re-engineering activities:
 Inventory analysis:
• Every software organization should have an inventory of all its applications.
• The inventory is nothing more than a spreadsheet-style document.
• The information is sorted according to business criticality, longevity, and similar criteria, and resources can then be allocated to candidate applications for re-engineering work.
• The inventory should be revisited on a regular cycle, because the status of applications changes over time.
 Document Restructuring:
• Weak documentation is the trademark of many legacy systems. What can you do about it? What are your options?
 Option 1: Creating documentation is far too time-consuming. If the system works, you may choose to live with what you have. In some cases this is the correct approach, because it is not possible to re-create documentation for hundreds of computer programs; if a program is relatively static and unlikely to change, leave its documentation as it is.
 Option 2: Documentation must be updated, but your organization has limited resources. Use a "document when touched" approach: it may not be necessary to fully document an application; rather, those portions of the system that are currently undergoing change are fully documented.
 Option 3: The system is business-critical and must be fully re-documented. Even in this case, an intelligent approach pares documentation to an essential minimum.
 Reverse engineering:
• The term reverse engineering has its origins in the hardware world.
• A company disassembles a competitor's hardware product in an effort to understand its design and manufacturing "secrets."
• These secrets could be easily understood if the competitor's design and manufacturing specifications were obtained, but such documents are proprietary and are not available.
• For this reason, engineers derive one or more design and manufacturing specifications for a product by examining actual instances of it.
• Sometimes reverse engineering is applied to a company's own work, to recover design information that was never written down or has been lost.
• Reverse engineering is therefore a process of design recovery.
• Reverse-engineering tools extract data, architectural, and procedural design information from an existing system.
 Code Restructuring:
• The most common type of re-engineering is code restructuring.
• Some legacy systems have a solid program architecture, but individual modules were coded in a way that makes them difficult to understand, test, and maintain.
• In such cases, the code within the suspect modules can be restructured.
• To accomplish this task, the source code is analyzed using restructuring tools.
• Violations of structured programming constructs are noted, and the code is then restructured (this can be done automatically) or rewritten in a more modern language.
• The resultant code is reviewed and tested to ensure that no anomalies have been introduced.
• Internal code documentation is updated.
 Data restructuring:
• A program with a weak data architecture will be difficult to adapt and enhance.
• Whereas code restructuring occurs at a relatively low level of abstraction, data restructuring is a full-scale re-engineering activity.
• In most cases, data restructuring begins with reverse-engineering actions.
• The current data architecture is dissected and the necessary data models are defined.
• Data objects and attributes are identified, and existing data structures are reviewed for quality.
• When the data architecture is weak, the data are re-engineered, because data architecture has a strong influence on both architecture-level and code-level changes.
 Forward engineering:
• In some cases, applications can be rebuilt using an automated re-engineering engine.
• The old program is fed into the engine, analyzed, restructured, and then regenerated in a form that exhibits the best aspects of software quality.
Reverse Engineering
• Reverse engineering can turn an undocumented source file into fully documented source code.
• In reverse engineering, the designer must extract design information from source code; however,
 the abstraction level,
 the completeness of the documentation,
 the degree to which tools and a human analyst work together, and
 the directionality of the process
are all highly variable.
• The abstraction level, and the completeness of the information derived, refer to the sophistication of the design information that can be extracted from source code.
• The reverse-engineering process should be capable of deriving:
 procedural design representations (a low level of abstraction)
 program and data structure information (a somewhat higher level)
 object models (a high level)
• As the abstraction level increases, you are provided with information that allows easier understanding of the program.
• The completeness of a reverse-engineering process refers to the level of detail that is provided at a given abstraction level.
• Completeness improves in direct proportion to the amount of analysis performed by the person doing the reverse engineering.
• Interactivity refers to the degree to which the human is integrated with automated tools to create an effective reverse-engineering process.
• In most cases, as the abstraction level increases, interactivity must increase or completeness will suffer.
• If the directionality of the process is one-way, all information extracted from the source code is provided to the software engineer, who can use it during any maintenance activity.
• If the directionality is two-way, the information is fed to re-engineering tools that attempt to restructure or regenerate the old program.
• Before re-engineering commences, unstructured source code is restructured.
• This makes the source code easier to read and provides the basis for all subsequent reverse-engineering activities.
• From the source code, you must evaluate the old program and derive:
 a meaningful specification
 the user interface applied
 the program data structures or database used
Reverse engineering to understand data
• Reverse engineering of data occurs at different levels of abstraction and is often the first re-engineering task.
• At the program level, internal program data structures must often be reverse-engineered as part of the overall re-engineering effort.
• At the system level, global data structures can be evaluated.
Internal data structures:
• Reverse-engineering techniques for internal program data focus on the definition of classes of objects.
• In many cases, the data organization within the code identifies abstract data types.
Database structure:
• Regardless of its logical organization and physical structure, a database allows the definition of data objects and supports some method for establishing relationships among the objects.
• Therefore, re-engineering one database schema into another requires an understanding of the existing objects and their relationships.
• The following steps may be used to re-engineer the existing data model into a new database schema:
 build an initial object model
 determine candidate keys
 refine tentative classes
 define generalizations
 discover associations (using CRC-style techniques)
Reverse engineering to understand processing:
• Reverse engineering to understand processing begins with an attempt to understand, and then extract, the procedural abstractions represented by the source code.
• To understand procedural abstractions, the code is analyzed at varying levels of abstraction.
• Each program that makes up the application represents a functional abstraction at a high level.
• A block diagram representing the interaction between these functional abstractions can be created.
• In some cases, system, program, and component specifications already exist; in this situation, the specifications can be reviewed against the existing code.
• Things become more complex when the code inside a component representing a generic procedural pattern is examined.
• In almost every component, one section of code prepares data for processing (within the module), a different section of code does the processing, and another section prepares the results of the processing for export.
• For large systems, reverse engineering is generally accomplished using a semi-automated approach.
• Automated tools are used to help understand the semantics of the existing code.
• The output of this process is then passed to restructuring and forward-engineering tools to complete the re-engineering process.
Reverse engineering user interfaces:
• Sophisticated GUIs have become mandatory for computer-based products and systems of every type.
• Before a user interface is rebuilt, reverse engineering should occur.
• The structure and behaviour of the existing user interface must be specified in order to gain a complete understanding of it.
• Before rebuilding the UI begins, Merlo and his colleagues suggest answering three questions:
• What are the basic actions that the interface must process?
• What is a compact description of the behavioural response of the system to these actions?
• What is meant by "replacement", or more precisely, what concept of interface equivalence is relevant here?
• Behavioural modelling can provide answers to the first two questions.
• It is often advantageous to develop a new interaction metaphor rather than to recreate the old interface exactly.
Restructuring
• Software restructuring modifies source code and/or data in an effort to make the software amenable to future changes.
• Restructuring does not modify the overall program architecture.
• It tends to focus on the design details of individual modules and on local data structures defined within modules.
• If the restructuring effort extends beyond the module boundaries and encompasses the software architecture, restructuring becomes forward engineering.
• Restructuring occurs when the basic architecture of an application is solid, even though its technical internals need work.
• This step is initiated when major parts of the software are serviceable and only a subset of all modules and data needs extensive modification.
Code Restructuring:
• Code restructuring is performed to yield a design that produces the same function as the original program but with higher quality.
• In general, code-restructuring techniques model program logic using Boolean algebra and then apply a series of transformation rules that yield restructured logic.
• A resource exchange diagram maps each program module and the resources that are exchanged between it and other modules.
• By first creating this representation of resource flow, the program design can be restructured to achieve minimum coupling among modules.
Data Restructuring:
• Before data restructuring can begin, a reverse-engineering activity called analysis of source code must be conducted.
• All programming-language statements that contain data definitions, file descriptions, I/O, or interface descriptions are evaluated.
• The purpose of this activity, known as data analysis, is to extract data items and objects, to obtain information on data flow, and to understand the existing data structures that have been implemented.
• Once data analysis has been completed, data redesign begins.
• A data-record standardization step clarifies data definitions to achieve consistency among data item names or physical record formats within an existing data structure or file format.
• Another form of redesign, called data-name rationalization, ensures that all data naming conventions conform to local standards and that aliases are eliminated as data flow through the system.
• When restructuring moves beyond standardization and rationalization, physical modifications to existing data structures are made to make the data design more effective.
• This may mean a translation from one file format to another, or in some cases, translation from one type of database to another.
Forward Engineering
• Consider a program with control flow that is the graphic equivalent of a bowl of spaghetti. You have a number of options:
 You can struggle through modification after modification, fighting the ad hoc design and tangled source code to implement the necessary changes.
 You can attempt to understand the broader inner workings of the program in an effort to make modifications more efficiently.
 You can redesign, recode, and test those portions of the software that require modification, applying a software engineering approach to the revised segments.
 You can completely redesign, recode, and test the program, using re-engineering tools to assist you in understanding the current design.
• There is no single correct option; circumstances may dictate the first option even if the others are more desirable.
• Rather than waiting until a maintenance request is received, the development or support organization uses the results of inventory analysis to select a program that:
 will remain in use for a number of years,
 is currently being used successfully, and
 is likely to undergo major modification or enhancement in the near future.
• Before passing judgment on the suggestion that you redevelop a large program when a working version already exists, consider the following points:
 The cost of maintaining one line of source code can far exceed the cost of its initial development.
 Redesign of the software architecture using modern design concepts can greatly facilitate future maintenance.
 Because a prototype of the software already exists, development productivity should be much higher than average.
 The user now has experience with the software, so new requirements and the direction of change can be ascertained with greater ease.
 Automated tools for re-engineering will facilitate some parts of the job.
 A complete software configuration will exist upon completion of the preventive maintenance.
• Forward engineering applies software engineering principles, concepts, and methods to re-create an existing application.
• In most cases, forward engineering does not simply create a modern equivalent of an older program; rather, new user and technology requirements are integrated into the re-engineering effort.
• The redeveloped program therefore extends the capabilities of the older application.
Forward engineering for client-server architectures
• Although a variety of distributed environments can be designed, the typical mainframe application that is re-engineered into a client-server architecture has the following features:
 Application functionality must migrate to each client computer.
 New GUI are implemented at client sites.
 Database function are allocated to server.
 Specialized functionality may remain at server site.
 New communication, security, archiving and control requirement must be
established at both the client and server sites.
• Re-engineering for client-server applications begins with analysis of the business environment that encompasses the existing mainframe.
• Three layers of abstraction can be identified:
1. The database layer
2. The business rules layer
3. The client applications layer
The database layer:
• The database sits at the foundation of a client-server architecture and manages the transactions and queries that arrive from server applications.
• These transactions and queries must be controlled within the context of a set of business rules.
• To redesign the database foundation layer, the functionality of the existing DBMS and the data architecture of the existing database must be reverse-engineered.
The business rules layer:
• This layer represents software resident at both the client and the server.
• The software performs control and coordination tasks to ensure that transactions and queries between the client application and the database conform to the established business process.
• In many cases, a mainframe application can be segmented into a set of desktop applications controlled by the business rules layer.
The client applications layer:
• This layer implements the business functions that are required by specific groups of end users.
Forward engineering for object-oriented architectures
• First, the existing software is reverse-engineered so that appropriate data, functional, and behavioural models can be created.
• If the re-engineered system extends the functionality or behaviour of the original application, use cases are created.
• The data models created during reverse engineering are then used in conjunction with CRC modelling to establish the basis for the definition of classes.
• Class hierarchies, object-behaviour models, and subsystems are defined, and object-oriented design commences.
• As object-oriented forward engineering progresses from analysis to design, the remainder of the effort follows conventional object-oriented software engineering practice.