

Lecture Notes
on
Software Engineering
B.Tech CSE, 6th Semester

Prepared by Dr. Aparna Rajesh A
Aryan Institute of Engineering and Technology-CSE

MODULE-1


Software products are software systems delivered to the customer together with the
documentation that describes how to install and use them. In certain cases, software
products may be part of system products, where hardware as well as software is delivered to the
customer. Software products are produced with the help of the software process; the software
process is the way in which we produce software.

Types of Software Products

Software products fall into two broad categories:


1. Generic products: Generic products are stand-alone systems that are developed by a
development organization and sold on the open market to any customer who is able to buy them.
2. Customized products: Customized products are systems that are commissioned by a
particular customer; a contractor develops the software for that customer.

Characteristics of Software Product

A well-engineered software product should possess the following essential characteristics:


1. Efficiency: The software should not make wasteful use of system resources such as
memory and processor cycles.
2. Maintainability: It should be possible to evolve the software to meet the changing
requirements of customers.
3. Dependability: The software should not cause any physical or economic damage in the
event of system failure. Dependability includes a range of characteristics such as
reliability, security, and safety.
4. Timeliness: The software should be developed and delivered on time.
5. Within Budget: The software development costs should not overrun, and it should be
within the budgetary limit.
6. Functionality: The software system should exhibit the proper functionality, i.e., it should
perform all the functions it is supposed to perform.
7. Adaptability: The software system should be able to adapt to changing requirements to a
reasonable extent.


Software Crisis is a term used in computer science for the difficulty of writing useful and
efficient computer programs in the required time. The crisis arose because software demand,
software complexity, and software challenges increased rapidly while development continued
with the same workforce, the same methods, and the same tools. As complexity grew, the
existing methods became insufficient, leading to problems with software budgets, efficiency,
quality, management, and delivery. This condition is called the Software Crisis.

Causes of Software Crisis:

• The cost of owning and maintaining software was as expensive as developing it.
• Projects were running over their schedules.
• Software was very inefficient.
• The quality of the software was low.
• Software often did not meet user requirements.
• The average software project overshot its schedule by half.
• Software was sometimes never delivered at all.
• Non-optimal resource utilization.
• Software was challenging to alter, debug, and enhance.
• Growing complexity made software harder to change.

Factors Contributing to Software Crisis:


• Poor project management.
• Lack of adequate training in software engineering.
• Less skilled project members.
• Low productivity improvements.

Solution of Software Crisis:


There is no single solution to the crisis. One possible solution is software engineering,
because software engineering is a systematic, disciplined, and quantifiable approach.
To prevent a software crisis, some guidelines are:
• Keep software projects within budget.


• Keep software quality high.
• Reduce the time needed for a software project.
• Put experienced and skilled people on the software project.
• Ensure the software is actually delivered.
• Ensure the software meets user requirements.

Handling complexity through abstraction and decomposition


Abstraction:
Abstraction is the construction of a simpler version of a problem by ignoring its details; the
principle of constructing an abstraction is popularly known as modeling.
Abstraction simplifies a problem by focusing on only one aspect of the problem while omitting
all other aspects. When using the principle of abstraction to understand a complex problem, we
focus our attention on only one or two specific aspects of the problem and ignore the rest.
Whenever we omit some details of a problem to construct an abstraction, we construct a model of
the problem. In everyday life, we use the principle of abstraction frequently to understand a
problem or to assess a situation.
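
To make the idea concrete, here is a minimal Python sketch (a hypothetical example, not taken from any specific system): a student is modeled using only the aspects relevant to result processing, and everything else is deliberately ignored.

```python
# A minimal sketch of abstraction (hypothetical example).
# To process exam results we model a student using only the aspects
# relevant to that problem and ignore everything else (address,
# hobbies, photograph, ...). The simplified model IS the abstraction.

class StudentResultModel:
    """Abstraction of a student for a result-processing problem."""

    def __init__(self, name, marks):
        self.name = name    # kept: needed to report results
        self.marks = marks  # kept: needed to compute results
        # omitted: address, phone, hobbies, ... -- irrelevant here

    def total(self):
        return sum(self.marks)

    def has_passed(self, pass_mark=40):
        # focus on one aspect of the problem: pass/fail per subject
        return all(m >= pass_mark for m in self.marks)

s = StudentResultModel("Asha", [72, 65, 58])
print(s.total(), s.has_passed())  # -> 195 True
```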

Decomposition:
Decomposition is the process of breaking a problem down into smaller parts, for example
breaking functions down into smaller sub-functions. It is another important principle of software
engineering for handling problem complexity, and it is used extensively by software engineering
techniques to contain the exponential growth of perceived problem complexity. The
decomposition principle is popularly known as the divide-and-conquer principle.

Functional Decomposition:
It is a term that engineers use to describe a set of steps in which they break down the overall
function of a device, system, or process into its smaller parts.
Steps for the Functional Decomposition:
1. Find the most general function
2. Find the closest sub-functions
3. Find the next levels of sub-functions
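
The following minimal Python sketch (with hypothetical function names) mirrors these three steps: the most general function at the top calls its closest sub-functions, each of which could be decomposed further in the same way.

```python
# A minimal sketch of functional decomposition (hypothetical example).
# Step 1: the most general function.
# Step 2: its closest sub-functions.
# Step 3: each sub-function could be decomposed further the same way.

def process_results(raw_records):           # Step 1: most general function
    valid = validate_records(raw_records)   # Step 2: closest sub-functions
    totals = compute_totals(valid)
    return format_report(totals)

def validate_records(records):               # Step 3: next level
    return [r for r in records if r.get("marks")]

def compute_totals(records):
    return {r["name"]: sum(r["marks"]) for r in records}

def format_report(totals):
    return "\n".join(f"{name}: {total}" for name, total in totals.items())

records = [{"name": "Asha", "marks": [72, 65]}, {"name": "Ravi", "marks": []}]
print(process_results(records))              # -> Asha: 137
```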


Overview of software development activities


Software Development Life Cycle:
Software Development Life Cycle (SDLC) defines a methodology for improving the quality of
software and the overall development process. It is a well-defined process by which a system is
conceived, developed, and implemented. SDLC is a process followed for a software project, within
a software organization. It consists of a detailed plan describing how to develop, maintain, replace,
and alter or enhance specific software.

SDLC Phases:

1. Planning:
Planning is the initial stage. This phase covers things like the cost of developing the product,
capacity planning around team members, the project schedule, and resource allocation. The plan
may concern a completely new idea or the improvement of an existing system. The planning
stage also includes project plans, cost estimates, and procurement requirements.

2. Analysis:
The analysis phase is the most important phase of the software development life cycle since it sets
the requirements for what to build. In this phase, it is vital to understand the client’s requirements
and make sure everyone is on board with the same understanding.

3. Design:
In this phase, the system and software design is prepared from the requirement specifications.
System design helps in specifying hardware and system requirements and in defining the overall
system architecture. The system design specifications serve as input for the next phase of the
model.

4. Implementation:
After receiving the system design documents, the work is divided into modules and the actual
frontend and backend coding starts. Since the code is produced in this phase, it is the main focus
for the developer. Implementation is the longest phase of the software development life cycle (SDLC).


5. Testing
After the code is developed, it is tested against the requirements to make sure that the product
actually solves the needs gathered during the requirements phase. During the testing phase, all
types of functional testing (unit testing, integration testing, system testing, and acceptance
testing) as well as non-functional testing are done.

6. Maintenance: When the customers start using the developed system, actual problems come up
and need to be solved from time to time. This process of taking care of the developed product is
known as maintenance. Although it is the last stage, the work does not end here: the product is
checked regularly to guarantee that it keeps working properly without bugs or defects.


Process Models

A software process model is an abstraction of the software development process. The models
specify the stages and order of a process. So, think of this as a representation of the order of
activities of the process and the sequence in which they are performed.

A model will define the following:

• The tasks to be performed
• The input and output of each task
• The pre- and post-conditions for each task
• The flow and sequence of the tasks

The goal of a software process model is to provide guidance for controlling and coordinating the
tasks to achieve the end product and objectives as effectively as possible.
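
As a rough illustration only (a hypothetical sketch, not a standard notation), the elements that a process model defines for each task can be captured in a small data structure:

```python
# A minimal sketch (hypothetical) of what a process model defines:
# tasks, their inputs/outputs, pre/post-conditions, and their sequence.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    inputs: list
    outputs: list
    precondition: str    # must hold before the task starts
    postcondition: str   # must hold after the task finishes

# The flow/sequence of the process is the ordered list of tasks.
flow = [
    Task("Requirements", ["customer needs"], ["SRS"],
         "project approved", "SRS signed off"),
    Task("Design", ["SRS"], ["design document"],
         "SRS signed off", "design reviewed"),
    Task("Implementation", ["design document"], ["code"],
         "design reviewed", "code complete"),
    Task("Testing", ["code", "SRS"], ["test report"],
         "code complete", "tests passed"),
]

for t in flow:
    print(f"{t.name}: {t.inputs} -> {t.outputs}")
```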

There are many kinds of process models for meeting different requirements. We refer to these
as SDLC models (Software Development Life Cycle models). The most popular and important
SDLC models are as follows:

• Waterfall model
• V model
• Incremental model
• RAD model
• Agile model
• Iterative model
• Prototype model
• Spiral model

Factors in choosing a software process


Choosing the right software process model for your project can be difficult. If you know your
requirements well, it will be easier to select a model that best matches your needs. You need to
keep the following factors in mind when selecting your software process model:


1. Project requirements
Before you choose a model, take some time to go through the project requirements and clarify
them alongside your organization’s or team’s expectations. Will the user need to specify
requirements in detail after each iterative session? Will the requirements change during the
development process?
2. Project size
Consider the size of the project you will be working on. Larger projects mean bigger teams, so
you’ll need more extensive and elaborate project management plans.
3. Project complexity
Complex projects may not have clear requirements. The requirements may change often, and the
cost of delay is high. Ask yourself if the project requires constant monitoring or feedback from the
client.
4. Cost of delay
Is the project highly time-bound with a huge cost of delay, or are the timelines flexible?
5. Customer involvement
Do you need to consult the customers during the process? Does the user need to participate in all
phases?
6. Familiarity with technology
This involves the developers’ knowledge and experience with the project domain, software tools,
language, and methods needed for development.
7. Project resources
This involves the amount and availability of funds, staff, and other resources.


Waterfall Model:
The waterfall model and its derivatives were extremely popular in the 1970s, and the model is
still heavily used across many development projects. It is possibly the most obvious and intuitive
way in which software can be developed through a team effort.

The waterfall model is the oldest paradigm for software engineering; the original waterfall model
was proposed by Winston Royce. We can think of the waterfall model as a generic model that has
been extended in many ways, catering to specific software development situations, to realize all
the other software life cycle models.


Iterative Model

The iterative development model develops a system by building small portions of all the features.
This helps to meet the initial scope quickly and release it for feedback.

In the iterative model, you start off by implementing a small set of software requirements. These
are then enhanced iteratively in the evolving versions until the system is completed. This process
model starts with part of the software, which is then implemented and reviewed to identify further
requirements.

Like the incremental model, the iterative model allows you to see the results at the early stages of
development. This makes it easy to identify and fix any functional or design flaws. It also makes
it easier to manage risk and change requirements.

The deadline and budget may change throughout the development process, especially for large
complex projects. The iterative model is a good choice for large software that can be easily broken
down into modules.

| Parameter | Waterfall Model | Incremental Model |
|---|---|---|
| 1. Handles large projects | Not appropriate | Not appropriate |
| 2. Detailed documentation | Necessary | Yes, but not much |
| 3. Cost | Low | Low |
| 4. Risk | High | Low |
| 5. Time-frame | Very long | Long |
| 6. Testing | After completion of the coding phase | After every iteration |
| 7. Framework | Linear | Linear + iterative |


Prototyping Model:

The prototyping model is defined as the process of developing a working replication of a product
or system that has to be engineered. It offers a small-scale facsimile of the end product and is used
for obtaining customer feedback. The prototyping concept is described below:

The Prototyping Model is one of the most popularly used Software Development Life Cycle
Models (SDLC models). This model is used when the customers do not know the exact project
requirements beforehand. In this model, a prototype of the end product is first developed, tested,
and refined as per customer feedback repeatedly till a final acceptable prototype is achieved which
forms the basis for developing the final product.


In this process model, the system is partially implemented before or during the analysis phase
thereby giving the customers an opportunity to see the product early in the life cycle. The process
starts by interviewing the customers and developing the incomplete high-level paper model. This
document is used to build the initial prototype supporting only the basic functionality as desired
by the customer. Once the customer figures out the problems, the prototype is further refined to
eliminate them. The process continues until the user approves the prototype and finds the working
model to be satisfactory.


Steps of the Prototyping Model


Step 1: Requirement Gathering and Analysis: This is the initial step in designing a prototype
model. In this phase, users are asked about what they expect or what they want from the system.
Step 2: Quick Design: This is the second step in Prototyping Model. This model covers the basic
design of the requirement through which a quick overview can be easily described.
Step 3: Build a Prototype: This step helps in building an actual prototype from the knowledge
gained from prototype design.
Step 4: Initial User Evaluation: In this step, the prototype is presented to the customer for
preliminary evaluation. The customer points out the strengths and weaknesses of the design, and
this feedback is sent to the developer.
Step 5: Refining Prototype: The prototype is refined according to the user's feedback and
suggestions, and this cycle repeats until a final acceptable prototype is approved.
Step 6: Implement Product and Maintain: This is the final step in the phase of the Prototyping
Model where the final system is tested and distributed to production, here the program is run
regularly to prevent failures.
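
The refine-until-accepted cycle of Steps 4 and 5 can be summarized in a short sketch (hypothetical functions; in practice the feedback comes from customer demonstrations, not from code):

```python
# A minimal sketch of the prototype refinement loop (hypothetical example).

def build_prototype(requirements):
    return {"features": sorted(requirements)}     # quick, rough build

def get_user_feedback(prototype):
    # Placeholder: in reality the customer evaluates the prototype (Step 4).
    missing = {"export report"} - set(prototype["features"])
    return {"approved": not missing, "requested": missing}

requirements = {"login", "view marks"}
prototype = build_prototype(requirements)
feedback = get_user_feedback(prototype)

while not feedback["approved"]:                   # Step 5: refine and repeat
    requirements |= feedback["requested"]
    prototype = build_prototype(requirements)
    feedback = get_user_feedback(prototype)

print("Approved prototype:", prototype)           # Step 6: build final product
```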


Types of Prototyping Models

There are four types of Prototyping Models, which are described below.

• Rapid Throwaway Prototyping
• Evolutionary Prototyping
• Incremental Prototyping
• Extreme Prototyping

1. Rapid Throwaway Prototyping

This technique offers a useful method of exploring ideas and getting customer feedback for each
of them. In this method, a developed prototype need not necessarily be a part of the ultimately
accepted prototype. Customer feedback helps in preventing unnecessary design faults, and hence
the final prototype developed is of better quality.
2. Evolutionary Prototyping

In this method, the prototype developed initially is incrementally refined on the basis of customer
feedback till it finally gets accepted. In comparison to Rapid Throwaway Prototyping, it offers a
better approach that saves time as well as effort. This is because developing a prototype from
scratch for every iteration of the process can sometimes be very frustrating for the developers.
3. Incremental Prototyping

In incremental prototyping, the final expected product is broken into different small pieces of
prototypes that are developed individually. In the end, when all the individual pieces are properly
developed, the different prototypes are collectively merged into a single final product in their
predefined order. It is a very efficient approach that reduces the complexity of the development
process: the goal is divided into sub-parts and each sub-part is developed individually. The time
interval between the project's beginning and final delivery is substantially reduced because all
parts of the system are prototyped and tested simultaneously. Of course, the pieces may fail to fit
together due to incompleteness in the development phase; this can only be avoided by careful and
complete planning of the entire system before prototyping starts.

4. Extreme Prototyping

This method is mainly used for web development. It consists of three sequential phases:
1. In the first phase, a basic prototype with all the existing static pages is presented in HTML
format.
2. In the second phase, functional screens are made with simulated data processing using a
prototype services layer.
3. In the final phase, all the services are implemented and associated with the final
prototype.


This extreme prototyping method makes the project cycle and delivery robust and fast, and keeps
the entire developer team focused on product deliveries rather than on discovering all possible
needs and specifications and adding the necessary features.
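
As a rough sketch of the second phase (hypothetical names; a real project would use an actual web framework), a prototype services layer simply returns canned data so that functional screens can be built before the real services exist:

```python
# A minimal sketch of a prototype services layer (hypothetical example).
# Screens call this simulated layer in phase 2; in phase 3 it is
# replaced by the real service implementations.

CANNED_MARKS = {"asha": [72, 65, 58]}  # simulated data store

class PrototypeServicesLayer:
    def get_marks(self, student_id):
        # Returns canned data so UI screens can be demonstrated
        # before the real backend exists.
        return CANNED_MARKS.get(student_id, [])

service = PrototypeServicesLayer()
print(service.get_marks("asha"))  # a functional screen would render this
```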

Advantages of Prototyping Model


• The customers get to see the partial product early in the life cycle. This ensures a greater level
of customer satisfaction and comfort.
• New requirements can be easily accommodated as there is scope for refinement.
• Missing functionalities can be easily figured out.
• Errors can be detected much earlier thereby saving a lot of effort and cost, besides enhancing
the quality of the software.
• The developed prototype can be reused by the developer for more complicated projects in the
future.
• Flexibility in design.
• Early feedback from customers and stakeholders can help guide the development process and
ensure that the final product meets their needs and expectations.
• Prototyping can be used to test and validate design decisions, allowing for adjustments to be
made before significant resources are invested in development.
• Prototyping can help reduce the risk of project failure by identifying potential issues and
addressing them early in the process.
• Prototyping can facilitate communication and collaboration among team members and
stakeholders, improving overall project efficiency and effectiveness.
• Prototyping can help bridge the gap between technical and non-technical stakeholders by
providing a tangible representation of the product.

Disadvantages of the Prototyping Model


• Costly with respect to time as well as money.
• There may be too much variation in requirements each time the prototype is evaluated by the
customer.
• Poor Documentation due to continuously changing customer requirements.
• It is very difficult for developers to accommodate all the changes demanded by the customer.
• There is uncertainty in determining the number of iterations that would be required before the
prototype is finally accepted by the customer.
• After seeing an early prototype, the customers sometimes demand the actual product to be
delivered soon.
• Developers in a hurry to build prototypes may end up with sub-optimal solutions.
• The customer might lose interest in the product if he/she is not satisfied with the initial
prototype.
• The prototype may not be scalable to meet the future needs of the customer.
• The prototype may not accurately represent the final product due to limited functionality or
incomplete features.
• The focus on prototype development may shift the focus away from the final product, leading
to delays in the development process.


• The prototype may give a false sense of completion, leading to the premature release of the
product.
• The prototype may not consider technical feasibility and scalability issues that can arise during
the final product development.
• The prototype may be developed using different tools and technologies, leading to additional
training and maintenance costs.
• The prototype may not reflect the actual business requirements of the customer, leading to
dissatisfaction with the final product.

Applications of Prototyping Model


• The Prototyping Model should be used when the requirements of the product are not clearly
understood or are unstable.
• The Prototyping Model can also be used if requirements are changing quickly.
• This model can be successfully used for developing user interfaces, high-technology software-
intensive systems, and systems with complex algorithms and interfaces.
• The Prototyping Model is also a very good choice to demonstrate the technical feasibility of
a product.


Evolutionary Model
The evolutionary model is a combination of the iterative and incremental models of the software
development life cycle. Rather than delivering the system in one big-bang release, it is delivered
incrementally over time. Some initial requirements and architecture envisioning need to be done.
The model is better suited to software products whose feature sets are redefined during
development because of user feedback and other factors.
What is the Evolutionary Model?

The Evolutionary development model divides the development cycle into smaller, incremental
waterfall models in which users can get access to the product at the end of each cycle.
1. Feedback is provided by the users on the product for the planning stage of the next cycle
and the development team responds, often by changing the product, plan, or process.
2. Therefore, the software product evolves with time.
3. Many other models have the disadvantage that the duration from the start of the project
to the delivery of a solution is very high.
4. The evolutionary model solves this problem with a different approach.
5. The evolutionary model suggests breaking down work into smaller chunks, prioritizing
them, and then delivering those chunks to the customer one by one.
6. The number of chunks can be large, and it equals the number of deliveries made to the customer.
7. The main advantage is that the customer's confidence increases, as the customer constantly
receives working deliverables from the beginning of the project with which to verify and
validate the requirements.
8. The model allows for changing requirements, and all work is broken down into
maintainable work chunks.

Application of Evolutionary Model


1. It is used in large projects where modules for incremental implementation can easily be
identified. The evolutionary model is commonly used when the customer wants to start
using the core features early instead of waiting for the full software.
2. The evolutionary model is also used in object-oriented software development because the
system can be easily partitioned into units in terms of objects.

Necessary Conditions for Implementing this Model


1. Customer needs are clear and have been explained in depth to the developer team.
2. There may be small changes required in separate parts, but no major changes.
3. Since the model requires time, there must be enough time left before market deadlines.
4. Risk is high, with continuous targets to achieve and report to the customer repeatedly.
5. It is used when the technology being worked on is new and requires time to learn.

Advantages of the Evolutionary Model

1. In the evolutionary model, a user gets a chance to experiment with a partially developed system.
2. It reduces errors because the core modules get tested thoroughly.

Disadvantages of the Evolutionary Model

1. Sometimes it is hard to divide the problem into several versions that would be acceptable to
the customer and that can be incrementally implemented and delivered.

The Spiral Model is one of the most important Software Development Life Cycle models and
provides support for risk handling. This section discusses the Spiral Model in detail.

What is the Spiral Model?


The Spiral Model is a Software Development Life Cycle (SDLC) model that provides a
systematic and iterative approach to software development. In its diagrammatic representation,
it looks like a spiral with many loops. The exact number of loops of the spiral is unknown and can
vary from project to project. Each loop of the spiral is called a phase of the software
development process.
1. The exact number of phases needed to develop the product can be varied by the project
manager depending upon the project risks.
2. As the project manager dynamically determines the number of phases, the project manager
has an important role in developing a product using the spiral model.
3. It is based on the idea of a spiral, with each iteration of the spiral representing a complete
software development cycle, from requirements gathering and analysis to design,
implementation, testing, and maintenance.


What Are the Phases of Spiral Model?


The Spiral Model is a risk-driven model, meaning that the focus is on managing risk through
multiple iterations of the software development process. It consists of the following phases:
1. Planning
The first phase of the Spiral Model is the planning phase, where the scope of the project is
determined and a plan is created for the next iteration of the spiral.
2. Risk Analysis
In the risk analysis phase, the risks associated with the project are identified and evaluated.
3. Engineering
In the engineering phase, the software is developed based on the requirements gathered in the
previous iteration.
4. Evaluation
In the evaluation phase, the software is evaluated to determine if it meets the customer’s
requirements and if it is of high quality.
5. Planning
The next iteration of the spiral begins with a new planning phase, based on the results of the
evaluation.

The Spiral Model is often used for complex and large software development projects, as it allows
for a more flexible and adaptable approach to software development. It is also well suited to
projects with significant uncertainty or high levels of risk. The radius of the spiral at any point
represents the expenses (cost) of the project so far, and the angular dimension represents the
progress made so far in the current phase.

Each phase of the Spiral Model is divided into four quadrants. The functions of these four
quadrants are discussed below:
1. Determine objectives and identify alternative solutions: Requirements are gathered
from the customers, and the objectives are identified, elaborated, and analyzed at the start of
every phase. Alternative solutions possible for the phase are then proposed in this quadrant.


2. Identify and resolve Risks: During the second quadrant, all the possible solutions are
evaluated to select the best possible solution. Then the risks associated with that solution are
identified and the risks are resolved using the best possible strategy. At the end of this
quadrant, the Prototype is built for the best possible solution.
3. Develop the next version of the Product: During the third quadrant, the identified
features are developed and verified through testing. At the end of the third quadrant, the next
version of the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate the
so-far developed version of the software. In the end, planning for the next phase is started.
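
One loop of the spiral can be sketched as the four quadrants executed in sequence (a hypothetical, heavily simplified illustration; in practice each quadrant involves people, documents, and prototypes rather than function calls):

```python
# A minimal sketch of one spiral phase as its four quadrants (hypothetical).

def identify_alternatives(objectives):            # Quadrant 1
    return [f"solution for {o}" for o in objectives]

def resolve_risks(alternatives):                  # Quadrant 2 (prototype here)
    return alternatives[0]                        # pick the least risky option

def develop_and_test(solution, phase_no):         # Quadrant 3
    return f"v{phase_no}: {solution}"

def customer_review(version):                     # Quadrant 4
    print("customer evaluates", version)
    return ["refined objective"]                  # objectives for the next loop

objectives = ["core results module"]
for phase in range(1, 3):   # the number of loops varies from project to project
    alternatives = identify_alternatives(objectives)
    best = resolve_risks(alternatives)
    version = develop_and_test(best, phase)
    objectives = customer_review(version)
```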

Risk Handling in Spiral Model


A risk is any adverse situation that might affect the successful completion of a software project.
The most important feature of the spiral model is handling unknown risks after the project has
started; such risks are most easily resolved by developing a prototype.
1. The spiral model supports coping with risks by providing the scope to build a prototype at
every phase of software development.
2. The Prototyping Model also supports risk handling, but the risks must be identified
completely before the start of the development work of the project.
3. But in real life, project risk may occur after the development work starts, in that case, we
cannot use the Prototyping Model.
4. In each phase of the Spiral Model, the features of the product are elaborated and analyzed,
and the risks at that point in time are identified and resolved through prototyping.
5. Thus, this model is much more flexible compared to other SDLC models.

Why Spiral Model is called Meta Model?


The Spiral model is called a Meta-Model because it subsumes all the other SDLC models. For
example, a single loop spiral actually represents the Iterative Waterfall Model.
1. The spiral model incorporates the stepwise approach of the Classical Waterfall Model.
2. The spiral model uses the approach of the Prototyping Model by building a prototype at the
start of each phase as a risk-handling technique.
3. Also, the spiral model can be considered as supporting the Evolutionary model – the
iterations along the spiral can be considered as evolutionary levels through which the
complete system is built.

Advantages of the Spiral Model

Below are some advantages of the Spiral Model.


1. Risk Handling: For projects with many unknown risks that surface as development
proceeds, the Spiral Model is the best development model to follow, due to the risk
analysis and risk handling done at every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and complex
projects.
3. Flexibility in Requirements: Change requests in the Requirements at a later phase can be
incorporated accurately by using this model.


4. Customer Satisfaction: Customers can see the development of the product in the early
phases of software development and thus become habituated to the system by using it
before the total product is complete.
5. Iterative and Incremental Approach: The Spiral Model provides an iterative and
incremental approach to software development, allowing for flexibility and adaptability in
response to changing requirements or unexpected events.
6. Emphasis on Risk Management: The Spiral Model places a strong emphasis on risk
management, which helps to minimize the impact of uncertainty and risk on the software
development process.
7. Improved Communication: The Spiral Model provides for regular evaluations and
reviews, which can improve communication between the customer and the development
team.
8. Improved Quality: The Spiral Model allows for multiple iterations of the software
development process, which can result in improved software quality and reliability.

Disadvantages of the Spiral Model

Below are some main disadvantages of the spiral model.


1. Complex: The Spiral Model is much more complex than other SDLC models.
2. Expensive: Spiral Model is not suitable for small projects as it is expensive.
3. Too much dependence on Risk Analysis: The successful completion of the project depends
heavily on risk analysis. Without highly experienced experts, developing a project using
this model is likely to fail.
4. Difficulty in time management: As the number of phases is unknown at the start of the
project, time estimation is very difficult.
5. Complexity: The Spiral Model can be complex, as it involves multiple iterations of the
software development process.
6. Time-Consuming: The Spiral Model can be time-consuming, as it requires multiple
evaluations and reviews.
7. Resource Intensive: The Spiral Model can be resource-intensive, as it requires a significant
investment in planning, risk analysis, and evaluations.
8. Note: the most serious issue with the waterfall (cascade) model is that taking a long time
to finish the product means the product may become obsolete. The spiral model, also
known as the cyclic model, was introduced partly to tackle this issue.

When to Use the Spiral Model?


1. When a project is vast, the spiral model is utilized.
2. A spiral approach is utilized when frequent releases are necessary.
3. When it is appropriate to create a prototype.
4. When evaluating risks and costs is crucial.
5. The spiral approach is beneficial for projects with moderate to high risk.
6. The SDLC's spiral model is helpful when requirements are complicated and ambiguous.
7. When modifications may be required at any moment.
8. When committing to a long-term project is impractical owing to shifting economic priorities.


RAD model

The Rapid Application Development Model was first proposed by IBM in the 1980s. The RAD
model is a type of incremental process model in which there is an extremely short development
cycle. When the requirements are fully understood and the component-based construction
approach is adopted then the RAD model is used. Various phases in RAD are Requirements
Gathering, Analysis and Planning, Design, Build or Construction, and finally Deployment.

The critical feature of this model is the use of powerful development tools and techniques. A
software project can be implemented using this model if the project can be broken down into
small modules wherein each module can be assigned independently to separate teams. These
modules can finally be combined to form the final product. The development of each module
involves the basic steps of the waterfall model, i.e. analyzing, designing, coding, and then
testing. Another striking feature of this model is its short time frame: the delivery
time-box is generally 60-90 days.

Multiple teams work in parallel on developing the software system using the RAD model.

The use of powerful development tools such as Java, C++, Visual Basic, and XML is also an
integral part of these projects. This model consists of 4 basic phases:


1. Requirements Planning – This involves the use of various requirements elicitation
techniques such as brainstorming, task analysis, form analysis, user scenarios, and FAST
(Facilitated Application Specification Technique). It also consists of the entire structured
plan describing the critical data, the methods to obtain it, and the processing needed to form
the final refined model.
2. User Description – This phase consists of taking user feedback and building the prototype
using developer tools. In other words, it includes re-examination and validation of the data
collected in the first phase. The dataset attributes are also identified and elucidated in this
phase.
3. Construction – In this phase, refinement of the prototype and delivery takes place. It
includes the actual use of powerful automated tools to transform the process and data models
into the final working product. All the required modifications and enhancements are also done
in this phase.
4. Cutover – All the interfaces between the independent modules developed by separate
teams have to be tested properly. The use of powerful automated tools and subparts makes
testing easier. This is followed by acceptance testing by the user.
The process involves building a rapid prototype, delivering it to the customer, and taking
feedback. After validation by the customer, the SRS document is developed and the design is
finalized.

When to use RAD Model?


The RAD model can be used when the customer has well-known requirements, the user is
involved throughout the life cycle, the project can be time-boxed, functionality is delivered in
increments, high performance is not required, technical risks are low, and the system can be
modularized. It is also appropriate when a system must be designed and divided into smaller
units within two to three months, and when there is enough money in the budget to pay for both
automated code-generation tools and designers for modeling.

Advantages:
• The use of reusable components helps to reduce the cycle time of the project.
• Feedback from the customer is available at the initial stages.
• Reduced costs as fewer developers are required.
• The use of powerful development tools results in better quality products in comparatively
shorter time spans.
• The progress and development of the project can be measured through the various stages.
• It is easier to accommodate changing requirements due to the short iteration time spans.
• Productivity may be quickly boosted with a lower number of employees.

Disadvantages:
• The use of powerful and efficient tools requires highly skilled professionals.
• The absence of reusable components can lead to the failure of the project.
• The team leader must work closely with the developers and customers to close the project
on time.
• The systems which cannot be modularized suitably cannot use this model.
• Customer involvement is required throughout the life cycle.


• It is not meant for small-scale projects as in such cases, the cost of using automated tools
and techniques may exceed the entire budget of the project.
• Not every application can be used with RAD.

Applications:

1. This model should be used for a system with known requirements and requiring a short
development time.
2. It is also suitable for projects where requirements can be modularized and reusable
components are also available for development.
3. The model can also be used when already existing system components can be used in
developing a new system with minimum changes.
4. This model can only be used if the teams consist of domain experts. This is because relevant
knowledge and the ability to use powerful techniques are a necessity.
5. The model should be chosen when the budget permits the use of automated tools and
techniques required.


AGILE MODEL

The meaning of Agile is swift or versatile. "Agile process model" refers to a software development
approach based on iterative development. Agile methods break tasks into smaller iterations, or
parts, and do not directly involve long-term planning. The project scope and requirements are laid
down at the beginning of the development process, and plans regarding the number of iterations,
and the duration and scope of each iteration, are clearly defined in advance.

Each iteration is considered as a short time "frame" in the Agile process model, which typically
lasts from one to four weeks. The division of the entire project into smaller parts helps to minimize
the project risk and to reduce the overall project delivery time requirements. Each iteration
involves a team working through a full software development life cycle including planning,
requirements analysis, design, coding, and testing before a working product is demonstrated to the
client.

Phases of Agile Model:


The phases of the Agile model are as follows:
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback

1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project.
Based on this information, you can evaluate technical and economic feasibility.


2. Design the requirements: When you have identified the project, work with stakeholders
to define requirements. You can use the user flow diagram or the high-level UML diagram
to show the work of new features and show how it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance
and looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.

Principles of Agile:

1. The highest priority is to satisfy the customer through early and continuous delivery of
valuable software.
2. It welcomes changing requirements, even late in development.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference for the shortest timescale.
4. Build projects around motivated individuals. Give them the environment and the support
they need and trust them to get the job done.
5. Working software is the primary measure of progress.
6. Simplicity, the art of maximizing the amount of work not done, is essential.
7. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
8. Gauge your progress by the amount of work that has been finished.
9. Never give up on excellence.
10. Take advantage of change to gain a competitive edge.

Agile Software development cycle:


Let’s see a brief overview of how development occurs in Agile philosophy.
1. Concept
2. Inception
3. Iteration/construction
4. Release
5. Production
6. Retirement



• Step 1: In the first step, concept, and business opportunities in each possible project are
identified and the amount of time and work needed to complete the project is estimated.
Based on their technical and financial viability, projects can then be prioritized and
determined which ones are worthwhile pursuing.
• Step 2: In the second phase, known as inception, the customer is consulted regarding the
initial requirements, team members are selected, and funding is secured. Additionally, a
schedule outlining each team’s responsibilities and the precise time at which each sprint’s
work is expected to be finished should be developed.
• Step 3: Teams begin building functional software in the third step, iteration/construction,
based on requirements and ongoing feedback. Iterations, also known as single development
cycles, are the foundation of the Agile software development cycle.

Agile Testing Methods

o Scrum
o Crystal
o Dynamic Systems Development Method (DSDM)
o Feature-Driven Development (FDD)
o Lean Software Development
o eXtreme Programming (XP)

Scrum

SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions.

There are three roles in it, and their responsibilities are:

o Scrum Master: The Scrum Master sets up the team, arranges the meetings, and removes
obstacles from the process.
o Product Owner: The Product Owner creates the product backlog, prioritizes the backlog,
and is responsible for the delivery of functionality at each iteration.
o Scrum Team: The team manages and organizes its own work to complete the sprint
or cycle.
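
These roles can be illustrated with a tiny sketch (hypothetical stories and numbers): the Product Owner keeps the backlog prioritized, and the team pulls the top items into a sprint.

```python
# A minimal sketch of a prioritized product backlog and sprint planning
# (hypothetical example; real backlogs live in tools, not code).

product_backlog = [
    {"story": "export report", "priority": 3},
    {"story": "login page",    "priority": 1},   # 1 = highest priority
    {"story": "view marks",    "priority": 2},
]

def plan_sprint(backlog, capacity):
    # The Product Owner keeps the backlog ordered; the Scrum Team pulls
    # the top 'capacity' items into the sprint and organizes its own work.
    ordered = sorted(backlog, key=lambda item: item["priority"])
    return ordered[:capacity]

sprint = plan_sprint(product_backlog, capacity=2)
print([item["story"] for item in sprint])  # -> ['login page', 'view marks']
```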

eXtreme Programming (XP)

This type of methodology is used when customers have constantly changing demands or
requirements, or when they are not sure about the system's performance.


When to use the Agile Model?


o When frequent changes are required.
o When a highly qualified and experienced team is available.
o When the customer is ready to meet with the software team regularly.
o When the project size is small.
Advantages (Pros) of the Agile Method:
1. Frequent delivery.
2. Face-to-face communication with clients.
3. Efficient design that fulfils the business requirements.
4. Changes are acceptable at any time.
5. It reduces total development time.
Disadvantages (Cons) of Agile Model:
1. Due to the shortage of formal documents, confusion can arise, and crucial decisions taken
throughout the various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is complete and the developers
are allotted to another project, maintenance of the finished product can become difficult.


MODULE-2


Requirement gathering is a crucial phase in the software development life cycle where
information about the desired features, functionalities, and constraints of a software system is
collected. Effective requirement gathering is essential for understanding the needs and
expectations of stakeholders, guiding the development process, and delivering a product that meets
user requirements. Here are key steps and considerations in the requirement gathering process:

1. **Identify Stakeholders:**
- Identify and involve all relevant stakeholders, including end-users, customers, project
managers, developers, testers, and other individuals or groups who have a vested interest in the
software.

2. **Define Project Scope:**


- Clearly outline the boundaries and objectives of the project. This helps in establishing the
context for requirement gathering and sets expectations for what the software will and will not do.

3. **Conduct Stakeholder Interviews:**


- Interview stakeholders to understand their perspectives, expectations, and requirements. This
can involve discussions with both end-users and those who have a broader project vision.

4. **Organize Workshops:**
- Conduct workshops or group sessions to bring together various stakeholders for collaborative
discussions. Workshops can facilitate communication and help uncover different perspectives and
requirements.

5. **Surveys and Questionnaires:**


- Use surveys or questionnaires to gather input from a larger audience, especially when dealing
with a large number of stakeholders or geographically dispersed teams.

6. **Review Existing Documentation:**


- Examine existing documentation such as business plans, user manuals, or any relevant
documents that might provide insights into the existing business processes and requirements.

7. **Prototypes and Mockups:**


- Create prototypes or mockups of the software to give stakeholders a visual representation of
the proposed system. This can help in validating and refining requirements based on tangible
examples.

8. **Document Requirements:**
- Document requirements in a clear and structured manner. This documentation may include
functional requirements, non-functional requirements, use cases, user stories, and any other
relevant information.

9. **Prioritize Requirements:**


- Work with stakeholders to prioritize requirements based on their importance and urgency. This
helps in managing scope and focusing on critical features (a small sketch appears after this list).

10. **Resolve Conflicts:**


- Address any conflicts or discrepancies in requirements by facilitating discussions and reaching
a consensus among stakeholders. Clear communication is essential to avoid misunderstandings.

11. **Validation and Verification:**


- Validate requirements by reviewing them with stakeholders to ensure accuracy and
completeness. Verification involves checking that the requirements align with the overall project
goals.

12. **Iterative Process:**


- Recognize that requirement gathering is often an iterative process. Regularly review and
update requirements as the project progresses, taking into account changes in business needs or
project scope.

13. **Traceability:**
- Establish traceability between requirements and other project artifacts, such as design
documents and test cases. This helps ensure that every requirement is accounted for throughout
the development process (a sketch appears at the end of this section).

14. **Communication:**
- Maintain open and effective communication channels with stakeholders. Regularly update
them on the progress of requirement gathering and seek their feedback.
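
As a rough sketch of steps 8 and 9 above (hypothetical requirement IDs and a MoSCoW-style priority scheme, which these notes do not prescribe), requirements can be documented as structured records and filtered by priority:

```python
# A minimal sketch of documenting and prioritizing requirements
# (hypothetical example; real projects often use dedicated tools).

from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    kind: str       # "functional" or "non-functional"
    priority: str   # MoSCoW-style: "must", "should", "could", "won't"

requirements = [
    Requirement("FR-1", "User can log in with institute ID", "functional", "must"),
    Requirement("FR-2", "User can export results as PDF", "functional", "could"),
    Requirement("NFR-1", "Pages respond within 2 seconds", "non-functional", "should"),
]

# Focus first on the critical features agreed with stakeholders (step 9).
must_haves = [r for r in requirements if r.priority == "must"]
print([r.req_id for r in must_haves])  # -> ['FR-1']
```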

Effective requirement gathering lays the foundation for a successful software development project
by providing a clear understanding of what needs to be built and guiding subsequent phases of the
development life cycle. It is a collaborative and dynamic process that requires ongoing
communication and collaboration among all stakeholders.
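
Finally, the traceability described in step 13 can be sketched as a simple matrix linking each requirement to the design elements and test cases that cover it (hypothetical IDs):

```python
# A minimal sketch of a requirements traceability matrix (hypothetical IDs).

traceability = {
    "FR-1": {"design": ["login-module"],  "tests": ["TC-01", "TC-02"]},
    "FR-2": {"design": ["report-module"], "tests": []},  # coverage gap
}

# Every requirement should be accounted for; flag any without a test case.
for req_id, links in traceability.items():
    if not links["tests"]:
        print(f"{req_id} has no test case covering it")
```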


Requirements Analysis

Requirement analysis is a significant and essential activity after elicitation. We analyze, refine, and
scrutinize the gathered requirements to make them consistent and unambiguous.
activity reviews all requirements and may provide a graphical view of the entire system. After the
completion of the analysis, it is expected that the understandability of the project may improve
significantly. Here, we may also use the interaction with the customer to clarify points of confusion
and to understand which requirements are more important than others.

The various steps of requirement analysis are as follows:

(i) Draw the context diagram: The context diagram is a simple model that defines the boundaries
and interfaces of the proposed system with the external world. It identifies the entities outside the
proposed system that interact with the system. For a student result management system, for
example, the context diagram would show external entities such as students and teachers
interacting with the system.

(ii) Development of a Prototype (optional): One effective way to find out what the customer
wants is to construct a prototype, something that looks and preferably acts as part of the system
they say they want.
We can use their feedback to continuously modify the prototype until the customer is satisfied.
Hence, the prototype helps the client to visualize the proposed system and increases the

understanding of the requirements. When developers and users are not sure about some of the
elements, a prototype may help both the parties to take a final decision.
Some projects are developed for the general market. In such cases, the prototype should be shown
to a representative sample of the population of potential purchasers. Even if a person who tries out
the prototype does not buy the final system, their feedback may allow us to make the product more
attractive to others.
The prototype should be built quickly and at a relatively low cost. Hence it will always have
limitations and would not be acceptable in the final system. This is an optional activity.
(iii) Model the requirements: This process usually consists of various graphical representations
of the functions, data entities, external entities, and the relationships between them. The graphical
view may help to find incorrect, inconsistent, missing, and superfluous requirements. Such models
include the Data Flow diagram, Entity-Relationship diagram, Data Dictionaries, State-transition
diagrams, etc.
(iv) Finalize the requirements: After modeling the requirements, we have a better understanding
of the system behavior. The inconsistencies and ambiguities have been identified and corrected,
and the flow of data amongst the various modules has been analyzed. Elicitation and analysis
activities have provided better insight into the system. Now we finalize the analyzed requirements;
the next step is to document these requirements in a prescribed format.

Functional requirements in software engineering describe the functionality that a software
system must provide to its users. These requirements specify what the system should do and are
typically documented during the early stages of the software development life cycle. Functional
requirements are essential for guiding the design, implementation, and testing of the software. Here
are some key aspects of functional requirements:

1. **User Interfaces:** Describes how users interact with the system, including details about
menus, screens, buttons, and other interface elements.

2. **Data Handling:** Specifies how the system will manage and manipulate data, including data
input, storage, retrieval, and processing.

3. **Processing Logic:** Defines the algorithms and logic that the system must follow to perform
specific functions or operations.

4. **System Behavior:** Describes the expected behavior of the system under different conditions
and scenarios.

5. **Business Rules:** Outlines the rules and regulations that the system must adhere to, often
derived from the business or operational processes it supports.

6. **Security Requirements:** Specifies the security features and measures the system must
implement to protect data and ensure authorized access.

7. **Performance Requirements:** Defines the system's performance expectations, such as
response time, throughput, and scalability.


8. **Compatibility:** Specifies the compatibility requirements, such as supported browsers,
operating systems, and hardware platforms.

9. **External Interfaces:** Describes how the software will interact with other systems, services,
or external components.

10. **Error Handling:** Outlines how the system should respond to errors or exceptional
situations, including error messages and recovery mechanisms.

11. **Reporting:** Specifies the types of reports the system should generate and the information
they should include.

12. **Audit Trail:** Describes the system's ability to record and track user activities for auditing
purposes.

13. **Documentation:** Includes requirements for user manuals, technical documentation, and
any other documentation needed for system understanding and maintenance.

14. **Legal and Compliance Requirements:** Outlines any legal or regulatory requirements that
the system must comply with.

15. **Testing Requirements:** Describes the conditions and criteria for testing the software to
ensure that it meets the specified functional requirements.

16. **Usability:** Specifies the characteristics that contribute to the system's ease of use,
including user feedback, help features, and accessibility.

Functional requirements are crucial for both developers and stakeholders, as they provide a clear
roadmap for the development process and serve as a basis for validating the successful
implementation of the software.

Non-Functional Requirements

Non-functional requirements in software engineering define the qualities or attributes that
describe the overall behavior of a system, rather than its specific functionalities. While functional
requirements focus on what the system does, non-functional requirements address how well the
system performs those functions. These requirements are essential for ensuring that the software
meets the necessary standards and provides a satisfactory user experience. Here are some common
categories of non-functional requirements:

1. **Performance:** Describes how the system performs in terms of speed, response time,
throughput, and efficiency. Examples include maximum response time for user interactions,
system scalability, and the ability to handle a specific number of concurrent users.

2. **Reliability:** Specifies the system's ability to perform its functions consistently and reliably
under various conditions. This includes measures such as system uptime, availability, and fault
tolerance.


3. **Availability:** Defines the percentage of time the system should be operational and
accessible to users. Availability requirements often include factors such as planned downtime for
maintenance.

4. **Scalability:** Describes the system's capability to handle increased load or user demands by
expanding resources (e.g., adding more servers) without degrading performance.

5. **Security:** Outlines the measures and mechanisms to protect the system from unauthorized
access, data breaches, and other security threats. This includes authentication, authorization,
encryption, and audit trails.

6. **Usability:** Specifies characteristics related to the user interface and overall user experience.
This may include factors like ease of use, accessibility, and user satisfaction.

7. **Maintainability:** Describes the ease with which the software can be maintained, updated,
and enhanced over time. This includes aspects like code readability, modularity, and
documentation.

8. **Portability:** Addresses the system's ability to operate in different environments or on
various platforms. This may include requirements for compatibility with different operating
systems, browsers, or hardware configurations.

9. **Compatibility:** Specifies the compatibility of the software with other systems, software, or
technologies, ensuring seamless integration.

10. **Interoperability:** Describes how well the system can interact with other systems, often
focusing on communication protocols and data exchange formats.

11. **Compliance:** Ensures that the system adheres to legal, regulatory, and industry-specific
standards. This may include privacy regulations, data protection laws, or industry-specific
guidelines.

12. **Documentation:** Addresses requirements for documentation quality, completeness, and
availability, including user manuals, technical documentation, and training materials.

13. **Backup and Recovery:** Specifies the procedures and requirements for data backup,
recovery, and disaster recovery to ensure data integrity and availability.

Non-functional requirements are critical for shaping the overall performance and characteristics
of the software system and play a significant role in its success and acceptance by users and
stakeholders.
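
Non-functional requirements are most useful when they are quantified so that they can be verified. A hedged illustration follows; all figures below are invented for this example, not prescribed values:

```markdown
### Example: Quantified Non-Functional Requirements (illustrative)

- Performance: 95% of page requests shall complete within 2 seconds at a load of 500 concurrent users.
- Availability: The system shall be operational 99.5% of each calendar month, excluding scheduled maintenance.
- Security: Passwords shall be stored only as salted one-way hashes; idle sessions shall expire after 30 minutes.
- Portability: The client shall run on the two most recent major versions of Chrome and Firefox.
```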


Software Requirement Specification (SRS) Format, as the name suggests, is a complete
specification and description of the requirements of the software that need to be fulfilled for the
successful development of the software system. These requirements can be functional as well as
non-functional, depending upon the type of requirement. Interaction between customers and
contractors is necessary to fully understand the customers' needs. Depending upon the information
gathered from this interaction, the SRS is developed to describe the requirements of the software,
which may include the changes and modifications needed to improve the quality of the product
and satisfy the customer's demands.
Introduction
• Purpose of this Document – At first, the main aim of the document, why it is necessary, and
its purpose are explained and described.
• Scope of this Document – In this, the overall working and main objective of the document, and
what value it will provide to the customer, are described and explained. It also includes a
description of the development cost and the time required.
• Overview – In this, a description of the product is given. It is simply a summary or overall
review of the product.
General description
In this, the general functions of the product are described, including the objectives of the user, user
characteristics, features, benefits, and the reasons for its importance. It also describes the features of
the user community.
Functional Requirements
In this, the possible outcomes of the software system, including the effects of the operation of the
program, are fully explained. All functional requirements, which may include calculations, data
processing, etc., are placed in ranked order. Functional requirements specify the expected behavior of
the system: which outputs should be produced from the given inputs. They describe the relationship
between the input and output of the system. For each functional requirement, a detailed description
of all the data inputs and their source, the units of measure, and the range of valid inputs must be
specified.
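
As a hedged illustration of this level of detail (the account rules, ranges, and figures are invented for the example):

```markdown
### Example: Functional Requirement FR-12 (illustrative)

- Description: The system shall compute the monthly interest for a savings account.
- Inputs: account balance (INR, valid range 0 to 10,000,000) and annual interest rate (%, valid range 0 to 15), both read from the accounts database.
- Processing: monthly interest = balance × (rate / 100) / 12, rounded to two decimal places.
- Output: the monthly interest amount (INR), credited to the account on the last day of the month.
```
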
Interface Requirements
In this, the software interfaces, which describe how the software communicates with other programs
or with users, whether in the form of a language, code, or message, are fully described and explained.
Examples can be shared memory, data streams, etc.
Performance Requirements
In this, how the software system performs the desired functions under specific conditions is
explained. It also specifies the required time, required memory, maximum error rate, etc. The
performance requirements part of an SRS specifies the performance constraints on the software
system. All requirements relating to the performance characteristics of the system must be clearly
specified. There are two types of performance requirements: static and dynamic. Static requirements
are those that do not impose constraints on the execution characteristics of the system. Dynamic
requirements specify constraints on the execution behaviour of the system.
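
For instance (illustrative figures only, not drawn from any particular SRS):

```markdown
- Static requirement: The system shall support up to 200 registered users and 50 GB of stored data.
- Dynamic requirement: A catalogue search shall return results within 1.5 seconds for 90% of requests.
```
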
Design Constraints
In this, constraints, which simply means limitations or restrictions, are specified and explained for
the design team. Examples may include the use of a particular algorithm, hardware and software
limitations, etc. There are a number of factors in the client's environment that may restrict a
designer's choices, leading to design constraints. Such factors include standards that must be
followed, resource limits, the operating environment, reliability and security requirements, and policies

that may have an impact on the design of the system. An SRS should identify and specify all such
constraints.
Non-Functional Attributes
In this, the non-functional attributes required by the software system for better performance are
explained. Examples may include security, portability, reliability, reusability, application
compatibility, data integrity, scalability, etc.
Preliminary Schedule and Budget
In this, the initial version of the project plan and the budget are explained, including the overall time
duration and the overall cost required for the development of the project.
Uses of SRS document
• The development team requires it for developing the product according to the need.
• Test plans are generated by the testing group based on the described external behavior.
• Maintenance and support staff need it to understand what the software product is supposed to
do.
• Project managers base their plans and estimates of schedule, effort, and resources on it.
• Customers rely on it to know what product they can expect.
• It acts as a contract between the developer and the customer.
• It serves documentation purposes.


Decision Table

A System Requirements Specification (SRS) document is a comprehensive description of the
intended behavior of a software system. While decision tables may not be explicitly part of the
SRS in the traditional sense, they can be referenced or included in the SRS to document specific
decision-making logic within the system. Decision tables can be particularly useful when
describing complex business rules or conditional logic.

Here's how decision tables might be incorporated into an SRS:

1. Requirement Description:
• Identify the specific requirements or features in the system that involve decision-making
logic.
2. Condition and Action Definition:
• Clearly define the conditions that influence the decision and the corresponding actions
that should be taken based on those conditions.
3. Decision Table Representation:
• Create a decision table to represent the various combinations of conditions and actions.
• Use columns for each condition and action, and rows for each unique combination.
4. Integration into SRS:
• Integrate the decision table into the SRS document, typically in the section related to the
specific requirement or feature it addresses.
• Provide context and explanations as needed to ensure that readers understand the purpose
and interpretation of the decision table.
5. Example:
• For instance, if the SRS specifies a requirement related to user authentication, you might
include a decision table that outlines conditions such as "Correct Username," "Correct
Password," and actions like "Grant Access" or "Deny Access."
### Requirement: User Authentication

#### Decision Table:

| Correct Username | Correct Password | Action       |
| ---------------- | ---------------- | ------------ |
| Yes              | Yes              | Grant Access |
| Yes              | No               | Deny Access  |
| No               | -                | Deny Access  |

#### Explanation:


- When both the correct username and password are provided, access is granted.
- If the username is correct but the password is incorrect, access is denied.
- If the username is incorrect, access is denied.
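
Such a table can also be executed directly in code. A minimal sketch follows; the function name and the returned strings are invented for illustration and are not part of the SRS itself:

```python
def authenticate(correct_username: bool, correct_password: bool) -> str:
    """Evaluate the user-authentication decision table.

    Rules (mirroring the table above):
      1. correct username AND correct password -> grant access
      2. correct username, wrong password      -> deny access
      3. wrong username (password irrelevant)  -> deny access
    """
    if correct_username and correct_password:
        return "Grant Access"
    return "Deny Access"

# Exercise every rule in the table.
assert authenticate(True, True) == "Grant Access"
assert authenticate(True, False) == "Deny Access"
assert authenticate(False, True) == "Deny Access"
```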

Decision Tree
In a Software Requirements Specification (SRS) document, decision trees are not typically used
in their graphical form, as they are more commonly associated with algorithmic or data analysis
processes. However, decision logic and conditions that lead to different outcomes can certainly
be expressed and documented within the SRS.

Here's how decision logic might be integrated into an SRS:

1. **Requirement Description:**
- Clearly define the specific requirement or feature that involves decision-making.

2. **Conditions and Actions:**
- Clearly specify the conditions or criteria that lead to different actions or outcomes.
- Describe the decision-making process in detail.

3. **Decision Logic:**
- Present the decision logic in a structured and textual manner.
- Use if-else statements, bullet points, or any other format that clearly outlines the conditions
and corresponding actions.

4. **Example:**
- If the SRS addresses a requirement related to user authentication, you might express the
decision logic like this:

```markdown
### Requirement: User Authentication

#### Decision Logic:

1. If Correct Username and Correct Password:
- Grant access to the system.

2. If Correct Username but Incorrect Password:
- Deny access and notify the user of incorrect password.

3. If Incorrect Username:


- Deny access and notify the user of incorrect username.

4. If Other System Error (e.g., database connection issue):
- Display an error message and prompt the user to try again later.

```

5. **Pseudo-Code:**
- In some cases, especially for complex decision logic, pseudo-code might be included to
provide a more algorithmic representation.

```markdown
### Requirement: Complex Decision Logic

#### Pseudo-Code:

if (Condition A is true) {
    // Action A
    performActionA();
} else if (Condition B is true) {
    // Action B
    performActionB();
} else {
    // Default Action
    performDefaultAction();
}
```

In this way, while not using a graphical decision tree, you can clearly represent decision logic
within the SRS using text, pseudo-code, or any other format that enhances understanding. The
goal is to communicate how the system should behave under different conditions.


IEEE 830 Standard

IEEE 830: Recommended Practice for Software Requirements Specifications

• Describes the content and qualities of a good software requirements specification (SRS)
• Presents several sample SRS outlines

IEEE 830: Objectives

• Help software customers to accurately describe what they wish to obtain
• Help software suppliers to understand exactly what the customer wants
• Help participants to:
o Develop a template for the software requirements specification (SRS) in their own
organizations
o Develop additional documents such as SRS quality checklists or an SRS writer's
handbook

IEEE 830: Benefits

• Establish the basis for agreement between the customers and the suppliers on what the
software product is to do
• Reduce the development effort
o Early requirements → reduce later redesign, recoding, retesting
• Provide a basis for realistic estimates of costs and schedules
• Provide a basis for validation and verification
• Facilitate transfer of the software product to new users or new machines
• Serve as a basis for enhancement requests

How to produce a good SRS (IEEE 830: Section 4)

• Goals of SRS
o Functionality, interfaces, performance, qualities, design constraints
• Environment of the SRS
o Where does it fit in the overall project hierarchy
• Characteristics of a good SRS
o Generalization of the characteristics to the document
• Evolution of the SRS
o Implies a change management process
• Prototyping
o Helps elicit software requirements and reach closure on the SRS
• Including design and project requirements in the SRS
o Focus on external behavior and the product, not the design and the production
process


Structure of the SRS

Contents of SRS (IEEE 830, Section 5):

1. Introduction
2. General description of the software product
3. Specific requirements (detailed)
4. Additional information such as appendixes and index, if necessary

SRS: 1. Introduction

1.1. Purpose

• Describe purpose of this SRS
• Describe intended audience

1.2. Scope

• Identify the software product
• Enumerate what the system will and will not do
• Describe user classes and benefits for each

1.3. Definitions, Acronyms, and Abbreviations

• Define the vocabulary of the SRS (may reference appendix)

1.4. References

• List all referenced documents including sources

1.5. Overview

• Describe the content of the rest of the SRS
• Describe how the SRS is organized

1.6. Risk Analysis

• Describe the conclusions of risk analysis from using a risk template

SRS: 2. Overall Description

2.1. Product Perspective


• Present the business case and operational concept of the system
• Describe how the proposed system fits into the business context
• Describe external interfaces: system, user, hardware, software, communication
• Describe constraints: memory, operational, site adaptation

2.2. Product Functions

• Summarize the major functional capabilities
• Include the Use Case Diagram and supporting narrative (identify actors and use cases)
• Include Data Flow Diagram if appropriate

2.3. User Characteristics

• Describe and justify technical skills and capabilities of each user class

2.4. Constraints

• Describe other constraints that will limit developer's options:
o regulatory policies;
o target platform, database, network software, etc.;
o development standards requirements

2.5. Assumptions and Dependencies

• List each of the factors that affect the requirements stated

2.6 Apportioning of Requirements

• Identify requirements that may be delayed until future versions

SRS: 3. Specific requirements

• Specify software requirements in sufficient detail to enable designers to design a system
to satisfy those requirements and testers to verify requirements
• State requirements that are externally perceivable by users, operators, or externally
connected systems
• Requirements should include, at a minimum, a description of every input (stimulus) into
the system, every output (response) from the system, and all functions performed by the
system in response to an input or in support of an output
o Requirements should have characteristics of high quality requirements
o Requirements should be cross-referenced to their source.
o Requirements should be uniquely identifiable
o Requirements should be organized to maximize readability


3.1 External Interfaces

• Detail all inputs and outputs (complement, not duplicate, information presented in section
2)
• Examples: GUI screens, file formats

3.2 Functions

• Include detailed specifications of each use case, including collaboration and other
diagrams useful for this purpose

3.3 Performance Requirements

• Include the static and the dynamic numerical requirements placed on the software or on
human interaction with the software as a whole.

3.4 Logical Database Requirements

• Include types of information used
• Include data entities and their relationships

3.5 Design Constraints

• Specify design constraints that can be imposed by other standards, hardware limitations,
etc.
• Report format
• Data naming
• Accounting & Auditing procedures

3.6 Software System Attributes

• Reliability, Availability, Security, Maintainability, Portability

3.7 Organizing the specific requirements

• The main body of requirements organized in a variety of possible ways:
o Architecture Specification
o Class Diagram
o State and Collaboration Diagrams
o Activity Diagram (concurrent/distributed)


Structured Analysis and Structured Design (SA/SD) is a diagrammatic notation that is designed
to help people understand the system. The basic goal of SA/SD is to improve quality and reduce
the risk of system failure. It establishes concrete management specifications and documentation.
It focuses on the robustness, flexibility, and maintainability of the system.
Structured Analysis and Structured Design (SA/SD) is a software development method that was
popular in the 1970s and 1980s. The method is based on the principle of structured programming,
which emphasizes the importance of breaking down a software system into smaller, more
manageable components.
In SA/SD, the software development process is divided into two phases: Structured Analysis and
Structured Design. During the Structured Analysis phase, the problem to be solved is analyzed and
the requirements are gathered. The Structured Design phase involves designing the system to meet
the requirements that were gathered in the Structured Analysis phase.
SA/SD involves a series of techniques for designing and developing software systems in a
structured and systematic way. Here are some key concepts of SA/SD:
1. Functional Decomposition: SA/SD uses functional decomposition to break down a complex
system into smaller, more manageable subsystems. This technique involves identifying the
main functions of the system and breaking them down into smaller functions that can be
implemented independently.
2. Data Flow Diagrams (DFDs): SA/SD uses DFDs to model the flow of data through the
system. DFDs are graphical representations of the system that show how data moves between
the system’s various components.
3. Data Dictionary: A data dictionary is a central repository that contains descriptions of all the
data elements used in the system. It provides a clear and consistent definition of data
elements, making it easier to understand how the system works.
4. Structured Design: SA/SD uses structured design techniques to develop the system’s
architecture and components. It involves identifying the major components of the system,
designing the interfaces between them, and specifying the data structures and algorithms that
will be used to implement the system.
5. Modular Programming: SA/SD uses modular programming techniques to break down the
system’s code into smaller, more manageable modules. This makes it easier to develop, test,
and maintain the system.
Some advantages of SA/SD include its emphasis on structured design and documentation, which
can help improve the clarity and maintainability of the system. However, SA/SD has some
disadvantages, including its rigidity and inflexibility, which can make it difficult to adapt to
changing business requirements or technological trends. Additionally, SA/SD may not be well-
suited for complex, dynamic systems, which may require more agile development
methodologies.

The following are the steps involved in the SA/SD process:

1. Requirements gathering: The first step in the SA/SD process is to gather requirements from
stakeholders, including users, customers, and business partners.


2. Structured Analysis: During the Structured Analysis phase, the requirements are analyzed to
identify the major components of the system, the relationships between those components,
and the data flows within the system.
3. Data Modeling: During this phase, a data model is created to represent the data used in the
system and the relationships between data elements.
4. Process Modeling: During this phase, the processes within the system are modeled using
flowcharts and data flow diagrams.
5. Input/Output Design: During this phase, the inputs and outputs of the system are designed,
including the user interface and reports.
6. Structured Design: During the Structured Design phase, the system is designed to meet the
requirements gathered in the Structured Analysis phase. This may include selecting
appropriate hardware and software platforms, designing databases, and defining data
structures.
7. Implementation and Testing: Once the design is complete, the system is implemented and
tested.
SA/SD has been largely replaced by more modern software development methodologies, but its
principles of structured analysis and design continue to influence current software development
practices. The method is known for its focus on breaking down complex systems into smaller
components, which makes it easier to understand and manage the system as a whole.
Basically, the approach of SA/SD is based on the Data Flow Diagram. SA/SD is easy to understand,
and it focuses on a well-defined system boundary, whereas the JSD approach is too complex and
does not have any graphical representation.
SA/SD combined is known as SAD, and it mainly focuses on the following three points:
• System
• Process
• Technology
SA/SD involves two phases:
• Analysis Phase: It uses the Data Flow Diagram, Data Dictionary, State Transition Diagram, and
ER Diagram.
• Design Phase: It uses the Structure Chart and Pseudo Code.

1. Analysis Phase:
Analysis Phase involves data flow diagram, data dictionary, state transition diagram, and entity-
relationship diagram.
Data Flow Diagram:
In the data flow diagram, the model describes how the data flows through the system. We can
incorporate the Boolean operators AND and OR to link data flows when more than one data flow
may be input to or output from a process.
For example, if we have to choose between two paths of a process we can add an OR operator, and
if two data flows are both necessary for a process we can add an AND operator. The input of the
process “check-order” needs both the credit information and the order information, whereas the
output of the process would be either a cash-order or a good-credit-order.

Data Dictionary:
The content that is not described in the DFD is described in the data dictionary. It defines the data
store and the relevant meaning. A physical data dictionary for data elements that flow between
processes, between entities, and between processes and entities may be included. This would also
include descriptions of data elements that flow external to the data stores.
A logical data dictionary may also be included for each such data element. All system names,
whether they are names of entities, types, relations, attributes, or services, should be entered in the
dictionary.
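
A hedged illustration of a single data dictionary entry (the element name and its values are invented, for an order-processing example):

```markdown
| Field              | Value                                   |
| ------------------ | --------------------------------------- |
| Name               | order-amount                            |
| Aliases            | invoice-total                           |
| Description        | Total payable value of a customer order |
| Related data items | order-id, customer-id                   |
| Range of values    | 0.00 to 999,999.99                      |
| Structure / form   | decimal(8,2)                            |
```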

State Transition Diagram:
The state transition diagram is similar to the dynamic model. It specifies how much time the function
will take to execute and the data accesses triggered by events. It also describes all of the states that
an object can have, the events under which an object changes state, the conditions that must be
fulfilled before a transition will occur, and the activities undertaken during the life of an
object.

ER Diagram:
The ER diagram specifies the relationships between data stores. It is basically used in database
design and describes the relationships between different entities.
2. Design Phase:
Design Phase involves structure chart and pseudocode.
Structure Chart:
It is created from the data flow diagram. The Structure Chart specifies how the DFD's processes are grouped
into tasks and allocated to the CPU. The structured chart does not show the working and internal
structure of the processes or modules and does not show the relationship between data or data
flows. Similar to other SASD tools, it is time and cost-independent and there is no error-checking
technique associated with this tool. The modules of a structured chart are arranged arbitrarily and
any process from a DFD can be chosen as the central transform depending on the analysts’ own
perception. The structured chart is difficult to amend, verify, maintain, and check for completeness
and consistency.
Pseudo Code: It is an informal way of describing the implementation of the system that doesn't
require any specific programming language or technology.
Advantages of Structured Analysis and Structured Design (SA/SD):
• Clarity and Simplicity: The SA/SD method emphasizes breaking down complex systems into
smaller, more manageable components, which makes the system easier to understand and manage.
• Better Communication: The SA/SD method provides a common language and framework for
communicating the design of a system, which can improve communication between stakeholders
and help ensure that the system meets their needs and expectations.
• Improved Maintainability: The SA/SD method provides a clear, organized structure for a system,
which can make it easier to maintain and update the system over time.
• Better Testability: The SA/SD method provides a clear definition of the inputs and outputs of a
system, which makes it easier to test the system and ensure that it meets its requirements.
Disadvantages of Structured Analysis and Structured Design (SA/SD):
• Time-Consuming: The SA/SD method can be time-consuming, especially for large and complex
systems, as it requires a significant amount of documentation and analysis.
• Inflexibility: Once a system has been designed using the SA/SD method, it can be difficult to make
changes to the design, as the process is highly structured and documentation-intensive.
• Limited Iteration: The SA/SD method is not well-suited for iterative development, as it is designed
to be completed in a single pass.


System design involves creating both a High-Level Design (HLD), which is like a roadmap
showing the overall plan, and a Low-Level Design (LLD), which is a detailed guide for
programmers on how to build each part. It ensures a well-organized and smoothly functioning
project. High-Level Design and Low-Level Design are the two main aspects of System Design.
What is High Level Design(HLD)?
High-level design or HLD refers to the overall system: a design that consists of a description of the
system architecture and is a generic system design that includes:
• System architecture
• Database design
• A brief description of systems, services, platforms, and relationships among modules
A diagram representing each design aspect is included in the HLD (based on business requirements
and anticipated results). It contains a description of hardware and software interfaces, and also user
interfaces. It is also known as macro-level/system design, and it is created by the solution architect.
The workflow of the user's typical process is detailed in the HLD, along with performance
specifications.
What is Low Level Design(LLD)?
LLD, or Low-Level Design, is a phase in the software development process where detailed system
components and their interactions are specified.
• It gives a detailed description of each and every module, meaning it includes the actual logic for
every system component and goes deep into each module's specification.
• It is also known as micro-level/detailed design.
• It is created by designers and developers.
• It involves converting the high-level design into a more detailed blueprint, addressing specific
algorithms, data structures, and interfaces.
• LLD serves as a guide for developers during coding, ensuring the accurate and efficient
implementation of the system's functionality.
Conclusion
High-Level Design documents are like big-picture plans that help project managers and architects
understand how a system will work, while Low-Level Design documents are more detailed and are
made for programmers. They show exactly how to write the code and make the different parts of the
system fit together. Both documents are important for the different people involved in making and
maintaining the software. Creating a High-Level Design is like making a big plan for the software;
it helps find problems early, so the quality of the software can be better assured. On the other hand,
when the Low-Level Design is well documented, it makes it easier for others to check the code and
ensure its quality during the actual writing of the software.
Coupling and Cohesion – Software Engineering
Introduction: The purpose of the Design phase in the Software Development Life Cycle is to
produce a solution to a problem given in the SRS(Software Requirement Specification) document.
The output of the design phase is a Software Design Document (SDD).
Coupling and Cohesion are two key concepts in software engineering that are used to measure the
quality of a software system’s design.


Coupling refers to the degree of interdependence between software modules. High coupling means
that modules are closely connected and changes in one module may affect other modules. Low
coupling means that modules are independent and changes in one module have little impact on
other modules.
Cohesion refers to the degree to which elements within a module work together to fulfill a single,
well-defined purpose. High cohesion means that elements are closely related and focused on a
single purpose, while low cohesion means that elements are loosely related and serve multiple
purposes.
Both coupling and cohesion are important factors in determining the maintainability, scalability,
and reliability of a software system. High coupling and low cohesion can make a system difficult
to change and test, while low coupling and high cohesion make a system easier to maintain and
improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells the
customer what the system will do. Second is Technical Design which allows the system builders
to understand the actual hardware and software needed to solve a customer’s problem.

Conceptual design of the system:
• Written in simple language, i.e., customer-understandable language.
• Detailed explanation about system characteristics.
• Describes the functionality of the system.
• It is independent of implementation.
• Linked with the requirement document.
Technical design of the system:
• Hardware components and design.
• Functionality and hierarchy of software components.
• Software architecture.
• Network architecture.
• Data structure and flow of data.
• I/O components of the system.
• Shows interfaces.
Modularization: Modularization is the process of dividing a software system into multiple
independent modules where each module works independently. There are many advantages of
Modularization in software engineering. Some of these are given below:
• Easy to understand the system.
• System maintenance is easy.
• A module can be reused many times as per requirements; there is no need to write it again and
again.
Coupling: Coupling is the measure of the degree of interdependence between the modules. Good
software will have low coupling.

Types of Coupling:
Data Coupling: If the dependency between the modules is based on the fact that they communicate
by passing only data, then the modules are said to be data coupled. In data coupling, the
components are independent of each other and communicate through data. Module
communications don't contain tramp data. Example: a customer billing system.
Stamp Coupling: In stamp coupling, the complete data structure is passed from one module to
another module. Therefore, it involves tramp data. It may be necessary due to efficiency factors;
this choice is made by the insightful designer, not the lazy programmer.
Control Coupling: If the modules communicate by passing control information, then they are said
to be control coupled. It can be bad if parameters indicate completely different behavior and good
if parameters allow factoring and reuse of functionality. Example: a sort function that takes a
comparison function as an argument (see the sketch after this list).
External Coupling: In external coupling, the modules depend on other modules, external to the
software being developed or to a particular type of hardware. Ex- protocol, external file, device
format, etc.
Common Coupling: The modules have shared data such as global data structures. The changes in
global data mean tracing back to all modules which access that data to evaluate the effect of the
change. So it has got disadvantages like difficulty in reusing modules, reduced ability to control
data accesses, and reduced maintainability.


Content Coupling: In a content coupling, one module can modify the data of another module, or
control flow is passed from one module to the other module. This is the worst form of coupling
and should be avoided.
Temporal Coupling: Temporal coupling occurs when two modules depend on the timing or order
of events, such as one module needing to execute before another. This type of coupling can result
in design issues and difficulties in testing and maintenance.
Sequential Coupling: Sequential coupling occurs when the output of one module is used as the
input of another module, creating a chain or sequence of dependencies. This type of coupling can
be difficult to maintain and modify.
Communicational Coupling: Communicational coupling occurs when two or more modules share
a common communication mechanism, such as a shared message queue or database. This type of
coupling can lead to performance issues and difficulty in debugging.
Functional Coupling: Functional coupling occurs when two modules depend on each other’s
functionality, such as one module calling a function from another module. This type of coupling
can result in tightly-coupled code that is difficult to modify and maintain.
Data-Structured Coupling: Data-structured coupling occurs when two or more modules share a
common data structure, such as a database table or data file. This type of coupling can lead to
difficulty in maintaining the integrity of the data structure and can result in performance issues.
Interaction Coupling: Interaction coupling occurs due to the methods of a class invoking methods
of other classes. Like with functions, the worst form of coupling here is if methods directly access
internal parts of other methods. Coupling is lowest if methods communicate directly through
parameters.
Component Coupling: Component coupling refers to the interaction between two classes where a
class has variables of the other class. Three clear situations exist as to how this can happen. A class
C can be component coupled with another class C1, if C has an instance variable of type C1, or C
has a method whose parameter is of type C1,or if C has a method which has a local variable of
type C1. It should be clear that whenever there is component coupling, there is likely to be
interaction coupling.
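
To make the contrast concrete, a minimal sketch (the function names are invented for illustration): the billing function below is data coupled with its caller, while the sort routine is control coupled because the caller passes in a key function that steers its behaviour.

```python
# Data coupling: the modules communicate by passing only the data they need.
def compute_bill(units_consumed: float, rate_per_unit: float) -> float:
    """Customer billing: depends only on the plain data passed in."""
    return units_consumed * rate_per_unit

# Control coupling: the caller passes control information (a key function)
# that steers the callee's behaviour, as with a sort routine.
def sort_amounts(amounts: list, key) -> list:
    """Sort using a caller-supplied key function (control information)."""
    return sorted(amounts, key=key)

bills = [compute_bill(120, 6.5), compute_bill(80, 6.5)]
print(sort_amounts(bills, key=lambda amount: -amount))  # descending order
```
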
Cohesion: Cohesion is a measure of the degree to which the elements of the module are
functionally related. It is the degree to which all elements directed towards performing a single
task are contained in the component. Basically, cohesion is the internal glue that keeps the module
together. A good software design will have high cohesion.


Types of Cohesion:
Functional Cohesion: Every essential element for a single computation is contained in the
component. A functional cohesion performs the task and functions. It is an ideal situation.
Sequential Cohesion: An element outputs some data that becomes the input for another element, i.e.,
data flows between the parts. It occurs naturally in functional programming languages.
Communicational Cohesion: Two elements operate on the same input data or contribute towards
the same output data. Example- update record in the database and send it to the printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions are
still weakly connected and unlikely to be reusable. Ex- calculate student GPA, print student record,
calculate cumulative GPA, print cumulative GPA.
Temporal Cohesion: The elements are related by the timing involved. In a module connected with
temporal cohesion, all the tasks must be executed in the same time span. Such a module often contains
the code for initializing all the parts of the system: lots of different activities occur, all at
initialization time.
Logical Cohesion: The elements are logically related and not functionally. Ex- A component reads
inputs from tape, disk, and network. All the code for these functions is in the same component.
Operations are related, but the functions are significantly different.
Coincidental Cohesion: The elements are unrelated. The elements have no conceptual relationship
other than their location in the source code. It is accidental and the worst form of cohesion. Example:
printing the next line and reversing the characters of a string in a single component.
Procedural Cohesion: This type of cohesion occurs when elements or tasks are grouped together
in a module based on their sequence of execution, such as a module that performs a set of related
procedures in a specific order. Procedural cohesion can be found in structured programming
languages.
Communicational Cohesion: Communicational cohesion occurs when elements or tasks are
grouped together in a module based on their interactions with each other, such as a module that
handles all interactions with a specific external system or module. This type of cohesion can be
found in object-oriented programming languages.


Temporal Cohesion: Temporal cohesion occurs when elements or tasks are grouped together in a
module based on their timing or frequency of execution, such as a module that handles all periodic
or scheduled tasks in a system. Temporal cohesion is commonly used in real-time and embedded
systems.
Informational Cohesion: Informational cohesion occurs when elements or tasks are grouped
together in a module based on their relationship to a specific data structure or object, such as a
module that operates on a specific data type or object. Informational cohesion is commonly used
in object-oriented programming.
Functional Cohesion: This type of cohesion occurs when all elements or tasks in a module
contribute to a single well-defined function or purpose, and there is little or no coupling between
the elements. Functional cohesion is considered the most desirable type of cohesion as it leads to
more maintainable and reusable code (a sketch follows this list).
Layer Cohesion: Layer cohesion occurs when elements or tasks in a module are grouped together
based on their level of abstraction or responsibility, such as a module that handles only low-level
hardware interactions or a module that handles only high-level business logic. Layer cohesion is
commonly used in large-scale software systems to organize code into manageable layers.
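
A minimal sketch contrasting the best and worst cases (the module and function names are invented for illustration):

```python
# Functional cohesion: every statement contributes to one computation.
def compute_gpa(grades, credits):
    """All elements serve a single, well-defined purpose: the GPA."""
    total_points = sum(g * c for g, c in zip(grades, credits))
    return total_points / sum(credits)

# Coincidental cohesion (the worst form): unrelated tasks grouped together
# only because they happen to sit in the same component.
def misc_utilities(line, text):
    """Prints the next line AND reverses a string: no conceptual relation."""
    print(line)        # task 1: print the next line
    return text[::-1]  # task 2: reverse the characters of a string

print(compute_gpa([9.0, 8.0], [4, 3]))  # roughly 8.57
print(misc_utilities("hello", "abc"))   # prints "hello", returns "cba"
```
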
Advantages of low coupling:
• Improved maintainability: Low coupling reduces the impact of changes in one module on other
modules, making it easier to modify or replace individual components without affecting the entire
system.
• Enhanced modularity: Low coupling allows modules to be developed and tested in isolation,
improving the modularity and reusability of code.
• Better scalability: Low coupling facilitates the addition of new modules and the removal of
existing ones, making it easier to scale the system as needed.
Advantages of high cohesion:
• Improved readability and understandability: High cohesion results in clear, focused modules with
a single, well-defined purpose, making it easier for developers to understand the code and make
changes.
• Better error isolation: High cohesion reduces the likelihood that a change in one part of a module
will affect other parts, making it easier to isolate and fix errors.
• Improved reliability: High cohesion leads to modules that are less prone to errors and that function
more consistently, leading to an overall improvement in the reliability of the system.
Disadvantages of high coupling:
• Increased complexity: High coupling increases the interdependence between modules, making the
system more complex and difficult to understand.
• Reduced flexibility: High coupling makes it more difficult to modify or replace individual
components without affecting the entire system.
• Decreased modularity: High coupling makes it more difficult to develop and test modules in
isolation, reducing the modularity and reusability of code.
Disadvantages of low cohesion:
• Increased code duplication: Low cohesion can lead to the duplication of code, as elements that
belong together are split into separate modules.
• Reduced functionality: Low cohesion can result in modules that lack a clear purpose and contain
elements that don't belong together, reducing their functionality and making them harder to
maintain.
• Difficulty in understanding the module: Low cohesion can make it harder for developers to
understand the purpose and behavior of a module, leading to errors and a lack of clarity.

Modularity in Software Engineering

Modularity specifies the separation of concerned ‘components’ of the software, which can be
addressed and named separately. These separated components are referred to as ‘modules’. These
modules can be integrated to satisfy the requirements of other software.
Define Modularity
Modularity can be defined as a mechanism where a complex system is divided into several
components that are referred to as ‘modules’. Whenever a customer requests us to develop
software, we can use or integrate these modules to develop the new software. The required modules
can be selected and integrated to develop new software; in this way, we can customize the software
according to the user's needs.

Example 1:

Consider an example: we have all played with Lego blocks to build different structures. Here we
have several components, i.e., blocks, which we integrate to build the structure that we want.

Similarly, we can break complex code into different components, and by using or integrating
those components we can create a new program to develop new software.

Example 2:

Consider that if we have to divide an automobile into several subsystems, then the components or
subsystems would be: engine, brakes, wheels, chassis, etc.

Here you can observe that all the subsystems are independent of each other as much as possible,
and these components can be integrated to design a new automobile.

Why Modularity?

To understand the importance of modularity, consider a monolithic piece of software: one large
program consisting of a single module. If we ask any software engineer to understand this large
program, it is not easy to do so.

There will be a lot of local variables and global variables with their spans of reference, several
control paths, etc. This increases the complexity of the large program, making it hard to digest. As
a solution, the large program must be divided into several components or modules, as the individual
modules are easier to understand.

The development effort or cost per module falls as the number of modules increases. But as the
number of modules increases, the cost required to integrate the several modules also increases.


So, you must be careful while modularizing the software. The software should neither be left
un-modularized nor be over-modularized.

Now, with modularity, when you try to develop software using modules that are independent of
each other and have very few references to each other, you have to be conscious at all the stages
of software development, such as:

1. Architectural Design

In the architectural design phase, the large-scale structure of the software is determined. You
have to be very careful while creating modularity at this phase, as you have to define the entire
logical structure of the software.

2. Components Design

If you have created modularity in the architectural design of the software it becomes easy to
figure out and design the individual components. The modular components have a well-defined
purpose and they have a few connections with the other components.

3. Debugging

If the components are designed using modularity, then it becomes easy to track them down. You
can easily identify which component is responsible for an error.

As the components have little connection to the other components of the software, correcting a
single component will not have an adverse effect on any other component.

4. Testing

Once the components are integrated to develop software it becomes almost impossible to test the
entire software at once. Testing one component at a time is much easier.
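
As a hedged sketch of component-level testing (the component and test names are invented), each module can be exercised in isolation before the modules are integrated:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """A small, independently testable component."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```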

5. Maintenance

Maintenance is the process of fixing or enhancing the system so that it performs according to the
user's needs. Here also, modularity plays a vital role, as making changes to one module must not
affect another connected module in the system.

6. Independent Development

Software is never developed by one person. A team of people develops the software in terms of
modules and components. Each person in the team is assigned an individual component to develop;
that is why they also have to take care that the interfaces between the components are few and all
of them are clear.


7. Damage Control

If the connections between the components of the system are few and clear, then an error in one
component will not spread damage to the other components of the system.

8. Software Reuse

Good modularity lets you reuse the components of earlier software. The reusable
components must:

1. Provide some useful service.
2. Perform a single function.
3. Have few and clear connections to other components in the system.

Classification of Components

As modularity speaks of dividing a system into subsystems, components, or modules, the
components of software can be classified into:


• Computation-only: The computation-only component is the module that performs calculations
or some computations requested by the user. The data used or produced during the
computation is not retained in the computation-only component during subsequent uses.
• Memory: The memory component of the software stores data and this data is beyond the life of
a program or other component.
• Manager: Manager components of a program can be the abstract data type such as stacks or
queues. These components manage and maintain the data and operations performed on this data.
• Controller: The controller component of the software controls when and how should the other
components of the software interact with each other.
• Link: The link components of the software are responsible for transferring the data between the
interacting components.

Though this is the general classification of any software it provides a guide to the developer to
create modularity straight away from the architectural design of the software.
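
As a minimal sketch of a Manager component (the class name is invented), an abstract data type that owns its data and the operations performed on it:

```python
class StackManager:
    """Manager component: maintains the data and the operations on it."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from an empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

stack = StackManager()
stack.push("requirement-1")
stack.push("requirement-2")
print(stack.pop())  # requirement-2
```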


Benefits of Modularity

1. Modularity lets the development of software be divided into several components that can be
implemented simultaneously by a team of developers. This minimizes the time required
to develop the software.
2. Modularity makes the components of the software reusable.
3. As modularity breaks a large complex program into components, it improves manageability,
as it is easy to develop, test, and maintain the small components.
4. It is also easy to debug and trace errors in modular programs.

So, this is all about modularity in software engineering. We have seen the importance of
modularity and how it can be used to develop efficient software. We have also learned that the
concepts of cohesion and coupling play an important role in creating good modularity, and we have
ended by discussing the benefits of modularity.


Layered Technology in Software Engineering

Software engineering is a fully layered technology; to develop software we need to go from
one layer to another. All the layers are connected, and each layer demands the fulfillment of
the previous layer.

Layered technology is divided into four parts:

1. A quality focus: It defines the continuous process-improvement principles of software. It
provides integrity, which means providing security to the software so that data can be accessed
only by an authorized person and no outsider can access the data. It also focuses on maintainability
and usability.
2. Process: It is the foundation or base layer of software engineering. It is the key that binds all the
layers together and enables the development of software before the deadline or on time. Process
defines a framework that must be established for the effective delivery of software engineering
technology. The software process covers all the activities, actions, and tasks required to be carried
out for software development.

Process activities are listed below:
• Communication: It is the first and foremost thing for the development of software.
Communication is necessary to know the actual demands of the client.
• Planning: It basically means drawing a map to reduce the complications of development.
• Modeling: In this process, a model is created according to the client's needs for better
understanding.


• Construction: It includes the coding and testing of the software.
• Deployment: It includes the delivery of the software to the client for evaluation and feedback.
3. Method: During the process of software development, the answers to all “how-to-do”
questions are given by the methods. They carry the information about all the tasks, which includes
communication, requirement analysis, design modeling, program construction, testing, and
support.
4. Tools: Software engineering tools provide a self-operating system for processes and methods.
Tools are integrated which means information created by one tool can be used by another.

Function-Oriented Software Design: Structured Analysis using DFD, Structured Design
using Structure Chart

The design process for software systems often has two levels. At the first level, the focus is on
deciding which modules are needed for the system based on SRS (Software Requirement
Specification) and how the modules should be interconnected.
Function Oriented Design is an approach to software design where the design is decomposed
into a set of interacting units where each unit has a clearly defined function.
Generic Procedure
Start with a high-level description of what the software/program does. Refine each part of the
description by specifying in greater detail the functionality of each part. These points lead to a
Top-Down Structure.

Problem in Top-Down Design Method

Mostly, each module is used by at most one other module, and that module is called its parent
module.
Solution to the Problem
Design reusable modules. It means that modules can use several other modules to do their required
functions.


Function Oriented Design Strategies

Function Oriented Design Strategies are as follows:
1. Data Flow Diagram (DFD): A data flow diagram (DFD) maps out the flow of information
for any process or system. It uses defined symbols like rectangles, circles and arrows, plus
short text labels, to show data inputs, outputs, storage points and the routes between each
destination.
2. Data Dictionaries: Data dictionaries are simply repositories to store information about all
data items defined in DFDs. At the requirements stage, data dictionaries contain data items.
Data dictionary entries include the name of the item, aliases (other names for the item), description/
purpose, related data items, range of values, and data structure definition/form.
3. Structure Charts: Structure chart is the hierarchical representation of system which
partitions the system into black boxes (functionality is known to users, but inner details are
unknown). Components are read from top to bottom and left to right. When a module calls
another, it views the called module as a black box, passing required parameters and receiving
results.
4. Pseudo Code: Pseudo code is a system description in short English-like phrases describing
the function. It uses keywords and indentation. Pseudocode is used as a replacement for flow
charts. It decreases the amount of documentation required.

Structure Charts in Function Oriented Design

For a function-oriented design, the design can be represented graphically by structure charts.
The structure of a program is made up of the modules of that program together with the
interconnections between those modules. The structure chart of a program is a graphic
representation of its structure.
1. In a structure chart a module is represented by a box with the module name written in the box.
2. In general, procedural information is not represented in a structure chart, and the focus is
on representing the hierarchy of modules.
3. However, there are situations where the designer may wish to communicate certain
procedural information explicitly, like major loop and decisions.
4. Such information can also be represented in a structure chart.
5. Modules in a system can be categorized into a few classes, as below:
6. Input module: There are some modules that obtain information from their subordinates and
then pass it to their superordinate.

7. Output module: Modules that take information from their superordinate and pass it on to
their subordinates.
8. Transform module: Modules that exist solely for the sake of transforming data into some
other form.
9. Coordinate module: Modules whose primary concern is managing the flow of data to and
from different subordinates.
10. A structure chart is a nice representation for a design that uses functional abstraction.
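To make the module categories concrete, here is a small illustrative Python sketch; the function
names (get_input, compute, put_output, main) are hypothetical, chosen only to mirror the four
categories above:

def get_input():
    # Input module: obtains information and passes it to its superordinate.
    return [3, 1, 2]

def compute(data):
    # Transform module: exists solely to transform data into another form.
    return sorted(data)

def put_output(result):
    # Output module: takes information from its superordinate and passes it on.
    print(result)

def main():
    # Coordinate module: manages the flow of data among its subordinates.
    put_output(compute(get_input()))

main()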

DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a
process is represented by DFD. It also gives insight into the inputs and outputs of each entity
and the process itself. DFD does not have control flow and no loops or decision rules are
present. Specific operations depending on the type of data can be explained by a flowchart. It
is a graphical tool, useful for communicating with users, managers, and other personnel. It is
useful for analyzing existing as well as proposed systems.
It should be pointed out that a DFD is not a flowchart. In drawing the DFD, the designer has to
specify the major transforms in the path of the data flowing from the input to the output. DFDs
can be hierarchically organized, which helps in progressively partitioning and analyzing large
systems.
It provides an overview of:
• What data the system processes.
• What transformations are performed.
• What data are stored.
• What results are produced, etc.
Data Flow Diagram can be represented in several ways. The DFD belongs to structured-
analysis modeling tools. Data Flow diagrams are very popular because they help us to visualize
the major steps and data involved in software-system processes.

Characteristics of DFD
• DFDs are commonly used during problem analysis.
• DFDs are quite general and are not limited to problem analysis for software requirements
specification.
• DFDs are very useful in understanding a system and can be effectively used during
analysis.
• It views a system as a function that transforms the inputs into desired outputs.
• The DFD aims to capture the transformations that take place within a system to the input
data so that eventually the output data is produced.
• The processes are shown by named circles and data flows are represented by named arrows
entering or leaving the bubbles.
• A rectangle represents a source or sink and is a net originator or consumer of data. A
source or sink is typically outside the main system under study.

Components of DFD

The Data Flow Diagram has 4 components:


• Process: The input-to-output transformation in a system takes place because of a process
function. The symbol of a process is a rectangle with rounded corners, an oval, a rectangle, or a
circle. The process is named in a short sentence, a single word, or a phrase that expresses its
essence.
• Data Flow: Data flow describes the information transferring between different parts of the
system. The arrow is the symbol of data flow. A relatable name should be given to the flow to
indicate the information that is being moved. Data flow can also represent material that is being
moved along with information; material shifts are modeled in systems that are not merely
informative. A given flow should only transfer a single type of information. The direction of
flow is represented by the arrow, which can also be bi-directional.
• Warehouse: The data is stored in the warehouse for later use. Two horizontal lines represent
the symbol of a store. The warehouse is not restricted to being a data file; rather, it can be
anything like a folder of documents, an optical disc, or a filing cabinet. The data warehouse can
be viewed independently of its implementation. When data flows from the warehouse it is
considered data reading, and when data flows to the warehouse it is called data entry or data
updating.
• Terminator: The terminator is an external entity that stands outside of the system and
communicates with the system. It can be, for example, organizations like banks, groups of
people like customers, or different departments of the same organization that are not part of the
modeled system. Modeled systems also communicate with terminators.

Rules for creating DFD


• The name of an entity should be easy to understand without any extra assistance (like
comments).
• The processes should be numbered or put in an ordered list so they can be referred to easily.
• The DFD should maintain consistency across all DFD levels.
• A single DFD can have a maximum of nine processes and a minimum of three processes.

Symbols Used in DFD


• Square Box: A square box defines a source or destination of the system. It is also called an
entity and is represented by a rectangle.
• Arrow or Line: An arrow identifies the data flow, i.e., it gives information about the data
that is in motion.
• Circle or Bubble: It represents a process that transforms information. It is also called a
processing box.
• Open Rectangle: An open rectangle is a data store. Data is stored here either temporarily or
permanently.
Levels of DFD
A DFD uses hierarchy to maintain transparency; thus, multilevel DFDs can be created. The
levels of DFD are as follows:
• 0-level DFD: It represents the entire system as a single bubble and provides an overall
picture of the system.
• 1-level DFD: It represents the main functions of the system and how they interact with each
other.
• 2-level DFD: It represents the processes within each function of the system and how they
interact with each other.
• 3-level DFD: It represents the data flow within each process and how the data is
transformed and stored.
Advantages of DFD
• It helps us to understand the functioning and the limits of a system.
• It is a graphical representation which is very easy to understand as it helps visualize
contents.
• Data Flow Diagrams present a detailed and well-explained view of system components.
• It is used as part of the system documentation.
• Data Flow Diagrams can be understood by both technical and nontechnical people because
they are very easy to understand.
Disadvantages of DFD
• At times a DFD can confuse programmers about the system.
• Data Flow Diagrams take a long time to create, and for this reason analysts are sometimes
denied permission to work on them.

Object-Oriented Design
In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities).
The state is distributed among the objects, and each object handles its state data. For example, in
a Library Automation Software, each library representative may be a separate object with its data
and functions to operate on these data. The tasks defined for one purpose cannot refer or change
data of other objects. Objects have their internal data which represent their state. Similar objects
create a class. In other words, each object is a member of some class. Classes may inherit features
from the superclass.

The different terms related to object design are:

1. Objects: All entities involved in the solution design are known as objects. For example,
person, banks, company, and users are considered as objects. Every entity has some
attributes associated with it and has some methods to perform on the attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a
class. A class defines all the attributes, which an object can have and methods, which
represents the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity of
the target object, the name of the requested operation, and any other information needed to
perform the function. Messages are often implemented as procedure or function calls.
4. Abstraction In object-oriented design, complexity is handled using abstraction.
Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and
operations are linked to a single unit. Encapsulation not only bundles essential information
of an object together but also restricts access to the data and methods from the outside
world.

6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the
lower or sub-classes can import, implement, and re-use allowed variables and functions
from their immediate superclasses. This property of OOD is called inheritance. This
makes it easier to define a specific class and to create generalized classes from specific
ones.
7. Polymorphism: OOD languages provide a mechanism where methods performing similar
tasks but varying in arguments can be assigned the same name. This is known as
polymorphism, which allows a single interface to perform functions for different types.
Depending upon how the service is invoked, the respective portion of the code gets
executed.
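
As a brief illustration of these terms, consider the following Python sketch; the Account and
SavingsAccount classes are invented for this example, not taken from any real library:

class Account:                        # a class: generalized description of an object
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance       # encapsulated internal state

    def deposit(self, amount):        # invoked via message passing (a method call)
        self._balance += amount

    def describe(self):
        return f"{self.owner}: {self._balance}"

class SavingsAccount(Account):        # inheritance: reuses the superclass features
    def describe(self):               # polymorphism: same message, different behavior
        return "Savings " + super().describe()

# Each object handles its own state; similar objects belong to a class.
for acct in [Account("Asha", 100), SavingsAccount("Ravi", 200)]:
    acct.deposit(50)
    print(acct.describe())            # the respective code runs per object type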

User Interface Design


The user interface is the visual part of a computer application or operating system through which
a user interacts with the computer or software. It determines how commands are given to the
computer or the program and how data is displayed on the screen.

Types of User Interface

There are two main types of User Interface:


o Text-Based User Interface or Command Line Interface
o Graphical User Interface (GUI)

Text-Based User Interface: This method relies primarily on the keyboard. A typical example of
this is UNIX.

Advantages
o Many customization options, which are easier to apply.
o Typically capable of more advanced tasks.

Disadvantages
o Relies heavily on recall rather than recognition.
o Navigation is often more difficult.

Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example
of this type of interface is any version of the Windows operating system.

GUI Characteristics
Characteristics and descriptions:

Windows: Multiple windows allow different information to be displayed simultaneously on the
user's screen.

Icons: Icons represent different types of information. On some systems, icons represent files; on
others, icons represent processes.

Menus: Commands are selected from a menu rather than typed in a command language.

Pointing: A pointing device such as a mouse is used for selecting choices from a menu or
indicating items of interest in a window.

Graphics: Graphics elements can be mixed with text on the same display.

Advantages
o Less expert knowledge is required to use it.
o Easier to Navigate and can look through folders quickly in a guess and check manner.
o The user may switch quickly from one task to another and can interact with several
different applications.

Disadvantages
o Typically fewer options.
o Usually less customizable; it is not easy to use one button for many different variations.

UI Design Principles

Structure: The design should organize the user interface purposefully, in meaningful and useful
ways based on clear, consistent models that are apparent and recognizable to users, putting related
things together and separating unrelated things, differentiating dissimilar things and making
similar things resemble one another. The structure principle is concerned with the overall user
interface architecture.

Simplicity: The design should make simple, common tasks easy, communicating clearly and
directly in the user's language, and providing good shortcuts that are meaningfully related to longer
procedures.

Visibility: The design should make all required options and materials for a given function visible
without distracting the user with extraneous or redundant data.

Feedback: The design should keep users informed of actions or interpretation, changes of state or
condition, and bugs or exceptions that are relevant and of interest to the user through clear, concise,
and unambiguous language familiar to users.

Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and misuse by
allowing undoing and redoing while also preventing bugs wherever possible by tolerating varied
inputs and sequences and by interpreting all reasonable actions.

What is a command-line interface?


A command-line interface (CLI) is a text-based user interface (UI) used to run programs, manage
computer files and interact with the computer. Command-line interfaces are also called command-
line user interfaces, console user interfaces and character user interfaces. CLIs accept as
input commands that are entered by keyboard; the commands invoked at the command prompt are
then run by the computer.
How do CLIs work?
Once a computer system is running, its CLI opens on a blank screen with a command prompt and
commands can be entered.
Types of CLI commands include the following:
• system commands that are encoded as part of the operating system interface;
• executable programs that, when successfully invoked, run text-based or graphical applications;
and
• batch programs (or batch files or shell scripts) which are text files listing a sequence of
commands. When successfully invoked, a batch program runs its commands which may
include both system commands and executable programs.
A CLI is more than a simple command/response system, as most have additional features that make
one preferable to another. Some features include the following:
• Scripting capability enables users to write programs that can be run on the system from the
command line.
• Command pipes enable users to direct the output of one program to be the input for another
program ("piping" the flow of data).
• System variables can be set at the command line, or the values of those variables displayed.
• Command history features enable the user to recall previous commands issued. Some save
command history for the session (like PowerShell), others can be configured to store session
history for longer (like bash).
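
As a rough sketch of the command-prompt loop and session history described above, consider
this minimal Python interpreter; the echo, history, and exit commands are invented for
illustration and do not correspond to any real shell:

history = []

def run_command(line):
    """Dispatch one command; return False to exit the loop."""
    history.append(line)                  # session command history
    cmd, _, arg = line.partition(" ")
    if cmd == "echo":                     # a tiny built-in command
        print(arg)
    elif cmd == "history":                # recall previously issued commands
        for i, entry in enumerate(history, 1):
            print(i, entry)
    elif cmd == "exit":
        return False
    else:
        print(cmd + ": command not found")
    return True

while True:                               # the command-prompt loop
    line = input("$ ").strip()
    if line and not run_command(line):
        break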

menu-driven interface
A menu-driven interface is a type of user interface where users interact with a program or system
through a series of menus. These menus present options or commands that the user can select,
typically through the use of a pointer, keyboard, or touchscreen, simplifying the interaction with
the system.
Benefits of menu-driven interface
Menu-driven interfaces come with several benefits:

• Intuitive Navigation: Menus logically categorize and group similar functions together, making
it easier for users to find what they need.
• Reduced Errors: By limiting user choices to valid options, the chances of errors are reduced.
• Efficiency: Menus often provide shortcuts to frequently used functions, enhancing user
efficiency.
• Accessibility: They can be more accessible for users with certain disabilities because they don’t
rely on memorizing specific commands or sequences.

• Consistency: They provide a consistent structure and operation across different parts of an
application or system, improving the user experience.
• Flexibility: They are adaptable to different input methods (mouse, touch, keyboard), making
them suitable for a variety of devices and contexts.
• User-friendly: They are typically easy to understand and use, even for less tech-savvy users, as
they offer a visual representation of options and commands.
How to create a menu-driven interface
Creating a menu-driven interface involves a multi-step process. Here's an outline:

1. Identify User Needs: Understand the needs and requirements of your users, the tasks
they need to perform, and the context of use. This is usually achieved through methods
such as user interviews, surveys, and usage data analysis.
2. Design the Menu Structure: Define the hierarchy of the menus based on the identified
user tasks. Group similar functions together. Consider the depth and breadth of the menu
structure – it should be easy to navigate, not too deep (many levels) or too broad (many
options on one level).
3. Design the Menu Layout: Design the visual representation of the menu. This might be
dropdown menus, sidebars, toolbars, etc. The layout should be consistent across the
application.
4. Implement the Menu: Using a programming language or a software tool, implement the
menu in your application. This often involves coding the behavior of the menu, including
handling user interactions.
5. Test and Iterate: Perform usability testing to verify that the menu works as intended and
is easy to use. Use the feedback to refine and improve the menu.
6. Document: Document the design and implementation details of the menu interface for
future reference and updates.
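
As a minimal sketch of the result of these steps, the following Python program presents a small
text menu; the options and handler functions are hypothetical placeholders:

def show_balance():
    print("Balance: 100.00")              # placeholder handler

def deposit():
    print("Deposit selected")             # placeholder handler

MENU = {
    "1": ("Show balance", show_balance),
    "2": ("Deposit", deposit),
    "3": ("Quit", None),
}

while True:
    for key, (label, _) in MENU.items():
        print(key + ". " + label)         # present the options
    choice = input("Select an option: ").strip()
    if choice == "3":
        break
    if choice in MENU:
        MENU[choice][1]()                 # invoke the handler for the valid choice
    else:
        print("Invalid option, please try again.")  # reject anything else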

ICONIC INTERFACE

An iconic interface is a user interface that displays graphic elements to represent menu options.
Also called a "widget-based interface," the term is often used to contrast a GUI with a
command-line interface.

MODULE-3

Software testing techniques are the methods employed to test the application under test against
the functional or non-functional requirements gathered from the business. Each testing technique
helps to find a specific type of defect. For example, techniques that find structural defects might
not be able to find defects in the end-to-end business flow. Hence, multiple testing techniques
are applied in a testing project to conclude it with acceptable quality. Software testing
techniques are methods used to design and execute tests to evaluate software applications. The
following are common testing techniques:
1. Manual testing – Involves manual inspection and testing of the software by a human tester.
2. Automated testing – Involves using software tools to automate the testing process.
3. Functional testing – Tests the functional requirements of the software to ensure they are
met.
4. Non-functional testing – Tests non-functional requirements such as performance, security,
and usability.
5. Unit testing – Tests individual units or components of the software to ensure they are
functioning as intended.
6. Integration testing – Tests the integration of different components of the software to
ensure they work together as a system.
7. System testing – Tests the complete software system to ensure it meets the specified
requirements.
8. Acceptance testing – Tests the software to ensure it meets the customer’s or end-user’s
expectations.
9. Regression testing – Tests the software after changes or modifications have been made to
ensure the changes have not introduced new defects.
10. Performance testing – Tests the software to determine its performance characteristics such
as speed, scalability, and stability.
11. Security testing – Tests the software to identify vulnerabilities and ensure it meets security
requirements.
12. Exploratory testing – A type of testing where the tester actively explores the software to
find defects, without following a specific test plan.
13. Boundary value testing – Tests the software at the boundaries of input values to identify
any defects.

14. Usability testing – Tests the software to evaluate its user-friendliness and ease of use.
15. User acceptance testing (UAT) – Tests the software to determine if it meets the end-user’s
needs and expectations.

Principles of Testing
1. All the tests should meet the customer's requirements.
2. To make our software perform well, testing should be performed by a third party.
3. Exhaustive testing is not possible; we need an optimal amount of testing based on the
risk assessment of the application.
4. All the tests to be conducted should be planned before implementation.
5. It follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20% of
program components.
6. Start testing with small parts and extend to large parts.

Types of Software Testing Techniques

There are two main categories of software testing techniques:


1. Static Testing Techniques are testing techniques that are used to find defects in an
application under test without executing the code. Static Testing is done to avoid errors at an
early stage of the development cycle thus reducing the cost of fixing them.
2. Dynamic Testing Techniques are testing techniques that are used to test the dynamic
behaviour of the application under test, that is by the execution of the code base. The main
purpose of dynamic testing is to test the application with dynamic inputs- some of which
may be allowed as per requirement (Positive testing) and some are not allowed (Negative
Testing).
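
For instance, positive and negative dynamic testing can be sketched in Python as follows; the
is_valid_age function and its 0 to 120 rule are assumptions made up for this example:

def is_valid_age(age):
    # Hypothetical validation rule: integer ages from 0 to 120 are allowed.
    return isinstance(age, int) and 0 <= age <= 120

assert is_valid_age(30) is True       # positive testing: input allowed by the requirement
assert is_valid_age(-5) is False      # negative testing: disallowed value
assert is_valid_age("ten") is False   # negative testing: disallowed type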

Advantages of software testing techniques:

1. Improves software quality and reliability – By using different testing techniques, software
developers can identify and fix defects early in the development process, reducing the risk
of failure or unexpected behaviour in the final product.

2. Enhances user experience – Techniques like usability testing can help to identify usability
issues and improve the overall user experience.
3. Increases confidence – By testing the software, developers, and stakeholders can have
confidence that the software meets the requirements and works as intended.
4. Facilitates maintenance – By identifying and fixing defects early, testing makes it easier to
maintain and update the software.
5. Reduces costs – Finding and fixing defects early in the development process is less expensive
than fixing them later in the life cycle.

Disadvantages of software testing techniques:

1. Time-consuming – Testing can take a significant amount of time, particularly if thorough


testing is performed.
2. Resource-intensive – Testing requires specialized skills and resources, which can be
expensive.
3. Limited coverage – Testing can only reveal defects that are present in the test cases, and
defects can be missed.
4. Unpredictable results – The outcome of testing is not always predictable, and defects can be
hard to replicate and fix.
5. Delivery delays – Testing can delay the delivery of the software if testing takes longer than
expected or if significant defects are identified.
6. Automated testing limitations – Automated testing tools may have limitations, such as
difficulty in testing certain aspects of the software, and may require significant maintenance
and updates.

Code review is a process of software quality assurance that concerns primarily the code base.
A peer or a senior developer, called a reviewer, reads parts of the source code to give a second
opinion on it. The key purpose is to optimize the code for the later stages and prevent unstable
code from going into use. It also creates a spirit of collective ownership over the project's
progress and keeps the team involved in planning the later phases of development.
In case the code lines cover more than one domain, a minimum of 2 experts are required to
review it. The reviewers help to:
• enhance code quality,
• figure out logic problems,
• identify bugs,
• uncover edge cases.
The process touches upon 4 major areas:
• Code,
• Formatting consistency with overall solution design,
• Documentation quality,
• The compliance of coding standards with project requirements.
What Are the Benefits of Code Review?
According to Stripe research conducted with Harris Poll, developers spend over 4 hours a week
on average fixing bad code. That amounts to about 300B USD in lost productivity every year.
So let us look at the benefits of code review for a development company.
1. Ensuring consistency in design and implementation
Every specialist has their own background and a unique style of programming. Thus, the
collaboration of multiple developers in big projects can be challenging. Code review helps all
experts working on the project standardize the source code and adhere to certain coding
practices.
It is also helpful for future developers in building new features without wasting time on code
studies, especially when we are talking about open-source projects with multiple contributors.
2. Discovering bugs earlier
With source code review, developers get the chance to spot and fix the problem before the users
ever see it. Moreover, by moving this process earlier in the development cycle, the specialists

Prepared by Dr.AparnaRajesh A
Aryan Institute of Engineering and Technology-CSE

can start fixing without waiting until the end of a lifecycle, when more effort is needed to
remember the reasoning, solutions, and code itself.
3. Verification for the developed and required features
Each project has well-defined requirements and scope of work, and several developers working
on the project can create various features accordingly. It’s vital to assure that none of them
misinterpreted a requirement or crafted a useless feature. It’s exactly what code review helps to
achieve while also ensuring all the critical features were created as defined in the specification
and requirements.
4. Sharing knowledge
Code review practices encourage not only collaboration between the experts and exchanging
feedback, but also sharing of ideas, skills, and knowledge of the latest technologies. Thus,
junior team members can learn new approaches, techniques, and solutions, upgrading their
knowledge.
5. Enhancing security
Team members check the source code for vulnerabilities and warn developers about the threats.
So, code reviews help to create high-level safety, especially when security experts are involved.
6. Better documentation creation
Code reviews help create better documentation so that the developers can easily add new
features to the solution in the future or upgrade the existing ones.

What Are the Disadvantages of Code Review?


The disadvantages of code review are to a high degree a matter of inconvenience to developers,
taking their time and attention. Let's look into them in more detail.
• Longer time to release
Time spent on the review, further discussion of the results, and possibly corrections of the errors
found can delay the launch of the software solution. Even though automation can be used for
testing, the process will still take some extra time.
• Shifting focus from other tasks
Since the process presupposes fresh eyes on the code, the reviewers may be sometimes forced
to leave their own coding tasks in favor of their colleague’s code review. With a heavy
workload, it causes delays in other projects.

• Extra developer time


Large projects require a significant amount of time for code examination and detailed feedback.
At times, developers may sacrifice feedback quality in order to review the code within the time
available.
How to Do a Code Review
Now knowing what are the disadvantages of code review as well as its benefits, we can proceed
to the major steps for the code review process, which can be a real challenge.
Even though the practices differ from team to team, there are common points to keep in mind:
1. Set goals and metrics
It is important to define the key metrics and set clear-cut goals that include acceptable corporate
coding standards.
2. Convey your goals and expectations
Without communicating goals and expectations, the result can be unpredictable. Not knowing
what is expected, a developer may fail to properly complete the task.
3. Define the process
A clearly defined process of code review helps the whole team stay on track and minimize the
time spent on testing.
4. Use a checklist
A checklist of the critical aspects and criteria created in advance will help the reviewer not to
miss anything.
5. Require annotation from the author in advance
Annotation helps the reviewer comprehend the code and the functions of its separate blocks
better. So, encourage developers to supplement their code with annotations.
6. Review for an hour at a time and not more than that
It is not recommended to review code for more than one hour since after 60 minutes the
efficiency of a reviewer drops, and certain defects may stay unnoticed.
7. Set a process for fixing bugs detected
Fixing the errors is the ultimate goal of code review, so define the process and make sure it’s
realized in the most efficient manner.
8. Foster a positive culture
Code reviews are intended to evaluate the performance of a developer, but they should also be
used to create a positive culture and a supportive environment of learning.

9. Automate
There are things to check manually, but there are ones that can be verified with automatic tools.
Such tools can scan the entire codebase in less than a minute, spot its defects and offer solutions
right away.

Code Review Techniques

4 most popular examples of code review techniques are as follows:


1. Instant code reviewing
This technique is characterized by the simultaneous work of the author and the reviewer sitting
next to the developer, reading the code and correcting it if it’s necessary on the go. The process
is good for highly complex projects but is not favorable for companies. Two people working on
the same code mean fewer average lines per developer and more interruptions.
2. Ad-hoc reviewing of the code
It’s also a synchronous method of code review, but rather informal and spontaneous. The author
produces the code and then requests a review from his senior colleague on the shared screen.
The code is discussed over the shoulder.
This technique has many risks of missing errors because the reviewer often lacks information
on the project goal.
3. Meeting-based code reviewing
This technique is the least common. A meeting of the tech team is called after the coders
complete their work. Everybody shares ideas and suggests ways to solve problems. However,
this process requires a lot of time, decreases efficiency, and results in a loss of workforce for
the duration of the review.
4. Tool-based review
It’s an asynchronous code review technique when the author makes the code available to the
other team members for review. The reviewer checks the code on their screen providing
comments, or even amendments and notifying the coder to improve it. As soon as there are no
changes, the code is marked with no comments and gets approved.
The process is faster and more efficient, and possible at any time convenient for the reviewer.

Software documentation

is written text that accompanies a software program. It makes the life of all the members
associated with the project easier. It may contain anything from API documentation and build
notes to help content. It is a very critical process in software development and an integral part
of any software development method.
Moreover, software practitioners are typically concerned with the value, degree of usage, and
quality of the documentation during development and throughout its maintenance. Motivated by
the requirements of NovAtel Inc., a world-leading company developing software in support of
global navigation satellite systems, and based on the results of earlier systematic mapping
studies, researchers have aimed at a better understanding of the usage and quality of various
technical documents throughout software development and maintenance. For example, before
the development of any software product, the requirements are documented, which is called a
Software Requirement Specification (SRS). Requirement gathering is considered a stage of the
Software Development Life Cycle (SDLC).
Another example is a user manual that a user refers to for installing, using, and maintaining
the software application/product.

Types Of Software Documentation:


1. Requirement Documentation: It is the description of how the software shall perform and
which environment setup would be appropriate to have the best out of it. These are generated
while the software is under development and is supplied to the tester groups too.
2. Architectural Documentation: Architecture documentation is a special type of
documentation that concerns the design. It contains very little code and is more focused on
the components of the system, their roles, and working. It also shows the data flow
throughout the system.
3. Technical Documentation: These contain the technical aspects of the software like API,
algorithms, etc. It is prepared mostly for software devs.
4. End-user Documentation: As the name suggests these are made for the end user. It
contains support resources for the end user.

Purpose of Documentation:

• Due to the growing importance of software requirements, the process of determining them
needs to be effective to achieve the desired results. Such determination of requirements is often
governed by regulations and guidelines that are core to attaining a given goal.
• All of this implies that software requirements are expected to change owing to the ever-
changing technology in the world. The fact that software knowledge obtained through
development has to be modified as the needs of users and the environment transform is
inevitable.
• What is more, software requirements ensure that there is verification and testing, in
conjunction with prototyping, meetings, focus groups, and observations.
• For a software engineer, reliable documentation is typically a must. The presence of
documentation helps keep track of all aspects of an application, and it improves the quality of
the product. Its main focus areas are development, maintenance, and knowledge transfer to
other developers. Productive documentation makes information easily accessible, offers a
limited number of user entry points, helps new users learn quickly, simplifies the product, and
helps cut costs.
• For a programmer, reliable documentation is always a must. Its presence keeps track of all
aspects of an application and helps in keeping the software updated.

Principles of Software Documentation:


While writing or contributing to any software documentation, one must keep in mind the
following set of 7 principles:
1. Write from the reader's point of view:
It's important to keep in mind the targeted audience that will be learning and working through
the software's documentation to understand and implement the fully functional, robust software
application, as well as those who will be reading it in order to use the software. So, while
writing documentation it is crucial to use the simplest language possible along with domain-
specific terminology. The structure of the documentation should be organized in a clearly
viewable, navigable, and understandable format.
• If there's a lot of content, you can organize it in a glossary at the end of the document.

• List down synonyms, antonyms and difficult terminologies used.


2. Avoid unnecessary repetition:
While hyperlinking and backlinking may seem redundant at first, they help avoid actual
duplication of content. The back-end database stores every piece of information as an
individual unit and displays it in a variety of contexts, so redundancy at any point will not be
maintainable and is considered a bad practice.
3. Avoid ambiguity:
Documentation contains a lot of information regarding the versatile functionalities of the
software system, every part of it must be written with clear and precise knowledge while
avoiding any conflicting information that might cause confusion to the reader. For example, if
one term is used in different contexts, then its meaning in each must be explicitly defined to
avoid any miscommunication. This aspect of the software documentation is very
important to avoid any kind of conflicting knowledge between the stakeholders, developers and
the maintainers.
4. Follow a certain standard organization:
In order to maintain the professionalism, accuracy, and precision of the document a certain set
of principles must be followed taking reference from other software documentations that would
aid in organizing and structuring the content of the documentation in a much productive and
organized way.
5. Record a Rationale
Rationale contains a comprehensive understanding of why a certain design or development
decision was made. This part of our documentation is written & maintained by the developer or
the designer themselves for justification and verification of later needs. The rationale can be
mentioned at the start or the end of the document, although typically it appears at the start.
6. Keep the documentation updated but to an extent
This principle applies to the maintainers of the documentation of the software, because updates
are made to the software on frequent intervals. The updates may contain some bug fixes, new
feature addition, or maintenance of previous functionality. The maintainer of the documentation
must add only valuable content and avoid anything that doesn't fit or is irrelevant at that
particular time.

7. Review documentation
The documentation consists of too many web-pages collectively holding a large chunk of
information that’s serving a sole purpose – educate and spread knowledge to anyone who is
trying to understand or implement the software. While working with a lot of information it is
important ta take feedback from senior architects and make any necessary changes aligning the
documentation with its sole purpose depending on the type of documentation.
Advantages of software documentation
• The presence of documentation helps in keeping the track of all aspects of an application
and also improves the quality of the software product.
• The main focus is based on the development, maintenance, and knowledge transfer to other
developers.
• Helps development teams during development.
• Helps end-users in using the product.
• Improves overall quality of software product
• It cuts down duplicative work.
• Makes the code easier to understand.
• Helps in establishing internal coordination in work.
Disadvantages of software documentation
• The documenting code is time-consuming.
• The software development process often takes place under time pressure, due to which
many times the documentation updates don’t match the updated code.
• The documentation has no influence on the performance of an application.
• Documenting is not so fun, it’s sometimes boring to a certain extent.
The agile methodology encourages engineering teams to always concentrate on delivering
value to their customers. This should be considered in the process of producing software
documentation. A good package should be provided, whether it is a software specifications
document for programmers and testers or a software manual for end users.

Testing: Unit Testing, Black-box Testing, White-box Testing


Testing is the process of executing a program to find errors. To make our software perform well,
it should be error-free. If testing is done successfully, it will remove all the errors from the
software. In this section, we first discuss the principles of testing and then the different types
of testing.

Principles of Testing
• All the tests should meet the customer's requirements.
• To make our software perform well, testing should be performed by a third party.
• Exhaustive testing is not possible; we need an optimal amount of testing based on the
risk assessment of the application.
• All the tests to be conducted should be planned before implementation.
• It follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20% of
program components.
• Start testing with small parts and extend to large parts.
Types of Testing
There are basically 10 types of testing:
• Unit Testing
• Integration Testing
• System Testing
• Functional Testing
• Acceptance Testing
• Smoke Testing
• Regression Testing
• Performance Testing
• Security Testing
• User Acceptance Testing


Unit Testing
Unit testing is a method of testing individual units or components of a software application. It is
typically done by developers and is used to ensure that the individual units of the software are
working as intended. Unit tests are usually automated and are designed to test specific parts of
the code, such as a particular function or method. Unit testing is done at the lowest level of
the software development process, where individual units of code are tested in isolation.

Objective of Unit Testing:


The objective of Unit Testing is:
1. To isolate a section of code.
2. To verify the correctness of the code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
5. To help the developers understand the code base and enable them to make changes quickly.
6. To help with code reuse.

Types of Unit Testing:


There are 2 types of Unit Testing: Manual and Automated.
Workflow of Unit Testing: (workflow diagram omitted)

Unit Testing Techniques:


There are 3 types of Unit Testing Techniques. They are
1. Black Box Testing: This testing technique is used in covering the unit tests for input, user
interface, and output parts.
2. White Box Testing: This technique is used in testing the functional behavior of the system
by giving the input and checking the functionality output including the internal design
structure and code of the modules.

3. Gray Box Testing: This technique is used in executing the relevant test cases, test
methods, and test functions, and analyzing the code performance for the modules.

Unit Testing Tools:


Here are some commonly used Unit Testing tools:
1. Jtest
2. Junit
3. NUnit
4. EMMA
5. PHPUnit
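
As an illustration, a minimal unit test written with Python's built-in unittest framework might
look like the following; the add function under test is a hypothetical stand-in for a real unit:

import unittest

def add(a, b):
    # The unit under test: a deliberately simple, hypothetical function.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)    # verify the correctness of the code

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()                       # run the tests in isolation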

Advantages of Unit Testing: Some of the advantages of Unit Testing are listed below.
• It helps to identify bugs early in the development process before they become more
difficult and expensive to fix.
• It helps to ensure that changes to the code do not introduce new bugs.
• It makes the code more modular and easier to understand and maintain.
• It helps to improve the overall quality and reliability of the software.

Black box testing

Black-box testing is a type of software testing in which the tester is not concerned with the
internal knowledge or implementation details of the software but rather focuses on validating
the functionality based on the provided specifications or requirements.

Black box testing can be done in the following ways:


1. Syntax-Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language. For example, language can be represented by context-free
grammar. In this, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly so instead
of giving all of them separately we can group them and test only one input of each group. The
idea is to partition the input domain of the system into several equivalence classes such that each
member of the class works similarly, i.e., if a test case in one class results in some error, other
members of the class would also result in the same error.
The technique involves two steps:
1. Identification of equivalence class – Partition any input domain into a minimum of two
sets: valid values and invalid values. For example, if the valid range is 0 to 100 then select
one valid input like 49 and one invalid like 104.
2. Generating test cases – (i) To each valid and invalid class of input, assign a unique
identification number. (ii) Write test cases covering all valid and invalid classes, ensuring
that no two invalid inputs mask each other. For example, to calculate the square root of a
number, the equivalence classes will be:
(a) Valid inputs:
• A whole number which is a perfect square (the output will be an integer).
• A whole number which is not a perfect square (the output will be a decimal number).
• Positive decimals.
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers like "a", "!", ";", etc.


3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if
test cases are designed for boundary values of the input domain, then the efficiency of testing
improves and the probability of finding errors also increases. For example, if the valid range is
10 to 100, then test 10 and 100 in addition to the other valid and invalid inputs.
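
To make the last two techniques concrete, here is a small Python sketch that combines
equivalence partitioning with boundary value analysis; the validate function and its 10 to 100
range are assumptions for illustration:

def validate(n):
    # Hypothetical rule: only values from 10 to 100 (inclusive) are valid.
    return 10 <= n <= 100

# Equivalence partitioning: one representative per class is enough.
assert validate(49) is True       # valid class (10..100)
assert validate(5) is False       # invalid class (below 10)
assert validate(104) is False     # invalid class (above 100)

# Boundary value analysis: errors cluster at the edges of the range.
for n, expected in [(9, False), (10, True), (100, True), (101, False)]:
    assert validate(n) is expected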
4. Cause effect graphing – This technique establishes a relationship between logical input
called causes with corresponding actions called the effect. The causes and effects are represented
using Boolean graphs. The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop a cause-effect graph.
3. Transform the graph into a decision table.
4. Convert decision table rules to test cases.
5. For example, a cause-effect graph and its corresponding decision table can be built this way
(figures omitted). Each column of the decision table corresponds to a rule, and each rule
becomes a test case; a table with four rules therefore yields 4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a
software system.
6. Compatibility testing – The test case results depend not only on the product but also on
the infrastructure used for delivering functionality. When the infrastructure parameters are
changed, the software is still expected to work properly. Some parameters that generally affect
the compatibility of software are:
1. Processor type (Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).

Black Box Testing Type


The following are the several categories of black box testing:
1. Functional Testing
2. Regression Testing
3. Nonfunctional Testing (NFT)
Functional Testing: It verifies the system against its functional requirements.
Regression Testing: It ensures that the newly added code is compatible with the existing code;
in other words, a new software update has no adverse impact on the functionality of the
software. This is carried out after a system maintenance operation and upgrades.
Nonfunctional Testing: Nonfunctional testing is also known as NFT. It is not functional testing
of the software; it focuses on the software's performance, usability, and scalability.
Tools Used for Black Box Testing:
1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP.
What can be identified by Black Box Testing
1. Discovers missing functions, incorrect function & interface errors
2. Discover the errors faced in accessing the database
3. Discovers the errors that occur while initiating & terminating any functions.
4. Discovers the errors in performance or behaviour of software.
Features of black box testing:
1. Independent testing: Black box testing is performed by testers who are not involved in the
development of the application, which helps to ensure that testing is unbiased and impartial.
2. Testing from a user’s perspective: Black box testing is conducted from the perspective of
an end user, which helps to ensure that the application meets user requirements and is easy
to use.
3. No knowledge of internal code: Testers performing black box testing do not have access
to the application’s internal code, which allows them to focus on testing the application’s
external behaviour and functionality.
4. Requirements-based testing: Black box testing is typically based on the application’s
requirements, which helps to ensure that the application meets the required specifications.
5. Different testing techniques: Black box testing can be performed using various testing
techniques, such as functional testing, usability testing, acceptance testing, and regression
testing.
6. Easy to automate: Black box testing is easy to automate using various automation tools,
which helps to reduce the overall testing time and effort.

7. Scalability: Black box testing can be scaled up or down depending on the size and
complexity of the application being tested.
8. Limited knowledge of application: Testers performing black box testing have limited
knowledge of the application being tested, which helps to ensure that testing is more
representative of how the end users will interact with the application.
Advantages of Black Box Testing:
• The tester does not need deep functional knowledge or programming skills to implement
Black Box Testing.
• It is efficient for implementing the tests in the larger system.
• Tests are executed from the user’s or client’s point of view.
• Test cases are easily reproducible.
• It is used in finding the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing:
• There is a possibility of repeating the same tests while implementing the testing process.
• Without clear functional specifications, test cases are difficult to implement.
• It is difficult to execute the test cases because of complex inputs at different stages of testing.
• Sometimes, the reason for the test failure cannot be detected.
• Some programs in the application are not tested.
• It does not reveal the errors in the control structure.
• Working with a large sample space of inputs can be exhaustive and consumes a lot of time.

White box Testing


White box testing is also known as structural testing or code-based testing, and it is used to
test the software’s internal logic, flow, and structure. The tester creates test cases to examine
the code paths and logic flows to ensure they meet the specified requirements.
Process of White Box Testing
1. Input: Requirements, Functional specifications, design documents, source code.
2. Processing: Performing risk analysis to guide through the entire process.
3. Proper test planning: Designing test cases to cover the entire code. Execute rinse-repeat
until error-free software is reached. Also, the results are communicated.
4. Output: Preparing final report of the entire testing process.
Testing Techniques
1. Statement Coverage
In this technique, the aim is to traverse all statements at least once. Hence, each line of code is
tested. In the case of a flowchart, every node must be traversed at least once. Since all lines of
code are covered, it helps in pointing out faulty code.

2. Branch Coverage
In this technique, test cases are designed so that each branch from all decision points is traversed
at least once. In a flowchart, all edges must be traversed at least once.
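
A small illustration of the difference between the two coverage criteria, using a hypothetical
absolute function in Python:

def absolute(x):
    if x < 0:        # decision point with two branches
        x = -x
    return x

# x = -3 executes every statement (statement coverage) but never skips
# the if body; adding x = 3 takes the false branch too (branch coverage).
assert absolute(-3) == 3
assert absolute(3) == 3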

3. Condition Coverage
In this technique, all individual conditions must be covered, as shown in the following example:

    READ X, Y
    IF (X == 0 || Y == 0)
        PRINT '0'

Test cases: TC1: X = 0, Y = 55; TC2: X = 5, Y = 0.
4. Multiple Condition Coverage
In this technique, all the possible combinations of the possible outcomes of the conditions are
tested at least once. Consider the following example:

    READ X, Y
    IF (X == 0 || Y == 0)
        PRINT '0'

Test cases: TC1: X = 0, Y = 0; TC2: X = 0, Y = 5; TC3: X = 55, Y = 0; TC4: X = 55, Y = 5.
5. Basis Path Testing
In this technique, control flow graphs are made from code or flowchart and then Cyclomatic
complexity is calculated which defines the number of independent paths so that the minimal
number of test cases can be designed for each independent path. Steps:
• Make the corresponding control flow graph
• Calculate the cyclomatic complexity
• Find the independent paths
• Design test cases corresponding to each independent path
The cyclomatic complexity V(G) can be computed in any of three ways:
• V(G) = P + 1, where P is the number of predicate nodes in the flow graph
• V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
• V(G) = the number of non-overlapping regions in the graph
For an example flow graph (figure omitted), the independent paths are:
• P1: 1 – 2 – 4 – 7 – 8
• P2: 1 – 2 – 3 – 5 – 7 – 8
• P3: 1 – 2 – 3 – 6 – 7 – 8
• P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
6. Loop Testing
Loops are widely used and these are fundamental to many algorithms hence, their testing is very
important. Errors often occur at the beginnings and ends of loops.
• Simple loops: For a simple loop of size n, test cases are designed that:
1. Skip the loop entirely
2. Make only one pass through the loop
3. Make 2 passes
4. Make m passes, where m < n
5. Make n-1 and n+1 passes
• Nested loops: For nested loops, all the loops are set to their minimum count, and we start
from the innermost loop. Simple loop tests are conducted for the innermost loop and this is
worked outwards till all the loops have been tested.
• Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each. If they’re not independent, treat them like nesting.
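
A quick Python sketch of the simple-loop cases; the sum_first function and its data are invented
for illustration:

def sum_first(values, k):
    # The loop under test executes min(k, len(values)) times.
    total = 0
    for v in values[:k]:
        total += v
    return total

data = list(range(1, 11))             # a loop bound of n = 10
for k in [0, 1, 2, 5, 9, 10, 11]:     # skip, 1 pass, 2 passes, m < n, n-1, n, n+1
    print(k, sum_first(data, k))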
White box testing is performed in 2 steps:
1. The tester should understand the code well.
2. The tester should write some code for test cases and execute them.
Tools required for White box testing:
• PyUnit
• Sqlmap
• Nmap
• ParasoftJtest
• Nunit
• VeraUnit
• CppUnit
• Bugzilla
• Fiddler
• JSUnit.net
• OpenGrok
• Wireshark

• HP Fortify
• CSUnit
Features of White box Testing
1. Code coverage analysis: White box testing helps to analyze the code coverage of an
application, which helps to identify the areas of the code that are not being tested.
2. Access to the source code: White box testing requires access to the application’s source
code, which makes it possible to test individual functions, methods, and modules.
3. Knowledge of programming languages: Testers performing white box testing must have
knowledge of programming languages like Java, C++, Python, and PHP to understand the
code structure and write tests.
4. Identifying logical errors: White box testing helps to identify logical errors in the code,
such as infinite loops or incorrect conditional statements.
5. Integration testing: White box testing is useful for integration testing, as it allows testers
to verify that the different components of an application are working together as expected.
6. Unit testing: White box testing is also used for unit testing, which involves testing
individual units of code to ensure that they are working correctly.
7. Optimization of code: White box testing can help to optimize the code by identifying any
performance issues, redundant code, or other areas that can be improved.
8. Security testing: White box testing can also be used for security testing, as it allows testers
to identify any vulnerabilities in the application’s code.
9. Verification of Design: It verifies that the software’s internal design is implemented in
accordance with the designated design documents.
10. Check for Accurate Code: It verifies that the code operates in accordance with the
guidelines and specifications.
11. Identifying Coding Mistakes: It finds and fixes programming flaws in your code, including
syntactic and logical errors.
12. Path Examination: It ensures that each possible path of code execution is explored and
tests various iterations of the code.
13. Determining the Dead Code: It finds and removes any code that isn't used when the
program is running normally (dead code).
Advantages of Whitebox Testing
1. Thorough Testing: White box testing is thorough as the entire code and structures are tested.
2. Code Optimization: It results in the optimization of code removing errors and helps in
removing extra lines of code.
3. Early Detection of Defects: It can start at an earlier stage as it doesn’t require any
interface as in the case of black box testing.
4. Integration with SDLC: White box testing can be started early in the Software Development
Life Cycle.
5. Detection of Complex Defects: Testers can identify defects that cannot be detected
through other testing techniques.
6. Comprehensive Test Cases: Testers can create more comprehensive and effective test
cases that cover all code paths.
7. Testers can ensure that the code meets coding standards and is optimized for performance.
Disadvantages of White box Testing
1. Programming Knowledge and Source Code Access: Testers need to have programming
knowledge and access to the source code to perform tests.

2. Overemphasis on Internal Workings: Testers may focus too much on the internal
workings of the software and may miss external issues.
3. Bias in Testing: Testers may have a biased view of the software since they are familiar
with its internal workings.
4. Test Case Overhead: Redesigning code and rewriting code needs test cases to be written
again.
5. Dependency on Tester Expertise: Testers are required to have in-depth knowledge of the
code and programming language as opposed to black-box testing.
6. Inability to Detect Missing Functionalities: Missing functionalities cannot be detected as
the code that exists is tested.
7. Increased Production Errors: High chances of errors in production.

Cyclomatic Complexity
The cyclomatic complexity of a code section is the quantitative measure of the number
of linearly independent paths in it. It is a software metric used to indicate the complexity of a
program. It is computed using the Control Flow Graph of the program. The nodes in the graph
indicate the smallest group of commands of a program, and a directed edge connects two nodes if the second command might immediately follow the first command.
For example, if the source code contains no control flow statement then its cyclomatic
complexity will be 1, and the source code contains a single path in it. Similarly, if the source
code contains one if condition then cyclomatic complexity will be 2 because there will be two
paths one for true and the other for false.

Mathematically, for a structured program, the control flow graph is a directed graph in which an edge joins two basic blocks of the program if control may pass from the first to the second.

So, cyclomatic complexity M would be defined as,


M = E – N + 2P where E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components

In the case of a single method, P is equal to 1. So, for a single subroutine, the formula can be
defined as
M = E – N + 2
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
How to Calculate Cyclomatic Complexity?
Steps that should be followed in calculating cyclomatic complexity and designing test cases are:
• Construction of the control flow graph with nodes and edges from the code
• Identification of independent paths
• Calculation of cyclomatic complexity
• Design of test cases


Consider a section of code such as:
A = 10
IF B > C THEN
A=B
ELSE
A=C
ENDIF
Print A
Print B
Print C
Control Flow Graph of the above code

The cyclomatic complexity for the above code is calculated from the control flow graph. The graph has seven nodes and seven edges, hence the cyclomatic complexity is 7 – 7 + 2 = 2.
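The formula can be wrapped in a trivial C helper; this is just a sketch restating M = E – N + 2P with the counts from the worked example above:

#include <stdio.h>

/* Minimal sketch: compute cyclomatic complexity M = E - N + 2P from
   counts read off a control flow graph. The values passed below are
   from the worked example above (E = 7, N = 7, P = 1). */
int cyclomatic(int edges, int nodes, int components) {
    return edges - nodes + 2 * components;
}

int main(void) {
    printf("V(G) = %d\n", cyclomatic(7, 7, 1)); /* prints V(G) = 2 */
    return 0;
}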
Use of Cyclomatic Complexity
• Determining the independent path executions has proven to be very helpful for developers
and testers.
• It can make sure that every path has been tested at least once.
• Thus it helps to focus more on uncovered paths.

• Code coverage can be improved.


• Risks associated with the program can be evaluated.
• These metrics being used earlier in the program help in reducing the risks.
Advantages of Cyclomatic Complexity
• It can be used as a quality metric, given the relative complexity of various designs.
• It is able to compute faster than Halstead’s metrics.
• It is used to measure the minimum effort and best areas of concentration for testing.
• It is able to guide the testing process.
• It is easy to apply.
Disadvantages of Cyclomatic Complexity
• It is the measure of the program’s control complexity and not the data complexity.
• In this, nested conditional structures are harder to understand than non-nested structures.
• In the case of simple comparisons and decision structures, it may give a misleading figure.

Mutation Testing
Mutation Testing is a type of software testing that is performed to design new software tests and also to evaluate the quality of already existing software tests. Mutation testing involves modifying a program in small ways. It focuses on helping the tester develop effective tests or locate weaknesses in the test data used for the program.

Mutation testing can be applied to design models, specifications, databases, tests, and XML. It is a structural testing technique, which uses the structure of the code to guide the testing process. It can be described as the process of rewriting the source code in small ways in order to remove the redundancies in the source code.
Objective of Mutation Testing:
The objective of mutation testing is:
• To identify pieces of code that are not tested properly.
• To identify hidden defects that can’t be detected using other testing methods.
• To discover new kinds of errors or bugs.
• To calculate the mutation score.
• To study error propagation and state infection in the program.
• To assess the quality of the test cases.
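The mutation score mentioned in the objectives above has a standard definition (not tied to any particular tool):

Mutation Score = (Number of Killed Mutants / Total Number of Mutants) × 100%

A mutant is said to be killed when at least one test case fails on it; a score of 100% means the test suite detects every mutant.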
Types of Mutation Testing:
Mutation testing is basically of 3 types:
1. Value Mutations:
In this type of testing the values are changed to detect errors in the program. Basically, a small value is changed to a larger value, or a larger value is changed to a smaller value. Constants are usually the values changed in this testing.
Example:
Initial Code:

int mod = 1000000007;


int a = 12345678;
int b = 98765432;

int c = (a + b) % mod;

Changed Code:

int mod = 1007;


int a = 12345678;
int b = 98765432;
int c = (a + b) % mod;
2. Decision Mutations:
In decision mutations, logical or arithmetic operators are changed to detect errors in the
program.
Example:
Initial Code:

if(a < b)
c = 10;
else
c = 20;

Changed Code:

if(a > b)
c = 10;
else
c = 20;
3. Statement Mutations:
In statement mutations, a statement is deleted or replaced by some other statement.
Example:
Initial Code:

if(a < b)
c = 10;
else
c = 20;

Changed Code:

if(a < b)
d = 10;
else
d = 20;
Tools used for Mutation Testing :
• Judy
• Jester
• Jumble
• PIT

• MuClipse.
Advantages of Mutation Testing:
• It brings a good level of error detection in the program.
• It discovers ambiguities in the source code.
• It finds and solves the issues of loopholes in the program.
• It helps the testers to write or automate the better test cases.
• It provides more efficient programming source code.
Disadvantages of Mutation Testing:
• It is highly costly and time-consuming.
• It is not applicable for black box testing.
• Some mutations are complex, and hence it is difficult to implement or run them against various
test cases.
• Here, the team members who are performing the tests should have good programming
knowledge.
• Selection of correct automation tool is important to test the programs.

DEBUGGING

What is Debugging?
Debugging is the process of finding and resolving defects or problems within a computer
program that prevent the correct operation of computer software or a system.
Need for debugging
Once errors are identified in a program's code, it is necessary to first establish the precise program statements responsible for the errors and then to fix them.
Challenges in Debugging
There are a lot of problems while performing debugging. These are the following:
1. Debugging is done by the individual who developed the software, and it is difficult for that person to acknowledge that an error was made.
2. Debugging is typically performed under a tremendous amount of pressure to fix the reported error as quickly as possible.
3. It can be difficult to accurately reproduce input conditions.
4. Compared to the other software development activities, relatively little research, literature, and formal preparation exist on the process of debugging.
Debugging Approaches
The following are a number of approaches popularly adopted by programmers for debugging.
1. Brute Force Method
This is the most common technique of debugging, but it is the least efficient method. In this approach, the program is loaded with print statements to print the intermediate values, with the hope that some of the printed values will help to identify the statement in error. This approach becomes more systematic with the use of a symbolic debugger (also known as a source code debugger), because the values of different variables can be easily checked, and breakpoints and watch-points can be easily set to check the values of variables effortlessly.
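As a hedged illustration of the brute-force approach (the function and data are invented), print statements expose intermediate values so the faulty statement can be located:

#include <stdio.h>

/* Brute-force debugging sketch: debug prints on stderr expose the
   intermediate values inside the loop. */
int average(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += a[i];
        fprintf(stderr, "DEBUG: i=%d a[i]=%d sum=%d\n", i, a[i], sum);
    }
    return sum / n; /* printing n as well would reveal a divide-by-zero when n == 0 */
}

int main(void) {
    int a[] = {4, 8, 15};
    printf("avg=%d\n", average(a, 3));
    return 0;
}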
2. Backtracking
This is also a fairly common approach. In this approach, starting from the statement at which an error symptom has been observed, the source code is traced backward until the error is discovered. Unfortunately, as the number of source lines to be traced back increases, the number of potential backward paths increases and may become unmanageably large, thus limiting the use of this approach.
3. Cause Elimination Method
In this approach, a list of causes that could possibly have contributed to the error symptom is developed, and tests are conducted to eliminate each cause. A related technique of identification of the error from the error symptom is software fault tree analysis.
4. Program Slicing
This technique is similar to backtracking. Here the search space is reduced by defining slices. A slice of a program for a particular variable at a particular statement is the set of source lines preceding this statement that can influence the value of that variable.
Debugging Guidelines
Debugging is often carried out by programmers based on their ingenuity. The following are some general guidelines for effective debugging:
1. Many times debugging requires a thorough understanding of the program design. Trying to debug based on a partial understanding of the system design and implementation may require an excessive amount of effort even for simple problems.
2. Debugging may sometimes even require a full redesign of the system. In such cases, a common mistake that novice programmers often make is attempting to fix not the error but its symptoms.
3. One must watch out for the possibility that an error correction may introduce new errors. So after each round of error-fixing, regression testing should be carried out.

Integration Testing
Integration testing is the process of testing the interface between two software units or modules.
It focuses on determining the correctness of the interface. The purpose of integration testing is
to expose faults in the interaction between integrated units. Once all the modules have been unit-
tested, integration testing is performed.
Integration testing is a software testing technique that focuses on verifying the interactions and
data exchange between different components or modules of a software application. The goal of
integration testing is to identify any problems or bugs that arise when different components are
combined and interact with each other. Integration testing is typically performed after unit testing
and before system testing. It helps to identify and resolve integration issues early in the
development cycle, reducing the risk of more severe and costly problems later on.
Integration testing can be done by picking modules one by one, so that a proper sequence is followed. Following the proper sequence also ensures that no integration scenario is missed. Exposing the defects in the interfaces, and in the interaction between the integrated units, is the major focus of integration testing.
Integration test approaches – There are four types of integration testing approaches. Those
approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the
modules are combined and the functionality is verified after the completion of individual module
testing. In simple words, all the modules of the system are simply put together and tested. This
approach is practicable only for very small systems. If an error is found during the integration testing, it is very difficult to localize the error as the error may potentially belong to any of the
modules being integrated. So, debugging errors reported during Big Bang integration testing is
very expensive to fix.

Advantages:
1. It is convenient for small systems.
2. Simple and straightforward approach.
3. Can be completed quickly.
4. Does not require a lot of planning or coordination.
5. May be suitable for small systems or projects with a low degree of interdependence
between components.
Disadvantages:
1. There will be quite a lot of delay because you would have to wait for all the modules to be
integrated.
2. High-risk critical modules are not isolated and tested on priority since all modules are
tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
7. May not provide enough visibility into the interactions and data exchange between
components.
8. This can result in a lack of confidence in the system’s stability and reliability.
9. This can lead to decreased efficiency and productivity.
10. This may result in a lack of confidence in the development team.
11. This can lead to system failure and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at the lower levels is tested with the higher modules until all modules have been tested. The primary purpose of this integration testing is that each subsystem tests the interfaces among the various modules making up the subsystem. This integration testing uses test drivers to drive and pass appropriate data to the lower-level modules.
Advantages:
• In bottom-up testing, no stubs are required.
• A principal advantage of this integration testing is that several disjoint subsystems can be
tested simultaneously.
• It is easy to create the test conditions.
• Best for applications that use a bottom-up design approach.
• It is Easy to observe the test results.
Disadvantages:
• Driver modules must be produced.
• Complexity arises when the system is made up of a large number of small subsystems.
• Until the higher-level modules are integrated, no working model of the system can be demonstrated.
3. Top-Down Integration Testing – The top-down integration testing technique is used to simulate the behaviour of the lower-level modules that are not yet integrated. In this integration testing, testing takes place from top to bottom: first the high-level modules are tested, then the low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system is working as intended.
Advantages:
• Separately debugged module.
• Few or no drivers needed.
• It is more stable and accurate at the aggregate level.
• Easier isolation of interface errors.
• In this, design defects can be found in the early stages.
Disadvantages:
• Needs many stubs.
• Modules at the lower levels are tested inadequately.
• It is difficult to observe the test output.
• Stub design is difficult.
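The stubs and drivers mentioned above can be illustrated with a small, hypothetical C sketch (all names and the canned tax rate are invented):

#include <stdio.h>

/* In top-down integration, a stub stands in for an un-integrated
   lower-level module; in bottom-up integration, a driver calls the
   module under test and feeds it data. */

/* Stub: replaces the real tax-lookup module with a canned answer. */
double get_tax_rate_stub(const char *state) {
    (void)state;  /* real lookup not integrated yet */
    return 0.08;  /* canned value, enough to test the caller */
}

/* Higher-level module under test calls the stub. */
double price_with_tax(double price) {
    return price * (1.0 + get_tax_rate_stub("XX"));
}

/* Driver: exercises the module directly and checks the output. */
int main(void) {
    printf("total=%.2f\n", price_with_tax(100.0)); /* expect 108.00 */
    return 0;
}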
4. Mixed Integration Testing – Mixed integration testing is also called sandwiched integration testing. It follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested. In the bottom-up approach, testing can start only after the bottom-level modules are ready. This sandwich or mixed approach overcomes this shortcoming of the top-down and bottom-up approaches. It is also called hybrid integration testing. Both stubs and drivers are used in mixed integration testing.
Advantages:
• Mixed approach is useful for very large projects having several sub projects.
• This Sandwich approach overcomes this shortcoming of the top-down and bottom-up
approaches.
• Parallel test can be performed in top and bottom layer tests.
Disadvantages:
• For mixed integration testing, it requires very high cost because one part has a Top-down
approach while another part has a bottom-up approach.
• This integration testing cannot be used for smaller systems with huge interdependence
between different modules.
Applications:
1. Identify the components: Identify the individual components of your application that need
to be integrated. This could include the frontend, backend, database, and any third-party
services.
2. Create a test plan: Develop a test plan that outlines the scenarios and test cases that need
to be executed to validate the integration points between the different components. This
could include testing data flow, communication protocols, and error handling.
3. Set up test environment: Set up a test environment that mirrors the production
environment as closely as possible. This will help ensure that the results of your integration
tests are accurate and reliable.
4. Execute the tests: Execute the tests outlined in your test plan, starting with the most
critical and complex scenarios. Be sure to log any defects or issues that you encounter
during testing.
5. Analyze the results: Analyze the results of your integration tests to identify any defects or
issues that need to be addressed. This may involve working with developers to fix bugs or
make changes to the application architecture.

6. Repeat testing: Once defects have been fixed, repeat the integration testing process to
ensure that the changes have been successful and that the application still works as
expected.

System Testing
System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system meets the
specified requirements and if it is suitable for delivery to the end-users. This type of testing is
performed after the integration testing and before the acceptance testing.
System Testing is a type of software testing that is performed on a complete integrated system
to evaluate the compliance of the system with the corresponding requirements. In system testing,
integration testing passed components are taken as input. The goal of integration testing is to
detect any irregularity between the units that are integrated together. System testing detects
defects within both the integrated units and the whole system. The result of system testing is the
observed behavior of a component or a system when it is tested. System Testing is carried out
on the whole system in the context of either system requirement specifications or functional
requirement specifications or in the context of both. System testing tests the design and behavior
of the system and also the expectations of the customer. It is performed to test the system beyond
the bounds mentioned in the software requirements specification (SRS). System Testing is
basically performed by a testing team that is independent of the development team that helps to
test the quality of the system impartial. It has both functional and non-functional testing. System
Testing is a black-box testing. System Testing is performed after the integration testing and
before the acceptance testing.
System Testing Process: System Testing is performed in the following steps:
• Test Environment Setup: Create testing environment for the better quality testing.
• Create Test Case: Generate test case for the testing process.
• Create Test Data: Generate the data that is to be tested.
• Execute Test Case: After the generation of the test case and the test data, test cases are
executed.
• Defect Reporting: Defects in the system are detected and reported.
• Regression Testing: It is carried out to test the side effects of the testing process.
• Log Defects: The detected defects are logged and fixed in this step.
• Retest: If a test is not successful, the test is performed again.

Types of System Testing:


• Performance Testing: Performance Testing is a type of software testing that is carried out
to test the speed, scalability, stability and reliability of the software product or application.
• Load Testing: Load Testing is a type of software Testing which is carried out to determine
the behavior of a system or software product under extreme load.
• Stress Testing: Stress Testing is a type of software testing performed to check the
robustness of the system under the varying loads.
• Scalability Testing: Scalability Testing is a type of software testing which is carried out to
check the performance of a software application or system in terms of its capability to scale
up or scale down the number of user request load.
Tools used for System Testing:
1. JMeter
2. Galen Framework
3. Selenium
4. HP Quality Center/ALM
5. IBM Rational Quality Manager
6. Microsoft Test Manager
7. Appium
8. LoadRunner
9. Gatling
10. Apache JServ
11. SoapUI

Advantages of System Testing :


• The testers do not require more knowledge of programming to carry out this testing.
• It will test the entire product or software so that we will easily detect the errors or defects
which cannot be identified during the unit testing and integration testing.
• The testing environment is similar to that of the real time production or business
environment.
• It checks the entire functionality of the system with different test scripts and also it covers
the technical and business requirements of clients.
• After this testing, the product will almost cover all the possible bugs or errors and hence the
development team will confidently go ahead with acceptance testing.
Disadvantages of System Testing :
• This testing is a more time-consuming process than other testing techniques, since it checks
the entire product or software.
• The cost of the testing is high, since it covers the testing of the entire software.
• It needs a good debugging tool, otherwise hidden errors will not be found.

Regression Testing
Regression testing is the process of testing the modified parts of the code, and the parts that might get affected by the modifications, to ensure that no new errors have been introduced in the software after the modifications have been made. Regression means the return of something, and in the software field, it refers to the return of a bug.
When to do regression testing?
• When a new functionality is added to the system and the code has been modified to absorb
and integrate that functionality with the existing code.
• When some defect has been identified in the software and the code is debugged to fix it.
• When the code is modified to optimize its working.
Process of Regression testing:
Firstly, whenever we make some changes to the source code for any reason like adding new
functionality, optimization, etc. then our program when executed fails in the previously
designed test suite for obvious reasons. After the failure, the source code is debugged in order
to identify the bugs in the program. After identification of the bugs in the source code,
appropriate modifications are made. Then appropriate test cases are selected from the already
existing test suite which covers all the modified and affected parts of the source code. We
can add new test cases if required. In the end, regression testing is performed using the
selected test cases.

Techniques for the selection of Test cases for Regression Testing:


• Select all test cases: In this technique, all the test cases are selected from the already
existing test suite. It is the simplest and safest technique, but not very efficient.
• Select test cases randomly: In this technique, test cases are selected randomly from the
existing test-suite, but it is only useful if all the test cases are equally good in their fault
detection capability which is very rare. Hence, it is not used in most of the cases.
• Select modification traversing test cases: In this technique, only those test cases are
selected which cover and test the modified portions of the source code and the parts which are
affected by these modifications.
• Select higher priority test cases: In this technique, priority codes are assigned to each test
case of the test suite based upon their bug detection capability, customer requirements, etc.
After assigning the priority codes, test cases with the highest priorities are selected for the
process of regression testing. The test case with the highest priority has the highest rank. For
example, test case with priority code 2 is less important than test case with priority code 1.
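A minimal sketch of the higher-priority selection technique, assuming invented test-case names and priority codes:

#include <stdio.h>
#include <stdlib.h>

/* Test cases carry a priority code (1 = highest); the suite is sorted
   so that the highest-priority cases are selected and run first. */
typedef struct {
    const char *name;
    int priority; /* 1 is more important than 2, etc. */
} TestCase;

static int by_priority(const void *a, const void *b) {
    return ((const TestCase *)a)->priority - ((const TestCase *)b)->priority;
}

int main(void) {
    TestCase suite[] = {
        {"checkout_flow", 2}, {"login", 1}, {"report_export", 3}
    };
    int n = sizeof suite / sizeof suite[0];
    qsort(suite, n, sizeof suite[0], by_priority);
    for (int i = 0; i < n; i++)
        printf("run %s (priority %d)\n", suite[i].name, suite[i].priority);
    return 0;
}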

Tools for regression testing:


In regression testing, we generally select the test cases from the existing test suite itself; hence, we need not compute their expected output, and the process can be easily automated for this reason. Automating the process of regression testing is very effective and time-saving.
Most commonly used tools for regression testing are:
• Selenium
• WATIR (Web Application Testing In Ruby)
• QTP (Quick Test Professional)
• RFT (Rational Functional Tester)
• Winrunner
• Silktest

Advantages of Regression Testing:

• It ensures that no new bugs have been introduced after adding new functionalities to the
system.
• As most of the test cases used in regression testing are selected from the existing test suite,
their expected outputs are already known. Hence, it can be easily automated with automation
tools.
Disadvantages of Regression Testing:
• It can be time and resource consuming if automated tools are not used.
• It is required even after very small changes in the code.

Software Reliability

Software Reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specific period of time.

Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the input are free of error.

Software reliability is an essential component of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a certain level of reliability, system developers tend to push complexity into the software layer, with the speedy growth of system size and the ease of doing so by upgrading the software.

For example, large next-generation aircraft will have over 1 million source lines of software on-
board; next-generation air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will have
over 5 million source lines of software. While the complexity of software is inversely associated
with software reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.

Software Failure Mechanisms

The software failure can be classified as:

Transient failure: These failures only occur with specific inputs.

Permanent failure: This failure appears on all inputs.

Recoverable failure: System can recover without operator help.

Unrecoverable failure: System can recover with operator help only.

Non-corruption failure: Failure does not corrupt system state or data.

Corrupting failure: It damages the system state or data.

Software failures may be due to bugs, ambiguities, oversights or misinterpretation of the specification that the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing, incorrect or unexpected usage of the software, or other unforeseen problems.

Hardware vs. Software Reliability

Hardware Reliability:
o Hardware faults are mostly physical faults.
o Hardware components generally fail due to wear and tear.
o In hardware, design faults may also exist, but physical faults generally dominate.
o Hardware exhibits the classic bathtub failure curve, in which periods A, B, and C stand for the burn-in phase, the useful-life phase, and the end-of-life phase respectively.

Software Reliability:
o Software faults are design faults, which are tough to visualize, classify, detect, and correct.
o Software components fail due to bugs.
o In software, there is no strict counterpart of the hardware "manufacturing" process, unless the simple action of uploading software modules into place counts. Therefore, the quality of the software does not change once it is uploaded into storage and starts running.
o Software reliability does not show the same features as hardware; a different curve results if we project software reliability on the same axes.

There are two significant differences between hardware and software curves are:

One difference is that in the last stage, the software does not have an increasing failure rate as
hardware does. In this phase, the software is approaching obsolescence; there are no motivations
for any upgrades or changes to the software. Therefore, the failure rate will not change.

The second difference is that in the useful-life phase, the software will experience a radical increase in failure rate each time an upgrade is made. The failure rate levels off gradually, partly because the defects found after the upgrade are fixed.

The upgrades in the above curve signify feature upgrades, not upgrades for reliability. For feature upgrades, the complexity of the software is likely to increase, since the functionality of the software is enhanced. Even error fixes may be a reason for more software failures, if the bug fix induces other defects into the software. For reliability upgrades, it is likely to incur a drop in the software failure rate, if the objective of the upgrade is enhancing software reliability, such as a redesign or reimplementation of some modules using better engineering approaches, such as the clean-room method.

A partial list of the distinct features of software compared to hardware is listed below:

Failure cause: Software defects are primarily design defects.

Wear-out: Software does not have an energy-related wear-out phase. Bugs can arise without
warning.

Repairable system: Periodic restarts can help fix software problems.

Time dependency and life cycle: Software reliability is not a function of operational time.

Environmental factors: These do not affect software reliability, except that they may affect program inputs.

Reliability prediction: Software reliability cannot be predicted from any physical basis since it
depends entirely on human factors in design.

Redundancy: It cannot improve Software reliability if identical software elements are used.

Interfaces: Software interfaces are purely conceptual rather than visual.

Failure rate motivators: It is generally not predictable from analyses of separate statements.

Built with standard components: Well-understood and extensively tested standard element will
help improve maintainability and reliability. But in the software industry, we have not observed this trend. Code reuse has been around for some time but to a minimal extent. There are no standard
elements for software, except for some standardized logic structures.

Software Reliability Measurement Techniques

Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric is to be used depends upon the type of system to which it applies and the requirements of the application domain.

Measuring software reliability is a hard problem because we do not have a good understanding of the nature of software. It is difficult to find a suitable way to measure software reliability, and most of the aspects connected to software reliability do not even have a uniform definition. If we cannot measure reliability directly, something can be measured that reflects the features related to reliability.

The current methods of software reliability measurement can be divided into four categories:

1. Product Metrics

Product metrics are those which are measured from the artifacts produced, i.e., requirement specification documents, system design documents, the source code, etc. These metrics help in assessing whether the product is good enough through reports on attributes like usability, reliability, maintainability, and portability. In these, measurements are taken from the actual body of the source code.

i. Software size is thought to be reflective of complexity, development effort, and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an initial intuitive approach to measuring software size. The basis of LOC is that program length can be used as a predictor of program characteristics such as effort and ease of maintenance.
ii. Function point metric is a technique to measure the functionality of proposed software development based on the count of inputs, outputs, master files, inquiries, and interfaces. It is a measure of the functional complexity of the program and is independent of the programming language.
iii. Test coverage metrics estimate fault and reliability by performing tests on software products, assuming that software reliability is a function of the portion of software that has been successfully verified or tested.
iv. Complexity is directly linked to software reliability, so representing complexity is essential. Complexity-oriented metrics determine the complexity of a program's control structure by simplifying the code into a graphical representation. The representative metric is McCabe's complexity metric.
v. Quality metrics measure the quality at various steps of software product development. A vital quality metric is Defect Removal Efficiency (DRE). DRE provides a measure of quality because of the different quality assurance and control activities applied throughout the development process.
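DRE itself is commonly computed as

DRE = E / (E + D)

where E is the number of errors found before delivery and D is the number of defects found after delivery. A DRE value close to 1 indicates that very few defects escape to the customer.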

2. Project Management Metrics

Project metrics define project characteristics and execution. If the project is properly managed, this helps us to achieve better products. A relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using a better development process, risk management process, and configuration management process.

These metrics are:

o Number of software developers


o Staffing pattern over the life-cycle of the software
o Cost and schedule
o Productivity

3. Process Metrics

Process metrics quantify useful attributes of the software development process and its environment. They tell whether the process is functioning optimally, as they report on characteristics like cycle time and rework time. The goal of process metrics is to do the right job the first time through the process. The quality of the product is a direct function of the process, so process metrics can be used to estimate, monitor, and improve the reliability and quality of software. Process metrics describe the effectiveness and quality of the processes that produce the software product.

Examples are:

o The effort required in the process


o Time to produce the product
o Effectiveness of defect removal during development
o Number of defects found during testing
o Maturity of the process

4. Fault and Failure Metrics

A fault is a defect in a program which appears when the programmer makes an error, and it causes a failure when executed under particular conditions. These metrics are used to determine the failure-free execution of software.

Reliability Metrics

Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric is to be used depends upon the type of system to which it applies and the requirements of the application domain.

Some reliability metrics which can be used to quantify the reliability of the software product are
as follows:

1. Mean Time to Failure (MTTF)

MTTF is described as the time interval between two successive failures. An MTTF of 200 means that one failure can be expected every 200 time units. The time units are entirely dependent on the system, and they can even be stated in terms of the number of transactions. MTTF is suitable for systems with large transactions.

For example, it is suitable for computer-aided design systems, where a designer will work on a design for several hours, as well as for word-processor systems.

To measure MTTF, we can record the failure data for n failures. Let the failures appear at the time instants t1, t2, ..., tn. MTTF can then be calculated as

MTTF = ( Σ (t_(i+1) – t_i) ) / (n – 1), where the sum runs over i = 1 to n – 1
2. Mean Time to Repair (MTTR)

Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.

3. Mean Time Between Failures (MTBF)

We can merge the MTTF and MTTR metrics to get the MTBF metric:

MTBF = MTTF + MTTR

Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to appear only after 300 hours. In this metric, the time measurements are real time, not the execution time as in MTTF.

4. Rate of occurrence of failure (ROCOF)

It is the number of failures appearing in a unit time interval, i.e., the number of unexpected events over a specific time of operation. ROCOF is the frequency with which unexpected behaviour is likely to appear. A ROCOF of 0.02 means that two failures are likely to occur in each 100 operational time unit steps. It is also called the failure intensity metric.

5. Probability of Failure on Demand (POFOD)

POFOD is described as the probability that the system will fail when a service is requested. It is the number of system failures given a certain number of system inputs.

A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential measure for safety-critical systems, and it is relevant for protection systems where services are demanded only occasionally.

6. Availability (AVAIL)

Availability is the probability that the system is available for use at a given time. It takes into account the repair time and the restart time for the system. An availability of 0.995 means that in every 1000 time units, the system is likely to be available for 995 of them. It is the percentage of time that a system is available for use, taking into account planned and unplanned downtime. If a system is down an average of four hours out of every 100 hours of operation, its AVAIL is 96%.
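The metrics above can be combined in a small C sketch; the failure instants and repair time below are made-up data, and availability is computed here with the common formulation MTTF / (MTTF + MTTR):

#include <stdio.h>

/* MTTF averages the gaps between successive failure instants,
   MTBF = MTTF + MTTR, and AVAIL = MTTF / (MTTF + MTTR).
   All data below are assumed, for illustration only. */
int main(void) {
    double t[] = {100.0, 280.0, 500.0, 760.0}; /* failure instants (hours) */
    int n = sizeof t / sizeof t[0];
    double mttr = 4.0; /* assumed average repair time in hours */

    double sum = 0.0;
    for (int i = 0; i < n - 1; i++)
        sum += t[i + 1] - t[i];
    double mttf = sum / (n - 1);       /* 220.0 hours for this data */
    double mtbf = mttf + mttr;
    double avail = mttf / (mttf + mttr);

    printf("MTTF=%.1f h  MTBF=%.1f h  AVAIL=%.3f\n", mttf, mtbf, avail);
    return 0;
}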

Different Types of Software Metrics are:-

Requirements Reliability Metrics

Requirements denote what features the software must include. They specify the functionality that must be contained in the software. The requirements must be written such that there is no misconception between the developer and the client. The requirements must follow a valid structure to avoid the loss of valuable data.

The requirements should be thorough and detailed, so that the design stage becomes simple. The requirements should not include inadequate data. Requirement reliability metrics evaluate the above-said quality factors of the requirements document.

Design and Code Reliability Metrics

The quality factors that exist in design and coding are complexity, size, and modularity. Complex modules are tough to understand, and there is a high probability of bugs occurring in them. Reliability will be reduced if modules have a combination of high complexity and large size, or of high complexity and small size. These metrics are also applicable to object-oriented code, but additional metrics are required there to evaluate the quality.

Testing Reliability Metrics

These metrics use two methods to calculate reliability.

First, they ensure that the system performs the tasks that are specified in the requirements. Because of this, bugs due to a lack of functionality are reduced.

The second method is evaluating the code, finding the bugs, and fixing them. To ensure that the system includes the functionality specified, test plans are written that include multiple test cases. Each test case is based on one system state and tests some tasks that are based on an associated set of requirements. The goal of an effective verification program is to ensure that every element is tested, the implication being that if the system passes the test, the requirements' functionality is contained in the delivered system.

Software Fault Tolerance

Software fault tolerance is the ability of software to detect and recover from a fault that is happening, or has already happened, in either the software or the hardware of the system in which the software is running, in order to provide service as per the specification.

Software fault tolerance is a necessary component to construct the next generation of highly
available and reliable computing systems from embedded systems to data warehouse systems.

To adequately understand software fault tolerance, it is important to understand the nature of the
problem that software fault tolerance is supposed to solve.

Software faults are all design faults. Software manufacturing, the reproduction of software, is considered to be perfect. The source of the problem being solely design faults makes software very different from almost any other system in which fault tolerance is the desired property.

1. Recovery Block

The recovery block method is a simple technique developed by Randell. The recovery block operates with an adjudicator, which confirms the results of various implementations of the same algorithm. In a system with recovery blocks, the system view is broken down into fault-recoverable blocks.

The entire system is constructed of these fault-tolerant blocks. Each block contains at least a
primary, secondary, and exceptional case code along with an adjudicator. The adjudicator is the
component, which determines the correctness of the various blocks to try.

The adjudicator should be kept somewhat simple to maintain execution speed and aid correctness. Upon first entering a unit, the adjudicator first executes the primary alternate. (There may be N alternates in a unit which the adjudicator may try.) If the adjudicator determines that the primary alternate failed, it then tries to roll back the state of the system and tries the secondary alternate.

If the adjudicator does not accept the results of any of the alternates, it then invokes the exception
handler, which then indicates the fact that the software could not perform the requested operation.

The recovery block technique increases the pressure on the specification to be precise enough to allow the creation of multiple alternatives that are functionally the same. This problem is further discussed in the context of the N-version software method.
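A hedged, minimal C sketch of a recovery block (the acceptance test, the two alternates, and the values are invented for illustration; a real system would also restore saved state on rollback):

#include <stdio.h>
#include <math.h> /* link with -lm */

/* Adjudicator: an acceptance test checking that result^2 is close to x. */
static int acceptable(double x, double result) {
    return fabs(result * result - x) < 1e-6;
}

static double primary(double x)   { return sqrt(x); }  /* primary alternate */
static double secondary(double x) {                    /* backup alternate  */
    double r = x / 2.0;                                /* Newton iteration  */
    for (int i = 0; i < 50; i++) r = 0.5 * (r + x / r);
    return r;
}

int main(void) {
    double x = 2.0;                           /* saved input = recovery point */
    double r = primary(x);
    if (!acceptable(x, r)) r = secondary(x);  /* roll back and try backup */
    if (!acceptable(x, r)) fprintf(stderr, "exception handler invoked\n");
    else printf("sqrt(%.1f) = %.6f\n", x, r);
    return 0;
}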

2. N-Version Software

The N-version software method attempts to parallel the traditional hardware fault tolerance concept of N-way redundant hardware. In an N-version software system, every module is implemented in up to N different versions. Each variant accomplishes the same function, but hopefully in a different way. Each version then submits its answer to a voter or decider, which determines the correct answer and returns that as the result of the module.

This system can hopefully overcome the design faults present in most software by relying upon
the design diversity concept. An essential distinction in N-version software is the fact that the
system could include multiple types of hardware using numerous versions of the software.

N-version software can only succeed and successfully tolerate faults if the required design diversity is met. The dependence on appropriate specifications in N-version software (and recovery blocks) cannot be stressed enough.
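A minimal C sketch of the voting idea (the three versions below are hypothetical implementations of an absolute-value routine, one deliberately faulty):

#include <stdio.h>

/* Three independently written "versions" of the same function. */
static int v1(int x) { return x < 0 ? -x : x; }
static int v2(int x) { return x >= 0 ? x : -x; }
static int v3(int x) { return x; } /* faulty version: wrong for x < 0 */

int main(void) {
    int x = -7;
    int a = v1(x), b = v2(x), c = v3(x);
    /* 2-out-of-3 majority voter */
    int result = (a == b || a == c) ? a : b;
    printf("voted result = %d\n", result); /* masks the fault in v3 */
    return 0;
}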

3. N-Version Software and Recovery Blocks

The differences between the recovery block technique and the N-version technique are not too
numerous, but they are essential. In traditional recovery blocks, each alternative would be executed
serially until an acceptable solution is found as determined by the adjudicator. The recovery block
method has been extended to contain concurrent execution of the various alternatives.

The N-version techniques have always been designed to be implemented using N-way hardware
concurrently. In a serial retry system, the cost in time of trying multiple methods may be too
expensive, especially for a real-time system. Conversely, concurrent systems need the expense of
N-way hardware and a communications network to connect them.

The recovery block technique requires that each module have its own specific adjudicator; in the N-version method, a single decider may be used. The recovery block technique, assuming that the programmer can create a sufficiently simple adjudicator, will create a system which is difficult to drive into an incorrect state.

Software Reliability Models

A software reliability model indicates the form of a random process that describes the behavior of software failures with respect to time.

Software reliability models have appeared as people try to understand the features of how and
why software fails, and attempt to quantify software reliability.

Over 200 models have been established since the early 1970s, but how to quantify software
reliability remains mostly unsolved.

There is no single model that can be used in all situations. No model is complete or even representative.
Most software reliability models contain the following parts:
o Assumptions
o Factors
o A mathematical function that relates reliability to the factors; the function is generally higher-order exponential or logarithmic.
Software Reliability Modeling Techniques

Both kinds of modeling methods are based on observing and accumulating failure data and
analyzing with statistical inference.

Differentiate between software reliability prediction models and software reliability estimation models:

Basis: Prediction Models vs. Estimation Models

Data Reference: Prediction models use historical information; estimation models use data from the current software development effort.

When used in development cycle: Prediction models are usually made before the development or test phases, and can be used as early as the concept phase. Estimation models are usually made later in the life cycle (after some data have been collected); they are not typically used in the concept or development phases.

Time Frame: Prediction models predict reliability at some future time; estimation models estimate reliability at either the present time or some future time.

Reliability Models

A reliability growth model is a numerical model of software reliability which predicts how software reliability should improve over time as errors are discovered and repaired. These models help the manager decide how much effort should be devoted to testing. The objective of the project manager is to test and debug the system until the required level of reliability is reached.

The software reliability growth models are described below:

Software reliability growth modeling is a process used to predict and manage the improvement
of software reliability over time. It involves statistical techniques to analyze historical data on
software failures and defects to make projections about future reliability.

Here's an overview of software reliability growth modeling:

1. Data Collection: The first step in software reliability growth modeling is collecting data on software failures and defects. This data typically includes information such as the number of reported failures, the time between failures, and the severity of each failure.

2. Model Selection: Once the data is collected, the next step is to select an appropriate reliability growth model. There are several types of models used in software reliability growth modeling, including:

o Non-homogeneous Poisson Process (NHPP): This model assumes that failures occur according to a Poisson process, but the failure intensity changes over time.

o Goel-Okumoto Model: This is one of the earliest and most widely used reliability growth models. It is an NHPP model whose failure intensity decreases exponentially over time as defects are found and fixed (see the sketch after this list).

o Logarithmic Model: This model assumes that the number of remaining defects decreases logarithmically over time.

o Rayleigh Model: This model assumes that software reliability growth follows a Rayleigh distribution, which is commonly used in reliability engineering to model the time to failure of systems.

o Weibull Model: This model assumes that software reliability growth follows a Weibull distribution, a flexible distribution widely used in reliability engineering.

o Cox Model: This model is a non-parametric approach to software reliability growth modeling that does not make any assumptions about the underlying distribution of failure times.
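As a hedged C sketch of the Goel-Okumoto model named above: its mean value function is m(t) = a(1 − e^(−bt)) and its failure intensity is λ(t) = a·b·e^(−bt). The parameter values below are invented for illustration, not fitted estimates.

#include <stdio.h>
#include <math.h> /* link with -lm */

/* Goel-Okumoto NHPP: expected cumulative failures m(t) = a(1 - exp(-b t))
   and failure intensity lambda(t) = a b exp(-b t). Parameters assumed. */
int main(void) {
    double a = 120.0; /* expected total number of failures (assumed) */
    double b = 0.02;  /* per-hour fault detection rate (assumed) */
    for (double t = 0.0; t <= 200.0; t += 50.0) {
        double m = a * (1.0 - exp(-b * t));
        double lam = a * b * exp(-b * t);
        printf("t=%5.0f  m(t)=%6.1f  lambda(t)=%.3f\n", t, m, lam);
    }
    return 0;
}

Note how the intensity a·b·e^(−bt) decays toward zero: as testing proceeds and defects are fixed, failures become rarer, which is exactly the growth in reliability the model captures.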

3. Parameter Estimation: Once a model is selected, the next step is to estimate its parameters using the collected data. This involves fitting the model to the data to find the values of the model parameters that best describe the observed failure behavior.

4. Model Validation: After parameter estimation, it is important to validate the reliability growth model to ensure that it accurately reflects the observed failure behavior. This may involve comparing the model predictions to additional data not used in parameter estimation, or using statistical tests to assess the goodness-of-fit of the model.

5. Prediction and Analysis: Once the model is validated, it can be used to make predictions about future reliability based on the current state of the software and the observed failure behavior. This can help software developers and managers make informed decisions about when to release the software and how to allocate resources for testing and debugging.

Overall, software reliability growth modeling is a valuable tool for understanding and managing
the reliability of software systems. By analyzing historical failure data and making predictions
about future reliability, organizations can improve the quality and reliability of their software
products.

What is Software Quality?


Software Quality shows how good and reliable a product is. To give an example, consider a functionally correct software product. It performs all the functions as laid out in the SRS document, but has an almost unusable user interface. Even though it may be functionally correct, we do not consider it to be a high-quality product. Another example is a product that has everything the users need, but has almost incomprehensible and unmaintainable code. Therefore, the traditional concept of quality as "fitness of purpose" for software products is not satisfactory.
Factors of Software Quality
The modern view of high quality associates software with many quality factors such as the
following:
1. Portability: A software product is said to be portable if it can easily be made to work in different operating system environments, on different machines, with other software products, etc.
2. Usability: A software product has good usability if different categories of users (i.e. expert and novice users) can easily invoke the functions of the product.
3. Reusability: A software product has good reusability if different modules of the product can easily be reused to develop new products.
4. Correctness: A software product is correct if the different requirements as laid out in the SRS document have been correctly implemented.
5. Maintainability: A software product is maintainable if errors can be easily corrected as and when they show up, new functions can be easily added to the product, and the functionalities of the product can be easily modified, etc.
6. Reliability. Software is more reliable if it has fewer failures. Since software engineers do
not deliberately plan for their software to fail, reliability depends on the number and type of
mistakes they make. Designers can improve reliability by ensuring the software is easy to
implement and change, by testing it thoroughly, and also by ensuring that if failures occur,
the system can handle them or can recover easily.
7. Efficiency. The more efficient software is, the less CPU time, memory, disk space, network bandwidth, and other resources it uses. This is important to customers in order to reduce their costs of running the software, although with today’s powerful computers, CPU time, memory, and disk usage are less of a concern than in years gone by.

Software Quality Management System


Software Quality Management System contains the methods that are used by the authorities to
develop products having the desired quality.
Managerial Structure
Quality System is responsible for managing the structure as a whole. Every Organization has a
managerial structure.
Individual Responsibilities
Each individual present in the organization must have some responsibilities that should be
reviewed by the top management and each individual present in the system must take this
seriously.
Quality System Activities
The activities that every quality system must perform are:
1. Project auditing.
2. Review of the quality system.
3. Development of methods and guidelines.
Evolution of Quality Management System
Quality systems have evolved over the past several years. The evolution of a quality management system has taken place in stages:
1. The main task of quality control is to detect defective products, and it also helps in finding the causes that lead to the defects. It also helps in the correction of bugs.
2. Quality assurance helps an organization make good quality products. It also helps in improving the quality of the product by passing the products through quality checks.
3. Total Quality Management (TQM) checks and ensures that all procedures are continuously improved through regular process measurements.


Capability Maturity Model (CMM)

The Capability Maturity Model (CMM) is a procedure used to develop and refine an
organization's software development process.

The model describes a five-level evolutionary path of increasingly organized and consistently more mature processes.

CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DOD).

Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software process.

Methods of SEI CMM

There are two methods of SEI CMM:

Capability Evaluation: Capability evaluation provides a way to assess the software process capability of an organization. The results of a capability evaluation indicate the likely contractor performance if the contractor is awarded a contract. Therefore, the results of the software process capability assessment can be used to select a contractor.

Software Process Assessment: Software process assessment is used by an organization to improve its process capability. Thus, this type of evaluation is for purely internal use.

SEI CMM categorized software development industries into the following five maturity levels.
The various levels of SEI CMM have been designed so that it is easy for an organization to slowly build up its quality system starting from scratch.


Level 1: Initial

Ad hoc activities characterize a software development organization at this level. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes, and as a result development efforts become chaotic. Therefore, it is also called the chaotic level.

Level 2: Repeatable

At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are
used.

Level 3: Defined

At this level, the methods for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles, and responsibilities. Although the processes are defined, the process and product qualities are not yet measured. ISO 9000 aims at achieving this level.

Level 4: Managed

At this level, the focus is on software metrics. Two kinds of metrics are collected.


Product metrics measure the features of the product being developed, such as its size, reliability,
time complexity, understandability, etc.

Process metrics reflect the effectiveness of the process being used, such as the average defect correction time, productivity, the average number of defects found per hour of inspection, the average number of failures detected during testing per LOC, etc. The software process and product quality are measured, and quantitative quality requirements for the product are met. Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and process quality. The process metrics are used to analyze whether a project performed satisfactorily. Thus, the results of process measurements are used to evaluate project performance rather than to improve the process.
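A minimal sketch of how such process metrics might be computed from raw defect records is shown below; the record format, hours, and KLOC figures are all invented for illustration.

```python
# A minimal sketch of computing Level-4 style process metrics from
# hypothetical defect records; field names and values are invented.
from statistics import mean

defects = [
    {"found": "inspection", "correction_hours": 2.5},
    {"found": "testing",    "correction_hours": 6.0},
    {"found": "inspection", "correction_hours": 1.0},
    {"found": "testing",    "correction_hours": 4.5},
]
inspection_hours = 3.0   # assumed total hours spent in inspections
kloc_tested = 2.0        # assumed thousands of lines of code tested

avg_correction_time = mean(d["correction_hours"] for d in defects)
defects_per_inspection_hour = (
    sum(1 for d in defects if d["found"] == "inspection") / inspection_hours
)
failures_per_kloc = sum(1 for d in defects if d["found"] == "testing") / kloc_tested

print(f"Average defect correction time: {avg_correction_time:.1f} hours")
print(f"Defects found per hour of inspection: {defects_per_inspection_hour:.2f}")
print(f"Failures detected during testing per KLOC: {failures_per_kloc:.1f}")
```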

Level 5: Optimizing

At this level, process and product metrics are collected. Process and product measurement data
are evaluated for continuous process improvement.

Key Process Areas (KPA) of a software organization

Except for SEI CMM Level 1, each maturity level is characterized by several Key Process Areas (KPAs) that identify the areas an organization should focus on to raise its software process to the next level.

SEI CMM provides a series of key areas on which to focus to take an organization from one level of maturity to the next. Thus, it provides a method for gradual quality improvement over various stages. Each step has been carefully designed such that one step enhances the capability already built up.

Software Maintenance refers to the process of modifying and updating a software system
after it has been delivered to the customer. It is a critical part of the software development life
cycle (SDLC) and is necessary to ensure that the software continues to meet the needs of the
users over time. This article focuses on discussing Software Maintenance in detail.
What is Software Maintenance?
Software maintenance is a continuous process that occurs throughout the entire life cycle of the
software system.
• The goal of software maintenance is to keep the software system working correctly,
efficiently, and securely, and to ensure that it continues to meet the needs of the users.
• This can include fixing bugs, adding new features, improving performance, or updating the
software to work with new hardware or software systems.
• It is also important to consider the cost and effort required for software maintenance when
planning and developing a software system.
• It is important to have a well-defined maintenance process in place, which includes testing
and validation, version control, and communication with stakeholders.
• It’s important to note that software maintenance can be costly and complex, especially for
large and complex systems. Therefore, the cost and effort of maintenance should be taken
into account during the planning and development phases of a software project.
• It’s also important to have a clear and well-defined maintenance plan that includes regular
maintenance activities, such as testing, backup, and bug fixing.
Several Key Aspects of Software Maintenance
1. Bug Fixing: The process of finding and fixing errors and problems in the software.
2. Enhancements: The process of adding new features or improving existing features to meet
the evolving needs of the users.
3. Performance Optimization: The process of improving the speed, efficiency, and
reliability of the software.
4. Porting and Migration: The process of adapting the software to run on new hardware or
software platforms.
5. Re-Engineering: The process of improving the design and architecture of the software to
make it more maintainable and scalable.
6. Documentation: The process of creating, updating, and maintaining the documentation for
the software, including user manuals, technical specifications, and design documents.
Several Types of Software Maintenance
1. Corrective Maintenance: This involves fixing errors and bugs in the software system.
2. Patching: It is an emergency fix implemented mainly due to pressure from management.
Patching is done for corrective maintenance but it gives rise to unforeseen future errors due
to lack of proper impact analysis.
3. Adaptive Maintenance: This involves modifying the software system to adapt it to
changes in the environment, such as changes in hardware or software, government policies,
and business rules.
4. Perfective Maintenance: This involves improving functionality, performance, and
reliability, and restructuring the software system to improve changeability.


5. Preventive Maintenance: This involves taking measures to prevent future problems, such
as optimization, updating documentation, reviewing and testing the system, and
implementing preventive measures such as backups.
Maintenance can be categorized into proactive and reactive types. Proactive maintenance
involves taking preventive measures to avoid problems from occurring, while reactive
maintenance involves addressing problems that have already occurred.
Maintenance can be performed by different stakeholders, including the original development
team, an in-house maintenance team, or a third-party maintenance provider. Maintenance
activities can be planned or unplanned. Planned activities include regular maintenance tasks that
are scheduled in advance, such as updates and backups. Unplanned activities are reactive and
are triggered by unexpected events, such as system crashes or security breaches. Software
maintenance can involve modifying the software code, as well as its documentation, user
manuals, and training materials. This ensures that the software is up-to-date and continues to
meet the needs of its users.
Software maintenance can also involve upgrading the software to a new version or platform.
This can be necessary to keep up with changes in technology and to ensure that the software
remains compatible with other systems. The success of software maintenance depends on
effective communication with stakeholders, including users, developers, and management.
Regular updates and reports can help to keep stakeholders informed and involved in the
maintenance process.
Software maintenance is also an important part of the Software Development Life Cycle (SDLC). The main focus of software maintenance is to update the software application and make the modifications needed to improve its performance. Software is a model of the real world, so whenever the real world changes, the software needs corresponding changes wherever possible.

Need for Maintenance


Software Maintenance must be performed in order to:
• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
• Migrate legacy software.
• Retire software.
• Accommodate changes in user requirements.
• Make the code run faster.

Challenges in Software Maintenance


The various challenges in software maintenance are given below:
• The typical lifespan of any software product is considered to be ten to fifteen years. Since software maintenance is open-ended and may continue for decades, it becomes very expensive.


• Older software programs, which were intended to work on slow machines with less memory and storage capacity, cannot hold their own against newer, more capable software running on modern hardware.
• Changes are frequently left undocumented, which may cause more conflicts in the future.
• As technology advances, it becomes costly to maintain old software.
• Often, the changes made can easily harm the original structure of the software, making subsequent changes difficult.
• There is a lack of Code Comments.
• Lack of documentation: Poorly documented systems can make it difficult to understand
how the system works, making it difficult to identify and fix problems.
• Legacy code: Maintaining older systems with outdated technologies can be difficult, as it
may require specialized knowledge and skills.
• Complexity: Large and complex systems can be difficult to understand and modify,
making it difficult to identify and fix problems.
• Changing requirements: As user requirements change over time, the software system may
need to be modified to meet these new requirements, which can be difficult and time-
consuming.
• Interoperability issues: Systems that need to work with other systems or software can be
difficult to maintain, as changes to one system can affect the other systems.
• Lack of test coverage: Systems that have not been thoroughly tested can be difficult to
maintain as it can be hard to identify and fix problems without knowing how the system
behaves in different scenarios.
• Lack of personnel: A lack of personnel with the necessary skills and knowledge to
maintain the system can make it difficult to keep the system up-to-date and running
smoothly.
• High-Cost: The cost of maintenance can be high, especially for large and complex
systems, which can be difficult to budget for and manage.

To overcome these challenges, it is important to have a well-defined maintenance process in place, which includes testing and validation, version control, and communication with
stakeholders. It is also important to have a clear and well-defined maintenance plan that includes
regular maintenance activities, such as testing, backup, and bug fixing. Additionally, it is
important to have personnel with the necessary skills and knowledge to maintain the system.

Categories of Software Maintenance


Maintenance can be divided into the following categories.
• Corrective maintenance: Corrective maintenance of a software product may be essential
either to rectify some bugs observed while the system is in use, or to enhance the
performance of the system.
• Adaptive maintenance: This includes modifications and updations when the customers
need the product to run on new platforms, on new operating systems, or when they need the
product to interface with new hardware and software.
• Perfective maintenance: A software product needs maintenance to support the new
features that the users want or to change different types of functionalities of the system
according to the customer’s demands.


• Preventive maintenance: This type of maintenance includes modifications and updates to prevent future problems with the software. It aims to address problems which are not significant at this moment but may cause serious issues in the future.

Reverse Engineering
Reverse Engineering is the process of extracting knowledge or design information from anything
man-made and reproducing it based on the extracted information. It is also called back
engineering. The main objective of reverse engineering is to check out how the system works.
There are many reasons to perform reverse engineering, such as to understand how a thing works or to recreate the object while adding some enhancements.
Software Reverse Engineering
Software Reverse Engineering is the process of recovering the design and the requirements
specification of a product from an analysis of its code. Reverse Engineering is becoming
important, since several existing software products, lack proper documentation, are highly
unstructured, or their structure has degraded through a series of maintenance efforts.
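As a small, safe illustration of recovering design information from compiled code, the sketch below uses Python's standard `dis` module to disassemble a function's bytecode. The `discount` function is invented for illustration; the point is that the control structure is visible even if only compiled code were available.

```python
# A minimal sketch of one reverse-engineering technique: recovering
# structural information from compiled code via bytecode disassembly.
import dis

def discount(price, is_member):
    if is_member:
        return price * 0.9
    return price

# The disassembly reveals the comparison and the two return paths,
# which is the kind of design information reverse engineering recovers.
dis.dis(discount)
```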

Why Reverse Engineering?


• Providing proper system documentation.
• Recovery of lost information.
• Assisting with maintenance.
• Facilitating software reuse.
• Discovering unexpected flaws or faults.
• Implementing innovative processes for specific uses.
• Documenting how efficiency and power can be improved.

Uses of Software Reverse Engineering


• Software Reverse Engineering is used in software design; it enables the developer or programmer to add new features to existing software with or without knowing the source code.
• Reverse engineering is also useful in software testing; it helps testers study or detect viruses and other malware code.
• Software reverse engineering is the process of analyzing and understanding the internal
structure and design of a software system. It is often used to improve the understanding of
a software system, to recover lost or inaccessible source code, and to analyze the behavior
of a system for security or compliance purposes.
• Malware analysis: Reverse engineering is used to understand how malware works and to
identify the vulnerabilities it exploits, in order to develop countermeasures.
• Legacy systems: Reverse engineering can be used to understand and maintain legacy
systems that are no longer supported by the original developer.
• Intellectual property protection: Reverse engineering can be used to detect and prevent
intellectual property theft by identifying and preventing the unauthorized use of code or
other assets.
• Security: Reverse engineering is used to identify security vulnerabilities in a system, such
as backdoors, weak encryption, and other weaknesses.


• Compliance: Reverse engineering is used to ensure that a system meets compliance standards, such as those for accessibility, security, and privacy.
• Reverse-engineering of proprietary software: To understand how the software works, to improve it, or to create new software with similar features.
• Reverse-engineering of software to create a competing product: To create a product that
functions similarly or to identify the features that are missing in a product and create a new
product that incorporates those features.
• It’s important to note that reverse engineering can be a complex and time-consuming
process, and it is important to have the necessary skills, tools, and knowledge to perform it
effectively. Additionally, it is important to consider the legal and ethical implications of
reverse engineering, as it may be illegal or restricted in some jurisdictions.

Advantages of Software Maintenance


• Improved Software Quality: Regular software maintenance helps to ensure that the
software is functioning correctly and efficiently and that it continues to meet the needs of
the users.
• Enhanced Security: Maintenance can include security updates and patches, helping to
ensure that the software is protected against potential threats and attacks.
• Increased User Satisfaction: Regular software maintenance helps to keep the software up-
to-date and relevant, leading to increased user satisfaction and adoption.
• Extended Software Life: Proper software maintenance can extend the life of the software,
allowing it to be used for longer periods of time and reducing the need for costly
replacements.
• Cost Savings: Regular software maintenance can help to prevent larger, more expensive
problems from occurring, reducing the overall cost of software ownership.
• Better Alignment with business goals: Regular software maintenance can help to ensure
that the software remains aligned with the changing needs of the business. This can help to
improve overall business efficiency and productivity.
• Competitive Advantage: Regular software maintenance can help to keep the software
ahead of the competition by improving functionality, performance, and user experience.
• Compliance with Regulations: Software maintenance can help to ensure that the software
complies with relevant regulations and standards. This is particularly important in
industries such as healthcare, finance, and government, where compliance is critical.
• Improved Collaboration: Regular software maintenance can help to improve
collaboration between different teams, such as developers, testers, and users. This can lead
to better communication and more effective problem-solving.
• Reduced Downtime: Software maintenance can help to reduce downtime caused by
system failures or errors. This can have a positive impact on business operations and
reduce the risk of lost revenue or customers.
• Improved Scalability: Regular software maintenance can help to ensure that the software
is scalable and can handle increased user demand. This can be particularly important for
growing businesses or for software that is used by a large number of users.

Disadvantages of Software Maintenance


• Cost: Software maintenance can be time-consuming and expensive, and may require
significant resources and expertise.


• Schedule disruptions: Maintenance can cause disruptions to the normal schedule and
operations of the software, leading to potential downtime and inconvenience.
• Complexity: Maintaining and updating complex software systems can be challenging,
requiring specialized knowledge and expertise.
• Risk of introducing new bugs: The process of fixing bugs or adding new features can
introduce new bugs or problems, making it important to thoroughly test the software after
maintenance.
• User resistance: Users may resist changes or updates to the software, leading to decreased
satisfaction and adoption.
• Compatibility issues: Maintenance can sometimes cause compatibility issues with other
software or hardware, leading to potential integration problems.
• Lack of documentation: Poor documentation or lack of documentation can make software
maintenance more difficult and time-consuming, leading to potential errors or delays.
• Technical debt: Over time, software maintenance can lead to technical debt, where the
cost of maintaining and updating the software becomes increasingly higher than the cost of
developing a new system.
• Skill gaps: Maintaining software systems may require specialized skills or expertise that
may not be available within the organization, leading to potential outsourcing or increased
costs.
• Inadequate testing: Inadequate testing or incomplete testing after maintenance can lead to
errors, bugs, and potential security vulnerabilities.
• End-of-life: Eventually, software systems may reach their end-of-life, making maintenance
and updates no longer feasible or cost-effective. This can lead to the need for a complete
system replacement, which can be costly and time-consuming.

Representative Client/Server Systems

• File servers (client requests selected records from a file, server transmits records to client
over the network)
• Database servers (client sends SQL requests to the server, the server processes the request and returns the results to the client over the network; see the sketch after this list)
• Transaction servers (client sends requests that invokes remote procedures on the server
side, sever executes procedures invoked and returns the results to the client)
• Groupware servers (server provides set of applications that enable communication among
clients using text, images, bulletin boards, video, etc.)
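A minimal sketch of the request-response pattern behind the file and database servers above is shown below, using only the standard `socket` and `threading` modules. The record data, host, and port are invented for illustration; a real server would of course run as a separate long-lived process.

```python
# A minimal sketch of the client/server request-response pattern:
# the client sends a record key, the server looks it up and replies.
import socket
import threading
import time

RECORDS = {"42": "Alice,alice@example.com", "43": "Bob,bob@example.com"}

def serve_once(host="127.0.0.1", port=5050):
    """Server side: accept one request, look up the record, return the result."""
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            key = conn.recv(1024).decode().strip()
            conn.sendall(RECORDS.get(key, "NOT FOUND").encode())

threading.Thread(target=serve_once, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.socket() as cli:  # client side: request record "42" over the network
    cli.connect(("127.0.0.1", 5050))
    cli.sendall(b"42")
    print(cli.recv(1024).decode())  # -> Alice,alice@example.com
```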

Software Components for C/S Systems

• User interaction/presentation subsystem (handles all user events)


• Application subsystem (implements requirements defined by the application within the context of the operating environment; components may reside on either the client or server side)
• Database management subsystem (performs data manipulation and management for the
application)
• Middleware (all software components that exist on both the client and the server to allow
exchange of information)


Representative C/S Configuration Options

• Distributed presentation - database and application logic remain on the server, client
software is used to reformat server data into GUI format
• Remote presentation - similar to distributed presentation, primary database and application
logic remain on the server, data sent by the server is used by the client to prepare the user
presentation
• Distributed logic - client is assigned all user presentation tasks associated with data entry
and formulating server queries, server is assigned data management tasks and updates
information based on user actions
• Remote data management - applications on server side create new data sources,
applications on client side process the new data returned by the server
• Distributed databases - data is spread across multiple clients and servers, requiring clients
to support data management as well as application and GUI components
• Fat server - most software functions for C/S system are allocated to the server
• Thin clients - network computer approach relegating all application processing to a fat
server

Guidelines for Distributing Application Subsystems

• The presentation/interaction subsystem is generally placed on the client.


• If the database is to be shared by multiple users connected by a LAN, the database is
typically located on the server.
• Static data used for reference should be allocated to the client.

Linking C/S Software Subsystems

• Pipes (permit messaging between different machines running different operating systems)
• Remote procedure calls (permit a process running on one machine to invoke the execution of a process residing on another machine; see the sketch after this list)
• Client/server SQL interaction (SQL requests passed from client to server DBMS, this
mechanism is limited to RDBMS)
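The remote procedure call mechanism can be sketched with the standard library's `xmlrpc` modules, as below. The host, port, and the `add` procedure are invented for illustration; the essential point is that the call looks local on the client but executes on the server.

```python
# A minimal sketch of a remote procedure call: a client invokes a
# procedure that actually executes in the server process.
import threading
import time
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def start_server():
    """Server side: expose an 'add' procedure that remote clients can invoke."""
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")
    server.handle_request()  # serve exactly one request, then return

threading.Thread(target=start_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start

# Client side: the proxy forwards the call over the network.
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
print(proxy.add(2, 3))  # executes on the server -> 5
```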

Representative Middleware Architectures

• CORBA (ORB)
• COM (Microsoft)
• JavaBeans (Sun)

Design Issues for C/S Systems

• Data and architectural design - dominates the design process to be able to effectively use the capabilities of an RDBMS or OODBMS


• Event-driven paradigm - when used, behavioral modeling should be conducted and the control-oriented aspects of the behavioral model should be translated into the design model
• Interface design - elevated in importance, since the user interaction/presentation component implements all functions typically associated with a GUI
• Object-oriented point of view - often chosen, since an object structure is provided by events initiated in the GUI and their event handlers within the client-based software

Architectural Design for Client/Server Systems

• Best described as a communicating-processes style architecture whose goal is to achieve easy scalability when adding an arbitrary number of clients
• Since modern C/S systems tend to be component-based, an object request broker (ORB)
architecture is used for implementation
• Object adapters or wrappers provide service to facilitate communication among client and
server components

• component implementations are registered
• all component references are interpreted and reconciled
• component references are mapped to corresponding component implementations
• objects are activated and deactivated
• operations are invoked when messages are transmitted
• security features are implemented

C/S Design Repository Information

• Entities (from ER diagram)


• Files (which implement entities)
• File-to-field relationship (establishes file layout)
• Fields (from data dictionary)
• File-to-file relationships (related files that may be joined together)
• Relationship validation
• Field type (used to permit inheritance from super classes)
• Data type (characteristics of field data)
• File type (used to identify file location)
• Field function (key, foreign key, attribute, etc.)
• Allowed values
• Business rules (rules for editing, calculating derived fields, etc.)

Data Distribution and Management Techniques

• Relational database management systems (RDBMS)
• Manual extract (user is allowed to manually copy data from server to client)
• Snapshot (automates the manual extract by specifying that a copy of the data be transferred from the server to the client at predefined intervals; see the sketch after this list)


• Replication (multiple copies of data are maintained at different sites)


• Fragmentation (system database is spread across several machines)
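A minimal sketch of the snapshot technique is shown below, using the standard `sqlite3` module's backup facility to copy server data into a client-side database at a predefined interval. The file names and the one-hour interval are invented for illustration.

```python
# A minimal sketch of the "snapshot" data distribution technique:
# periodically copy the server database into a local client-side copy.
import sqlite3
import time

def take_snapshot(server_db: str, client_db: str) -> None:
    """Copy the entire server database into a local client-side copy."""
    src = sqlite3.connect(server_db)
    dst = sqlite3.connect(client_db)
    try:
        src.backup(dst)  # atomic page-level copy of the whole database
    finally:
        src.close()
        dst.close()

# Transfer a fresh snapshot at a predefined interval (three cycles shown).
for _ in range(3):
    take_snapshot("server.db", "client_snapshot.db")
    time.sleep(3600)  # one hour between snapshots
```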

C/S Design Approach

1. For each elementary business process, identify the files created, updated, referenced, or
deleted.
2. Use the files from step 1 as basis for defining components or objects.
3. For each component, retrieve the business rules and other business object information that
has been established for the relevant file.
4. Determine which rules are relevant to the process and decompose the rules down to the
method level.
5. As required, define any additional components that are needed to implement the methods.

Process Design Entities

• Methods - describe how a business rule is to be implemented
• Elementary processes - business processes identified in the analysis model
• Process/component link - identifies components that makeup the solution for an elementary
business process
• Components - describes components shown on structure chart
• Business rule/component link - identifies components significant to the implementation of a given business rule

C/S Testing Strategy

• Application function tests (client applications are tested in a stand-alone manner)
• Server tests (test the coordination and management functions of the server, also measure server performance)
• Database tests (check the accuracy and integrity of server data, examine transactions posted by the client, test archiving)
• Transaction testing (ensure each class of transactions is processed correctly; see the sketch after this list)
• Network communication testing (verify communication among network nodes)
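A minimal sketch of transaction testing follows: each class of transaction (here, a valid transfer and an overdraft) gets its own test case. The `transfer` function and its rules are invented for illustration.

```python
# A minimal sketch of transaction testing: verify that each class of
# transaction is processed correctly.
import unittest

def transfer(balances, src, dst, amount):
    """Apply a funds-transfer transaction; reject overdrafts."""
    if balances[src] < amount:
        raise ValueError("insufficient funds")
    balances[src] -= amount
    balances[dst] += amount
    return balances

class TransferTransactionTests(unittest.TestCase):
    def test_valid_transfer(self):
        result = transfer({"A": 100, "B": 0}, "A", "B", 40)
        self.assertEqual(result, {"A": 60, "B": 40})

    def test_overdraft_rejected(self):
        with self.assertRaises(ValueError):
            transfer({"A": 10, "B": 0}, "A", "B", 40)

if __name__ == "__main__":
    unittest.main()
```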

C/S Testing Tactics

• Begins with testing in the small and then proceeds to integration testing using the non-incremental or big-bang approach
• Requires special attention to configuration testing and compatibility testing


• OO testing tactics can be used for C/S systems (even when system was not built using OO
methodology)
• GUI testing requires special techniques in C/S systems (e.g. structured capture/playback)

Service-Oriented Architecture
Service-Oriented Architecture (SOA) is a stage in the evolution of application development
and/or integration. It defines a way to make software components reusable using the
interfaces. Formally, SOA is an architectural approach in which applications make use of
services available in the network. In this architecture, services are provided to form applications,
through a network call over the internet. It uses common communication standards to speed up
and streamline the service integrations in applications. Each service in SOA is a complete
business function in itself. The services are published in such a way that it makes it easy for the
developers to assemble their apps using those services. Note that SOA is different from
microservice architecture.
• SOA allows users to combine a large number of facilities from existing services to form
applications.
• SOA encompasses a set of design principles that structure system development and provide
means for integrating components into a coherent and decentralized system.
• SOA-based computing packages functionalities into a set of interoperable services, which
can be integrated into different software systems belonging to separate business domains.

The different characteristics of SOA are as follows :


o Provides interoperability between the services.
o Provides methods for service encapsulation, service discovery, service composition,
service reusability and service integration.
o Facilitates QoS (Quality of Services) through service contract based on Service Level
Agreement (SLA).
o Provides loosely coupled services.
o Provides location transparency with better scalability and availability.
o Ease of maintenance with reduced cost of application development and
deployment.

There are two major roles within Service-oriented Architecture:


1. Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To advertise
services, the provider can publish them in a registry, together with a service contract that
specifies the nature of the service, how to use it, the requirements for the service, and the
fees charged.
2. Service consumer: The service consumer can locate the service metadata in the registry
and develop the required client components to bind and use the service.


Services might aggregate information and data retrieved from other services or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
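The provider/consumer roles and the orchestration pattern just described can be sketched as below. This is a minimal sketch: the registry is a plain dictionary and the services are local functions, whereas in a real SOA they would be network-addressable endpoints; the service names and prices are invented.

```python
# A minimal sketch of SOA roles: a provider publishes services with a
# contract in a registry, a consumer looks them up, and an orchestrator
# composes several services into one business workflow.
registry = {}

def publish(name, contract, func):
    """Service provider: publish a service and its contract in the registry."""
    registry[name] = {"contract": contract, "endpoint": func}

def lookup(name):
    """Service consumer: locate the service metadata and bind to it."""
    return registry[name]["endpoint"]

publish("tax",      "amount -> tax owed", lambda amount: amount * 0.18)
publish("shipping", "weight_kg -> fee",   lambda kg: 5.0 + 1.2 * kg)

def checkout(amount, weight_kg):
    """Orchestration: one controller composes services into a workflow."""
    total = amount + lookup("tax")(amount) + lookup("shipping")(weight_kg)
    return round(total, 2)

print(checkout(100.0, 2.0))  # -> 125.4
```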

Components of SOA:

Guiding Principles of SOA:


1. Standardized service contract: Specified through one or more service description
documents.
2. Loose coupling: Services are designed as self-contained components that maintain relationships minimizing dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description
documents. They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus
reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service
consumer point of view, there is no need to know about their implementation.


6. Discoverability: Services are defined by description documents that constitute supplemental metadata through which they can be effectively discovered. Service discovery provides an effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex operations
can be implemented. Service orchestration and choreography provide a solid support for
composing services and achieving business goals.

Advantages of SOA:
• Service reusability: In SOA, applications are made from existing services. Thus, services
can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
• Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
• Availability: SOA facilities are easily available to anyone on request.
• Reliability: SOA applications are more reliable because it is easier to debug small services than huge code bases.
• Scalability: Services can run on different servers within an environment; this increases scalability.

Disadvantages of SOA:
• High overhead: A validation of input parameters is performed whenever services interact, which decreases performance by increasing load and response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange messages to perform tasks. The number of messages may run into millions, and handling such a large number of messages becomes a cumbersome task.

Practical applications of SOA: SOA is used in many ways around us whether it is mentioned
or not.
1. SOA infrastructure is used by many armies and air forces to deploy situational awareness
systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps and games use built-in functions of the device to run. For example, an app might need GPS, so it uses the device's built-in GPS functions. This is SOA in mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.

Software as a Service | SaaS

SaaS is also known as "On-Demand Software". It is a software distribution model in which services are hosted by a cloud service provider. These services are available to end-users over the internet, so the end-users do not need to install any software on their devices to access these services.

The following services are provided by SaaS providers:


Business Services - SaaS providers provide various business services to start up a business. The SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.

Document Management - SaaS document management is a software application offered by a third party (SaaS providers) to create, manage, and track electronic documents. Examples: Slack, Samepage, Box, and Zoho Forms.

Social Networks - As we all know, social networking sites are used by the general public, so social
networking service providers use SaaS for their convenience and handle the general public's
information.

Mail Services - To handle the unpredictable number of users and the load on e-mail services, many e-mail providers offer their services using SaaS.

Advantages of SaaS cloud computing layer

1) SaaS is easy to buy

SaaS pricing is based on a monthly fee or annual fee subscription, so it allows organizations to
access business functionality at a low cost, which is less than licensed applications.

Unlike traditional software, which is sold as a license with an up-front cost (and often an optional ongoing support fee), SaaS providers generally price their applications using a subscription fee, most commonly a monthly or annual fee.
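A small worked comparison of the two pricing models is sketched below; all figures are invented for illustration, and real costs vary widely by product.

```python
# A minimal worked example comparing subscription pricing with a
# traditional up-front license over a fixed period; figures are invented.
monthly_fee = 50       # SaaS subscription, per user per month
license_cost = 2000    # traditional up-front license, per user
support_fee = 300      # optional annual support for the licensed product
years = 3

saas_total = monthly_fee * 12 * years
licensed_total = license_cost + support_fee * years

print(f"SaaS over {years} years:     ${saas_total}")      # -> $1800
print(f"Licensed over {years} years: ${licensed_total}")  # -> $2900
```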

2) One to Many

SaaS services are offered on a one-to-many model, meaning a single instance of the application is shared by multiple users.

3) Less hardware required for SaaS

The software is hosted remotely, so organizations do not need to invest in additional hardware.

4) Low maintenance required for SaaS

Software as a Service removes the need for installation, set-up, and daily maintenance for organizations. The initial set-up cost for SaaS is typically lower than that of enterprise software. SaaS vendors price their applications based on usage parameters, such as the number of users using the application. SaaS also makes the application easy to monitor and allows automatic updates.

5) No special software or hardware versions required

All users will have the same version of the software and typically access it through the web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the cloud provider.

6) Multidevice support

SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and thin
clients.

7) API Integration

SaaS services easily integrate with other software or services through standard APIs.
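A minimal sketch of such an integration is shown below, calling a hypothetical SaaS REST endpoint with only the standard library. The URL, token, and response format are assumptions for illustration; every real SaaS product documents its own API.

```python
# A minimal sketch of SaaS API integration over HTTP; the endpoint,
# token, and JSON shape are hypothetical.
import json
import urllib.request

def list_invoices(base_url, api_token):
    """Fetch invoices from a hypothetical SaaS billing API."""
    req = urllib.request.Request(
        f"{base_url}/v1/invoices",
        headers={"Authorization": f"Bearer {api_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example call (commented out, since the endpoint is hypothetical):
# invoices = list_invoices("https://api.example-saas.com", "SECRET_TOKEN")
```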

8) No client-side installation

SaaS services are accessed directly from the service provider over an internet connection, so no client-side software installation is required.

Disadvantages of SaaS cloud computing layer


1) Security

Since data is stored in the cloud, security may be an issue for some users; cloud deployment is not necessarily more secure than in-house deployment.

2) Latency issue

Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility of greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications that demand response times on the order of milliseconds.

