
UNIT-III

SOFTWARE DESIGN
Software Design is a process to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation.

When a software program is modularized, its tasks are divided into several modules based on
some characteristics. As we know, modules are sets of instructions put together in order to
achieve some task. Though each module is considered a single entity, modules may refer to
each other in order to work together. There are measures by which the quality of the design of
modules and the interaction among them can be assessed. These measures are called coupling
and cohesion.

Cohesion
Cohesion is a measure that defines the degree of intra-dependability among the elements of a
module. The greater the cohesion, the better the program design.

There are seven types of cohesion, namely –

 Co-incidental cohesion - It is unplanned and random cohesion, which might be the result of
breaking the program into smaller modules for the sake of modularization. Because it is
unplanned, it may cause confusion for programmers and is generally not accepted.

 Logical cohesion - When logically categorized elements are put together into a module, it is
called logical cohesion.

 Temporal Cohesion - When elements of a module are organized such that they are processed
at a similar point in time, it is called temporal cohesion.

 Procedural cohesion - When elements of a module are grouped together and executed
sequentially in order to perform a task, it is called procedural cohesion.

 Communicational cohesion - When elements of a module are grouped together, executed
sequentially, and work on the same data (information), it is called communicational
cohesion.
 Sequential cohesion - When elements of a module are grouped because the output of one
element serves as input to the next, and so on, it is called sequential cohesion.

 Functional cohesion - It is considered the highest degree of cohesion, and it is highly
desirable. Elements of a module with functional cohesion are grouped because they all
contribute to a single well-defined function. Such a module can also be reused. The sketch
below contrasts functional cohesion with co-incidental cohesion.
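
A minimal Python sketch (the date-validation and "misc" examples are illustrative
assumptions, not from the text):

    # Functional cohesion: every function contributes to the single,
    # well-defined job of validating a date.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def days_in_month(month, year):
        lengths = [31, 29 if is_leap_year(year) else 28, 31, 30, 31, 30,
                   31, 31, 30, 31, 30, 31]
        return lengths[month - 1]

    def is_valid_date(day, month, year):
        return 1 <= month <= 12 and 1 <= day <= days_in_month(month, year)

    # Co-incidental cohesion: unrelated tasks grouped only to split the
    # program up; the module is hard to name, reuse or maintain.
    def do_misc_stuff(user_name, log_path, price):
        print("Hello,", user_name)    # greet a user
        open(log_path, "a").close()   # touch a log file
        return price * 1.18           # apply a tax rate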

Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a
program. It indicates at what level the modules interact and depend on each other. The lower
the coupling, the better the program.

There are five levels of coupling, namely -

 Content coupling - When a module can directly access, modify, or refer to the content of
another module, it is called content-level coupling.

 Common coupling- When multiple modules have read and write access to some global data,
it is called common or global coupling.

 Control coupling- Two modules are called control-coupled if one of them decides the
function of the other module or changes its flow of execution.

 Stamp coupling- When multiple modules share a common data structure and work on
different parts of it, it is called stamp coupling.

 Data coupling- Data coupling occurs when two modules interact with each other by means
of passing data (as parameters). If a module passes a data structure as a parameter, the
receiving module should use all its components.

Ideally, no coupling is considered to be the best.
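
A minimal Python sketch of the two ends of this scale (the payroll functions are
illustrative assumptions):

    # Data coupling (preferred): modules interact only through the data
    # they pass as parameters.
    def net_salary(gross, tax_rate):
        return gross * (1 - tax_rate)

    # Common (global) coupling (avoid): modules read and write shared
    # global data, so a change in one module can silently break another.
    TAX_RATE = 0.2

    def set_tax_rate(rate):
        global TAX_RATE
        TAX_RATE = rate

    def net_salary_common(gross):
        return gross * (1 - TAX_RATE)

    print(net_salary(1000, 0.2))   # 800.0 -- everything needed is passed in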

Function Oriented Design


In function-oriented design, the system consists of many smaller sub-systems known as
functions. These functions are capable of performing significant tasks in the system. The
system is considered as the top view of all functions.
Function-oriented design inherits some properties of structured design, where the divide-and-
conquer methodology is used.

This design mechanism divides the whole system into smaller functions, which provides a
means of abstraction by concealing information and operations. These functional modules can
share information among themselves by means of information passing and by using globally
available information.

Another characteristic of functions is that when a program calls a function, the function changes
the state of the program, which is sometimes not acceptable to other modules. Function-
oriented design works well where the system state does not matter and programs/functions
work on input rather than on a state.

Design Process

 The whole system is seen in terms of how data flows through it, by means of a data flow
diagram (DFD).

 The DFD depicts how functions change the data and the state of the entire system.

 The entire system is logically broken down into smaller units, known as functions, on the
basis of their operation in the system.

 Each function is then described at large; a small sketch of such a decomposition follows.
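
A minimal Python sketch (the file format and function names are illustrative assumptions):
the task of processing exam results is decomposed into smaller functions, and data flows
between them as parameters and return values rather than shared state.

    # Top-down functional decomposition (assumes one integer mark per line).
    def read_marks(path):
        with open(path) as f:
            return [int(line) for line in f if line.strip()]

    def average(marks):
        return sum(marks) / len(marks)

    def print_report(avg):
        print("Class average: %.2f" % avg)

    def process_results(path):      # top-level function
        print_report(average(read_marks(path)))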

Object Oriented Design


Object-oriented design works around the entities and their characteristics instead of the
functions involved in the software system. This design strategy focuses on the entities and their
characteristics. The whole concept of the software solution revolves around the engaged entities.

Let us see the important concepts of Object Oriented Design:

 Objects - All entities involved in the solution design are known as objects. For example,
persons, banks, companies and customers are treated as objects. Every entity has some
attributes associated with it and has some methods to perform on the attributes.

 Classes - A class is a generalized description of an object. An object is an instance of a class.
A class defines all the attributes an object can have, and the methods that define the
functionality of the object.
In the solution design, attributes are stored as variables and functionalities are defined by
means of methods or procedures.

 Encapsulation - In OOD, bundling the attributes (data variables) and methods (operations on
the data) together is called encapsulation. Encapsulation not only bundles the important
information of an object together, but also restricts access to the data and methods from the
outside world. This is called information hiding.

 Inheritance - OOD allows similar classes to be stacked up in a hierarchical manner, where the
lower or sub-classes can import, implement and re-use allowed variables and methods from
their immediate super-classes. This property of OOD is known as inheritance. This makes it
easier to define specific classes and to create generalized classes from specific ones.

 Polymorphism - OOD languages provide a mechanism whereby methods performing similar
tasks but varying in arguments can be assigned the same name. This is called polymorphism,
which allows a single interface to perform tasks for different types. Depending upon how the
function is invoked, the respective portion of the code gets executed. The sketch below shows
these four concepts together.
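
A minimal Python sketch (the Account example is an illustrative assumption, not from the
text):

    class Account:                           # class: generalized description
        def __init__(self, owner, balance):
            self.owner = owner
            self._balance = balance          # encapsulated attribute

        def deposit(self, amount):           # method operating on the data
            self._balance += amount

        def interest(self):
            return 0.0

    class SavingsAccount(Account):           # inheritance: reuses Account
        def interest(self):                  # polymorphism: same name,
            return self._balance * 0.04      # behaviour depends on the type

    class CurrentAccount(Account):
        def interest(self):
            return 0.0

    for acct in (SavingsAccount("Asha", 1000), CurrentAccount("Ben", 1000)):
        print(acct.owner, acct.interest())   # one interface, many types
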
Design Process
The software design process can be perceived as a series of well-defined steps. Though it varies
according to the design approach (function oriented or object oriented), it may have the
following steps involved:

 A solution design is created from the requirements or the previously used system and/or the
system sequence diagram.

 Objects are identified and grouped into classes on the basis of similarity in their attribute
characteristics.

 The class hierarchy and the relations among the classes are defined.

 Application framework is defined.


User Interface (UI) Design focuses on anticipating what users might need to do and
ensuring that the interface has elements that are easy to access, understand, and use to
facilitate those actions. UI brings together concepts from interaction design, visual
design, and information architecture.
Choosing Interface Elements
Users have become familiar with interface elements acting in a certain way, so try to
be consistent and predictable in your choices and their layout. Doing so will help with
task completion, efficiency, and satisfaction.
Interface elements include but are not limited to:

 Input Controls: buttons, text fields, checkboxes, radio buttons, dropdown lists,
list boxes, toggles, date field
 Navigational Components: breadcrumb, slider, search field, pagination,
tags, icons
 Informational Components: tooltips, icons, progress bar, notifications, message
boxes, modal windows
 Containers: accordion
There are times when multiple elements might be appropriate for displaying
content. When this happens, it is important to consider the trade-offs. For example,
elements that can help save space sometimes put more of a mental burden on users
by forcing them to guess what is within the dropdown or what the element
might be.
Best Practices for Designing an Interface
Everything stems from knowing your users, including understanding their goals,
skills, preferences, and tendencies. Once you know about your user, make sure to
consider the following when designing your interface:

 Keep the interface simple. The best interfaces are almost invisible to the user.
They avoid unnecessary elements and are clear in the language they use on labels
and in messaging.
 Create consistency and use common UI elements. By using common
elements in your UI, users feel more comfortable and are able to get things done
more quickly. It is also important to create patterns in language, layout and design
throughout the site to help facilitate efficiency. Once a user learns how to do
something, they should be able to transfer that skill to other parts of the site.
 Be purposeful in page layout. Consider the spatial relationships between items
on the page and structure the page based on importance. Careful placement of
items can help draw attention to the most important pieces of information and can
aid scanning and readability.
 Strategically use color and texture. You can direct attention toward or redirect
attention away from items using color, light, contrast, and texture to your
advantage.
 Use typography to create hierarchy and clarity. Carefully consider how you use
typefaces: different sizes, fonts, and arrangements of the text can help increase
scannability, legibility and readability.
 Make sure that the system communicates what’s happening. Always inform
your users of location, actions, changes in state, or errors. The use of various UI
elements to communicate status and, if necessary, next steps can reduce frustration
for your user.
 Think about the defaults. By carefully thinking about and anticipating the goals
people bring to your site, you can create defaults that reduce the burden on the
user. This becomes particularly important when it comes to form design where
you might have an opportunity to have some fields pre-chosen or filled out.
Software Reliability
Software reliability is defined as the probability that a software system fulfills its assigned
task in a given environment for a predefined number of input cases, assuming that the hardware
and the inputs are free of error.

A fault is a defect in the program that, when executed under particular conditions, causes a
failure.
The execution time for a program is the time that is actually spent by a processor in executing
the instructions of that program. The second kind of time is calendar time, the familiar time that
we normally experience.

There are four general ways of characterising failure occurrences in time:
1. time of failure,
2. time interval between failures,
3. cumulative failures experienced up to a given time,
4. failures experienced in a time interval.
The calendar time component is based on a debugging process
model. This model takes into account:
1. the resources used in operating the program for a given
execution time and processing an associated quantity of
failures,
2. the resource quantities available, and
3. the degree to which a resource can be utilized (due to
bottlenecks) during the period in which it is limiting.

Reliability Allocation Using Lambda Predict


When developing a new product or improving an existing one, engineers are often faced with the
task of designing a system that must meet a certain set of reliability specifications. This involves
a balancing act in order to determine how to allocate reliability among the
subsystems/components in the system. In this article we will introduce several different
reliability allocation methods, which are included in ReliaSoft's Lambda Predict software.
Reliability allocation involves solving the following inequality:

f(R1, R2, ..., Rn) ≥ Rs

where:
where:
 Ri is the reliability allocated to the ith subsystem/component.
 f is the functional relationship between the subsystem/component and the system.
 Rs is the required system reliability.
Several algorithms for reliability allocation have been developed [1]:
 Equal apportionment
 AGREE apportionment
 ARINC apportionment
 Feasibility of Objectives apportionment
 Repairable Systems apportionment
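
As a simple illustration of the first of these methods: equal apportionment for a series
system (where system reliability is the product of the subsystem reliabilities) allocates the
same reliability Ri = Rs^(1/n) to each of the n subsystems. A minimal Python sketch,
assuming a series configuration:

    # Equal apportionment: allocate R_i = R_s ** (1/n) to each of the n
    # subsystems so that the product of the allocations meets R_s.
    def equal_apportionment(r_system, n):
        return r_system ** (1.0 / n)

    r_i = equal_apportionment(0.95, 3)
    print(round(r_i, 4))          # ~0.983, and 0.983 ** 3 ~= 0.95
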
UNIT-IV

SOFTWARE TESTING
Testing is the process of evaluating a system or its component(s) with the intent to find whether
it satisfies the specified requirements or not.

Testing is executing a system in order to identify any gaps, errors, or missing requirements
relative to the actual requirements.

Software Process: A process defines a framework for a set of Key Process Areas (KPAs) that
must be established for effective delivery of software engineering technology. This establishes
the context in which technical methods are applied, work products such as models, documents,
data, reports, forms, etc. are produced, milestones are established, quality is ensured, and
change is properly managed.

Functional testing is a quality assurance (QA) process and a type of black-box testing that
bases its test cases on the specifications of the software component under test. Functions are
tested by feeding them input and examining the output, and internal program structure is rarely
considered (not like in white-box testing). Functional testing usually describes what the system
does.

Functional testing typically involves six steps:

The identification of functions that the software is expected to perform

The creation of input data based on the function's specifications

The determination of output based on the function's specifications

The execution of the test case

The comparison of actual and expected outputs

A check of whether the application works as per the customer's needs


Equivalence Partitioning:
In this method the input domain data is divided into different equivalence data classes.
This method is typically used to reduce the total number of test cases to a finite set of
testable test cases, while still covering maximum requirements.
In short, it is the process of taking all possible test cases and placing them into classes.
One test value is picked from each class while testing.
E.g.: if you are testing an input box accepting numbers from 1 to 1000, there is no use in
writing a thousand test cases for all 1000 valid input numbers, plus other test cases for
invalid data.
Using the equivalence partitioning method, the above test cases can be divided into three
sets of input data, called classes. Each test case is a representative of its respective class.

So in the above example we can divide our test cases into three equivalence classes,
covering valid and invalid inputs.

Test cases for input box accepting numbers between 1 and 1000 using Equivalence
Partitioning:
One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a
valid test case. If you select other values between 1 and 1000, the result is going to be the
same, so one test case for valid input data should be sufficient.
An input data class with all values below the lower limit, i.e. any value below 1, as an
invalid input data test case.
Input data with any value greater than 1000, to represent the third invalid input class.
So, using equivalence partitioning, you have categorized all possible test cases into three
classes. Test cases with other values from any class should give you the same result.

We have selected one representative from every input class to design our test cases. Test
case values are selected in such a way that the largest number of attributes of each
equivalence class can be exercised.

Equivalence partitioning uses the fewest test cases to cover the maximum requirements; the
sketch below turns the three classes above into executable checks.
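
A minimal Python sketch (accept_number is a hypothetical function standing in for the
input box under test):

    # Hypothetical function under test: an input box accepting 1..1000.
    def accept_number(n):
        return 1 <= n <= 1000

    # One representative value per equivalence class is enough.
    assert accept_number(500)        # valid class: any value in 1..1000
    assert not accept_number(-5)     # invalid class: values below 1
    assert not accept_number(1500)   # invalid class: values above 1000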

Boundary value analysis:


It is widely recognized that input values at the extreme ends of the input domain cause more
errors in a system: more application errors occur at the boundaries of the input domain. The
boundary value analysis testing technique is used to identify errors at these boundaries,
rather than errors that exist in the center of the input domain.
Boundary value analysis is the next part of equivalence partitioning for designing test cases,
where test cases are selected at the edges of the equivalence classes.

Test cases for an input box accepting numbers between 1 and 1000 using boundary value
analysis:
Test cases with test data exactly at the boundaries of the input domain, i.e. values 1 and
1000 in our case.
Test data with values just below the extreme edges of the input domain, i.e. values 0 and
999.
Test data with values just above the extreme edges of the input domain, i.e. values 2 and
1001.
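
Reusing the hypothetical accept_number function from the sketch above, these boundary
cases become:

    # Boundary values for the 1..1000 input box: the edges themselves and
    # their immediate neighbours on either side.
    cases = {0: False, 1: True, 2: True,           # lower boundary
             999: True, 1000: True, 1001: False}   # upper boundary
    for value, expected in cases.items():
        assert accept_number(value) == expected
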
Decision table testing is a black box test design technique used to determine test
scenarios for complex business logic.
We can apply equivalence partitioning and boundary value analysis techniques only to
specific conditions or inputs. However, if we have different inputs that result in different
actions being taken, or a business rule under which different combinations of inputs result
in different actions, we use a decision table to test these kinds of rules or logic.
Why is a decision table important?

Decision tables are very helpful in test design: they help testers to explore the effects of
combinations of different inputs and other software states that must correctly implement
business rules. They also provide a regular way of stating complex business rules, which is
helpful for developers as well as for testers, and assists them in doing a better job during
development. Testing combinations can be a challenge, as the number of combinations can
often be huge, and testing every combination may be unrealistic or infeasible. We have to
be content with testing just a small subset of combinations, but making the choice of which
combinations to test and which to leave out is also significant: if you do not have an
efficient way of selecting combinations, an arbitrary subset will be used, and this may well
result in an ineffective test effort.
A decision table is an outstanding technique used in both testing and requirements
management. It is a structured exercise for formulating requirements when dealing with
complex business rules; it is also used to model complicated logic.
Way to use decision tables in test designing

Firstly; get to know a suitable function or subsystem that acts according to a combination of
inputs or events. Taken system should be with fewer inputs or else combinations will become
impossible. Always better to take maximum numbers of conditions, split them into subsets
and use these subsets one at a time. After getting features that need to be combined, add them
to a table showing all combinations of “Yes” and “No” for each of the feature.
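
A minimal Python sketch of such a table for a hypothetical login rule, mapping each
combination of conditions (one rule per column) to its expected action:

    # Decision table for a hypothetical login rule:
    #
    #   Conditions          Rule 1   Rule 2   Rule 3   Rule 4
    #   Username correct?     Y        Y        N        N
    #   Password correct?     Y        N        Y        N
    #   Action              home     error    error    error
    actions = {
        (True, True):   "show home page",
        (True, False):  "show error",
        (False, True):  "show error",
        (False, False): "show error",
    }

    def login_action(username_ok, password_ok):
        return actions[(username_ok, password_ok)]

    assert login_action(True, True) == "show home page"
    assert login_action(True, False) == "show error"
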
The Cause-Effect Graphing Technique is a black box testing technique which captures the
relationships between specific combinations of inputs (causes) and outputs (effects). It deals
with specific cases and avoids combinatorial explosion. The graph includes a number of
intermediate nodes linking causes and effects, and each cause and effect is represented as a
node of the cause-effect graph.

Structural testing, also known as glass box testing or white box testing is an approach where the
tests are derived from the knowledge of the software's structure or internal implementation.

The other names of structural testing include clear box testing, open box testing, logic-driven
testing and path-driven testing.

Structural Testing Techniques:


 Statement Coverage - This technique is aimed at exercising all programming statements
with minimal tests.

 Branch Coverage - This technique is running a series of tests to ensure that all branches are
tested at least once.

 Path Coverage - This technique corresponds to testing all possible paths, which means that
each statement and branch is covered.

Calculating Structural Testing Effectiveness:


Statement Testing = (Number of Statements Exercised / Total Number of Statements) x 100 %

Branch Testing = (Number of Decision Outcomes Tested / Total Number of Decision Outcomes) x 100 %

Path Coverage = (Number of Paths Exercised / Total Number of Paths in the Program) x 100 %
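
For example (illustrative figures): if a test suite exercises 45 of a program's 50 statements,
statement coverage is (45 / 50) x 100 % = 90 %; if it exercises 6 of the program's 8 decision
outcomes, branch coverage is (6 / 8) x 100 % = 75 %.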

Advantages of Structural Testing:


 Forces test developer to reason carefully about implementation

 Reveals errors in "hidden" code

 Spots the Dead Code or other issues with respect to best programming practices.

Disadvantages of Structural Testing:


 Expensive, as one has to spend both time and money to perform white box testing.

 There is every possibility that a few lines of code are missed accidentally.

 In-depth knowledge of the programming language is necessary to perform white box testing.

What is Path Testing?


Path Testing is a structural testing method based on the source code or algorithm and NOT
based on the specifications. It can be applied at different levels of granularity.

Path Testing Assumptions:


 The Specifications are Accurate

 The Data is defined and accessed properly

 There are no defects that exist in the system other than those that affect control flow

Path Testing Techniques:


 Control Flow Graph (CFG) - The program is converted into a flow graph by representing
the code as nodes, regions and edges.

 Decision to Decision path (D-D) - The CFG can be broken into various decision-to-decision
paths and then collapsed into individual nodes.

 Independent (basis) paths - An independent path is a path through a DD-path graph which
cannot be reproduced from other paths by other methods. A small sketch follows.
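
As an illustrative sketch (drawing the flow graph one common way gives N = 7 nodes and
E = 8 edges), the function below has two sequential decisions, so its cyclomatic complexity
is V(G) = E - N + 2 = 3 and three independent paths form a basis:

    def classify(n):
        if n < 0:                  # decision 1
            sign = "negative"
        else:
            sign = "non-negative"
        if n % 2 == 0:             # decision 2
            parity = "even"
        else:
            parity = "odd"
        return sign, parity

    assert classify(-2) == ("negative", "even")      # basis path 1
    assert classify(-1) == ("negative", "odd")       # basis path 2
    assert classify(1) == ("non-negative", "odd")    # basis path 3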

What is Data Flow Testing?


Data flow testing is a family of test strategies based on selecting paths through the program's
control flow in order to explore sequences of events related to the status of variables or data
objects. Dataflow Testing focuses on the points at which variables receive values and the points
at which these values are used.

Advantages of Data Flow Testing:


Data Flow testing helps us to pinpoint any of the following issues:

 A variable that is declared but never used within the program.


 A variable that is used but never declared.

 A variable that is defined multiple times before it is used.

 Deallocating a variable before it is used.
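
A minimal Python sketch showing the kinds of define/use anomalies these strategies look
for (illustrative code, meant for reading rather than running):

    def anomalies():
        a = 1          # 'a' is defined ...
        a = 2          # ... and redefined before any use
        unused = 10    # defined but never used
        return a + b   # 'b' is used but never defined (NameError if called)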

What is Mutation Testing?


Mutation testing is a structural testing technique, which uses the structure of the code to guide
the testing process. At a very high level, it is the process of rewriting the source code in small
ways (producing "mutants") in order to check whether the existing tests detect the changes.

Mutants that survive testing point to weaknesses in the test suite: the defects they represent
might cause failures in the software if not fixed, and can easily pass through the testing phase
undetected.

Mutation Testing Benefits:


Following benefits are experienced, if mutation testing is adopted:

 It brings a whole new kind of errors to the developer's attention.

 It is the most powerful method to detect hidden defects, which might be impossible to
identify using the conventional testing techniques.

 Tools such as Insure++ help us to find defects in the code using state-of-the-art techniques.

 Increased customer satisfaction index, as the product would be less buggy.

 Debugging and maintaining the product would be easier than ever.

Mutation Testing Types:


 Value Mutations: An attempt to change the values to detect errors in the programs. We
usually change one value to a much larger value or one value to a much smaller value. The
most common strategy is to change the constants.
 Decision Mutations: The decisions/conditions are changed to check for design errors.
Typically, one changes the arithmetic operators to locate defects; we can also consider
mutating all relational operators and logical operators (AND, OR, NOT).

 Statement Mutations: Changes done to the statements, such as deleting or duplicating a
line, which might arise when a developer is copy-pasting code from somewhere else. A
small sketch follows.
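
A minimal Python sketch of a decision mutation and of how a test "kills" the mutant (the
max_of function is an illustrative assumption):

    # Original function and a decision mutation of it (> flipped to <).
    def max_of(a, b):
        return a if a > b else b        # original

    def max_of_mutant(a, b):
        return a if a < b else b        # mutant

    # A test suite kills a mutant if at least one test fails against it:
    assert max_of(3, 1) == 3            # passes on the original ...
    assert max_of_mutant(3, 1) != 3     # ... but the same check would fail
                                        # on the mutant, so it is killed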

What is Unit Testing?


Unit testing is a testing technique by which individual modules are tested by the developer
himself to determine whether there are any issues. It is concerned with the functional
correctness of the standalone modules.

The main aim is to isolate each unit of the system to identify, analyze and fix the defects.

Unit Testing - Advantages:


 Reduces defects in newly developed features, and reduces bugs when changing existing
functionality.

 Reduces the cost of testing, as defects are captured at a very early phase.

 Improves design and allows better refactoring of code.

 Unit tests, when integrated with the build, indicate the quality of the build as well.

Unit Testing Life Cycle: (workflow diagram omitted)

Unit Testing Techniques:
 Black Box Testing - Used to test the user interface, and the inputs and outputs.

 White Box Testing - Used to test the behaviour of each of the unit's functions.

 Gray Box Testing - Used to execute tests, risks and assessment methods.
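
A minimal unit test sketch using Python's standard unittest module (apply_discount is a
hypothetical module under test):

    import unittest

    def apply_discount(price, percent):
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return price * (100 - percent) / 100

    class ApplyDiscountTest(unittest.TestCase):
        def test_normal_discount(self):
            self.assertEqual(apply_discount(200, 10), 180)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(200, 150)

    if __name__ == "__main__":
        unittest.main()   # runs both tests and reports the results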

What is Integration Testing?


Upon completion of unit testing, the units or modules are integrated, which gives rise to
integration testing. The purpose of integration testing is to verify the functional, performance,
and reliability requirements placed on the interactions between the modules that are integrated.

Integration Strategies:
 Big-Bang Integration
 Top Down Integration

 Bottom Up Integration

 Hybrid Integration

What is System Integration Testing?


System Integration Testing (SIT) is a black box testing technique that evaluates the system's
compliance with specified requirements. System Integration Testing is usually performed on a
subset of the system, while system testing is performed on the complete system; SIT precedes
the user acceptance test (UAT).

SIT can be performed with minimal usage of testing tools; the interactions exchanged are
verified, and the behaviour of each data field within its individual layer is investigated. After
the integration, there are three main states of data flow:

System Integration Testing - Main States:


 Data state within the integration layer

 Data state within the database layer

 Data state within the Application layer

Granularity in System Integration Testing:


 Intra-system testing

 Inter-system testing

 Pairwise testing

System Integration Testing Techniques:


 Top-down Integration Testing

 Bottom-up Integration Testing

 Sandwich Integration Testing


 Big-bang Integration Testing

What is Debugging?
It is the systematic process of spotting and fixing bugs, or defects, in a piece of software so that
the software behaves as expected. Debugging is harder for complex systems, in particular when
various subsystems are tightly coupled, as changes in one subsystem or interface may cause
bugs to emerge in another.

Debugging is a developer activity, and effective debugging is very important before testing
begins, to increase the quality of the system. Debugging does not by itself give confidence that
the system meets its requirements completely; testing gives that confidence.


Testing Tools:
Tools, in a software testing context, can be defined as products that support one or more test
activities, right from planning and requirements, through creating a build and test execution,
to defect logging and test analysis.

Classification of Tools
Tools can be classified based on several parameters. They include:

 The purpose of the tool

 The Activities that are supported within the tool

 The Type/level of testing it supports

 The Kind of licensing (open source, freeware, commercial)


 The technology used

Types of Tools:

1. Test Management Tools - test managing, scheduling, defect logging, tracking and analysis;
used by testers.

2. Configuration Management Tools - implementation, execution, and tracking changes; used
by all team members.

3. Static Analysis Tools - static testing; used by developers.

4. Test Data Preparation Tools - analysis and design, and test data generation; used by testers.

5. Test Execution Tools - implementation and execution; used by testers.

6. Test Comparators - comparing expected and actual results; used by all team members.

7. Coverage Measurement Tools - providing structural coverage; used by developers.

8. Performance Testing Tools - monitoring performance and response time; used by testers.

9. Project Planning and Tracking Tools - planning; used by project managers.

10. Incident Management Tools - managing the tests; used by testers.

Tools Implementation - process


 Analyse the problem carefully to identify strengths, weaknesses and opportunities.

 Note the constraints, such as budget, time and other requirements.

 Evaluate the options and shortlist the ones that meet the requirements.

 Develop a Proof of Concept which captures the pros and cons.

 Create a pilot project using the selected tool within a specified team.

 Roll out the tool phase-wise across the organization.

Many organizations around the globe develop and implement different standards to improve the
quality needs of their software. This chapter briefly describes some of the widely used standards
related to Quality Assurance and Testing.

ISO/IEC 9126
This standard deals with the following aspects to determine the quality of a software
application:

 Quality model

 External metrics

 Internal metrics

 Quality in use metrics

This standard presents some set of quality attributes for any software such as:

 Functionality

 Reliability

 Usability

 Efficiency

 Maintainability

 Portability
The above-mentioned quality attributes are further divided into sub-factors, which you can
explore when studying the standard in detail.

ISO/IEC 9241-11
Part 11 of this standard deals with the extent to which a product can be used by specified users
to achieve specified goals with Effectiveness, Efficiency and Satisfaction in a specified context
of use.

This standard proposes a framework that describes the usability components and the
relationships between them. In this standard, usability is considered in terms of user
performance and satisfaction. According to ISO 9241-11, usability depends on the context of
use, and the level of usability will change as the context changes.

ISO/IEC 25000:2005
ISO/IEC 25000:2005 is commonly known as the standard that provides the guidelines for
Software Quality Requirements and Evaluation (SQuaRE). This standard helps in organizing
and enhancing the process related to software quality requirements and their evaluations. In
reality, ISO-25000 replaces the two old ISO standards, i.e. ISO-9126 and ISO-14598.

SQuaRE is divided into sub-parts such as:

 ISO 2500n - Quality Management Division

 ISO 2501n - Quality Model Division

 ISO 2502n - Quality Measurement Division

 ISO 2503n - Quality Requirements Division

 ISO 2504n - Quality Evaluation Division

The main contents of SQuaRE are:

 Terms and definitions

 Reference Models

 General guide

 Individual division guides


 Standard related to Requirement Engineering (i.e. specification, planning, measurement and
evaluation process)

ISO/IEC 12119
This standard deals with software packages delivered to the client. It does not focus on or deal
with the client's production process. The main contents are related to the following items:

 Set of requirements for software packages.

 Instructions for testing a delivered software package against the specified requirements.

Miscellaneous
Some of the other standards related to QA and Testing processes are mentioned below:

Standard - Description

IEEE 829 - A standard for the format of documents used in different stages of software testing.

IEEE 1061 - A methodology for establishing quality requirements and for identifying,
implementing, analyzing, and validating the process and product of software quality metrics.

IEEE 1059 - A guide for Software Verification and Validation Plans.

IEEE 1008 - A standard for unit testing.

IEEE 1012 - A standard for Software Verification and Validation.

IEEE 1028 - A standard for software inspections.

IEEE 1044 - A standard for the classification of software anomalies.

IEEE 1044-1 - A guide for the classification of software anomalies.

IEEE 830 - A guide for developing system requirements specifications.

IEEE 730 - A standard for software quality assurance plans.

IEEE 1061 - A standard for software quality metrics and methodology.

IEEE 12207 - A standard for software life cycle processes and life cycle data.

BS 7925-1 - A vocabulary of terms used in software testing.

BS 7925-2 - A standard for software component testing.

Software Maintenance
Software maintenance is a widely accepted part of the SDLC nowadays. It stands for all the
modifications and updates done after the delivery of a software product. There are a number of
reasons why modifications are required; some of them are briefly mentioned below:

 Market Conditions - Policies which change over time, such as taxation, and newly
introduced constraints, such as how bookkeeping must be maintained, may trigger the need
for modification.

 Client Requirements - Over time, the customer may ask for new features or functions in the
software.

 Host Modifications - If any of the hardware and/or platform (such as the operating system)
of the target host changes, software changes are needed to maintain adaptability.

 Organization Changes - If there is any business-level change at the client end, such as a
reduction in organization strength, acquisition of another company, or the organization
venturing into new business, the need to modify the original software may arise.
Types of Maintenance
In a software lifetime, the type of maintenance may vary based on its nature. It may be just a
routine maintenance task, as when some bug is discovered by a user, or it may be a large event
in itself, based on the maintenance size or nature. Following are some types of maintenance
based on their characteristics:

 Corrective Maintenance - This includes modifications and updates done in order to correct
or fix problems which are either discovered by users or concluded from user error reports.

 Adaptive Maintenance - This includes modifications and updates applied to keep the
software product up to date and tuned to the ever-changing world of technology and
business environment.

 Perfective Maintenance - This includes modifications and updates done in order to keep
the software usable over a long period of time. It includes new features, new user
requirements for refining the software, and improving its reliability and performance.

 Preventive Maintenance - This includes modifications and updates to prevent future
problems with the software. It aims to attend to problems which are not significant at this
moment but may cause serious issues in the future.

Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software
maintenance found that the cost of maintenance can be as high as 67% of the cost of the entire
software process cycle.
On average, the cost of software maintenance is more than 50% of the cost of all SDLC phases.
There are various factors which drive maintenance costs higher, such as:

Real-world factors affecting Maintenance Cost

 The standard age of any software is considered to be up to 10 to 15 years.

 Older software, which was meant to work on slow machines with less memory and storage
capacity, cannot remain competitive against newly arriving, enhanced software running on
modern hardware.

 As technology advances, it becomes costly to maintain old software.

 Many maintenance engineers are novices and use trial-and-error methods to rectify
problems.

 Often, the changes made can easily hurt the original structure of the software, making it
hard to make any subsequent changes.

 Changes are often left undocumented, which may cause more conflicts in the future.
Software-end factors affecting Maintenance Cost

 Structure of Software Program

 Programming Language
 Dependence on external environment

 Staff reliability and availability

Maintenance Activities
IEEE provides a framework for sequential maintenance process activities. It can be used in an
iterative manner and can be extended so that customized items and processes can be included.

These activities go hand-in-hand with each of the following phases:

 Identification & Tracing - It involves activities pertaining to identification of


requirement of modification or maintenance. It is generated by user or system may itself
report via logs or error messages.Here, the maintenance type is classified also.

 Analysis - The modification is analyzed for its impact on the system, including safety and
security implications. If the probable impact is severe, an alternative solution is looked for.
A set of required modifications is then materialized into requirement specifications. The
cost of the modification/maintenance is analyzed and an estimate is concluded.

 Design - New modules, which need to be replaced or modified, are designed against the
requirement specifications set in the previous stage. Test cases are created for validation
and verification.

 Implementation - The new modules are coded with the help of the structured design
created in the design step. Every programmer is expected to do unit testing in parallel.

 System Testing - Integration testing is done among the newly created modules. Integration
testing is also carried out between the new modules and the system. Finally, the system is
tested as a whole, following regression testing procedures.

 Acceptance Testing - After testing the system internally, it is tested for acceptance with
the help of users. If at this stage users complain of some issues, they are addressed, or
noted to be addressed in the next iteration.

 Delivery - After the acceptance test, the system is deployed all over the organization,
either as a small update package or as a fresh installation of the system. The final testing
takes place at the client end after the software is delivered.

Training is provided if required, in addition to the hard copy of the user manual.

 Maintenance Management - Configuration management is an essential part of system
maintenance. It is aided by version control tools that handle versions, semi-versions and
patch management.

Software Re-engineering
When we need to update software to keep it current with the market without impacting its
functionality, it is called software re-engineering. It is a thorough process in which the design of
the software is changed and programs are re-written.

Legacy software cannot keep pace with the latest technology available in the market. As
hardware becomes obsolete, updating the software becomes a headache. Even if software grows
old with time, its functionality does not.
For example, Unix was initially developed in assembly language. When the C language came
into existence, Unix was re-engineered in C, because working in assembly language was
difficult.

Other than this, programmers sometimes notice that a few parts of the software need more
maintenance than others, and that these parts also need re-engineering.

Re-Engineering Process

 Decide what to re-engineer. Is it the whole software or a part of it?

 Perform reverse engineering, in order to obtain specifications of the existing software.

 Restructure the program if required. For example, changing function-oriented programs into
object-oriented programs.

 Re-structure data as required.

 Apply forward engineering concepts in order to get the re-engineered software.

There are a few important terms used in software re-engineering:

Reverse Engineering
It is a process to recover the system specification by thoroughly analyzing and understanding
the existing system. This process can be seen as a reverse SDLC model, i.e. we try to get a
higher abstraction level by analyzing lower abstraction levels.
An existing system is a previously implemented design about which we know nothing.
Designers do reverse engineering by looking at the code and trying to recover the design. With
the design in hand, they try to conclude the specifications, thus going in reverse from code to
system specification.

Program Restructuring
It is a process to re-structure and re-construct the existing software. It is all about re-arranging
the source code, either in the same programming language or from one programming language
into a different one. Restructuring may involve source-code restructuring, data restructuring, or
both.

Re-structuring does not impact the functionality of the software, but enhances reliability and
maintainability. Program components which cause errors very frequently can be changed, or
updated, with re-structuring.

The dependency of software on an obsolete hardware platform can be removed via re-structuring.

Forward Engineering
Forward engineering is the process of obtaining desired software from the specifications in
hand, which were brought down by means of reverse engineering. It assumes that some
software engineering was already done in the past.

Forward engineering is the same as the software engineering process, with only one difference:
it is always carried out after reverse engineering.

Component reusability
A component is a part of software program code, which executes an independent task in the
system. It can be a small module or sub-system itself.

Example
The login procedures used on the web can be considered components; the printing system in
software can be seen as a component of the software.

Components have high cohesion of functionality and a lower rate of coupling, i.e. they work
independently and can perform tasks without depending on other modules.

In OOP, objects are designed to be very specific to their concern and have fewer chances of
being used in some other software.

In modular programming, modules are coded to perform specific tasks and can be used across
a number of other software programs.

There is a whole new vertical, based on the re-use of software components, known as
Component Based Software Engineering (CBSE).

Re-use can be done at various levels:

 Application level - Where an entire application is used as a sub-system of new software.

 Component level - Where a sub-system of an application is used.

 Module level - Where functional modules are re-used.

Software components provide interfaces, which can be used to establish communication
among different components.
Reuse Process
Two kinds of method can be adopted: either keep the requirements the same and adjust the
components, or keep the components the same and modify the requirements.

 Requirement Specification - The functional and non-functional requirements which a
software product must comply with are specified, with the help of the existing system, user
input, or both.

 Design - This is also a standard SDLC process step, where requirements are defined in
software terms. The basic architecture of the system as a whole and of its sub-systems is
created.

 Specify Components - By studying the software design, the designers segregate the entire
system into smaller components or sub-systems. One complete software design turns into a
collection of a huge set of components working together.

 Search Suitable Components - The software component repository is consulted by designers
to search for matching components, on the basis of functionality and the intended software
requirements.

 Incorporate Components - All matched components are packed together to shape them into
complete software.

The quick-fix model is an ad hoc approach used for maintaining the software system. The
objective of this model is to identify the problem and then fix it as quickly as possible.
The advantage is that it performs its work quickly and at a low cost. This model is an
approach to modify the software code with little consideration for its impact on the
overall structure of the software system.

Sometimes, users cannot wait for long. Rather, they require the modified software system to be
delivered to them in the least possible time. As a result, the software maintenance team needs
to use a quick-fix model to avoid the time-consuming process of the software maintenance life
cycle (SMLC).

This model is beneficial when a single user is using the software system. As the user has proper
knowledge of the software system, it becomes easier to maintain it without needing to manage
detailed documentation. This model is also advantageous in situations where the software
system is to be maintained within certain deadlines and with limited resources. However, this
model is not suitable for fixing errors over a longer period.

The iterative enhancement model, which was originally proposed as a process model, can be
easily adapted for maintaining a software system. It considers that the changes made to the
software system are iterative in nature. The iterative enhancement model comprises three stages,
namely, analysis of software system, classification of requested modifications, and
implementation of requested modifications.

The reuse-oriented model assumes that the existing program components can be reused to
perform maintenance.

It consists of the following steps.


1. Identifying the components of the old system which can be reused
2. Understanding these components
3. Modifying the old system components so that they can be used in the new system
4. Integrating the modified components into the new system.

Software configuration management (SCM) is a software engineering discipline consisting of
standard processes and techniques, often used by organizations to manage the changes
introduced to their software products. SCM helps in identifying individual elements and
configurations, tracking changes, and version selection, control, and baselining.
SCM defines a mechanism to deal with different technical difficulties of a project plan. In a
software organization, effective implementation of software configuration management can
improve productivity through increased coordination among the programmers in a team. SCM
helps to eliminate the confusion often caused by miscommunication among team members. The
SCM system controls basic components such as software objects, program code, test data, test
output, design documents, and user manuals.

The SCM system has the following advantages:

 Reduced redundant work.


 Effective management of simultaneous updates.
 Avoids configuration-related problems.
 Facilitates team coordination.
 Helps in build management; manages the tools used in builds.
 Defect tracking: It ensures that every defect has traceability back to its source.

SCM is also known as software control management. SCM aims to control changes introduced to
large complex software systems through reliable version selection and version control.

Software documentation is written text that accompanies computer software. It either explains
how it operates or how to use it, and may mean different things to people in different roles.

Documentation is an important part of software engineering. Types of documentation include:

1. Requirements - Statements that identify attributes, capabilities, characteristics, or
qualities of a system. This is the foundation for what shall be or has been implemented.
2. Architecture/Design - Overview of software. Includes relations to an environment and
construction principles to be used in design of software components.
3. Technical - Documentation of code, algorithms, interfaces, and APIs.
4. End user - Manuals for the end-user, system administrators and support staff.
5. Marketing - How to market the product and analysis of the market demand.
