
Unit 3

Introduction to design process

•The main aim of design engineering is to generate a model that exhibits firmness, commodity and delight.
•Software design is an iterative process through which requirements are translated into the blueprint
for building the software.

Software quality guidelines

•A design should be generated using recognizable architectural styles, composed of components that exhibit good design characteristics, and implemented in an evolutionary manner so that it can be tested.

•The design of the software must be modular, i.e. the software must be logically partitioned into elements.
•In design, the representations of data, architecture, interface and components should be distinct.
•A design must use appropriate data structures and recognizable data patterns.
•Design components must exhibit independent functional characteristics.
•A design must create an interface that reduces the complexity of connections between the components.
•A design must be derived using a repeatable method.
•The design should use notations that effectively communicate its meaning.

Quality attributes

The design quality attributes, known collectively as 'FURPS', are as follows:

Functionality:
It evaluates the feature set and capabilities of the program.

Usability:
It is assessed by considering factors such as human factors, overall aesthetics, consistency and
documentation.

Reliability:
It is evaluated by measuring parameters like the frequency and severity of failure, output result
accuracy, the mean-time-to-failure (MTTF), recovery from failure and the program's predictability.

Performance:
It is measured by considering processing speed, response time, resource consumption, throughput and
efficiency.

Supportability:
•It combines extensibility (the ability to extend the program), adaptability and serviceability. These
three terms define maintainability.
•Testability, compatibility and configurability are the terms that describe how easily a system can be
installed and how easily its problems can be found.
•Supportability also consists of more attributes such as compatibility, extensibility, fault tolerance,
modularity, reusability, robustness, security, portability, scalability.

Design concepts

The set of fundamental software design concepts are as follows:

1. Abstraction
•A solution is stated in broad terms using the language of the problem environment at the highest
level of abstraction.
•The lower levels of abstraction provide a more detailed description of the solution.
•A sequence of instructions that has a specific and limited function is referred to as a procedural
abstraction.
•A collection of data that describes a data object is a data abstraction.
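The two kinds of abstraction above can be sketched in Python. A minimal illustration (the Account/deposit names are invented for this example, not taken from the text):

```python
from dataclasses import dataclass

# Data abstraction: a named collection of data describing one data object.
@dataclass
class Account:
    owner: str
    balance: float = 0.0

# Procedural abstraction: a named sequence of instructions with one
# specific, limited function.
def deposit(account: Account, amount: float) -> None:
    """Add a positive amount to the account balance."""
    if amount <= 0:
        raise ValueError("deposit amount must be positive")
    account.balance += amount

acct = Account("alice")
deposit(acct, 50.0)
print(acct.balance)  # 50.0
```

Callers work with the names `Account` and `deposit` rather than the details behind them, which is exactly what the two abstraction levels describe.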
2. Architecture
•The complete structure of the software is known as software architecture.
•Structure provides conceptual integrity for a system in a number of ways.
•The architecture is the structure of program modules and the ways in which they interact with each
other.
•It also encompasses the structure of the data used by the components.
•The aim of the software design is to obtain an architectural framework of a system.
•The more detailed design activities are conducted from the framework.
3. Patterns
A design pattern describes a design structure that solves a particular design problem in a specified
context.

4. Modularity
•The software is divided into separately named and addressable components, sometimes called
modules, which are integrated to satisfy the problem requirements.
•Modularity is the single attribute of software that permits a program to be managed easily.
5.Information hiding
Modules must be specified and designed so that information (such as the algorithms and data) contained
in a module is inaccessible to other modules that do not require that information.
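A minimal Python sketch of information hiding: the module exposes only `push`, `pop` and `is_empty`, while its internal list representation stays hidden from client modules (the class and method names are illustrative):

```python
class Stack:
    """A module whose internal data representation is hidden."""

    def __init__(self):
        self._items = []   # hidden detail: clients never touch this list

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2
```

Because clients only use the public operations, the hidden list could later be replaced (say, by a linked structure) without any client module changing.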

6. Functional independence
•Functional independence is the concept of separation; it is related to the concepts of modularity,
abstraction and information hiding.
•Functional independence is assessed using two criteria, i.e. cohesion and coupling.
Cohesion
•Cohesion is an extension of the information hiding concept.
•A cohesive module performs a single task and it requires a small interaction with the other
components in other parts of the program.
Coupling
Coupling is an indication of interconnection between modules in a structure of software.

7. Refinement
•Refinement is a top-down design approach.
•It is a process of elaboration.
•A program is developed by successively refining levels of procedural detail.
•A hierarchy is established by decomposing a statement of function in a stepwise manner till the
programming language statements are reached.
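Stepwise refinement can be sketched as follows: a single high-level statement ("report the average") is decomposed into smaller steps until plain language statements remain. The function names below are invented for illustration:

```python
def read_scores(raw):
    """Refinement of "obtain the data": parse comma-separated scores."""
    return [int(x) for x in raw.split(",")]

def average(scores):
    """Refinement of "compute the result"."""
    return sum(scores) / len(scores)

def report_average(raw):
    """The original high-level statement, expressed via its refinements."""
    return average(read_scores(raw))

print(report_average("70,80,90"))  # 80.0
```

Each function is one level of the hierarchy; the decomposition stops once every step maps directly onto language statements.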
8. Refactoring
•It is a reorganization technique that simplifies the design of a component without changing its
function or behaviour.
•Refactoring is the process of changing a software system in such a way that it does not alter the
external behaviour of the code yet improves its internal structure.
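A small refactoring sketch: both functions below return the same results (the external behaviour is unchanged) while the second has a simpler internal structure. The example is illustrative, not from the text:

```python
def total_before(prices):
    """Original version: manual index-driven loop."""
    total = 0
    i = 0
    while i < len(prices):
        total = total + prices[i]
        i = i + 1
    return total

def total_after(prices):
    """Refactored version: same external behaviour, simpler internals."""
    return sum(prices)

# The defining property of a refactoring: outputs are identical.
assert total_before([1, 2, 3]) == total_after([1, 2, 3]) == 6
```

In practice, a test suite like this assertion is what guarantees the "external behaviour unchanged" condition during refactoring.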
9. Design classes
•The model of software is defined as a set of design classes.
•Every class describes the elements of the problem domain, focusing on features of the problem that
are user visible.
Architectural Design:



Introduction: Architectural design represents the overall structure of the software. IEEE
defines architectural design as “the process of defining a collection of hardware and software
components and their interfaces to establish the framework for the development of a computer
system.” The software that is built for computer-based systems can exhibit one of these many
architectural styles.
Each style will describe a system category that consists of:
• A set of components(eg: a database, computational modules) that will perform a function
required by the system.
• The set of connectors will help in coordination, communication, and cooperation between the
components.
• Conditions that define how components can be integrated to form the system.
• Semantic models that help the designer to understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the system.
Taxonomy of Architectural styles:

Data centered architectures:

•A data store will reside at the center of this architecture and is accessed frequently by the other
components that update, add, delete or modify the data present within the store.
•The figure illustrates a typical data-centered style. Client software accesses a central
repository. A variation of this approach transforms the repository into a blackboard that sends
notifications to client software when data related to, or of interest to, a client changes.
•This data-centered architecture promotes integrability. This means that existing
components can be changed and new client components can be added to the architecture
without concern for other clients.
•Data can be passed among clients using blackboard mechanism.
Data flow architectures:
•This kind of architecture is used when input data to be transformed into output data through a
series of computational manipulative components.
•The figure represents a pipe-and-filter architecture: a set of components called filters
connected by pipes.
•Pipes are used to transmit data from one component to the next.
•Each filter works independently and is designed to take data input of a certain form and
produce data output of a specified form for the next filter. The filters don't require any
knowledge of the workings of neighboring filters.
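The pipe-and-filter idea can be sketched in a few lines of Python: each filter is an independent function that accepts input of one form and produces output for the next, and the pipeline loop plays the role of the pipes. The filter names are invented for this sketch:

```python
def read_lines(text):
    """Filter 1: split the raw stream into lines."""
    return text.splitlines()

def strip_blanks(lines):
    """Filter 2: drop empty lines."""
    return [ln for ln in lines if ln.strip()]

def to_upper(lines):
    """Filter 3: transform the remaining data."""
    return [ln.upper() for ln in lines]

def pipeline(data, filters):
    """The "pipes": pass each filter's output to the next filter."""
    for f in filters:
        data = f(data)
    return data

result = pipeline("ab\n\ncd", [read_lines, strip_blanks, to_upper])
print(result)  # ['AB', 'CD']
```

Note that no filter knows anything about its neighbors; filters can be reordered or swapped as long as the data forms still match.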
Call and Return architectures: It is used to create a program that is easy to scale and modify.
Many sub-styles exist within this category. Two of them are explained below.
•Remote procedure call architecture: the components of a main-program/subprogram
architecture are distributed across multiple computers on a network.
•Main program or subprogram architectures: the main program decomposes into a number of
subprograms or functions arranged in a control hierarchy. The main program contains a number
of subprograms, which can in turn invoke other components.
Object Oriented architecture: The components of a system encapsulate data and the operations
that must be applied to manipulate the data. The coordination and communication between the
components are established via message passing.

Layered architecture:
•A number of different layers are defined, with each layer performing a well-defined set of
operations. Moving inward, each layer performs operations that are progressively closer to
the machine instruction set.
•At the outer layer, components handle user interface operations; at the inner layers,
components perform operating system interfacing (communication and coordination with
the OS).
•Intermediate layers provide utility services and application software functions.
Low level design: Low-level design (LLD) is a component-level design process that follows a step-
by-step refinement process. This process can be used for designing data structures, the required
software architecture, source code and, ultimately, algorithms. Overall, the data organization may
be defined during requirement analysis and then refined during data design work. Each
component is then specified in detail. The LLD phase is the stage where the actual software components
are designed. During the detailed phase the logical and functional design is done, while the design of the
application structure is developed during the high-level design phase.
Design Phase
A design describes the organization of a system and the connections between its individual components.
Often, it can interact with other systems. Design is important for achieving high reliability, low cost, and good maintainability. We
can distinguish two types of program design phases:
•Architectural or high-level design
•Detailed or low-level design
Purpose:
The goal of LLD or a low-level design document (LLDD) is to give the internal logical design of the
actual program code. Low-level design is created based on the high-level design. LLD describes the
class diagrams with the methods and relations between classes and program specs. It describes the
modules so that the programmer can directly code the program from the document.
A good low-level design document makes the program easy to develop when proper analysis is
utilized to create a low-level design document. The code can then be developed directly from the
low-level design document with minimal debugging and testing. Other advantages include lower cost
and easier maintenance.
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent
modules, which are expected to be capable of carrying out task(s) independently. These modules may
work as basic constructs for the entire software. Designers tend to design modules such that they can
be executed and/or compiled separately and independently.
Modular design naturally follows the 'divide and conquer' problem-solving strategy, and many
other benefits come attached with the modular design of software.
Advantage of modularization:
•Smaller components are easier to maintain
•Program can be divided based on functional aspects
•Desired level of abstraction can be brought in the program
•Components with high cohesion can be re-used
•Concurrent execution can be made possible
•Desired from a security aspect
Design structure charts:
A Structure Chart (SC) in software engineering and organizational theory is a chart which shows
the breakdown of a system to its lowest manageable levels. They are used in structured
programming to arrange program modules into a tree. Each module is represented by a box, which
contains the module's name. The tree structure visualizes the relationships between modules. A
structure chart is a top-down modular design tool, constructed of squares representing the different
modules in the system, and lines that connect them. The lines represent the connection and or
ownership between activities and subactivities as they are used in organization charts.
A structure chart depicts:
•the size and complexity of the system,
•the number of readily identifiable functions and modules within each function, and
•whether each identifiable function is a manageable entity or should be broken down into smaller
components.
A structure chart is also used to diagram associated elements that comprise a run stream or thread. It
is often developed as a hierarchical diagram, but other representations are allowable. The
representation must describe the breakdown of the configuration system into subsystems and the
lowest manageable level. An accurate and complete structure chart is the key to the determination of
the configuration items (CI), and a visual representation of the configuration system and the internal
interfaces among its CIs. During the configuration control process, the structure
chart is used to identify CIs and their associated artifacts that a proposed change may impact.
A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be
defined as a diagrammatic representation of an algorithm, a step-by-step approach to solving a task.
The flowchart shows the steps as boxes of various kinds, and their order by connecting the boxes
with arrows. This diagrammatic representation illustrates a solution model to a given problem.
Flowcharts are used in analyzing, designing, documenting or managing a process or program in
various fields.
Types
•Document flowcharts, showing controls over a document-flow through a system
•Data flowcharts, showing controls over a data-flow in a system
•System flowcharts, showing controls at a physical or resource level
•Program flowchart, showing the controls in a program within a system.

Coupling and Cohesion Measures:


•Coupling is the measure of the degree of interdependence between the modules. A good software
will have low coupling.
•Types of Coupling:
•Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled. In data
coupling, the components are independent of each other and communicate through data.
Module communications don't contain tramp data. Example: customer billing system.
•Stamp Coupling: In stamp coupling, the complete data structure is passed from one module to
another module. Therefore, it involves tramp data. It may be necessary due to efficiency
factors, a choice made by the insightful designer, not a lazy programmer.
•Control Coupling: If the modules communicate by passing control information, then they are
said to be control coupled. It can be bad if parameters indicate completely different behavior
and good if parameters allow factoring and reuse of functionality. Example- sort function that
takes comparison function as an argument.
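The sort example above can be sketched in Python: passing a comparison/key function is control coupling used well, because it allows one sort routine to be reused for many orderings (the record values are illustrative):

```python
def sort_records(records, key_func):
    """Reusable sort: the caller passes control information (key_func)
    that decides the ordering - deliberate control coupling."""
    return sorted(records, key=key_func)

people = [("bob", 35), ("ann", 28)]

# The same function sorts by age or by name, depending on the key passed in.
by_age = sort_records(people, key_func=lambda p: p[1])
by_name = sort_records(people, key_func=lambda p: p[0])
print(by_age)  # [('ann', 28), ('bob', 35)]
```

Here the control parameter factors out behaviour for reuse, which is the "good" case the text describes; control coupling becomes harmful when a flag makes the callee do completely different things.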
•External Coupling: In external coupling, the modules depend on other modules, external to the
software being developed or to a particular type of hardware. Ex- protocol, external file, device
format, etc.
• Common Coupling: The modules share data such as global data structures. Any change
to the global data means tracing back to all modules that access that data to evaluate the effect of the
change. This brings disadvantages such as difficulty in reusing modules, reduced ability to control data
accesses and reduced maintainability.
•Content Coupling: In a content coupling, one module can modify the data of another module
or control flow is passed from one module to the other module. This is the worst form of
coupling and should be avoided.
•Cohesion: Cohesion is a measure of the degree to which the elements of the module are functionally
related. It is the degree to which all elements directed towards performing a single task are contained
in the component. Basically, cohesion is the internal glue that keeps the module together. A good
software design will have high cohesion.
•Types of Cohesion:
•Functional Cohesion: Every element essential for a single computation is contained in the
component, and the component performs exactly that task. It is the ideal situation.
•Sequential Cohesion: An element outputs some data that becomes the input for another element,
i.e., data flows between the parts. It occurs naturally in functional programming languages.
•Communicational Cohesion: Two elements operate on the same input data or contribute
towards the same output data. Example: update a record in the database and send it to the printer.
•Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions
are still weakly connected and unlikely to be reusable. Ex- calculate student GPA, print student
record, calculate cumulative GPA, print cumulative GPA.
•Temporal Cohesion: The elements are related by timing. In a module with temporal cohesion,
all the tasks must be executed in the same time-span. Such a module typically contains the code
for initializing all the parts of the system: lots of different activities occur, all at init time.
•Logical Cohesion: The elements are logically related and not functionally. Ex- A component reads
inputs from tape, disk, and network. All the code for these functions is in the same component.
Operations are related, but the functions are significantly different.
•Coincidental Cohesion: The elements are unrelated; they have no conceptual relationship other
than their location in the source code. It is accidental and the worst form of cohesion. Ex- print
the next line and reverse the characters of a string in a single component.
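To make the two extremes concrete, here is a small sketch contrasting functional cohesion (one task) with coincidental cohesion (unrelated tasks lumped together); the function names are invented for illustration:

```python
def compute_gpa(grades):
    """Functional cohesion: every element serves one computation."""
    return sum(grades) / len(grades)

def misc_utils(line, s):
    """Coincidental cohesion (avoid): two unrelated tasks share one
    component only because they ended up in the same place."""
    print(line)        # ...print the next line...
    return s[::-1]     # ...and reverse the characters of a string

assert compute_gpa([3.0, 4.0]) == 3.5
```

A maintainer can describe `compute_gpa` in one sentence; `misc_utils` needs two unrelated sentences, which is the usual symptom of low cohesion.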
Top down:
Each system is divided into several subsystems and components. Each of the subsystems is further
divided into a set of subsystems and components. This process of division facilitates the forming of a
system hierarchy structure. The complete software system is considered as a single entity and in
relation to the characteristics, the system is split into sub-system and component. The same is done
with each of the sub-system.
This process is continued until the lowest level of the system is reached. The design is started initially
by defining the system as a whole and then keeps on adding definitions of the subsystems and
components. When all the definitions are combined together, it turns out to be a complete system.

When a software solution needs to be developed from the ground up, top-down design best suits the
purpose.

Advantages:
•The main advantage of the top-down approach is that its strong focus on requirements helps to
make the design responsive to those requirements.
Disadvantages:
•Project and system boundaries tend to be application-specification oriented. Thus it is more
likely that the advantages of component reuse will be missed.
•The system is likely to miss the benefits of a well-structured, simple architecture.
Bottom up:
The design starts with the lowest level components and subsystems. By using these components, the
next immediate higher level components and subsystems are created or composed. The process is
continued till all the components and subsystems are composed into a single component, which is
considered as the complete system. The amount of abstraction grows high as the design moves to
more high levels.
When a new system needs to be created by using the basic information of an existing system, the
bottom-up strategy suits the purpose.

Advantages:
•Economies can result when general solutions can be reused.
•It can be used to hide the low-level details of implementation and be merged with top-down
technique.
Disadvantages:
•It is not so closely related to the structure of the problem.
•High quality bottom-up solutions are very hard to construct.
•It leads to the proliferation of ‘potentially useful’ functions rather than the most appropriate ones.
Function oriented design:
Function Oriented Design Strategies are as follows:
1.Data Flow Diagram (DFD):
A data flow diagram (DFD) maps out the flow of information for any process or system. It uses
defined symbols like rectangles, circles and arrows, plus short text labels, to show data inputs,
outputs, storage points and the routes between each destination.
2.Data Dictionaries:
Data dictionaries are simply repositories that store information about all data items defined in
DFDs. At the requirement stage, data dictionaries contain data items. A data dictionary entry
includes the name of the item, aliases (other names for the item), description/purpose, related
data items, range of values, and data structure definition/form.
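One data-dictionary entry, using the fields just listed, might be represented as a simple Python dict; the item name and values here are hypothetical:

```python
# A sketch of a single data-dictionary entry with the standard fields.
entry = {
    "name": "order_total",
    "aliases": ["invoice_total"],                  # other names for the item
    "description": "Total payable amount for one order",
    "related_items": ["order_id", "line_item"],
    "range": (0.0, 1_000_000.0),                   # range of values
    "structure": "decimal(10,2)",                  # data structure definition/form
}

# A tool could use the range field to validate values automatically.
low, high = entry["range"]
assert low <= 500.0 <= high
```

Storing entries in this machine-readable form is what lets CASE tools cross-check DFDs against the dictionary.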
3.Structure Charts:
It is the hierarchical representation of system which partitions the system into black boxes
(functionality is known to users but inner details are unknown). Components are read from top
to bottom and left to right. When a module calls another, it views the called module as black
box, passing required parameters and receiving results.
Unit 4:
Top Down and Bottom up programming:
Top-down and bottom-up are both strategies of information processing and knowledge ordering,
used in a variety of fields including software, humanistic and scientific theories, and management
and organization. In practice, they can be seen as a style of thinking, teaching, or leadership.
A top-down approach (also known as stepwise design and in some cases used as a synonym
of decomposition) is essentially the breaking down of a system to gain insight into its compositional
sub-systems in a reverse engineering fashion. In a top-down approach an overview of the system is
formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined
in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is
reduced to base elements. A top-down model is often specified with the assistance of "black boxes",
which makes it easier to manipulate.
A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus
making the original systems sub-systems of the emergent system. Bottom-up processing is a type
of information processing based on incoming data from the environment to form a perception. From
a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the
"bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a
perception (output that is "built up" from processing to final cognition). In a bottom-up approach the
individual base elements of the system are first specified in great detail. These elements are then
linked together to form larger subsystems, which then in turn are linked, sometimes in many levels,
until a complete top-level system is formed.
Structured programming:
Structured programming is a programming paradigm aimed at improving the clarity, quality, and
development time of a computer program by making extensive use of the structured control flow
constructs of selection (if/then/else) and repetition (while and for), block structures, and subroutines.
It emerged in the late 1950s with the appearance of the ALGOL 58 and ALGOL 60 programming
languages, with the latter including support for block structures. Contributing factors to its popularity
and widespread acceptance, at first in academia and later among practitioners, include the discovery
of what is now known as the structured program theorem in 1966, and the publication of the
influential "Go To Statement Considered Harmful" open letter in 1968 by Dutch computer
scientist Edsger W. Dijkstra, who coined the term "structured programming".
Structured programming is most frequently used with deviations that allow for clearer programs in
some particular cases, such as when exception handling has to be performed.
Elements
Control structures
Following the structured program theorem, all programs are seen as composed of control structures:
•"Sequence"; ordered statements or subroutines executed in sequence.
•"Selection"; one or a number of statements is executed depending on the state of the program. This
is usually expressed with keywords such as if..then..else..endif.
•"Iteration"; a statement or block is executed until the program reaches a certain state, or operations
have been applied to every element of a collection. This is usually expressed with keywords such
as while, repeat, for or do..until.
•"Recursion"; a statement is executed by repeatedly calling itself until termination conditions are met.
While similar in practice to iterative loops, recursive loops may be more computationally efficient,
and are implemented differently as a cascading stack.
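The four control structures above can be sketched in Python (the functions are invented for illustration):

```python
def count_positives(values):
    total = 0                    # sequence: ordered statements executed in order
    for v in values:             # iteration: applied to every element of a collection
        if v > 0:                # selection: executed depending on program state
            total += 1
    return total

def factorial(n):
    # recursion: the function calls itself until the termination condition n <= 1
    return 1 if n <= 1 else n * factorial(n - 1)

print(count_positives([-1, 2, 3]))  # 2
print(factorial(5))                 # 120
```

Per the structured program theorem, these constructs are sufficient to express any computable function without goto.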
Structured programming languages: It is possible to do structured programming in any
programming language, though it is preferable to use something like a procedural programming
language. Some of the languages initially used for structured programming
include: ALGOL, Pascal, PL/I and Ada, but most new procedural programming languages since that
time have included features to encourage structured programming, and sometimes deliberately left
out features – notably GOTO – in an effort to make unstructured programming more
difficult. Structured programming (sometimes known as modular programming) enforces a logical
structure on the program being written to make it more efficient and easier to understand and modify.
What is code Inspection?
Code Inspection is the most formal type of review, which is a kind of static testing to avoid the defect
multiplication at a later stage.
The main purpose of code inspection is to find defects, and it can also identify opportunities for process improvement.
An inspection report lists the findings, which include metrics that can be used to aid improvements to
the process as well as correcting defects in the document under review.
Preparation before the meeting is essential, which includes reading of any source documents to
ensure consistency.
Inspections are often led by a trained moderator, who is not the author of the code.
The inspection process is the most formal type of review based on rules and checklists and makes use
of entry and exit criteria.
It usually involves peer examination of the code and each one has a defined set of roles.
After the meeting, a formal follow-up process is used to ensure that corrective action is completed in
a timely manner.
Although direct discovery of quality problems is often the main goal, code reviews are usually
performed to reach a combination of goals :
•Better code quality – improve internal code quality and maintainability (readability, uniformity,
understandability, ...)
•Finding defects – improve quality regarding external aspects, especially correctness, but also find
performance problems, security vulnerabilities, injected malware, ...
•Learning/Knowledge transfer – help in transferring knowledge about the codebase, solution
approaches, expectations regarding quality, etc; both to the reviewers as well as to the author
•Increase sense of mutual responsibility – increase a sense of collective code ownership and
solidarity
•Finding better solutions – generate ideas for new and better solutions and ideas that transcend the
specific code at hand.
•Complying with QA guidelines – Code reviews are mandatory in some contexts, e.g., air traffic
software
Compliance with Design and Coding Standards. Every development team should use a coding
standard. Even the most experienced developer can introduce a coding defect without realizing it,
and that one defect could lead to a minor glitch or, worse, a serious security breach. There are four
main drivers for using one:

1. Compliance with industry standards (e.g., ISO).
2. Consistent code quality no matter who writes the code.
3. Software security from the start.
4. Reduced development costs and accelerated time to market.

In embedded systems industries, coding standards are required (or highly recommended) for
compliance. This is especially true for functional safety standards, including:
•IEC 61508: “Functional safety of electrical/electronic/programmable electronic safety-related
systems”
•ISO 26262: “Road vehicles — functional safety”
•EN 50128: “Railway applications — Communication, signalling and processing systems —
Software for railway control and protection systems”

Testing objectives:
•Software testing is a crucial element in the software development life cycle (SDLC); it can help
software engineers save organizations time and money by finding errors and defects during the early
stages of software development. With the assistance of this process one can examine the various
components associated with the application and guarantee their appropriateness.
The goals and objectives of software testing are numerous; when achieved, they help developers
build defect-free, error-free software and applications with exceptional performance, quality,
effectiveness and security, among other things. Though the objectives of testing can vary from
company to company and project to project, some goals are common to all. These objectives are:
1.Verification: A prominent objective of testing is verification, which allows testers to
confirm that the software meets the various business and technical requirements stated by the
client before the inception of the whole project. These requirements and specifications guide
the design and development of the software, hence are required to be followed rigorously.
Moreover, compliance with these requirements and specifications is important for the success
of the project as well as to satisfy the client.

2.Validation: Confirms that the software performs as expected and as per the requirements of
the clients. Validation involves comparing the final output with the expected output and then
making the necessary changes if there is a difference between the two.

3.Defects: The most important purpose of testing is to find different defects in the software
to prevent its failure or crash during implementation or go live of the project. Defects if left
undetected or unattended can harm the functioning of the software and can lead to loss of
resources, money, and reputation of the client. Therefore, software testing is executed
regularly during each stage of software development to find defects of various kinds. The
ultimate source of these defects can be traced back to a fault introduced during the
specification, design, development, or programming phases of the software.

4.Providing Information: With the assistance of reports generated during the process of
software testing, testers can accumulate a variety of information related to the software and
the steps taken to prevent its failure. These, then can be shared with all the stakeholders of the
project for better understanding of the project as well as to establish transparency between
members.

5.Preventing Defects: During the process of testing, the aim of testers is to identify defects and
prevent them from occurring again in the future. To accomplish this goal, software is
tested rigorously by independent testers, who are not responsible for software development.

6.Quality Analysis: Testing helps improve the quality of the software by constantly
measuring and verifying its design and coding. Additionally, various types of testing
techniques are used by testers, which help them achieve the desired software quality.

7.Compatibility: It helps validate the application’s compatibility with the implementation
environment, various devices, operating systems, user requirements, among other things.

8.For Optimum User Experience: Easy software and application accessibility and optimum
user experience are two important requirements that must be met for the success
of any project as well as to increase the revenue of the client. Therefore, to ensure this,
the software is tested again and again by the testers with the assistance of stress testing, load
testing, spike testing, etc.

9.Verifying Performance & Functionality: It ensures that the software has superior
performance and functionality. This is mainly verified by placing the software under
extreme stress to identify and measure all its plausible failure modes. To ensure this,
performance testing, usability testing, functionality testing, etc. are executed by the testers.
Advantages of Testing: The benefits and advantages offered by software testing are
innumerable. It helps software developers and programmers verify and validate
various components of the software as well as detect bugs and defects. Moreover, it
helps them deliver a software product that has remarkable quality and innovative features.
Other advantages of software testing are:

•Can be performed manually as well as with the assistance of testing tools.

•Helps identify and prevent bugs.

•Enables the team to build bug-free software and applications with exceptional quality and
features.

•Saves time and money by testing the software during the early stages of development.

•Identifies areas where developers and programmers require training.

•Offers better user experience and customer service by building effective and efficient
software.

•Helps reduce and avoid development downtime.

•Monitors and assesses various activities during the process of software development.

•Helps identify various human errors, which if executed can lead to system failure.

Unit testing:

Unit testing focuses on the smallest unit of software design. In this we test an individual unit or a
group of interrelated units. It is often done by the programmer by using sample inputs and observing
the corresponding outputs.
Example:

a) In a program, checking whether a loop, method or function is working fine

b) Misunderstood or incorrect arithmetic precedence

c) Incorrect initialization
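The programmer-driven checks above can be sketched with plain assert-based tests in Python; the average() function and its test values are hypothetical examples, not from any real project.

```python
def average(values):
    """Return the arithmetic mean of a non-empty list."""
    total = 0  # correct initialization guards against defect (c)
    for v in values:
        total += v
    # explicit division at the end avoids precedence mistakes (b)
    return total / len(values)

# Unit tests: feed sample inputs and observe the corresponding
# outputs, exactly as described above.
def test_typical_input():
    assert average([2, 4, 6]) == 4

def test_single_value():
    assert average([5]) == 5
```

Running these tests (e.g. with pytest) exercises the unit in isolation, before any integration takes place.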

Integration Testing

The objective is to take unit-tested components and build a program structure that has been dictated
by design. Integration testing is testing in which a group of components is combined to produce
output.
Integration testing is of four types: (i) Top-down (ii) Bottom-up (iii) Sandwich (iv) Big-Bang
Example

(a) Black Box testing: It is used for validation. In this we ignore the internal working mechanism and
focus on what the output is.

(b) White Box testing: It is used for verification. In this we focus on the internal mechanism, i.e.
how the output is achieved.

Acceptance Testing
An Acceptance Test is performed by the client and verifies whether the end-to-end flow of the
system is as per the business requirements and the needs of the end-user. The client
accepts the software only when all the features and functionalities work as expected.
It is the last phase of testing, after which the software goes into production. This is also called
User Acceptance Testing (UAT).
Regression testing:
Testing an application as a whole after the modification of any module or functionality is termed
Regression Testing. It is difficult to cover the whole system in regression testing, so
typically Automation Testing Tools are used for these types of tests.
Every time a new module is added, the program changes. This type of testing makes sure that the
whole component works properly even after adding components to the complete program.
Example
In a school record system, suppose we have the modules staff, students and finance. After adding
the finance module, re-checking that the existing staff and students modules still work correctly is
regression testing.
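A minimal sketch of that idea in Python; the staff/students/finance names follow the school-record example above and are hypothetical.

```python
# Existing, previously tested functionality.
def staff_count(records):
    return len(records["staff"])

def student_count(records):
    return len(records["students"])

# Newly added module: a change that could break existing code.
def finance_total(records):
    return sum(records["finance"])

# Regression suite: after adding the finance module, the old
# tests are re-run to confirm that nothing which already worked
# has been broken by the change.
def run_regression_suite():
    records = {"staff": ["a", "b"], "students": ["x"], "finance": [100, 50]}
    assert staff_count(records) == 2      # old test, re-run
    assert student_count(records) == 1    # old test, re-run
    assert finance_total(records) == 150  # new test for the new module
    return "all regression checks passed"
```

In practice this re-running is automated, which is why regression testing usually relies on automation tools.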

Top down testing:

In Top-Down Integration Testing, testing takes place from top to bottom. High-level modules are
tested first, then low-level modules, and finally the low-level modules are integrated with the high
level to ensure the system is working as intended. In this type of testing, Stubs are used as temporary
modules when a module is not ready for integration testing.

Bottom up: It is the reciprocal of the Top-Down approach. In Bottom-Up Integration Testing, testing
takes place from bottom to top. The lowest-level modules are tested first, then the high-level modules,
and finally the high-level modules are integrated with the low level to ensure the system is working as
intended. Drivers are used as temporary modules for integration testing.

Structural testing: Structural testing is the type of testing carried out to test the structure of the code.
It is also known as White Box testing or Glass Box testing. This type of testing requires knowledge
of the code, so it is mostly done by the developers. It is more concerned with how the system does
something than with the functionality of the system. It provides more coverage to the testing. For
example, to test a certain error message in an application, we need to test its trigger condition, but
there may be many triggers for it. It is possible to miss one while testing the requirements drafted in
the SRS. But using this testing, the trigger is most likely to be covered, since structural testing aims
to cover all the nodes and paths in the structure of the code.
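As a sketch, structural (white-box) testing of a small function means choosing inputs that drive execution down every branch; the classify() function below is a hypothetical example.

```python
def classify(score):
    # Three branches; structural testing requires test data that
    # executes each one, including the error-message trigger.
    if score < 0:
        return "invalid"   # trigger condition for the error message
    elif score < 50:
        return "fail"
    else:
        return "pass"

# One test per branch gives full branch coverage of the code's
# structure, so no trigger condition is missed.
assert classify(-1) == "invalid"
assert classify(30) == "fail"
assert classify(80) == "pass"
```

A coverage tool (e.g. coverage.py in branch mode) can confirm that these three inputs exercise all nodes and paths of the function.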

FUNCTIONAL TESTING is a type of software testing whereby the system is tested against the
functional requirements/specifications. Functions (or features) are tested by feeding them input and
examining the output. Functional testing ensures that the requirements are properly satisfied by the
application. This type of testing is not concerned with how processing occurs, but rather with the
results of processing. It simulates actual system usage but does not make any assumptions about
system structure. During functional testing, the Black Box Testing technique is used, in which the
internal logic of the system being tested is not known to the tester. Functional testing is normally
performed during the levels of System Testing and Acceptance Testing. Typically, functional testing
involves the following steps:

•Identify functions that the software is expected to perform.
•Create input data based on the function’s specifications.
•Determine the output based on the function’s specifications.
•Execute the test case.
•Compare the actual and expected outputs.
Functional testing is more effective when the test conditions are created directly from user/business
requirements. When test conditions are created from the system documentation (system requirements/
design documents), defects in that documentation will not be detected through testing, and this
may be the cause of end-users’ wrath when they finally use the software.
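The five steps above can be sketched as a tiny black-box harness; the apply_discount function and its specification values are hypothetical.

```python
# Step 1: identify the function the software is expected to perform.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def functional_test():
    # Step 2: create input data based on the specification.
    test_input = (200.0, 10)
    # Step 3: determine the output from the specification, not the code.
    expected = 180.0
    # Step 4: execute the test case.
    actual = apply_discount(*test_input)
    # Step 5: compare the actual and expected outputs.
    return "PASS" if actual == expected else f"FAIL: {actual} != {expected}"
```

Note that the expected value comes from the requirement, never from reading the implementation: that is what makes the test black-box.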
Test drivers: What is a Test Driver? Test Drivers are used during Bottom-up integration testing in
order to simulate the behaviour of the upper-level modules that are not yet integrated. Test Drivers are
modules that act as a temporary replacement for a calling module and give the same output as that
of the actual product. Drivers are also used when the software needs to interact with an external
system, and they are usually more complex than stubs.

Test Stub: Test stubs are mainly used in the top-down approach of incremental testing. Stubs are
computer programs that act as a temporary replacement for a called module and give the same output
as the actual product or software.
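A minimal sketch of both ideas in Python; all module and account names here are hypothetical.

```python
# Real low-level module, already unit tested (bottom-up case).
def fetch_balance(account_id):
    balances = {"A1": 120, "A2": 0}
    return balances[account_id]

# DRIVER: temporary stand-in for the not-yet-integrated upper-level
# caller; it exercises fetch_balance the way the real caller would.
def balance_driver():
    return [fetch_balance(a) for a in ("A1", "A2")]

# STUB: temporary stand-in for a called lower-level module; it
# returns a canned answer with the same shape as the real one.
def charge_stub(amount):
    return {"status": "ok", "charged": amount}

# Real high-level module under test (top-down case); the payment
# module it calls is not ready yet, so the stub is passed in.
def checkout(amount, charge=charge_stub):
    return charge(amount)
```

The driver calls downward into tested code; the stub answers upward calls from tested code. Both are discarded once the real modules are integrated.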

What is Alpha Testing? Alpha testing is a type of acceptance testing performed to identify all
possible issues/bugs before releasing the product to everyday users or the public. The focus of this
testing is to simulate real users by using black-box and white-box techniques. The aim is to carry
out the tasks that a typical user might perform. Alpha testing is carried out in a lab environment, and
usually the testers are internal employees of the organization. To put it as simply as possible, this
kind of testing is called alpha only because it is done early on, near the end of the development of the
software, and before beta testing.

What is Beta Testing? Beta Testing of a product is performed by "real users" of the software
application in a "real environment" and can be considered a form of external user acceptance
testing. A beta version of the software is released to a limited number of end-users of the product to
obtain feedback on the product quality. Beta testing reduces product failure risks and increases the
quality of the product through customer validation. It is the final test before shipping a product to the
customers. Direct feedback from customers is a major advantage of Beta Testing. This testing helps
test the product in the customer's environment.

Alpha Testing vs Beta Testing

•Alpha testing is performed by testers who are usually internal employees of the organization;
beta testing is performed by clients or end users who are not employees of the organization.
•Alpha testing is performed at the developer's site; beta testing is performed at the client's
location or by the end user of the product.
•Reliability and security testing are not performed in depth during alpha testing; reliability,
security and robustness are checked during beta testing.
•Alpha testing involves both white-box and black-box techniques; beta testing typically uses
black-box testing.
•Alpha testing requires a lab or testing environment; beta testing does not require any lab or
testing environment, since the software is made available to the public in a real-time
environment.
•A long execution cycle may be required for alpha testing; only a few weeks of execution are
required for beta testing.
•Critical issues or fixes can be addressed by developers immediately during alpha testing; most
issues or feedback collected from beta testing will be implemented in future versions of the
product.
•Alpha testing ensures the quality of the product before moving to beta testing; beta testing also
concentrates on the quality of the product, but gathers user input on the product and ensures
that the product is ready for real-time users.
Types of Beta Testing: There are different types of Beta tests in software testing, and they are as
follows:

Traditional Beta testing: The product is distributed to the target market, and related data is gathered
in all aspects. This data can be used for product improvement.

Public Beta Testing: The product is publicly released to the outside world via online channels, and
data can be gathered from anyone. Based on feedback, product improvements can be made. For
example, Microsoft conducted the largest of all beta tests for its OS Windows 8 before officially
releasing it.

Technical Beta Testing: The product is released to an internal group of the organization, and
feedback/data is gathered from the employees of the organization.

Focused Beta: The product is released to the market to gather feedback on specific features of the
program, for example, important functionality of the software.
Phases of Testing

Alpha and Beta tests are typically carried out for "off-the-shelf" software by product-oriented
companies. The phases of testing for a product company typically vary from those of a service-oriented
organization. Following are the testing phases adopted by product firms:

Pre-Alpha: The software is a prototype. The UI is complete, but not all features are completed. At
this stage, the software is not published.

Alpha: The software is near the end of its development and is internally tested for bugs/issues.

Beta: The software is stable and is released to a limited user base. The goal is to get customer
feedback on the product and make changes to the software accordingly.

Release Candidate (RC): Based on the feedback of the beta test, you make changes to the software
and want to test out the bug fixes. At this stage, you do not want to make radical changes in
functionality but just check for bugs. The RC is also put out to the public.

Release: All works; the software is released to the public.

Note: The above is a standard definition of the testing stages, but in order to garner marketing buzz,
companies combine stages like "pre-alpha beta", "pre-beta" etc.

Entry Criteria for Alpha testing:

•Software requirements document or Business requirements specification
•Test Cases for all the requirements
•Testing Team with good knowledge about the software application
•Test Lab environment setup
•QA Build ready for execution
•Test Management tool for uploading test cases and logging defects
•Traceability Matrix to ensure that each design requirement has at least one Test Case that
verifies it
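The traceability-matrix criterion above can be sketched as a simple mapping check; the requirement and test-case IDs are hypothetical.

```python
# Traceability matrix: each design requirement mapped to the
# test cases that verify it.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # gap: no test case yet, entry criterion not met
}

def untraced_requirements(matrix):
    """Return the requirements lacking at least one test case."""
    return [req for req, cases in matrix.items() if not cases]
```

Alpha testing should not begin until untraced_requirements() returns an empty list, i.e. every requirement has at least one verifying test case.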
Exit Criteria for Alpha testing
•All the test cases have been executed and passed.
•All severity issues need to be fixed and closed
•Delivery of Test summary report
•Make sure that no more additional features can be included
•Sign off on Alpha testing
Entrance Criteria for Beta Testing:
•Sign off a document on Alpha testing
•Beta version of the software should be ready
•Environment ready to release the software application to the public
•Tool to capture real time faults
Exit Criteria for Beta Testing:
•All major and minor issues are closed
•Feedback report should be prepared from public
•Delivery of Beta test summary report
Advantages of Alpha Testing:
•Provides a better view of the reliability of the software at an early stage
•Helps simulate real-time user behavior and environment
•Detects many showstopper or serious errors
•Provides early detection of errors with respect to design and functionality
Advantages of Beta Testing
•Reduces product failure risk via customer validation.
•Beta Testing allows a company to test post-launch infrastructure.
•Improves product quality via customer feedback
•Cost effective compared to similar data gathering methods
•Creates goodwill with customers and increases customer satisfaction
Disadvantages of Alpha Testing:
•In-depth functionality cannot be tested, as the software is still in the development stage
•Sometimes developers and testers are dissatisfied with the results of alpha testing
Disadvantages of Beta Testing
•Test Management is an issue. As compared to other testing types which are usually executed
inside a company in a controlled environment, beta testing is executed out in the real world
where you seldom have control.
•Finding the right beta users and maintaining their participation could be a challenge

Software Maintenance
Software Maintenance is the process of modifying a software product after it has been delivered to
the customer. The main purpose of software maintenance is to modify and update software
application after delivery to correct faults and to improve performance.

Need for Maintenance –

Software Maintenance must be performed in order to:
• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
• Migrate legacy software.
• Retire software.
Categories of Software Maintenance –
Maintenance can be divided into the following:
1.Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some bugs observed
while the system is in use, or to enhance the performance of the system.
2.Adaptive maintenance:
This includes modifications and updates when customers need the product to run on new
platforms or new operating systems, or when they need the product to interface with new
hardware and software.
3.Perfective maintenance:
A software product needs maintenance to support the new features that users want or to
change different types of functionalities of the system according to customer demands.
4.Preventive maintenance:
This type of maintenance includes modifications and updates to prevent future problems in
the software. It aims to address problems which are not significant at the moment but may
cause serious issues in the future.
Cost of Maintenance
Reports suggest that the cost of maintenance is high. A study on estimating software maintenance
found that the cost of maintenance can be as high as 67% of the cost of the entire software process
cycle. On average, the cost of software maintenance is more than 50% of all SDLC phases. There are
various factors which drive the maintenance cost up, such as:

Real-world factors affecting Maintenance Cost

•The standard age of any software is considered to be up to 10 to 15 years.
•Older software, which was meant to work on slow machines with less memory and storage
capacity, cannot remain competitive against newly arriving, enhanced software on
modern hardware.
•As technology advances, it becomes costly to maintain old software.
•Many maintenance engineers are novices and use trial-and-error methods to rectify problems.
•Often, the changes made can easily hurt the original structure of the software, making it hard
to make any subsequent changes.
•Changes are often left undocumented, which may cause more conflicts in the future.

Software-end factors affecting Maintenance Cost

•Structure of the software program
•Programming language
•Dependence on the external environment
•Staff reliability and availability
Reverse Engineering –
Reverse Engineering is the process of extracting knowledge or design information from anything
man-made and reproducing it based on the extracted information. It is also called back engineering.
Software Reverse Engineering –
Software Reverse Engineering is the process of recovering the design and the requirements
specification of a product from an analysis of its code. Reverse engineering is becoming important,
since several existing software products lack proper documentation, are highly unstructured, or their
structure has degraded through a series of maintenance efforts.
Why Reverse Engineering?
•Providing proper system documentation.
•Recovery of lost information.
•Assisting with maintenance.
•Facilitating software reuse.
•Discovering unexpected flaws or faults.
Uses of Software Reverse Engineering –
•Software Reverse Engineering is used in software design; reverse engineering enables the
developer or programmer to add new features to existing software with or without knowing
the source code.
•Reverse engineering is also useful in software testing; it helps testers study virus and
other malware code.
Software Re-Engineering is the examination and alteration of a system to reconstitute it in a new
form. The principles of re-engineering, when applied to the software development process, are called
software re-engineering. It positively affects software cost, quality, service to the customer and
speed of delivery. In software re-engineering, we are improving the software to make it more
efficient and effective.

Re-Engineering cost factors:

•The quality of the software to be re-engineered.
•The availability of tool support for re-engineering.
•The extent of the data conversion required.
•The availability of expert staff for re-engineering.
Software Re-Engineering Activities:

1. Inventory Analysis:
Every software organisation should have an inventory of all its applications.
•The inventory can be nothing more than a spreadsheet model containing information that
provides a detailed description of every active application.
•By sorting this information according to business criticality, longevity, current maintainability
and other locally important criteria, candidates for re-engineering appear.
•Resources can then be allocated to candidate applications for re-engineering work.
2. Document Restructuring:
Documentation of a system either explains how it operates or how to use it.
•Documentation must be updated.
•It may not be necessary to fully document an application.
•If the system is business critical, it must be fully re-documented.
3. Reverse Engineering:
Reverse engineering is a process of design recovery. Reverse engineering tools extract data,
architectural and procedural design information from an existing program.

4. Code Restructuring:
•To accomplish code restructuring, the source code is analysed using a restructuring tool.
Violations of structured programming constructs are noted, and the code is then restructured.
•The resultant restructured code is reviewed and tested to ensure that no anomalies have been
introduced.
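Code restructuring can be sketched as replacing deeply nested, unstructured logic with structured equivalents while preserving behaviour; the shipping-cost function below is a hypothetical example.

```python
# Before restructuring: deeply nested conditionals.
def shipping_cost_old(weight, express):
    if weight > 0:
        if express:
            cost = weight * 3
        else:
            if weight > 10:
                cost = weight * 2
            else:
                cost = weight * 1
    else:
        cost = 0
    return cost

# After restructuring: guard clauses, identical behaviour.
def shipping_cost_new(weight, express):
    if weight <= 0:
        return 0
    if express:
        return weight * 3
    return weight * 2 if weight > 10 else weight * 1

# Review step: the restructured code is tested against the old
# version to ensure no anomalies have been introduced.
for w, e in [(0, False), (5, False), (15, False), (5, True)]:
    assert shipping_cost_old(w, e) == shipping_cost_new(w, e)
```

The equivalence loop mirrors the review-and-test step described above: restructuring changes the form of the code, never its observable output.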
5. Data Restructuring:
•Data restructuring begins with the reverse engineering activity.
•The current data architecture is dissected, and the necessary data models are defined.
•Data objects and attributes are identified, and existing data structures are reviewed for quality.
6. Forward Engineering:
Forward engineering, also called renovation or reclamation, not only recovers design
information from existing software but also uses this information to alter or reconstitute the existing
system in an effort to improve its overall quality.
