
UNIT-5

SOFTWARE QUALITY AND TESTING


Software Quality Assurance - Quality metrics - Software Reliability - Software testing - Path
testing – Control Structures testing - Black Box testing - Integration, Validation and system
testing - Reverse Engineering and Reengineering.

CASE tools – project management tools – analysis and design tools – programming tools –
integration and testing tools – Case studies.
………………………………………………………………………………………………………………

1. Software Quality Assurance


Software Quality Assurance (SQA) is a set of activities for ensuring quality in software
engineering processes (that ultimately result in quality in software products). It includes the
following activities:
 Process definition and implementation
 Auditing
 Training
1.1 Elements of Software Quality Assurance
Software quality assurance encompasses a broad range of concerns and activities that focus on
the management of software quality.
Standards: The IEEE, ISO, and other standards organizations have produced a broad array of
software engineering standards and related documents. The job of SQA is to ensure that
standards that have been adopted are followed and that all work products conform to them.
Reviews and audits: Technical reviews are a quality control activity performed by software
engineers for software engineers. Their intent is to uncover errors. Audits are a type of review
performed by SQA personnel with the intent of ensuring that quality guidelines are being
followed for software engineering work.
Testing: Software testing is a quality control function that has one primary goal—to find errors.
The job of SQA is to ensure that testing is properly planned and efficiently conducted so that it
has the highest likelihood of achieving its primary goal.

Error/defect collection and analysis: The only way to improve is to measure how you’re doing.
SQA collects and analyzes error and defect data to better understand how errors are introduced
and what software engineering activities are best suited to eliminating them.

Change management: Change is one of the most disruptive aspects of any software project. If it
is not properly managed, change can lead to confusion, and confusion almost always leads to
poor quality. SQA ensures that adequate change management practices have been instituted.

Education: Every software organization wants to improve its software engineering practices. A
key contributor to improvement is education of software engineers, their managers, and other
stakeholders. The SQA organization takes the lead in software process improvement and is a
key proponent and sponsor of educational programs.



Vendor management: The job of the SQA organization is to ensure that high-quality software
results by suggesting specific quality practices that the vendor should follow (when possible),
and incorporating quality mandates as part of any contract with an external vendor.

Security management: With the increase in cyber crime and new government regulations
regarding privacy, every software organization should institute policies that protect data at all
levels, establish firewall protection for WebApps, and ensure that software has not been
tampered with internally. SQA ensures that appropriate process and technology are used to
achieve software security.
Risk management: Although the analysis and mitigation of risk is the concern of software
engineers, the SQA organization ensures that risk management activities are properly conducted
and that risk-related contingency plans have been established.

1.2 SQA Tasks, Goals, and Metrics:


Software quality assurance is composed of a variety of tasks associated with two different
constituencies—the software engineers who do technical work and an SQA group that has
responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.

Software engineers address quality (and perform quality control activities) by applying solid
technical methods and measures, conducting technical reviews, and performing well-planned
software testing.

SQA Tasks: The Software Engineering Institute recommends a set of SQA actions that address
quality assurance planning, oversight, record keeping, analysis, and reporting.

 Prepares an SQA plan for the project: Quality assurance actions performed by the
software engineering team and the SQA group are governed by the plan. The plan
identifies evaluations to be performed, audits and reviews to be conducted.
 Participates in the development of the project’s software process description: The
software team selects a process for the work to be performed. The SQA group reviews
the process description for compliance with organizational policy, internal software
standards, external standards, and other parts of the software project plan.
 Reviews software engineering activities to verify compliance with the defined software
process: The SQA group identifies, documents, and tracks deviations from the process
and verifies that corrections have been made.
 Audits designated software work products to verify compliance with those defined as
part of the software process: The SQA group reviews selected work products; identifies,
documents, and tracks deviations; verifies that corrections have been made; and
periodically reports the results of its work to the project manager.
 Ensures that deviations in software work and work products are documented and
handled according to a documented procedure: Deviations may be encountered in the
project plan, process description, applicable standards, or software engineering work
products.
 Records any noncompliance and reports to senior management: Noncompliance items
are tracked until they are resolved.
SQA Goals: The SQA actions described in the preceding section are performed to achieve a set
of pragmatic goals:
 Requirements quality: The correctness, completeness, and consistency of the requirements
model will have a strong influence on the quality of all work products that follow. SQA
must ensure that the software team has properly reviewed the requirements model to
achieve a high level of quality.
 Design quality: Every element of the design model should be assessed by the software
team to ensure that it exhibits high quality and that the design itself conforms to
requirements.
 Code quality: Source code and related work products must conform to local coding
standards and exhibit characteristics that will facilitate maintainability. SQA should isolate
those attributes that allow a reasonable analysis of the quality of code.
 Quality control effectiveness: A software team should apply limited resources in a way
that has the highest likelihood of achieving a high-quality result. SQA analyzes the
allocation of resources for reviews and testing to assess whether they are being allocated
in the most effective manner.
1.3 Statistical Software Quality Assurance (Six Sigma Method)

Six Sigma is the most widely used strategy for statistical quality assurance in industry today. It is
a business-driven approach to process improvement that aims to reduce cost and increase profit.
The term Six Sigma is derived from six standard deviations—3.4 instances (defects) per million
occurrences, implying an extremely high quality standard. Six Sigma originated at Motorola in
the 1980s.
The Six Sigma methodology defines three core steps:
• Define customer requirements and deliverables and project goals via well defined methods of
customer communication.
• Measure the existing process and its output to determine current quality performance (collect
defect metrics).
• Analyze defect metrics and determine the vital few causes.



If an existing software process is in place, but improvement is required, Six Sigma suggests two
additional steps:
• Improve the process by eliminating the root causes of defects.
• Control the process to ensure that future work does not reintroduce the causes of defects.

These core and additional steps are sometimes referred to as the DMAIC (define, measure,
analyze, improve, and control) method.

If an organization is developing a software process (rather than improving an existing process),
the core steps are augmented as follows:
• Design the process to (1) avoid the root causes of defects and (2) to meet customer
requirements.
• Verify that the process model will, in fact, avoid defects and meet customer requirements.

This variation is sometimes called the DMADV (define, measure, analyze, design, and verify)
method.

1.4 Software Reliability


The reliability of a computer program is an important element of its overall quality. Software
reliability can be measured directly and estimated using historical and developmental data.
Software reliability is defined in statistical terms as “the probability of failure-free operation of a
computer program in a specified environment for a specified time”.

Whenever software reliability is discussed, a question arises: What is failure? Failure is
nonconformance to software requirements. One failure can be corrected within seconds, while
another requires weeks or even months to correct. The correction of one failure may in fact
result in the introduction of other errors that ultimately result in other failures.

Measures of Reliability and Availability: All software failures can be traced to design or
implementation problems; if we consider a computer-based system, a simple measure of
reliability is mean-time-between-failure (MTBF):
MTBF = MTTF + MTTR

Where the acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair
respectively. In addition to a reliability measure, we should also develop a measure of availability.
Software availability is the probability that a program is operating according to requirements at
a given point in time and is defined as:

Availability = [MTTF / (MTTF + MTTR)] × 100%

The MTBF reliability measure is equally sensitive to MTTF and MTTR.
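As a brief worked illustration (the figures below are hypothetical and chosen only to show how
the measures relate): if MTTF = 990 hours and MTTR = 10 hours, then

MTBF = MTTF + MTTR = 990 + 10 = 1000 hours
Availability = [990 / (990 + 10)] × 100% = 99%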



Software Safety: Software safety is a software quality assurance activity that focuses on the
identification and assessment of hazards (risks) that may affect software negatively and cause an
entire system to fail. Once hazards are identified and analyzed, safety-related requirements can
be specified for the software. That is, the specification can contain a list of undesirable events
and the desired system responses to these events.

1.5 The ISO 9000 Quality Standards

A quality assurance system may be defined as the organizational structure, responsibilities,
procedures, processes, and resources for implementing quality management. Quality assurance
systems are created to help organizations ensure their products and services satisfy customer
expectations by meeting their specifications. These systems cover a wide variety of activities
encompassing a product’s entire life cycle including planning, controlling, measuring, testing and
reporting, and improving quality levels throughout the development and manufacturing
process.

ISO 9000 describes quality assurance elements in generic terms that can be applied to any
business regardless of the products or services offered.

To become registered to one of the quality assurance system models contained in ISO 9000, a
company’s quality system and operations are scrutinized by third-party auditors for compliance
to the standard and for effective operation.

The requirements delineated by ISO 9001:2000 address topics such as management
responsibility, quality system, contract review, design control, document and data control,
product identification and traceability, process control, inspection and testing, corrective and
preventive action, control of quality records, internal quality audits, training, servicing, and
statistical techniques.

In order for a software organization to become registered to ISO 9001:2000, it must establish
policies and procedures to address each of the requirements just noted (and others) and then be
able to demonstrate that these policies and procedures are being followed.
1.6 SQA Plan
The SQA plan is a software document created to summarize all the SQA activities conducted for
the software project. The SQA plan specifies the goals and the tasks to be performed in order to
conduct all the SQA activities. Such a plan should be developed by the SQA group. The standard
for this plan is published by the IEEE.



The template for this plan is as given below.

SQA Plan
1. Purpose and scope of the plan
2. Description of work product
3. Applicable standards
4. SQA activities
5. Tools and methods used
6. SCM procedures for managing change
7. Methods for maintaining SQA related records
8. Organizational roles and responsibilities

The SQA plan is a document aimed to give confidence to developers and customers that the
specified requirements will be met and final product will be a quality product.



2. Software Quality metrics:

Several models of software quality factors and their categorization have been suggested over the
years. The classic model of software quality factors, suggested by McCall, consists of 11 factors.
The 11 factors are grouped into three categories – product operation, product revision, and
product transition factors.

1. Product Operation Software Quality Factors

According to McCall’s model, product operation category includes five software quality factors,
which deal with the requirements that directly affect the daily operation of the software. They
are as follows –
Correctness: Correctness is the extent to which a program satisfies its specifications.
Reliability: Reliability is the extent to which a program can be expected to perform its intended
function with the required precision.
Efficiency: Efficiency is a factor relating to all issues in the execution of software; it includes
considerations such as response time, memory requirement, and throughput.
Integrity: This factor deals with software system security, that is, preventing access by
unauthorized persons and distinguishing between the groups of users who are to be given read
permission and those who are to be given write permission.
Usability: Usability is the effort required to learn, operate, prepare input for, and interpret the
output of a program.



2. Product Revision Quality Factors
According to McCall’s model, three software quality factors are included in the product revision
category. These factors are as follows
Maintainability:
Maintainability is the effort required to locate and fix an error in an operational program.
Testability:
Testability is the effort required to test a program to ensure that it performs its intended
function.
Flexibility:
Flexibility is the effort required to modify an operational program.

3. Product Transition Software Quality Factor


According to McCall’s model, three software quality factors are included in the product
transition category that deals with the adaptation of software to other environments and its
interaction with other software systems. These factors are as follows −
Portability:
Portability is the effort required to transfer the software from one configuration to another.
Reusability:
Reusability is the extent to which parts of the software can be reused in other related
applications.
Interoperability: Interoperability is the effort required to couple one system to another.



Software Testing
Testing Fundamentals
Software testing is a process of executing a program or application with the
intent of finding software bugs. It can also be stated as the process of validating and verifying
that a software program, application, or product meets the business and technical requirements
that guided its design and development.
Testing Objectives: Glen Myers states a number of testing objectives:
 Software testing is a process of executing a program with the intent of finding an error.
 A good test case is one that has a high probability of finding an as-yet-undiscovered error.
 A successful test is one that uncovers an as-yet-undiscovered error in the software.
Testing Principles: The following are a set of testing principles that guide software testing:
1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle (80 percent of all errors uncovered during testing will likely be
traceable to 20 percent of the program components) applies to software testing.
4. Testing should begin in the small and progress toward testing in the large.
5. Exhaustive testing is not possible.
6. To be most effective, testing should be conducted by an independent third party.
Testability: Software testability is simply how easily a computer program can be tested. The
following are the characteristics that lead to testable software:
 Operability – “The better it works the more efficiently it can be tested”.
 Observability – “What you see is what you test”.
 Controllability – “The better we can control the software, the more the testing can be
automated and optimized”.
 Decomposability – “By controlling the scope of testing, we can more quickly isolate
problems and perform smarter retesting”.
 Simplicity – “The less there is to test, the more quickly we can test it”.
 Stability – “The fewer the changes, the fewer the disruptions to testing”.
 Understandability – “The more information we have, the smarter we will test”.

The following are the attributes of a good test:


 A good test has a high probability of finding an error.
 A good test is not redundant.
 A good test should be “best of breed”.
 A good test should be neither too simple nor too complex.



1. White Box Testing
White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing,
Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method
in which the internal structure/ design/ implementation of the item being tested is known to the
tester. The tester chooses inputs to exercise paths through the code and determines the
appropriate outputs. Programming know-how and implementation knowledge are essential.
White box testing is testing beyond the user interface and into the nitty-gritty of a system.

This method is named so because the software program, in the eyes of the tester, is like a white/
transparent box, inside which one can clearly see.

When to perform white box testing?

There are three main reasons for performing white box testing.

1. Programmers may make incorrect assumptions while designing or implementing some
functions. Due to this, there are chances of logical errors in the program. To detect and
correct such logical errors, procedural details need to be examined.

2. Certain assumptions about the flow of control and data may lead the programmer to make
design errors. To uncover errors on logical paths, white box testing is a must.

3. There may be certain typographical errors that remain undetected even after syntax and type
checking mechanisms. Such errors can be uncovered during white box testing.

The effectiveness of white box testing is commonly expressed in terms of test or code coverage
metrics, which measure the fraction of code exercised by test cases. The various types of testing
that occur as part of white box testing are:

 Basis path testing
 Control structure testing
 Mutation testing

White Box Testing Techniques:


 Statement Coverage - This technique is aimed at exercising all programming statements
with minimal tests.
 Branch Coverage - This technique runs a series of tests to ensure that all branches are tested
at least once.
 Path Coverage - This technique corresponds to testing all possible paths, which means that
each statement and branch is covered. (A short sketch contrasting statement and branch
coverage follows this list.)
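
The following is a minimal sketch, assuming a hypothetical function grade(), that contrasts
statement coverage with branch coverage:

# Hypothetical function used only to illustrate coverage criteria.
def grade(score):
    label = "fail"
    if score >= 50:
        label = "pass"
    return label

# Statement coverage: the single call grade(60) executes every statement.
# Branch coverage: both grade(60) and grade(40) are needed, so that the
# if-condition evaluates to True at least once and to False at least once.
print(grade(60), grade(40))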



Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code by removing errors and helps in removing extra lines
of code.
3. It can start at an earlier stage, as it does not require a user interface the way black box
testing does.
4. Easy to automate.
Disadvantages:
1. The main disadvantage is that it is very expensive.
2. Redesigning and rewriting code requires test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language,
as opposed to black box testing.
4. Missing functionality cannot be detected, as only the code that exists is tested.
5. It is very complex and at times not realistic.



(I) Path Testing

Path testing is a structural testing method that involves using the source code of a program in
order to find every possible executable path. It helps to determine all faults lying within a piece
of code. This method is designed to execute all or selected paths through a computer program.

Any software program includes multiple entry and exit points. Testing each of these points is
challenging as well as time consuming. In order to reduce the redundant tests and to achieve
maximum test coverage, basis path testing is used.
What is Basis Path Testing?
Basis path testing is a white box testing method that defines test cases based on the flows, or
logical paths, that can be taken through the program. Basis path testing involves execution of all
possible blocks in a program and achieves maximum path coverage with the least number of test
cases. It is a hybrid of branch testing and path testing methods.
The objective of basis path testing is to define the number of independent paths, so that the
number of test cases needed can be defined explicitly (maximizing the coverage of each test
case).
Consider a simple program fragment containing nested conditional statements that are executed
depending on which conditions hold (a hedged code sketch is given after the list below).
Numbering the nodes of its flow graph 1 to 7, there are three paths that need to be tested to
cover the logic:
 Path 1: 1, 2, 3, 5, 6, 7
 Path 2: 1, 2, 4, 5, 6, 7
 Path 3: 1, 6, 7
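
The fragment below is a minimal sketch (the function name classify() and its variables are
hypothetical) whose flow graph produces exactly the three paths listed above; a cyclomatic
complexity of 3 means three independent paths, and one test case per path exercises all of them:

# Node numbers in the comments refer to the flow-graph nodes above.
def classify(a, b):
    result = "none"                     # node 1: entry and first decision
    if a > 0:                           # true -> node 2, false -> node 6
        if b > 0:                       # node 2: second decision
            result = "both positive"    # node 3
        else:
            result = "a positive only"  # node 4
        # node 5: the two inner branches merge here
    print(result)                       # node 6
    return result                       # node 7

# One test case per independent path:
classify(5, 3)    # Path 1: 1, 2, 3, 5, 6, 7
classify(5, -3)   # Path 2: 1, 2, 4, 5, 6, 7
classify(-5, 0)   # Path 3: 1, 6, 7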



Steps for Basis Path testing
The basic steps involved in basis path testing include
 Draw a control graph (to determine different program paths)
 Calculate cyclomatic complexity (a metric that determines the number of independent
paths)
 Find a basis set of paths
 Generate test cases to exercise each path
Benefits of basis path testing
 It helps to reduce redundant tests
 It focuses attention on program logic
 It facilitates analytical rather than arbitrary test case design
 Test cases that exercise the basis set will execute every statement in the program at least once
Conclusion:
Basis path testing helps to determine all faults lying within a piece of code.
………………………….

What is Cyclomatic Complexity?

Cyclomatic complexity is a software metric used to measure the complexity of a program. This
metric measures the number of independent paths through program source code.
An independent path is defined as a path that has at least one edge that has not been traversed
before in any other path.
Cyclomatic complexity can be calculated with respect to functions, modules, methods or classes
within a program.
This metric was developed by Thomas J. McCabe in 1976 and it is based on a control flow
representation of the program. Control flow depicts a program as a graph which consists of
Nodes and Edges.
In the graph, Nodes represent processing tasks while edges represent control flow between the
nodes.
Cyclomatic complexity has a foundation in graph theory and provides an extremely useful
software metric. Complexity is computed in one of three ways:
 The number of regions of the flow graph corresponds to the cyclomatic complexity.
 Cyclomatic complexity V(G) for a flow graph G is defined as:
V(G) = E – N + 2
Where E is the number of edges and N is the number of nodes in the flow graph G.
 Cyclomatic complexity V(G) for a flow graph G is defined as:
V(G) = P + 1
Where P is the number of predicate nodes contained in the flow graph G.



Example:
IF A = 10 THEN
   IF B > C THEN
      A = B
   ELSE
      A = C
   ENDIF
ENDIF
Print A
Print B
Print C

The cyclomatic complexity is calculated from the control flow graph of this code, which has
seven nodes and eight edges; hence the cyclomatic complexity is V(G) = E – N + 2 = 8 – 7 + 2 = 3.
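
As a small illustrative sketch (the adjacency list below uses an assumed node numbering for the
flow graph of the code above), V(G) can be computed directly from the edge and node counts:

# Assumed flow graph for the example code, as an adjacency list.
flow_graph = {
    1: [2, 6],   # IF A = 10: true branch and false (skip) branch
    2: [3, 4],   # IF B > C: true branch and false branch
    3: [5],      # A = B
    4: [5],      # A = C
    5: [6],      # inner ENDIF joins
    6: [7],      # outer ENDIF
    7: [],       # Print A; Print B; Print C (exit)
}

nodes = len(flow_graph)
edges = sum(len(successors) for successors in flow_graph.values())
print("V(G) =", edges - nodes + 2)   # prints: V(G) = 3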



2. Black Box Testing
Black Box Testing, also known as Behavioral Testing, is a software testing method in which the
internal structure/ design/ implementation of the item being tested is not known to the tester.
These tests can be functional or non-functional, though usually functional.
This method is named so because the software program, in the eyes of the tester, is like a black
box; inside which one cannot see. This method attempts to find errors in the following
categories:
1. Incorrect or missing functions
2. Interface errors
3. Errors in data structures or external database access
4. Behavior or performance errors
5. Initialization and termination errors
EXAMPLE
A tester, without knowledge of the internal structures of a website, tests the web pages by using
a browser; providing inputs (clicks, keystrokes) and verifying the outputs against the expected
outcome.

LEVELS APPLICABLE TO
Black Box testing method is applicable to the following levels of software testing:
 Integration Testing
 System Testing
 Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the black box
testing method comes into use.
BLACK BOX TESTING TECHNIQUES
Following are some techniques that can be used for designing black box tests (a short sketch
illustrating the first two techniques follows this list).
 Equivalence partitioning: It is a software test design technique that involves dividing
input values into valid and invalid partitions and selecting representative values from
each partition as test data.
 Boundary Value Analysis: It is a software test design technique that involves
determination of boundaries for input values and selecting values that are at the
boundaries and just inside/ outside of the boundaries as test data.
 Cause Effect Graphing: It is a software test design technique that involves identifying the
cases (input conditions) and effects (output conditions), producing a Cause-Effect Graph,
and generating test cases accordingly.
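
The following is a minimal sketch, assuming a hypothetical function accept_age() that treats
ages 18 to 60 as valid, of how equivalence partitioning and boundary value analysis select test
data:

# Hypothetical function under test: the valid input range is 18..60.
def accept_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
representatives = {
    "below range (invalid)": 10,
    "in range (valid)": 35,
    "above range (invalid)": 70,
}

# Boundary value analysis: values at and just inside/outside each boundary.
boundary_values = [17, 18, 19, 59, 60, 61]

for label, value in representatives.items():
    print(label, value, accept_age(value))
for value in boundary_values:
    print("boundary", value, accept_age(value))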



1. Integration Testing
Integration Testing is a level of software testing where individual units are combined and tested
as a group.
The purpose of this level of testing is to expose faults in the interaction between integrated
units. Test drivers and test stubs are used to assist in Integration Testing.
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink
cartridge and the ballpoint are produced separately and unit tested separately. When two or
more units are ready, they are assembled and Integration Testing is performed. For example,
whether the cap fits into the body or not.

METHOD
Any of Black Box Testing, White Box Testing, and Gray Box Testing methods can be used.
Normally, the method depends on your definition of ‘unit’.
TASKS
 Integration Test Plan
o Prepare
o Review
o Rework
o Baseline
 Integration Test Cases/Scripts
o Prepare
o Review
o Rework
o Baseline
 Integration Test
o Perform

When is Integration Testing performed?



Integration Testing is performed after Unit Testing and before System Testing.
Who performs Integration Testing?
Either Developers themselves or independent Testers perform Integration Testing.
APPROACHES
 Big Bang is an approach to Integration Testing where all or most of the units are
combined together and tested at one go. This approach is taken when the testing team
receives the entire software in a bundle. So what is the difference between Big Bang
Integration Testing and System Testing? Well, the former tests only the interactions
between the units while the latter tests the entire system.
 Top Down is an approach to Integration Testing where top level units are tested first and
lower level units are tested step by step after that. This approach is taken when top
down development approach is followed. Test Stubs are needed to simulate lower level
units which may not be available during the initial phases.
 Bottom Up is an approach to Integration Testing where bottom level units are tested first
and upper level units step by step after that. This approach is taken when bottom up
development approach is followed. Test Drivers are needed to simulate higher level units
which may not be available during the initial phases.
 Sandwich/Hybrid is an approach to Integration Testing which is a combination of the Top
Down and Bottom Up approaches. (A minimal stub and driver sketch follows.)
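
As a minimal sketch (the function names are hypothetical), the fragment below shows a test stub
standing in for a lower-level unit during top-down integration, and a test driver exercising a unit
directly during bottom-up integration:

# Stub: canned replacement for a lower-level unit that is not yet available.
def get_tax_rate_stub(region):
    return 0.10                       # fixed answer instead of the real lookup

# Higher-level unit integrated top-down against the stub.
def compute_price(net, region, tax_lookup=get_tax_rate_stub):
    return net * (1 + tax_lookup(region))

# Driver: a simple caller used in bottom-up integration when the real
# higher-level callers of compute_price do not yet exist.
def driver():
    assert abs(compute_price(100.0, "EU") - 110.0) < 1e-9
    print("compute_price works with the stubbed lower-level unit")

driver()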



2. System Testing

System Testing is a level of software testing where the complete and integrated software is
tested.
The purpose of this test is to evaluate the system’s compliance with the specified requirements.

ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail, the ink
cartridge and the ballpoint are produced separately and unit tested separately. When two or
more units are ready, they are assembled and Integration Testing is performed. When the
complete pen is integrated, System Testing is performed.
METHOD
Usually, Black Box Testing method is used.
TASKS
 System Test Plan
o Prepare
o Review
o Rework
o Baseline
 System Test Cases
o Prepare
o Review
o Rework
o Baseline
 System Test
o Perform
When is it performed?
System Testing is performed after Integration Testing and before Acceptance Testing.
Who performs it?
Normally, independent Testers perform System Testing.



3. Acceptance Testing

Acceptance Testing is a level of the software testing where a system is tested for acceptability.
The purpose of this test is to evaluate the system’s compliance with the business requirements
and assess whether it is acceptable for delivery.

ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink
cartridge and the ballpoint are produced separately and unit tested separately. When two or
more units are ready, they are assembled and Integration Testing is performed. When the
complete pen is integrated, System Testing is performed. Once System Testing is complete,
Acceptance Testing is performed so as to confirm that the ballpoint pen is ready to be made
available to the end-users.
METHOD
Usually, Black Box Testing method is used in Acceptance Testing. Testing does not
normally follow a strict procedure and is not scripted but is rather ad-hoc.

TASKS
 Acceptance Test Plan
o Prepare
o Review
o Rework
o Baseline
 Acceptance Test Cases/Checklist
o Prepare
o Review
o Rework
o Baseline



 Acceptance Test
o Perform

When is it performed?
Acceptance Testing is performed after System Testing and before making the system available
for actual use.
Who performs it?
 Internal Acceptance Testing (Also known as Alpha Testing) is performed by members of
the organization that developed the software but who are not directly involved in the
project (Development or Testing). Usually, it is the members of Product Management,
Sales and/or Customer Support.
 External Acceptance Testing is performed by people who are not employees of the
organization that developed the software.
o Customer Acceptance Testing is performed by the customers of the organization
that developed the software. They are the ones who asked the organization to
develop the software. [This is in the case of the software not being owned by the
organization that developed it.]
o User Acceptance Testing (Also known as Beta Testing) is performed by the end
users of the software. They can be the customers themselves or the customers’
customers.



Validation Testing

The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
It answers the question: Are we building the right product?
Validation Testing - Workflow:

Validation testing can be best demonstrated using V-Model. The Software / product under test is
evaluated during this type of testing.

Activities:
 Unit Testing
 Integration Testing
 System Testing
 User Acceptance Testing

Unit Testing is a level of software testing where individual units/ components of software are
tested. The purpose is to validate that each unit of the software performs as designed.
A unit is the smallest testable part of software. It usually has one or a few inputs and usually a
single output. In procedural programming a unit may be an individual program, function,
procedure, etc. In object-oriented programming, the smallest unit is a method, which may
belong to a base/ super class, abstract class or derived/ child class. (Some treat a module of an
application as a unit. This is to be discouraged as there will probably be many individual units
within that module.)
Unit testing frameworks, drivers, stubs, and mock/ fake objects are used to assist in unit testing.
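
A minimal unit test sketch, using Python's built-in unittest framework and a hypothetical add()
function, looks like this:

import unittest

def add(a, b):                        # hypothetical unit under test
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_handles_negative_numbers(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()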



METHOD
Unit Testing is performed by using the White Box Testing method.
When is it performed?
Unit Testing is the first level of testing and is performed prior to Integration Testing.
Who performs it?
Unit Testing is normally performed by software developers themselves or their peers. In rare
cases it may also be performed by independent software testers.
TASKS
 Unit Test Plan
o Prepare
o Review
o Rework
o Baseline
 Unit Test Cases/Scripts
o Prepare
o Review
o Rework
o Baseline
 Unit Test
o Perform
Let’s say you have a program comprising two units and the only test you perform is system
testing. [You skip unit and integration testing.] During testing, you find a bug. Now, how will
you determine the cause of the problem?
 Is the bug due to an error in unit 1?
 Is the bug due to an error in unit 2?
 Is the bug due to errors in both units?
 Is the bug due to an error in the interface between the units?
 Is the bug due to an error in the test or test case?
Unit testing is often neglected but it is, in fact, the most important level of testing.



Software reverse engineering

Software reverse engineering is the process of recovering the design and specification of a
product from an analysis of its code. The purpose of reverse engineering is to facilitate
maintenance work by improving the understandability of a system and to produce the
necessary documents for a legacy system.
Reverse engineering is becoming important, since legacy software products lack proper
documentation, and are highly unstructured. Even well-designed products become legacy
software as their structure degrades through a series of maintenance efforts.

The abstraction level of a reverse engineering process refers to the sophistication of the design
information that can be extracted from source code. The abstraction level should be as high as
possible. As the abstraction level increases, you are provided with information (program and
data structure information, object models, data or control flow models, and entity relationship
models) that will allow easier understanding of the program.
The completeness of a reverse engineering process refers to the level of detail that is provided at
an abstraction level.

If the directionality of the reverse engineering process is one-way, all information extracted
from the source code is provided to the software engineer who can then use it during any
maintenance activity. If directionality is two-way, the information is fed to a reengineering tool
that attempts to restructure or regenerate the old program.
Reverse engineering to understand data: Reverse engineering of data occurs at different levels of
abstraction and is the first reengineering task. At the program level, internal program data
structures must often be reverse engineered. At the system level, global data structures are often
reengineered.
 Internal Data Structures: Reverse engineering techniques for internal program data focus
on the definition of classes of objects. The data organization within the code identifies
abstract data types. For example, record structures, files, lists, and other data structures
often provide an initial indicator of classes.
 Database Structure: Regardless of its logical organization and physical structure, a database
allows the definition of data objects and supports some method for establishing
relationships among the objects.
Reverse engineering to understand processing: Reverse engineering to understand processing
begins with an attempt to understand and then extract procedural abstractions represented by
the source code. To understand procedural abstractions, the code is analyzed at varying levels of
abstraction: system, program, component, pattern, and statement.
Reverse engineering user interfaces: GUIs have become required for computer-based products
and systems of every type. Therefore, the redevelopment of user interfaces has


become one of the most common types of reengineering activity. But before a user interface can
be rebuilt, reverse engineering should occur.

Software Reengineering:
Software reengineering is a combination of two consecutive processes i.e. software reverse
engineering and software forward engineering as shown in the fig.

An application may have served the business needs of a company for 10 or 15 years; during that
time it has been corrected, adapted, and enhanced many times. Software maintenance is difficult
and, by most estimates, absorbs well over half of all effort expended on software. Software
maintenance is described by four activities:
 Corrective Maintenance
 Adaptive Maintenance
 Perfective (or) Enhancement Maintenance
 Preventive Maintenance (or) Reengineering



A software reengineering process model:
Reengineering of information systems is an activity that will absorb information
technology resources for many years.

Software reengineering activities: The reengineering paradigm shown in above figure is a cyclical
model. This means that each of the activities presented as a part of the paradigm may be
revisited.
 Inventory analysis: Every software organization should have an inventory of all
applications. The inventory can be nothing more than a spreadsheet model containing
information that provides a detailed description (e.g., size, age, business criticality) of
every active application. The inventory should be revisited on a regular cycle.
 Document Restructuring: Weak documentation is the trademark of many legacy systems.
Depending on the situation, there are several options:
 Creating documentation is far too time consuming.
 Documentation must be updated, but your organization has limited
resources.
 The system is business critical and must be fully redocumented.
 Reverse Engineering: The term reverse engineering has its origins in the hardware world.
A company disassembles a competitive hardware product in an effort to understand its
competitor’s design and manufacturing “secrets.”
Reverse engineering for software is quite similar. Reverse engineering is a process of design
recovery. Reverse engineering tools extract data, architectural, and procedural design
information from an existing program.
 Code Restructuring: The most common type of reengineering is code restructuring. The
source code is analyzed using a restructuring tool, and the internal code documentation is
updated. (A short before-and-after sketch follows this list.)
 Data Restructuring: Data restructuring is a full-scale reengineering activity. It begins with
reverse engineering: the current data architecture is dissected and the necessary data
models are defined.
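
As a hedged before-and-after sketch (the function names and logic are hypothetical), code
restructuring preserves behaviour while yielding a clearer control structure:

# Before restructuring: flag-controlled, harder-to-follow loop.
def find_first_negative_old(values):
    found = False
    index = -1
    i = 0
    while i < len(values) and not found:
        if values[i] < 0:
            found = True
            index = i
        i = i + 1
    return index

# After restructuring: the same behaviour expressed directly.
def find_first_negative(values):
    for i, value in enumerate(values):
        if value < 0:
            return i
    return -1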
Forward Engineering: Ideally, applications would be rebuilt using an automated “reengineering
engine.” The old program would be fed into the engine, analyzed, restructured, and then
regenerated in a form that exhibits the best aspects of software quality.

