Error/defect collection and analysis: The only way to improve is to measure how you’re doing.
SQA collects and analyzes error and defect data to better understand how errors are introduced
and what software engineering activities are best suited to eliminating them.
Change management. Change is one of the most disruptive aspects of any software project. If it
is not properly managed, change can lead to confusion, and confusion almost always leads to
poor quality. SQA ensures that adequate change management practices have been instituted.
Education. Every software organization wants to improve its software engineering practices. A
key contributor to improvement is education of software engineers, their managers, and other
stakeholders. The SQA organization takes the lead in software process improvement and is a
key proponent and sponsor of educational programs.
Security management. With the increase in cyber crime and new government regulations
regarding privacy, every software organization should institute policies that protect data at all
levels, establish firewall protection for WebApps, and ensure that software has not been
tampered with internally.
SQA ensures that appropriate process and technology are used to achieve software security.
Risk management. Although the analysis and mitigation of risk is the concern of software
engineers, the SQA organization ensures that risk management activities are properly conducted
and that risk-related contingency plans have been established.
Software engineers address quality (and perform quality control activities) by applying solid
technical methods and measures, conducting technical reviews, and performing well-planned
software testing.
SQA Tasks: The Software Engineering Institute recommends a set of SQA actions that address
quality assurance planning, oversight, record keeping, analysis, and reporting.
Prepares an SQA plan for the projects: Quality assurance actions performed by the
software engineering team and the SQA group are governed by the plan. The plan
identifies evaluations to be performed, audits and reviews to be conducted.
Participates in the development of the project’s software process description: The
software team selects a process for the work to be performed. The SQA group reviews
the process description for compliance with organizational policy, internal software
standards, external standards, and other parts of the software project plan.
Reviews software engineering activities to verify compliance with the defined software
process: The SQA group identifies, documents, and tracks deviations from the process
and verifies that corrections have been made.
Audits designated software work products to verify compliance with those defined as
part of the software process: The SQA group reviews selected work products; identifies,
documents, and tracks deviations; verifies that corrections have been made; and
periodically reports the results of its work to the project manager.
Ensures that deviations in software work and work products are documented and
handled according to a documented procedure: Deviations may be encountered in the
project plan, process description, applicable standards, or software engineering work
products.
Records any noncompliance and reports to senior management: Noncompliance items
are tracked until they are resolved.
© www.anuupdates.org – Prepared By Dodda. Venkata Reddy B.Tech, M.Tech
SQA Goals: The SQA actions described in the preceding section are performed to achieve a set
of pragmatic goals:
Requirements quality: The correctness, completeness, and consistency of the requirements
model will have a strong influence on the quality of all work products that follow. SQA
must ensure that the software team has properly reviewed the requirements model to
achieve a high level of quality.
Design quality: Every element of the design model should be assessed by the software
team to ensure that it exhibits high quality and that the design itself conforms to
requirements.
Code quality: Source code and related work products must conform to local coding
standards and exhibit characteristics that will facilitate maintainability. SQA should isolate
those attributes that allow a reasonable analysis of the quality of code.
Quality control effectiveness: A software team should apply limited resources in a way
that has the highest likelihood of achieving a high-quality result. SQA analyzes the
allocation of resources for reviews and testing to assess whether they are being allocated
in the most effective manner.
1.3 Statistical Software Quality Assurance (Six Sigma Method)
Six Sigma is the most widely used strategy for statistical quality assurance. It is a
business-driven approach to process improvement that reduces cost and increases profit. The term
Six Sigma is derived from six standard deviations, implying 3.4 instances (defects) per million
occurrences. Six Sigma originated at Motorola in the 1980s.
The Six Sigma methodology defines three core steps:
• Define customer requirements and deliverables and project goals via well defined methods of
customer communication.
• Measure the existing process and its output to determine current quality performance (collect
defect metrics).
• Analyze defect metrics and determine the vital few causes
If an existing process needs improvement, two additional steps follow: improve the process by
eliminating the root causes of defects, and control the process to ensure that future work does not
reintroduce them. These five steps are referred to as the DMAIC (define, measure, analyze,
improve, and control) method.
If an organization is developing a new process rather than improving an existing one, the last two
steps become design the process to avoid the root causes of defects and verify that the process
model will in fact avoid defects. This variation is called the DMADV (define, measure, analyze,
design, and verify) method.
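The "measure" step above can be sketched numerically. The following is a minimal illustration, using hypothetical defect counts, of computing defects per million opportunities (DPMO), the metric behind the 3.4-defects-per-million Six Sigma target; the figures are assumptions for the example.

```python
# Hypothetical illustration of the "measure" step: computing defects
# per million opportunities (DPMO).

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Example: 12 defects found in 400 inspected modules,
# each module having 50 defect opportunities.
print(dpmo(12, 400, 50))  # 600.0
```

A process operating at the Six Sigma level would bring this figure down to 3.4.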
Measures of Reliability and Availability: All software failures can be traced to design or
implementation problems; if we consider a computer-based system, a simple measure of
reliability is mean-time-between-failure (MTBF):
MTBF = MTTF + MTTR
Where the acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair
respectively. In addition to a reliability measure, we should also develop a measure of availability.
Software availability is the probability that a program is operating according to requirements at
a given point in time and is defined as:
Availability = [MTTF / (MTTF + MTTR)] × 100%
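These measures can be sketched in a few lines of code. The sample figures below are illustrative, and the availability computation assumes the standard definition MTTF / (MTTF + MTTR), expressed as a percentage.

```python
# A minimal sketch of the reliability and availability measures;
# the hour figures are illustrative, not taken from real data.

def mtbf(mttf, mttr):
    """Mean time between failures = mean time to failure + mean time to repair."""
    return mttf + mttr

def availability(mttf, mttr):
    """Probability (as a percentage) that the program is operational
    at a given point in time."""
    return mttf / (mttf + mttr) * 100

# Example: the system runs 68 hours between failures on average
# and takes 2 hours to repair.
print(mtbf(68, 2))          # 70
print(availability(68, 2))  # ~97.14
```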
ISO 9000 describes quality assurance elements in generic terms that can be applied to any
business regardless of the products or services offered.
To become registered to one of the quality assurance system models contained in ISO 9000, a
company’s quality system and operations are scrutinized by third-party auditors for compliance
to the standard and for effective operation.
In order for a software organization to become registered to ISO 9001:2000, it must establish
policies and procedures to address each of the requirements just noted (and others) and then be
able to demonstrate that these policies and procedures are being followed.
1.6 SQA Plan
The SQA plan is a document that summarizes all the SQA
activities conducted for a software project. The SQA plan specifies the goals and tasks to be
performed in order to conduct all the SQA activities. Such a plan should be developed by the
SQA group; the standard for this plan is published by the IEEE.
SQA Plan
1. Purpose and scope of the plan
2. Description of work product
3. Applicable standards
4. SQA activities
5. Tools and methods used
6. SCM procedures for managing change
7. Methods for maintaining SQA related records
8. Organizational roles and responsibilities
The SQA plan is a document aimed to give confidence to developers and customers that the
specified requirements will be met and final product will be a quality product.
Several models of software quality factors and their categorization have been suggested over the
years. The classic model of software quality factors, suggested by McCall, consists of 11 factors.
The 11 factors are grouped into three categories – product operation, product revision, and
product transition factors.
According to McCall’s model, product operation category includes five software quality factors,
which deal with the requirements that directly affect the daily operation of the software. They
are as follows –
Correctness: Correctness is the extent to which a program satisfies its specifications.
Reliability: Reliability is the property that defines how well the software meets its requirements.
Efficiency: Efficiency is a factor relating to all issues in the execution of software; it includes
considerations such as response time, memory requirement, and throughput.
Integrity: This factor deals with software system security, that is, preventing access by
unauthorized persons and distinguishing between the groups of people to be given read as well
as write permission.
Usability: Usability is the effort required to learn, operate, prepare input for, and interpret the
output of a program.
White box testing is named so because the software program, in the eyes of the tester, is like a
white or transparent box inside which one clearly sees.
There are three main reasons for performing white box testing.
1. Programmers may have made incorrect assumptions while designing or implementing some
functions. Because of this, the program may contain logical errors. To detect and correct such
logical errors, procedural details need to be examined.
2. Certain assumptions about the flow of control and data may lead the programmer to make
design errors. To uncover errors on logical paths, white box testing is a must.
3. There may be certain typographical errors that remain undetected even after syntax and type
checking. Such errors can be uncovered during white box testing.
The effectiveness of white box testing is commonly expressed in terms of test or code coverage
metrics, which measure the fraction of code exercised by test cases. The various types of testing
that occur as part of white box testing are:
Path testing is a structural testing method that involves using the source code of a program in
order to find every possible executable path. It helps to determine all faults lying within a piece
of code. This method is designed to execute all or selected path through a computer program.
Any software program includes multiple entry and exit points. Testing each of these points is
challenging as well as time consuming. In order to reduce redundant tests and achieve
maximum test coverage, basis path testing is used.
What is Basis Path Testing?
Basis path testing is a white box testing method that defines test cases based on the flows or
logical paths that can be taken through the program. Basis path testing involves execution of all
possible blocks in a program and achieves maximum path coverage with the least number of test
cases. It is a hybrid of the branch testing and path testing methods.
The objective of basis path testing is to define the number of independent paths, so that the
number of test cases needed can be determined explicitly (maximizing the coverage of each test
case).
Here we will take a simple example to get a better idea of what basis path testing includes.
The example program contains a few conditional statements that are executed depending on
which condition is satisfied. There are 3 paths or conditions that need to be tested to get the
output:
Path 1: 1, 2, 3, 5, 6, 7
Path 2: 1, 2, 4, 5, 6, 7
Path 3: 1, 6, 7
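Since the flow graph itself is not reproduced here, a hypothetical function with an equivalent structure can stand in for it. The function, its inputs, and the node numbering in the comments are assumptions made to match the three paths listed above.

```python
# A hypothetical function whose flow graph has three independent paths,
# corresponding to Path 1, Path 2, and Path 3 above.

def classify(a, b):
    if a > 0:                            # node 1 (predicate)
        if b > 0:                        # node 2 (predicate)
            result = "both positive"     # node 3
        else:
            result = "a positive"        # node 4
        # paths rejoin here              # node 5
    else:
        result = "a non-positive"        # falls straight through to node 6
    return result                        # nodes 6, 7

# One test case per independent path gives full basis-path coverage:
assert classify(1, 1) == "both positive"    # Path 1: 1, 2, 3, 5, 6, 7
assert classify(1, -1) == "a positive"      # Path 2: 1, 2, 4, 5, 6, 7
assert classify(-1, 0) == "a non-positive"  # Path 3: 1, 6, 7
```

Three test cases suffice because the function's cyclomatic complexity is 3; adding more paths through the same predicates would be redundant.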
Cyclomatic complexity is a software metric used to measure the complexity of a program. The
metric measures the number of independent paths through program source code.
Independent path is defined as a path that has at least one edge which has not been traversed
before in any other paths.
Cyclomatic complexity can be calculated with respect to functions, modules, methods or classes
within a program.
This metric was developed by Thomas J. McCabe in 1976 and it is based on a control flow
representation of the program. Control flow depicts a program as a graph which consists of
Nodes and Edges.
In the graph, Nodes represent processing tasks while edges represent control flow between the
nodes.
Cyclomatic complexity has a foundation in graph theory and provides you with extremely
useful software metric. Complexity is computed in one of three ways:
The number of regions of the flow graph corresponds to the cyclomatic complexity.
Cyclomatic complexity V(G) for a flow graph G is defined as:
V (G) = E – N + 2
Where E is the number of flow graph edges and N is the number of flow graph nodes.
Cyclomatic complexity V(G) for a flow graph G is defined as:
V (G) = P + 1
Where P is the number of predicate nodes contained in the flow graph G.
The cyclomatic complexity is calculated from the control flow diagram described above, which
shows seven nodes (shapes) and eight edges (lines); hence the cyclomatic complexity is 8 - 7 + 2 = 3.
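The two formulas can be checked against the figures quoted above (7 nodes, 8 edges, and, by implication, 2 predicate nodes); the helper function names are ours, not standard terminology.

```python
# Sketch of the cyclomatic complexity computations for the flow graph
# described in the text: 7 nodes, 8 edges, 2 predicate nodes (assumed).

def vg_from_edges_nodes(e, n):
    """V(G) = E - N + 2, where E = edges and N = nodes."""
    return e - n + 2

def vg_from_predicates(p):
    """V(G) = P + 1, where P = number of predicate nodes."""
    return p + 1

E, N, P = 8, 7, 2
print(vg_from_edges_nodes(E, N))  # 3
print(vg_from_predicates(P))      # 3 -- both formulas agree
```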
LEVELS APPLICABLE TO
Black Box testing method is applicable to the following levels of software testing:
Integration Testing
System Testing
Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more black box
testing method comes into use.
BLACK BOX TESTING TECHNIQUES
Following are some techniques that can be used for designing black box tests.
Equivalence partitioning: It is a software test design technique that involves dividing
input values into valid and invalid partitions and selecting representative values from
each partition as test data.
Boundary Value Analysis: It is a software test design technique that involves
determination of boundaries for input values and selecting values that are at the
boundaries and just inside/ outside of the boundaries as test data.
Cause Effect Graphing: It is a software test design technique that involves identifying the
cases (input conditions) and effects (output conditions), producing a Cause-Effect Graph,
and generating test cases accordingly.
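As a sketch, the first two techniques can be demonstrated against a hypothetical validation rule; the function and its valid range (ages 18 to 60 inclusive) are assumptions made for the example.

```python
# Hypothetical unit under test: accepts ages in the range 18..60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
assert is_valid_age(10) is False   # invalid partition: below the range
assert is_valid_age(35) is True    # valid partition
assert is_valid_age(75) is False   # invalid partition: above the range

# Boundary value analysis: values at and just outside each boundary.
assert is_valid_age(17) is False   # just below the lower boundary
assert is_valid_age(18) is True    # lower boundary
assert is_valid_age(60) is True    # upper boundary
assert is_valid_age(61) is False   # just above the upper boundary
```

Note that both techniques treat the unit as a black box: the test data is derived from the specification (the valid range), not from the source code.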
METHOD
Any of Black Box Testing, White Box Testing, and Gray Box Testing methods can be used.
Normally, the method depends on your definition of ‘unit’.
TASKS
Integration Test Plan
o Prepare
o Review
o Rework
o Baseline
Integration Test Cases/Scripts
o Prepare
o Review
o Rework
o Baseline
Integration Test
o Perform
System Testing is a level of the software testing where a complete and integrated software is
tested.
The purpose of this test is to evaluate the system’s compliance with the specified requirements.
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail, the ink
cartridge and the ballpoint are produced separately and unit tested separately. When two or
more units are ready, they are assembled and Integration Testing is performed. When the
complete pen is integrated, System Testing is performed.
METHOD
Usually, Black Box Testing method is used.
TASKS
System Test Plan
o Prepare
o Review
o Rework
o Baseline
System Test Cases
o Prepare
o Review
o Rework
o Baseline
System Test
o Perform
When is it performed?
System Testing is performed after Integration Testing and before Acceptance Testing.
Who performs it?
Normally, independent Testers perform System Testing.
Acceptance Testing is a level of the software testing where a system is tested for acceptability.
The purpose of this test is to evaluate the system’s compliance with the business requirements
and assess whether it is acceptable for delivery.
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink
cartridge and the ballpoint are produced separately and unit tested separately. When two or
more units are ready, they are assembled and Integration Testing is performed. When the
complete pen is integrated, System Testing is performed. Once System Testing is complete,
Acceptance Testing is performed so as to confirm that the ballpoint pen is ready to be made
available to the end-users.
METHOD
Usually, Black Box Testing method is used in Acceptance Testing. Testing does not
normally follow a strict procedure and is not scripted but is rather ad-hoc.
TASKS
Acceptance Test Plan
o Prepare
o Review
o Rework
o Baseline
Acceptance Test Cases/Checklist
o Prepare
o Review
o Rework
o Baseline
When is it performed?
Acceptance Testing is performed after System Testing and before making the system available
for actual use.
Who performs it?
Internal Acceptance Testing (Also known as Alpha Testing) is performed by members of
the organization that developed the software but who are not directly involved in the
project (Development or Testing). Usually, it is the members of Product Management,
Sales and/or Customer Support.
External Acceptance Testing is performed by people who are not employees of the
organization that developed the software.
o Customer Acceptance Testing is performed by the customers of the organization
that developed the software. They are the ones who asked the organization to
develop the software. [This is in the case of the software not being owned by the
organization that developed it.]
o User Acceptance Testing (Also known as Beta Testing) is performed by the end
users of the software. They can be the customers themselves or the customers’
customers.
Validation testing is the process of evaluating software during the development process or at the
end of the development process to determine whether it satisfies specified business requirements.
Validation testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
It answers the question: are we building the right product?
Validation Testing - Workflow:
Validation testing can be best demonstrated using V-Model. The Software / product under test is
evaluated during this type of testing.
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
Unit Testing is a level of software testing where individual units/ components of software are
tested. The purpose is to validate that each unit of the software performs as designed.
A unit is the smallest testable part of software. It usually has one or a few inputs and usually a
single output. In procedural programming a unit may be an individual program, function,
procedure, etc. In object-oriented programming, the smallest unit is a method, which may
belong to a base/ super class, abstract class or derived/ child class. (Some treat a module of an
application as a unit. This is to be discouraged as there will probably be many individual units
within that module.)
Unit testing frameworks, drivers, stubs, and mock/ fake objects are used to assist in unit testing.
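A minimal sketch of these ideas using Python's unittest framework and a mock object; the unit under test, apply_discount, and its collaborator are hypothetical.

```python
import unittest
from unittest.mock import Mock

def apply_discount(price, pricing_service):
    """Unit under test: a few inputs, a single output."""
    rate = pricing_service.discount_rate()
    return round(price * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_discount_applied(self):
        # The collaborating service is replaced by a mock, so the
        # unit is tested in isolation from the rest of the system.
        service = Mock()
        service.discount_rate.return_value = 0.25
        self.assertEqual(apply_discount(100.0, service), 75.0)
        service.discount_rate.assert_called_once()
```

Run with python -m unittest; the mock plays the role that a stub or driver would play in other languages, standing in for the unit's real dependencies.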
Software reverse engineering is the process of recovering the design and specification of a
product from an analysis of its code. The purpose of reverse engineering is to facilitate
maintenance work by improving the understandability of a system and to produce the
necessary documents for a legacy system.
Reverse engineering is becoming important, since legacy software products lack proper
documentation, and are highly unstructured. Even well-designed products become legacy
software as their structure degrades through a series of maintenance efforts.
The abstraction level of a reverse engineering process and the tools used to effect it refers to the
sophistication of the design information that can be extracted from source code. The abstraction
level should be as high as possible. As the abstraction level increases, you are provided with
information (program and data structure information, object models, data or control flow
models, and entity relationship models) that will allow easier understanding of the program.
The completeness of a reverse engineering process refers to the level of detail that is provided at
an abstraction level.
If the directionality of the reverse engineering process is one-way, all information extracted
from the source code is provided to the software engineer who can then use it during any
maintenance activity. If directionality is two-way, the information is fed to a reengineering tool
that attempts to restructure or regenerate the old program.
Reverse engineering to understand data: Reverse engineering of data occurs at different levels of
abstraction and is the first reengineering task. At the program level, internal program data
structures must often be reverse engineered. At the system level, global data structures are often
reengineered.
Internal Data Structures: Reverse engineering techniques for internal program data focus
on the definition of classes of objects. The data organization within the code identifies
abstract data types. For example, record structures, files, lists, and other data structures
often provide an initial indicator of classes.
Data Structure: Regardless of its logical organization and physical structure, a database
allows the definition of data objects and supports some method for establishing
relationships among the objects.
Reverse engineering to understand processing: Reverse engineering to understand processing
begins with an attempt to understand and then extract procedural abstractions represented by
the source code. To understand procedural abstractions, the code is analyzed at varying levels
of abstraction: system, program, component, pattern, and statement.
Reverse engineering user interfaces: GUIs have become required for computer-based products
and systems of every type. Therefore, the redevelopment of user interfaces has become one of
the most common types of reengineering activity.
Software Reengineering:
Software reengineering is a combination of two consecutive processes i.e. software reverse
engineering and software forward engineering as shown in the fig.
An application may have served the business needs of a company for 10 or 15 years; during that
time it has been corrected, adapted, and enhanced many times. Maintenance of such software is
difficult and can consume over 60 percent of all effort expended by a development organization.
Software maintenance is described by four activities:
Corrective Maintenance
Adaptive Maintenance
Perfective (or) Enhancement Maintenance
Preventive Maintenance (or) Reengineering
Software reengineering activities: The reengineering paradigm shown in above figure is a cyclical
model. This means that each of the activities presented as a part of the paradigm may be
revisited.
Inventory analysis: Every software organization should have an inventory of all
applications. The inventory can be nothing more than a spreadsheet model containing
information that provides a detailed description (e.g., size, age, business criticality) of
every active application. The inventory should be revisited on a regular cycle.
Document Restructuring: Weak documentation is the trademark of many legacy systems.
Three options exist:
Creating documentation is far too time consuming; if the system works, live with what
you have.
Documentation must be updated, but your organization has limited resources; document
only those portions of the system that are currently being changed.
The system is business critical and must be fully redocumented.
Reverse Engineering: The term reverse engineering has its origins in the hardware world.
A company disassembles a competitive hardware product in an effort to understand its
competitor’s design and manufacturing “secrets.”
Reverse engineering for software is quite similar. Reverse engineering is a process of design
recovery. Reverse engineering tools extract data, architectural, and procedural design
information from an existing program.
Code Restructuring: The most common type of reengineering is code restructuring. The
source code is analyzed using a restructuring tool, and the internal code documentation is
updated.
Data Restructuring: Data restructuring is a full-scale reengineering activity. It begins with
reverse engineering: the current data architecture is dissected and the necessary data
models are defined.
Forward Engineering: Applications would be rebuilt using an automated "reengineering
engine." The old program would be fed into the engine, analyzed, restructured, and then
regenerated in a form that exhibits the best aspects of software quality.