
1. Software Testing
Software testing is the process used to help identify the correctness,
completeness, security and quality of developed computer software.

The IEEE definition for testing is:


“The process of exercising or evaluating a system by manual or automatic
means to verify that it satisfies specified requirements or to identify
differences between expected and actual results”.

The definition given by Myers is:


“Testing is the process of executing a program with the intent of finding
errors”.

Testing is an activity aimed at evaluating an attribute or capability of a program or system to determine that it meets its required results. It is a process of evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.

Testing is used to determine the status of the product during and after the build, or "do", component of the process. The role of testing changes as the type of process used to build the product changes.

What is Testing?
* An examination of the behavior of a program by executing it on sample data sets.
* Testing comprises a set of activities to detect defects in the produced material.
* To unearth and correct defects.
* To detect defects early and to reduce the cost of fixing them.
* To avoid users detecting problems.
* To ensure that the product works as users expect it to.
What is the goal of testing?
The main goal of testing is to verify that the system meets the user
requirements and to check the system's reliability.

Why software testing?

• To produce a quality product.

• To reduce failure costs and maintenance costs.

Software Testing Types:

Black box testing - Internal system design is not considered in this type of
testing. Tests are based on requirements and functionality.

White box testing - This testing is based on knowledge of the internal logic
of an application’s code. Also known as Glass box Testing. Internal software
and code working should be known for this type of testing. Tests are based
on coverage of code statements, branches, paths, conditions.

Unit testing - Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. It may require developing test driver modules or test harnesses.
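For illustration only, a minimal unit test using Python's built-in unittest framework might look like the sketch below; the discount_price function is a hypothetical module under test, not something defined in this text.

    import unittest


    def discount_price(price, percent):
        """Hypothetical unit under test: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)


    class DiscountPriceTest(unittest.TestCase):
        """A tiny test harness exercising one module in isolation."""

        def test_typical_discount(self):
            self.assertEqual(discount_price(200.0, 25), 150.0)

        def test_zero_discount_returns_original_price(self):
            self.assertEqual(discount_price(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                discount_price(100.0, 150)


    if __name__ == "__main__":
        unittest.main()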

Incremental integration testing - A bottom-up approach to testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

Integration testing - Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing - This type of testing ignores the internal parts and focuses on whether the output is as per requirements or not. It is black-box testing geared to the functional requirements of an application.

System testing - The entire system is tested as per the requirements. Black-box testing that is based on overall requirements specifications and covers all combined parts of the system.

End-to-end testing - Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing - Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing, and the build is returned to the developers to fix.

Regression testing - Testing the application as a whole after a modification to any module or functionality. It is difficult to cover the entire system in regression testing, so automation tools are typically used for this type of testing.
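Because it is difficult to re-run a full regression pass by hand, such suites are normally automated. A minimal sketch (assuming a hypothetical tests/ directory containing modules named test_*.py) re-runs every test after a change using Python's unittest discovery:

    import sys
    import unittest

    # Discover every test module under tests/ (hypothetical layout) and run them all,
    # so the whole application is re-checked after a modification to any one module.
    suite = unittest.TestLoader().discover(start_dir="tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)

    # Exit non-zero so a build pipeline fails when a regression is detected.
    sys.exit(0 if result.wasSuccessful() else 1)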

Acceptance testing - Normally this type of testing is done to verify that the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

Load testing - A type of performance testing that checks system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
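A very rough sketch of the idea, using only the Python standard library and a hypothetical URL, fires a batch of concurrent requests and records their response times; a real load test would ramp the user count and run far longer:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://example.com/"   # hypothetical system under test
    CONCURRENT_USERS = 20         # simulated load level


    def timed_request(_):
        """Issue one request and return its response time in seconds."""
        start = time.perf_counter()
        with urlopen(URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start


    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        timings = list(pool.map(timed_request, range(CONCURRENT_USERS)))

    print(f"requests: {len(timings)}, "
          f"average: {sum(timings) / len(timings):.3f}s, "
          f"slowest: {max(timings):.3f}s")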

Stress testing - The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as input beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing - A term often used interchangeably with 'stress' and 'load' testing; it checks whether the system meets performance requirements. Different performance and load tools are used for this.

Usability testing - A user-friendliness check. The application flow is tested: can a new user understand the application easily, and is proper help documented for whenever the user gets stuck? Basically, system navigation is checked in this testing.

Install/uninstall testing - Tested for full, partial, or upgrade install/uninstall processes on different operating systems under different hardware and software environments.

Recovery testing - Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing - Can the system be penetrated by any form of hacking? Testing how well the system protects against unauthorized internal or external access, and whether the system and database are safe from external attacks.

Compatibility testing - Testing how well the software performs in a particular hardware/software/operating system/network environment and in different combinations of the above.

Comparison testing - Comparison of product strengths and weaknesses with previous versions or other similar products.

Alpha testing - An in-house virtual user environment can be created for this type of testing. Testing is done at the end of development; minor design changes may still be made as a result of such testing.

Beta testing - Testing typically done by end users or others; the final testing before releasing the application for commercial purposes.

The steps in the TESTING process are as follows.

1. Requirement analysis

Testing should begin in the requirements phase of the software development life cycle (SDLC). The actual requirements should be clearly understood with the help of Requirement Specification documents, Functional Specification documents, Design Specification documents, Use Case documents, etc.
During the requirement analysis the following should be considered.
-Are the definitions and descriptions of the required capabilities precise?
-Is there clear delineation between the system and its environment?
-Can the requirements be realized in practice?
-Can the requirements be tested effectively?
2. Test Planning

During this phase the Test Strategy, Test Plan, and Test Bed will be created.
A test plan is a systematic approach to testing a system or software.
The plan should identify:
-Which aspects of the system should be tested?
-Criteria for success.
-The methods and techniques to be used.
-Personnel responsible for the testing.
-Different test phases and test methodologies.
-Manual and automation testing.
-Defect management, configuration management, risk management, etc.
-Evaluation and identification of test and defect-tracking tools.

3. Test Environment Setup

During this phase the required environment will be set up. The following should also be taken into account.
-Network connectivity.
-Installation and configuration of all software and tools.
-Coordination with vendors and others.

4. Test Design

During this phase:
-Test scenarios will be identified.
-Test cases will be prepared (see the sketch after this list).
-Test data and test scripts will be prepared.
-Test case reviews will be conducted.
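As a small illustration of prepared test cases and test data (the grade function and the boundary values below are hypothetical, chosen only for the example), test cases can be captured as data rows and executed together:

    import unittest


    def grade(score):
        """Hypothetical function under test: map a numeric score to a letter grade."""
        if score >= 90:
            return "A"
        if score >= 75:
            return "B"
        if score >= 60:
            return "C"
        return "F"


    # Prepared test data: each row is a test case id, the input, and the expected result.
    TEST_CASES = [
        ("TC01_boundary_A", 90, "A"),
        ("TC02_mid_B",      80, "B"),
        ("TC03_boundary_C", 60, "C"),
        ("TC04_fail",       59, "F"),
    ]


    class GradeTestCases(unittest.TestCase):

        def test_prepared_cases(self):
            for case_id, score, expected in TEST_CASES:
                with self.subTest(case=case_id):
                    self.assertEqual(grade(score), expected)


    if __name__ == "__main__":
        unittest.main()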

5. Test Automation

In this phase the requirements for automation will be identified, along with the tools to be used. Designing the framework, scripting, script integration, and review and approval are undertaken in this phase.

6. Test Execution and Defect Tracking

Testers execute the software based on the plans and tests and report any
errors found to the development team. In this phase
-Test cases will be executed.
-Test scripts will be tested.
-Test results will be analyzed.
-Defects will be raised and tracked to closure.

7. Test Reports

Once testing is completed, testers generate metrics and make final reports
on their test effort and whether or not the software tested is ready for
release.
-Test summary reports will be prepared (a small sketch follows this list).
-Test metrics will be generated and process improvements made.
-The build will be released.
-Acceptance will be received.
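As an illustrative sketch (the test case ids, statuses, and defect ids are made up, and the field layout is an assumption rather than a prescribed format), a test summary report could be generated from raw execution results like this:

    from collections import Counter

    # Assumed raw execution results: (test case id, status, defect id or None).
    results = [
        ("TC01", "pass", None),
        ("TC02", "fail", "DEF-101"),
        ("TC03", "pass", None),
        ("TC04", "fail", "DEF-102"),
        ("TC05", "pass", None),
    ]

    status_counts = Counter(status for _, status, _ in results)
    open_defects = [defect for _, _, defect in results if defect]
    pass_rate = 100.0 * status_counts["pass"] / len(results)

    print("Test Summary Report")
    print(f"  executed : {len(results)}")
    print(f"  passed   : {status_counts['pass']}")
    print(f"  failed   : {status_counts['fail']}")
    print(f"  pass rate: {pass_rate:.1f}%")
    print(f"  defects  : {', '.join(open_defects)}")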

Software Quality Assurance


The function of software quality that assures that the standards, processes,
and procedures are appropriate for the project and are correctly
implemented.

Software Quality Control


The function of software quality that checks that the project follows its
standards, processes, and procedures, and that the project produces the
required internal and external (deliverable) products.

CMMI
A process improvement approach to software development.

CMMi identifies a core set of Software Engineering process areas as:-

Requirements Development
Requirements Management
Technical Solution
Product Integration
Verification
Validation

CMMI also covers other process areas, such as Process Management, Project Management, and Support, but only the core Software Engineering development processes are used here by way of example.

It is also interesting to note that SQA and SQC are processes defined within CMMI; they fall under the Support process area.
In CMMI, SQA/SQC is defined as Process and Product Quality Assurance.

CMMI is an approach to process improvement in which SQA/SQC play a major but not exclusive role.
Everyone in a software development organization takes part in both the CMMI processes and any improvement initiatives for those processes.

Each of the main Engineering process areas is now described together with the role that SQA/SQC plays within those areas.

The CMMI Requirements Development process area describes three types of requirements: customer requirements, product requirements, and product-component requirements.

SQA role: To observe (audit) that documented standards, processes, and procedures are followed.
SQA would also establish software metrics in order to measure the effectiveness of this process.
A common metric for measuring the Requirements process would be the number of errors (found during system testing) that could be traced to inaccurate or ambiguous requirements (note: SQC would perform the actual system testing, but SQA would collect the metrics for monitoring and continuous improvement).

SQC role: SQC takes an active role in Verification (itself a process, described later).
Verification of the requirements would involve inspection (reading), looking for clarity and completeness.
SQC would also verify that any documented requirement standards are followed.

Note there is a subtle difference between SQA and SQC with regard to standards: SQC's role is to verify the output of this process (that is, the Requirements document itself), while SQA's role is to make sure the process is followed correctly.

SQA is more of an audit role here, and may sample actual Requirements
whereas SQC is involved in the Verification of all Requirements.

The types of requirement need not be just the functional aspects (or customer/user-facing requirements); they can also include product and/or component requirements.

The product requirements, e.g. Supportability, Adaptability, Reliability, etc., are characteristics discussed here (as part of the FURPS+ model).
The respective roles of SQC and SQA are the same for all types of requirement (customer and product), with SQC focusing on the 'internal deliverable' and SQA focusing on the process of how the internal deliverable is produced, as per the formal definition.
SQA and SQC roles in CMMI Requirements Management

The purpose of (CMMI) Requirements Management is to manage the requirements of the project's products and product components and to identify inconsistencies between those requirements and the project's plans and work products.

This process involves version control of the Requirements and the relationship between the Requirements and other work products. One tool used in Requirements Management is a Traceability Matrix.

The Traceability Matrix maps where in the software a given requirement is implemented; it is a kind of cross-reference table. The traceability matrix also maps which test cases verify a given requirement.
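A Traceability Matrix is usually kept in a spreadsheet or a requirements tool; purely as an illustration (the requirement, module, and test identifiers are invented), the same cross-reference can be sketched as a simple mapping, including a check for requirements that no test case verifies:

    # Hypothetical traceability matrix: requirement -> implementing modules and verifying tests.
    traceability = {
        "REQ-001": {"modules": ["login.py"],   "tests": ["TC-101", "TC-102"]},
        "REQ-002": {"modules": ["billing.py"], "tests": ["TC-201"]},
        "REQ-003": {"modules": ["reports.py"], "tests": []},  # a coverage gap
    }

    # Requirements with no verifying test case indicate a test-coverage gap,
    # one of the measures discussed for Requirements Management.
    untested = [req for req, links in traceability.items() if not links["tests"]]
    print("Requirements lacking test coverage:", untested)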

There are other processes within Requirements Management; CMMI should be referenced for further information.

SQA role: To observe (audit) that documented standards, processes, and procedures are followed.
SQA would also establish metrics in order to measure the effectiveness of this process.
A common metric for measuring Requirements Management would be how many times the wrong version was referenced.
Another measure (for the Traceability Matrix) would be lack of test coverage, that is, defects detected in the shipped product for requirements that were not tested because they were not referenced in the Traceability Matrix.

SQC role: As with the actual Requirements Development, SQC would be involved in inspecting the actual deliverables (e.g. the Traceability Matrix) from this process.

SQC may also get involved at this stage as they will be the people doing the
actual Testing (Verification and Validation) so for their test coverage a
complete Traceability Matrix is essential.
SQA and SQC roles in CMMI Technical Solution

The purpose of (CMMI) Technical Solution is to design, develop, and implement solutions to requirements.

Solutions, designs, and implementations encompass products, product components, and product-related life-cycle processes, either singly or in combination as appropriate.

These are the main Design and Coding processes. CMMI puts the design and build together. Other important processes, e.g. Configuration Management, are listed in other process areas within CMMI.

SQA role: To observe (audit) that documented standards, processes, and procedures are followed.
SQA would also establish metrics in order to measure the effectiveness of this process. Clearly, testing the end product against the requirements (which is itself an SQC activity) will reveal any defects introduced during this (the Technical Solution) process.
The number of defects is a common measure for the design/build phase. This metric is usually further normalized by some measure of scope, for example defects per 100 lines of code, or per function.
It is important to note that a defect may not always be a functional (or customer-facing) defect; it could be that a required adaptability scenario is absent from the design and/or coded solution.
The FURPS+ model references typical software metrics that are used for specifying the total (both functional and non-functional) software requirements.
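A quick sketch of the defect-density metric mentioned above, with made-up numbers:

    # Sketch of the defect-density metric (the figures are invented for illustration).
    defects_found = 18      # defects traced to the design/build phase
    lines_of_code = 4500    # size of the delivered component

    defects_per_100_loc = defects_found / (lines_of_code / 100)
    print(f"Defect density: {defects_per_100_loc:.2f} defects per 100 lines of code")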

SQC role: The major SQC role during this process will be testing (see Validation and Verification).
The finished product does not have to be present before testing can begin: unit and component testing can both take place before the product is complete. Design and code reviews are also something that SQC could get involved with.

The purpose of the review has to be clearly stated, i.e. to verify standards are
followed or to look for potential Supportability (part of the Product
Requirements) issues. The Supportability metric is the time it takes for a
defect in a system to be fixed. This metric is influenced by the complexity of
the code, which impacts the developer’s ability to find the defect.
SQA and SQC roles in CMMI Product Integration

The purpose of Product Integration is to assemble the product from the product components, ensure that the product, as integrated, functions properly, and deliver the product. Note this is the final integration and 'move to production' or product delivery. For large software packages (consider SAP, Oracle Financials, etc.) the assembly process is huge and the potential for errors is high. This process does not involve any coding but pure integration and/or assembly.

SQA role: To observe (audit) that documented standards, processes, and procedures are followed. SQA would also establish metrics in order to measure the effectiveness of this process. One measurement would be the defects found that resulted from the interface specifications (part of the product requirements); a potential process improvement could be to find other, perhaps less ambiguous, ways of specifying interfaces. For example, a development team may move to XML or Web Services for all interfaces; SQA could then measure the defects and report back to management and development as to the effectiveness of this change.

SQC role: Again, testing would be a large role played by SQC. Systems Integration Testing (SIT) would be carried out by SQC. Installability testing would also be done during this process.
SQA and SQC roles in CMMI Verification

The purpose of Verification is to ensure that selected work products meet their specified requirements. These activities are carried out only by SQC; the role of SQA would be to make sure, by audit, that SQC had documented procedures, plans, etc. SQA would also measure the effectiveness of the Verification processes by tracking defects that were missed by SQC during Verification.

Note the term Verification, as opposed to Validation (see below). In essence Verification answers the question 'Are we building the product correctly?' while Validation answers the question 'Are we building the correct product?'. Validation demonstrates that the product satisfies its intended purpose when placed in the correct environment, while Verification refers to building to specification. The FURPS+ model identifies both Customer and Product requirements; Verification applies to both these types of requirements and can be applied to the intermediary work products. Design or Code reviews are examples of Verification.

The terms Verification and Validation are often mixed; CMMI makes this comment about the distinction: Although “verification” and “validation” at first seem quite similar in CMMI models, on closer inspection you can see that each addresses different issues. Verification confirms that work products properly reflect the requirements specified for them. In other words, verification ensures that “you built it right.” While SQC carries out all the Verification activities, the Verification process itself is still subject to SQA and process improvement.
SQA and SQC roles in CMMI Validation

Validation confirms that the product, as provided, will fulfill its intended use.
In other words, validation ensures that “you built the right thing.” As with Verification, Validation is mainly the domain of SQC. The term Acceptance Test could also apply to Validation; in most cases the Acceptance test is carried out by a different group of people from the SQC team that performed Verification as the product was being built. In the case where an application is going to be used internally, the end user or business representative would perform the Acceptance testing. Wherever this is done, it is in essence an SQC activity. As with Verification, SQA makes sure that these processes conform to standards and documented procedures. The Validation process itself is subject to continuous improvement and measurement.
Conclusion

Although only a high-level snapshot has been given of how some SDLC processes are subjected to SQA and SQC, a clear pattern can be seen. In all cases SQA and SQC do not get involved in building any of the products. SQC is only involved in Verification and Validation. The role of SQA is even more removed from development; SQA mainly provides the role of an auditor. In addition, SQA will collect measurements of the effectiveness (and cost) of the processes in order to implement continuous process improvement. This separation of SQC from development and SQA from SQC ensures objectivity and impartiality. In an ideal environment these (Development, SQC and SQA) would be three separate organizational units reporting to different managers. Some of the benefits of SQA/SQC can be achieved in a less formal environment. This hybrid approach is typically used by small development groups. An example of this hybrid approach is documented at SQA in Practice.

A definition of software quality metrics is:-

A measure of some property of a piece of software or its specifications.

Basically, as applied to the software product, a software metric measures (or quantifies) a characteristic of the software.
Some common software metrics (discussed later) are:-

Source lines of code.

Cyclomatic complexity, used to measure code complexity.

Function point analysis (FPA), used to measure the size (functions) of software.

Bugs per lines of code.

Code coverage, which measures the code lines that are executed for a given set of software tests.

Cohesion, which measures how well the source code in a given module works together to provide a single function.

Coupling, which measures how well two software components are data related, i.e. how independent they are.

The above list is only a small set of software metrics; the important points to note are:-

They are all measurable, that is they can be quantified.

They are all related to one or more software quality characteristics.

The last point, related to software characteristics, is important for software process improvement. Metrics, for both process and software,
tell us to what extent a desired characteristic is present in our processes
or our software systems. Maintainability is a desired characteristic of a
software component and is referenced in all the main software quality
models (including the ISO 9126). One good measure of maintainability
would be time required to fix a fault. This gives us a handle on
maintainability but another measure that would relate more to the cause
of poor maintainability would be code complexity. A method for
measuring code complexity was developed by Thomas McCabe and with
this method a quantitative assessment of any piece of code can be made.
Code complexity can be specified and can be known by measurement,
whereas time to repair can only be measured after the software is in
support. Both time to repair and code complexity are software metrics
and can both be applied to software process improvement.
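McCabe's measure can be approximated mechanically. The sketch below is a simplification (it counts only the most common decision points) that walks a Python function's syntax tree and reports decisions + 1:

    import ast
    import textwrap

    # Decision points counted in this simplified McCabe approximation.
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)


    def cyclomatic_complexity(source):
        """Approximate McCabe complexity as the number of decision points + 1."""
        tree = ast.parse(textwrap.dedent(source))
        decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
        return decisions + 1


    sample = """
    def classify(age, member):
        if age < 18:
            return "junior"
        if age >= 65 or member:
            return "discounted"
        return "standard"
    """

    print(cyclomatic_complexity(sample))  # two ifs + one 'or' + 1 = 4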

From our previous definition of SQA and SQC, we now see the
importance of measurement (metrics) for the SDLC and SPI. It is
metrics that indicate the value of the standards, processes, and
procedures that SQA assures are being implemented correctly within a
software project. SQA also collects relevant software metrics to provide
input into an SPI (such as a CMMi continuous improvement initiative).
This exercise of constantly measuring the outcome, then looking for a
causal relationship to standards, procedures and processes makes SQA
and SPI pragmatic disciplines.

The whole process of setting up an SDLC, then selecting the correct metrics, and then establishing causal relationships to parts of the SDLC is more of an art than a science. It is for this reason that there are few, if any, off-the-shelf answers to SQA, SQC and SPI. It is a question of where the risks and challenges of a given environment lie. For example, if you have one version of a system and it runs on a central server, then configuration issues are unlikely and you are less likely to be concerned with portability than someone who targets multiple platforms and multiple versions, and who must also be concerned with configuration management.

That said, the following section tries to pull the ideas of quality metrics, quality characteristics, SPI, SQC and SQA together with some examples, by way of clarifying the definition of these terms.

The Software Assurance Technology Center (SATC), NASA, Software Quality Model includes Metrics

The table below cross-references Goals, Attributes (software characteristics) and Metrics. This table is taken from the Software Assurance Technology Center (SATC) at NASA. Although the software quality model has different quality characteristics than those previously discussed on this website, namely ISO 9126, the relationship with Goals lends itself to giving examples of how this could be used in CMMi. If you look at the other quality models, they have a focus on what comes under the Product (Code) Quality goal of the SATC model.

The reason SQA.net prefers the SATC quality model is:-

The standard quality models, including the new ISO 9126-1, describe only the system's behavioral characteristics.

The SATC model includes goals for processes (i.e. Requirements, Implementation and Testing).

The SATC model can be used to reference all of CMMi, for example Requirements Management (which includes traceability).

If desired, the SATC model can be expanded to accommodate greater risk mitigation in the specified goal areas, or other goal areas can be created.

Demonstrating the relationship of metrics to quality characteristics and SPI (CMMI) is well served by the SATC quality model.

If you need to do this work in practice, you will need to select a single reference point for the Software Quality Model, then research the best metric for evaluating each characteristic. The importance of being able to break down a model of characteristics into measurable components indicates why these models all have a hierarchical form.
The SATC Software Quality Model (which includes Goals and Metrics as
well as the software attributes)
GOALS, with their ATTRIBUTES and METRICS:

Requirements Quality
-Ambiguity: Number of weak phrases. Number of optional phrases.
-Completeness: Number of To Be Determined (TBDs) and To Be Added (TBAs).
-Understandability: Document structure. Readability index.
-Volatility: Count of changes / count of requirements. Life cycle stage when the change is made.
-Traceability: Number of software requirements not traced to system requirements. Number of software requirements not traced to code and tests.

Product (Code) Quality
-Structure/Architecture: Logic complexity. GOTO usage. Size.
-Maintainability: Correlation of complexity/size.
-Reusability: Correlation of complexity/size.
-Internal Documentation: Comment percentage.
-External Documentation: Readability index.

Implementation Effectivity
-Resource Usage: Staff hours spent on life cycle activities.
-Completion Rates: Task completions. Planned task completions.

Testing Effectivity
-Correctness: Errors and criticality. Time of finding of errors. Time of error fixes. Code location of fault.
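As one concrete example of the Product (Code) Quality metrics above, the comment percentage can be computed mechanically. The sketch below is a simplified take (an assumption, not the SATC definition) that reports comment-bearing lines as a share of non-blank lines of Python source:

    import tokenize
    from io import BytesIO


    def comment_percentage(source):
        """Comment-bearing lines as a percentage of non-blank source lines."""
        comment_lines = {
            tok.start[0]
            for tok in tokenize.tokenize(BytesIO(source.encode()).readline)
            if tok.type == tokenize.COMMENT
        }
        non_blank = [line for line in source.splitlines() if line.strip()]
        return 100.0 * len(comment_lines) / max(len(non_blank), 1)


    sample = "x = 1  # initialise counter\n# module-level note\ny = x + 1\n"
    print(f"{comment_percentage(sample):.1f}%")  # 2 of 3 non-blank lines -> 66.7%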
SATC explanation and relationships with CMMi

The SATC Goals can be mapped to the following CMMi development processes:-

GOALS and the corresponding CMMi processes:

-Requirements Quality: Requirements Development, Requirements Management.
-Product (Code) Quality: Technical Solution.
-Implementation Effectivity: Project Management.
-Testing Effectivity: Verification and Validation.

Although not all of the CMMi process areas are covered by the SATC goals, the mainstream development activities are, that is:-

Project Management.
Requirements.
Development (coding and implementing).
Testing.

One good addition to this model would be Supportability, including the number of faults and the mean time to repair; such metrics would characterize the ongoing cost of ownership. That said, the SATC model focuses on project development activities and is primarily used for mitigating project development risks.

The measures can also be added to, the idea being to better measure the software attributes (characteristics). For example, for Reusability, data coupling and functional cohesion could be used instead of the correlation of complexity/size. Data coupling measures how closely two components are tied together, with loose data coupling being desirable. Functional cohesion measures the extent to which a module performs one function; whenever the scope of a component is stretched to cover multiple functions, the code becomes less reusable. The main goal is to continually experiment with new metrics in order to make a better (i.e. more useful) determination of the extent to which the characteristic being measured exists in the software.
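A tiny illustration of the coupling idea (the functions below are hypothetical, not part of the SATC material): the first function is loosely data-coupled because it receives only the values it needs, while the second reaches into a shared global structure and is therefore harder to reuse:

    # Loose data coupling: the function depends only on the data passed to it,
    # so it can be reused anywhere an amount and a rate are available.
    def apply_tax(amount, tax_rate):
        return amount * (1 + tax_rate)


    # Tighter coupling: the function reaches into a shared global configuration,
    # so reusing it means dragging that structure along as well.
    CONFIG = {"tax_rate": 0.2, "currency": "EUR"}


    def apply_tax_coupled(amount):
        return amount * (1 + CONFIG["tax_rate"])


    print(apply_tax(100.0, 0.2))     # 120.0
    print(apply_tax_coupled(100.0))  # 120.0, but only while CONFIG is in scope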

The measures listed in the above table, i.e. for traceability, code complexity, etc., would provide a good starting point for any software metrics program. What is important is to place software metrics in the context of continuous improvement. In order to achieve this, a well-defined quality model is needed which lists the desired characteristics and how they are measured (i.e. metrics). The final essential relationship that needs to be established is the causal relationship between the metrics and the SDLC activities (i.e. processes, procedures and standards). When these models and relationships have been implemented, a continuous improvement life cycle (such as CMMi) can be established. In short:-

We can only control what we can measure.
