
Software Testing Tutorial

1. Introduction

Testing is a process used to help identify the correctness, completeness and quality of
developed computer software.

At the same time, testing can never completely establish the correctness of computer
software. In other words, testing is essentially criticism or comparison: comparing
the actual value with the expected one.

There are many approaches to software testing, but effective testing of complex
products is essentially a process of investigation, not merely a matter of creating and
following rote procedure. One definition of testing is "the process of questioning a
product in order to evaluate it", where the "questions" are things the tester tries to
do with the product, and the product answers with its behavior in reaction to the
probing of the tester. Although most of the intellectual processes of testing are nearly
identical to that of review or inspection, the word testing is connoted to mean the
dynamic analysis of the product—putting the product through its paces.

The quality of the application can and normally does vary widely from system to
system but some of the common quality attributes include reliability, stability,
portability, maintainability and usability.

Refer to the ISO/IEC 9126 standard for a more complete list of attributes and criteria.

2. Types of Testing
2.1. White Box Testing

White box testing is also known as glass box, structural, clear box and open box
testing. It is a software testing technique whereby explicit knowledge of the
internal workings of the item being tested is used to select the test data.

Unlike black box testing, white box testing uses specific knowledge of programming
code to examine outputs. The test is accurate only if the tester knows what the
program is supposed to do. He or she can then see if the program diverges from its
intended goal. White box testing does not account for errors caused by omission, and
all visible code must also be readable.
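
As a brief illustration (not part of the original text), here is a minimal Python
sketch of white-box test selection: the hypothetical classify_discount function is
examined, and one test case is chosen for each of its internal branches.

    import unittest


    def classify_discount(total: float) -> float:
        """Return the discount rate for an order total (illustrative function)."""
        if total >= 1000:        # branch 1: large orders
            return 0.10
        elif total >= 100:       # branch 2: medium orders
            return 0.05
        return 0.0               # branch 3: small orders


    class WhiteBoxDiscountTest(unittest.TestCase):
        """Test data chosen from knowledge of the internal branches above."""

        def test_large_order_branch(self):
            self.assertEqual(classify_discount(1000), 0.10)   # boundary of branch 1

        def test_medium_order_branch(self):
            self.assertEqual(classify_discount(100), 0.05)    # boundary of branch 2

        def test_small_order_branch(self):
            self.assertEqual(classify_discount(99.99), 0.0)   # falls through to branch 3


    if __name__ == "__main__":
        unittest.main()
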
2.2. Black Box Testing

Black box testing is the testing of a function without knowledge of the internal
structure of the program.

Black-box and white-box are test design methods. Black-box test design treats the
system as a "black-box", so it doesn't explicitly use knowledge of the internal
structure. Black-box test design is usually described as focusing on testing functional
requirements. Synonyms for black-box include: behavioral, functional, opaque-box,
and closed-box. White-box test design allows one to peek inside the "box", and it
focuses specifically on using internal knowledge of the software to guide the selection
of test data. Synonyms for white-box include: structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use, many people
prefer the terms "behavioral" and "structural". Behavioral test design is slightly
different from black-box test design because the use of internal knowledge isn't
strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use
a single test design method. One has to use a mixture of different methods so that
they aren't hindered by the limitations of a particular one. Some call this "gray-box"
or "translucent-box" test design, but others wish we'd stop talking about boxes
altogether.

It is important to understand that these methods are used during the test design
phase, and their influence is hard to see in the tests once they're implemented. Note
that any level of testing (unit testing, system testing, etc.) can use any test design
methods. Unit testing is usually associated with structural test design, but this is
because testers usually don't have well-defined requirements at the unit level to
validate.
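
As a contrasting sketch, the black-box (behavioral) style can be illustrated with
Python's standard calendar.isleap function treated purely as a "box": the test cases
below are derived only from the documented leap-year rules, with no reference to how
the function is implemented.

    import unittest

    from calendar import isleap  # the "black box": only its documented behaviour is used


    class BlackBoxLeapYearTest(unittest.TestCase):
        """Cases taken from the requirement, not from the implementation."""

        def test_divisible_by_four(self):
            self.assertTrue(isleap(2024))

        def test_century_not_leap(self):
            self.assertFalse(isleap(1900))

        def test_four_hundred_year_rule(self):
            self.assertTrue(isleap(2000))

        def test_ordinary_year(self):
            self.assertFalse(isleap(2023))


    if __name__ == "__main__":
        unittest.main()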

2.3. Unit Testing

In computer programming, a unit test is a method of testing the correctness of a
particular module of source code.

The idea is to write test cases for every non-trivial function or method in the module
so that each test case is separate from the others if possible. This type of testing is
mostly done by the developers.
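
A minimal sketch of such a unit test using Python's unittest framework, assuming a
small hypothetical function split_full_name as the unit under test:

    import unittest


    def split_full_name(name: str) -> tuple[str, str]:
        """Split 'First Last' into (first, last); illustrative unit under test."""
        first, _, last = name.strip().partition(" ")
        return first, last


    class SplitFullNameTest(unittest.TestCase):
        def test_simple_name(self):
            self.assertEqual(split_full_name("Ada Lovelace"), ("Ada", "Lovelace"))

        def test_extra_whitespace(self):
            self.assertEqual(split_full_name("  Alan Turing "), ("Alan", "Turing"))

        def test_single_word(self):
            self.assertEqual(split_full_name("Plato"), ("Plato", ""))


    if __name__ == "__main__":
        unittest.main()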

2.3.1. Benefits

The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct. It provides a written contract that the piece must satisfy.
This isolated testing provides four main benefits:

2.3.2. Encourages change

Unit testing allows the programmer to refactor code at a later date and make sure
the module still works correctly (regression testing). This provides the benefit of
encouraging programmers to make changes to the code since it is easy for the
programmer to check if the piece is still working properly.

2.3.3. Simplifies Integration

Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a
bottom-up testing approach. Testing the parts of a program first and then testing
the sum of its parts makes integration testing easier.

2.3.4. Documents the code

Unit testing provides a sort of "living document" for the class being tested. Clients
looking to learn how to use the class can look at the unit tests to determine how to
use the class to fit their needs.
2.3.5. Separation of Interface from Implementation

Because some classes may have references to other classes, testing a class can
frequently spill over into testing another class. A common example of this is classes
that depend on a database; in order to test the class, the tester finds herself writing
code that interacts with the database. This is a mistake, because a unit test should
never go outside of its own class boundary. As a result, the software developer
abstracts an interface around the database connection, and then implements that
interface with their own Mock Object. This results in loosely coupled code, thus
minimizing dependencies in the system.
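
The following Python sketch illustrates the idea, using hypothetical names (UserStore,
MockUserStore, UserService): the database dependency is hidden behind a small interface
and replaced by a mock object in the unit test, so the test never crosses the class
boundary into a real database.

    import unittest


    class UserStore:
        """Interface that the production code implements with a real database."""

        def find_email(self, user_id: int) -> str | None:
            raise NotImplementedError


    class MockUserStore(UserStore):
        """Mock object: an in-memory stand-in used only by the tests."""

        def __init__(self, data):
            self._data = data

        def find_email(self, user_id):
            return self._data.get(user_id)


    class UserService:
        """Code under test; it only knows about the UserStore interface."""

        def __init__(self, store: UserStore):
            self._store = store

        def email_domain(self, user_id: int) -> str | None:
            email = self._store.find_email(user_id)
            return email.split("@")[1] if email else None


    class UserServiceTest(unittest.TestCase):
        def test_domain_is_extracted(self):
            service = UserService(MockUserStore({1: "ada@example.org"}))
            self.assertEqual(service.email_domain(1), "example.org")

        def test_unknown_user_gives_none(self):
            service = UserService(MockUserStore({}))
            self.assertIsNone(service.email_domain(42))


    if __name__ == "__main__":
        unittest.main()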

2.3.6. Limitations

It is important to realize that unit-testing will not catch every error in the program.
By definition, it only tests the functionality of the units themselves. Therefore, it will
not catch integration errors, performance problems and any other system-wide
issues. In addition, it may not be trivial to anticipate all special cases of input the
program unit under study may receive in reality. Unit testing is only effective if it is
used in conjunction with other software testing activities.

2.4. Integration testing


Integration Testing is the phase of software testing in which individual software
modules are combined and tested as a group.

It follows unit testing and precedes system testing. Integration testing takes as its
input modules that have been checked out by unit testing, groups them in larger
aggregates, applies tests defined in an Integration test plan to those aggregates, and
delivers as its output the integrated system ready for system testing.
2.4.1. Purpose

The purpose of Integration testing is to verify functional, performance and reliability
requirements placed on major design items. These "design items", i.e. assemblages
(or groups of units), are exercised through their interfaces using Black box testing,
success and error cases being simulated via appropriate parameter and data inputs.
Simulated usage of shared data areas and inter-process communication is tested;
individual subsystems are exercised through their input interface. All test cases are
constructed to test that all components within assemblages interact correctly, for
example, across procedure calls or process activations.

The overall idea is the "building block" approach in which verified assemblages are
added to a verified base which is then used to support the Integration testing of
further assemblages.
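
A minimal Python sketch of this bottom-up, building-block idea, assuming two
hypothetical modules (an order parser and a price calculator) that have already passed
their unit tests; the integration test checks that they work together across their
interface.

    import unittest


    def parse_order(line: str) -> dict:
        """Module A: parse 'sku,quantity,unit_price' into a dictionary."""
        sku, qty, price = line.split(",")
        return {"sku": sku, "quantity": int(qty), "unit_price": float(price)}


    def order_total(order: dict) -> float:
        """Module B: compute the total price for a parsed order."""
        return order["quantity"] * order["unit_price"]


    class OrderIntegrationTest(unittest.TestCase):
        """Verify that the output of parse_order is a valid input for order_total."""

        def test_parsed_order_flows_into_total(self):
            order = parse_order("ABC-1,3,2.50")
            self.assertAlmostEqual(order_total(order), 7.50)


    if __name__ == "__main__":
        unittest.main()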

2.5. Performance Testing

In software engineering, performance testing is testing that is performed to determine
how fast some aspect of a system performs under a particular workload.

Performance testing can serve different purposes. It can demonstrate that the system
meets performance criteria. It can compare two systems to find which performs
better. Or it can measure what parts of the system or workload cause the system to
perform badly. In the diagnostic case, software engineers use tools such as profilers
to measure what parts of a device or software contribute most to the poor
performance or to establish throughput levels (and thresholds) for maintained
acceptable response time.
In performance testing, it is often crucial (and often difficult to arrange) for the test
conditions to be similar to the expected actual use.
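
As one illustration of the diagnostic case, the following Python sketch uses the
standard-library profiler cProfile to see which part of a hypothetical workload
contributes most to the running time:

    import cProfile
    import pstats


    def slow_part():
        return sum(i * i for i in range(200_000))


    def fast_part():
        return sum(range(1_000))


    def workload():
        # Hypothetical workload mixing an expensive and a cheap operation.
        for _ in range(20):
            slow_part()
            fast_part()


    profiler = cProfile.Profile()
    profiler.enable()
    workload()
    profiler.disable()

    # Print the functions that consumed the most cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)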

2.5.1. Technology

Performance testing technology employs one or more PCs to act as injectors – each
emulating the presence of a number of users and each running an automated
sequence of interactions (recorded as a script, or as a series of scripts to emulate
different types of user interaction) with the host whose performance is being tested.
Usually, a separate PC acts as a test conductor, coordinating and gathering metrics
from each of the injectors and collating performance data for reporting purposes. The
usual sequence is to ramp up the load – starting with a small number of virtual users
and increasing the number over a period to some maximum.
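
The following Python sketch illustrates such a ramp-up on a single machine, assuming a
hypothetical target URL; each "virtual user" runs the same scripted request and the
response time at each load level is recorded.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://localhost:8000/"  # hypothetical system under test


    def scripted_interaction() -> float:
        """One virtual user's interaction; returns the response time in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start


    def run_load_level(virtual_users: int) -> list[float]:
        """Run one request per virtual user concurrently and collect timings."""
        with ThreadPoolExecutor(max_workers=virtual_users) as pool:
            return list(pool.map(lambda _: scripted_interaction(), range(virtual_users)))


    if __name__ == "__main__":
        for users in (1, 5, 10, 25, 50):          # ramp up the load
            timings = run_load_level(users)
            avg = sum(timings) / len(timings)
            print(f"{users:3d} users -> average response time {avg:.3f}s")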

The test result shows how the performance varies with the load, given as number of
users vs. response time. Various tools, including Compuware Corporation's QACenter
Performance Edition, are available to perform such tests. Tools in this category
usually execute a suite of tests which will emulate real users against the system.
Sometimes the results can reveal oddities, e.g., that while the average response time
might be acceptable, there are outliers of a few key transactions that take
considerably longer to complete – something that might be caused by inefficient
database queries, etc.

Performance testing can be combined with stress testing, in order to see what
happens when an acceptable load is exceeded – does the system crash? How long
does it take to recover if a large load is reduced? Does it fail in a way that causes
collateral damage?

2.5.2. Performance specifications

Performance testing is frequently not performed against a specification, i.e. no one
will have expressed what the maximum acceptable response time for a given
population of users is. However, performance testing is frequently used as part of the
process of performance profile tuning. The idea is to identify the “weakest link” –
there is inevitably a part of the system which, if it is made to respond faster, will
result in the overall system running faster. It is sometimes a difficult task to identify
which part of the system represents this critical path, and some test tools come
provided with (or can have add-ons that provide) instrumentation that runs on the
server and reports transaction times, database access times, network overhead, etc.
which can be analyzed together with the raw performance statistics. Without such
instrumentation one might have to have someone crouched over Windows Task
Manager at the server to see how much CPU load the performance tests are
generating. There is an apocryphal story of a company that spent a large amount
optimizing their software without having performed a proper analysis of the problem.
They ended up rewriting the system’s ‘idle loop’, where they had found the system
spent most of its time, but even having the most efficient idle loop in the world
obviously didn’t improve overall performance one iota!

Performance testing almost invariably identifies that it is parts of the software (rather
than hardware) that contribute most to delays in processing users’ requests.

Performance testing can be performed across the web, and even done in different
parts of the country, since it is known that the response times of the internet itself
vary regionally. It can also be done in-house, although routers would then need to be
configured to introduce the lag that would typically occur on public networks.

It is always helpful to have a statement of the likely peak numbers of users that
might be expected to use the system at peak times. If there can also be a statement
of what constitutes the maximum allowable 95th percentile response time, then an
injector configuration could be used to test whether the proposed system met that
specification.
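
A minimal Python sketch of such a check, assuming the per-request response times have
already been gathered by the injectors and that the specification (here a hypothetical
2-second limit) is expressed as a 95th percentile:

    import statistics

    MAX_95TH_PERCENTILE = 2.0   # hypothetical specification, in seconds

    # Illustrative sample data; in practice these come from the injectors.
    response_times = [0.4, 0.5, 0.6, 0.7, 0.9, 1.1, 1.4, 1.8, 2.2, 3.5]

    # statistics.quantiles with n=20 yields the 5th, 10th, ..., 95th percentiles;
    # the last cut point is the 95th percentile.
    p95 = statistics.quantiles(response_times, n=20)[-1]

    print(f"95th percentile response time: {p95:.2f}s")
    print("PASS" if p95 <= MAX_95TH_PERCENTILE else "FAIL")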

2.5.3. Tasks to undertake

Tasks to perform such a test would include:

· Analysis of the types of interaction that should be emulated and the production
of scripts to do those emulations

· Decision whether to use internal or external resources to perform the tests.

· Set up of a configuration of injectors/controller

· Set up of the test configuration (ideally identical hardware to the production
platform), router configuration, quiet network (we don’t want results upset by other
users), deployment of server instrumentation.

· Running the tests – probably repeatedly in order to see whether any
unaccounted for factor might affect the results.

· Analyzing the results, either pass/fail, or investigation of critical path and
recommendation of corrective action.

2.6. Stress Testing

Stress Testing is a form of testing that is used to determine the stability of a given
system or entity.

It involves testing beyond normal operational capacity, often to a breaking point, in
order to observe the results. For example, a web server may be stress tested using
scripts, bots, and various denial of service tools to observe the performance of a web
site during peak loads. Stress testing is a subset of load testing; see also
performance testing.
2.7. Security Testing

Application vulnerabilities leave your system open to attacks, downtime, data theft,
data corruption and application defacement. Security within an application or web
service is crucial to avoid such vulnerabilities and new threats.

While automated tools can help to eliminate many generic security issues, the
detection of application vulnerabilities requires independent evaluation of your
specific application's features and functions by experts. An external security
vulnerability review by Third Eye Testing will give you the best possible confidence
that your application is as secure as possible.

2.7.1. Security Testing Techniques

· Vulnerability Scanning

· Network Scanning (see the sketch after this list)

· Password Cracking

· Log Reviews

· Virus Detection

· Penetration Testing

· File Integrity Checkers

· War Dialing
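
As one illustration of the Network Scanning item above, the following Python sketch
checks whether a few well-known TCP ports on a hypothetical host accept connections;
dedicated scanners such as Nmap are, of course, far more thorough.

    import socket

    HOST = "192.0.2.10"           # hypothetical host under review (TEST-NET address)
    PORTS = [21, 22, 80, 443, 3306]


    def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            return sock.connect_ex((host, port)) == 0


    for port in PORTS:
        state = "open" if is_port_open(HOST, port) else "closed/filtered"
        print(f"{HOST}:{port} is {state}")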

2.8. Usability Testing

Usability testing is a means for measuring how well people can use some human-made
object (such as a web page, a computer interface, a document, or a device) for
its intended purpose, i.e. usability testing measures the usability of the object.

Usability testing focuses on a particular object or a small set of objects, whereas
general human-computer interaction studies attempt to formulate universal principles.

If usability testing uncovers difficulties, such as people having difficulty
understanding instructions, manipulating parts, or interpreting feedback, then
developers should
improve the design and test it again. During usability testing, the aim is to observe
people using the product in as realistic a situation as possible, to discover errors and
areas of improvement. Designers commonly focus excessively on creating designs
that look "cool", compromising usability and functionality. This is often caused by
pressure from the people in charge, forcing designers to develop systems based on
management expectations instead of people's needs. A designer's primary function
should be more than appearance; it should include making things work with people.

"Caution: simply gathering opinions is not usability testing -- you must arrange an
experiment that measures a subject's ability to use your document."

Rather than showing users a rough draft and asking, "Do you understand this?",
usability testing involves watching people trying to use something for its intended
purpose. For example, when testing instructions for assembling a toy, the test
subjects should be given the instructions and a box of parts. Instruction phrasing,
illustration quality, and the toy's design all affect the assembly process.

Setting up a usability test involves carefully creating a scenario, or realistic
situation, wherein the person performs a list of tasks using the product being tested while
observers watch and take notes. Several other test instruments such as scripted
instructions, paper prototypes, and pre- and post-test questionnaires are also used to
gather feedback on the product being tested. For example, to test the attachment
function of an e-mail program, a scenario would describe a situation where a person
needs to send an e-mail attachment, and ask him or her to undertake this task. The
aim is to observe how people function in a realistic manner, so that developers can
see problem areas, and what people like. The technique popularly used to gather data
during a usability test is called a think aloud protocol.

2.9. Stability Testing

In software testing, stability testing is an attempt to determine if an application
will crash.

In the pharmaceutical field, it refers to a period of time during which a multi-dose
product retains its quality after the container is opened.

2.10. Acceptance Testing

User acceptance testing (UAT) is one of the final stages of a software project and will
often occur before the customer accepts a new system.

Users of the system will perform these tests which, ideally, developers have derived
from the User Requirements Specification, to which the system should conform.

Test designers will draw up a formal test plan and devise a range of severity levels.
The focus in this type of testing is less on simple problems (spelling mistakes,
cosmetic problems) or show stoppers (major problems such as the software crashing or
failing to run). Developers should have worked out these issues during
unit testing and integration testing. Rather, the focus is on a final verification of the
required business function and flow of the system. The test scripts will emulate real-
world usage of the system. The idea is that if the software works as intended and
without issues during a simulation of normal use, it will work just the same in
production.

Results of these tests will allow both the customers and the developers to be
confident that the system will work as intended.

2.11. Installation Testing

Installation testing (in software engineering) can simply be defined as any testing
that occurs outside of the development environment.

Such testing will frequently occur on the computer system the software product will
eventually be installed on.

Whilst the ideal installation might simply appear to be to run a setup program, the
generation of that setup program itself and its efficacy in a variety of machine and
operating system environments can require extensive testing before it can be used
with confidence.
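
A minimal Python sketch of a post-installation check, assuming a hypothetical product
installed under /opt/acme; it verifies that the expected files are present and that the
installed executable starts and reports its version.

    import pathlib
    import subprocess

    INSTALL_DIR = pathlib.Path("/opt/acme")            # hypothetical install location
    EXPECTED_FILES = ["bin/acme", "conf/acme.ini"]     # hypothetical installed payload


    def check_files() -> bool:
        """Verify that every expected file was laid down by the setup program."""
        missing = [f for f in EXPECTED_FILES if not (INSTALL_DIR / f).exists()]
        if missing:
            print("Missing files:", ", ".join(missing))
        return not missing


    def check_launch() -> bool:
        """Verify that the installed executable starts and reports a version."""
        result = subprocess.run([str(INSTALL_DIR / "bin/acme"), "--version"],
                                capture_output=True, text=True)
        print("Reported version:", result.stdout.strip())
        return result.returncode == 0


    if __name__ == "__main__":
        ok = check_files() and check_launch()
        print("Installation check", "PASSED" if ok else "FAILED")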

In distributed systems, particularly where software is to be released into an already
live target environment (such as an operational web site), installation (or deployment,
as it is sometimes called) can involve database schema changes as well as the
installation of new software. Deployment plans in such circumstances may include
back-out procedures whose use is intended to roll the target environment back in the
event that the deployment is unsuccessful. Ideally, the deployment plan itself should
be tested in an environment that is a replica of the live environment. A factor that
can increase the organizational requirements of such an exercise is the need to
synchronize the data in the test deployment environment with that in the live
environment with minimum disruption to live operation.

2.12. Alpha Testing

In software development, testing is usually required before release to the general
public.

In-house developers often test the software in what is known as 'alpha' testing,
which is often performed under a debugger or with hardware-assisted debugging to
catch bugs quickly.

It can then be handed over to testing staff for additional inspection in an environment
similar to how it was intended to be used. This technique is known as black box
testing. This is often known as the second stage of alpha testing.

2.13. Beta Testing

Often, the software is released to a limited audience of those who will finally form
the end users, so that they can use and test it and come back with feedback or bug reports.

This process helps in determining whether the final software meets its intended
purpose and whether the end users would accept the same.

The product handed out as a beta release is not bug-free; however, it should contain no
serious or critical bugs. A beta release is very close to the final release.
2.14. Product Testing

Software product development companies face unique challenges in testing. Only a
suitably organized and executed test process can contribute to the success of a
software product.

Product testing experts design the test process to take advantage of the economies of
scope and scale that are present in a software product.

These activities are sequenced and scheduled so that a test activity occurs
immediately following the construction activity whose output the test is intended to
validate.

2.15. System Testing

According to the IEEE Standard Computer Dictionary, System testing is testing conducted
on a complete, integrated system to evaluate the system's compliance with its specified
requirements.

System testing falls within the scope of Black box testing, and as such, should require
no knowledge of the inner design of the code or logic (IEEE, IEEE Standard Computer
Dictionary: A Compilation of IEEE Standard Computer Glossaries, New York, NY, 1990).

Alpha testing and Beta testing are sub-categories of System testing.

As a rule, System testing takes, as its input, all of the "integrated" software
components that have successfully passed Integration testing and also the software
system itself integrated with any applicable hardware system(s). The purpose of
Integration testing is to detect any inconsistencies between the software units that
are integrated together (called assemblages) or between any of the assemblages and the
hardware. System testing is a more limited type of testing; it seeks to detect defects
both within the "inter-assemblages" and in the system as a whole.

2.16. Regression Testing

Regression testing is typically carried out at the end of the development cycle. During
this testing, all bugs previously identified and fixed are retested, along with their
impacted areas, to confirm the fix and to check for any unintended side effects.
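
A minimal Python sketch of such a regression test, assuming a hypothetical defect
(#1234) in a price calculation that once crashed on a quantity of zero; the test pins
the fix and also covers the impacted area so the bug cannot silently return.

    import unittest


    def price_with_tax(unit_price: float, quantity: int, tax_rate: float = 0.2) -> float:
        """Fixed version: a quantity of zero is now a valid, zero-cost order."""
        if quantity < 0:
            raise ValueError("quantity must not be negative")
        return unit_price * quantity * (1 + tax_rate)


    class PriceRegressionTest(unittest.TestCase):
        def test_bug_1234_zero_quantity_no_longer_crashes(self):
            # Regression test for the (hypothetical) defect #1234.
            self.assertEqual(price_with_tax(9.99, 0), 0.0)

        def test_normal_order_still_correct(self):
            # Impacted area: the ordinary calculation must be unaffected by the fix.
            self.assertAlmostEqual(price_with_tax(10.0, 2), 24.0)


    if __name__ == "__main__":
        unittest.main()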

According to the IEEE Standard Computer Dictionary, Regression testing is the selective
retesting of a system or component to verify that modifications have not caused
unintended effects and that the system or component still complies with its specified
requirements (IEEE, IEEE Standard Computer Dictionary: A Compilation of IEEE Standard
Computer Glossaries, New York, NY, 1990).

Regression testing is usually carried out in a Black box style: existing test cases
that previously passed are re-run against the modified software, so no knowledge of the
inner design of the code or logic is required.

As a rule, Regression testing takes as its input the test cases that exercise the fixed
defects and their impacted areas, together with a selection of the existing functional
tests. Its purpose is to detect any defects that the modifications have introduced into
previously working parts of the system.

2.17. Compatibility Testing

One of the challenges of software development is ensuring that the application works
properly on the different platforms and operating systems on the market and also
with the applications and devices in its environment.
