
Software Quality Engineering

Dr. Muazzam Maqsood

Department of Computer Science


COMSATS University Islamabad, Attock Campus

muazzam.maqsood@cuiatk.edu.pk
METHODS OF TESTING
There are two basic methods of performing software testing:

1. Manual testing

2. Automated testing
TYPES OF TESTING
▪ Black Box Testing

Black box testing is a strategy in which testing is based solely on the requirements and specifications. Unlike its complement, white box testing, black box testing requires no knowledge of the internal paths, structure, or implementation of the software under test.

▪ White Box Testing

White box testing is a strategy in which testing is based on the internal paths,
structure, and implementation of the software under test. Unlike its
complement, black box testing, white box testing generally requires detailed
programming skills.
TYPES OF TESTING
▪ Gray Box Testing
Gray box testing is a software testing technique that uses a combination of
black box testing and white box testing. Gray box testing is not black box
testing, because the tester does know some of the internal workings of the
software under test.

In gray box testing, the tester applies a limited number of test cases to the
internal workings of the software under test. In the remaining part of the gray
box testing, one takes a black box approach in applying inputs to the software
under test and observing the outputs.

This is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.
Black Box Testing
▪ Black box testing (also called functional testing) is testing that ignores the
internal mechanism of a system or component and mainly focuses on the
outputs generated in response to selected inputs and execution conditions.

▪ In the language of V&V, black box testing is often used for validation (Are
we building the right software?)

▪ Black box testing, also called functional testing and behavioral testing,
focuses on determining whether or not a program does what it is supposed
to do based on its functional requirements.
Pros and Cons of Black Box Testing
Pros:

▪ Tester can be non-technical.

▪ This testing is most likely to find those bugs the user would find.

▪ Test cases can be designed as soon as the functional specifications are complete.

Cons:

▪ Chances of repeating tests that have already been done by the programmer.

▪ It is difficult to identify all possible inputs in limited testing time.
Black Box Testing Techniques followed by LMKR
The black box test design methods are listed below. When you are asked to create tests for a new product or feature, use this checklist to determine which tests should be designed for the application to be tested.

Black Box Testing Methods

▪ Equivalence Class Partitioning Testing
▪ Boundary Value Testing
▪ Omission Testing
▪ Null Case Testing
▪ Volume Testing
▪ Load Testing
▪ Stress Testing
▪ Performance Testing
▪ Resource Testing
▪ Requirements/Specification Testing
▪ Button Press Testing
▪ State Transition Testing
▪ Installation Testing
▪ Security Testing
▪ Integration Testing
▪ Compatibility Testing
▪ Configuration Testing
▪ Documentation Testing
▪ Smoke Testing
▪ Usability Testing
▪ Exploratory Testing
Equivalence Class Partitioning Testing:

▪ Equivalence Partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

▪ An ideal test case single-handedly uncovers a class of errors, e.g., incorrect processing of all character data, that might otherwise require many cases to be executed before the general error is observed.

▪ Equivalence Partitioning strives to define the test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed.

▪ An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence Class Partitioning Testing

Equivalence classes can be defined according to the following guidelines:

▪ If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

▪ If an input condition specifies a specific value, one valid and two invalid equivalence classes are defined.

▪ If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.

▪ If an input condition is Boolean, one valid and one invalid class are
defined.
Equivalence Class Partitioning Testing:

TESTING PROBLEM

The specification for the software system for validating expenses claims for
hotel accommodation includes the following requirements:

1. There is an upper limit of $70 for accommodation expense claims.

2. Any claim above $70 should be rejected & cause an error message to
display.

3. All expense amounts should be greater than $0, and an error message should be displayed if this is not the case.
Equivalence Class Partitioning Testing
ANALYZING THE TESTING REQUIREMENTS

▪ To support the process of analyzing the above requirements, it is useful to graphically show the partitions and their boundaries, and to state the ranges of the partitions with respect to the boundary values.

DESIGNING THE TEST CASES

▪ The next step is to design the test cases by drawing up a table showing
the test case ID, a typical value drawn from the partition to be input, the
partition it tests, and the expected output.

Test Case ID | Hotel Charge | Partition Tested       | Expected Output
1            | 50           | 0 < Hotel Charge <= 70 | OK
2            | -25          | Hotel Charge <= 0      | Error Message
3            | 89           | Hotel Charge > 70      | Error Message
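As an illustration only, here is a minimal pytest sketch of the three test cases above. The validate_hotel_charge() function is a hypothetical stand-in for the expense-claim validation logic, not part of any real system.

```python
import pytest

def validate_hotel_charge(amount):
    """Hypothetical claim validator: 'OK' for 0 < amount <= 70, else 'Error Message'."""
    if 0 < amount <= 70:
        return "OK"
    return "Error Message"

# One representative value is drawn from each equivalence class.
@pytest.mark.parametrize("charge, expected", [
    (50,  "OK"),             # partition: 0 < Hotel Charge <= 70
    (-25, "Error Message"),  # partition: Hotel Charge <= 0
    (89,  "Error Message"),  # partition: Hotel Charge > 70
])
def test_equivalence_classes(charge, expected):
    assert validate_hotel_charge(charge) == expected
```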


BOUNDARY VALUE TESTING
▪ (Specific case of Equivalence Class Partitioning Testing)

▪ Boundary value analysis leads to a selection of test cases that exercise bounding values. This technique was developed because a great number of errors tend to occur at the boundary of the input domain rather than at the center.

▪ Tests program response to extreme input or output values in each equivalence class.

▪ Guidelines for BVA are as follows:

❑ If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and values just above and just below a and b.
BOUNDARY VALUE TESTING
TESTING PROBLEM

▪ The specification for the software system for validating expenses claims for hotel accommodation includes the following requirements:

1. There is an upper limit of $70 for accommodation expense claims.

2. Any claim above $70 should be rejected & cause an error message to
display.

3. All expense amounts should be greater than $0, and an error message should be displayed if this is not the case.

ANALYZING THE TESTING REQUIREMENTS

▪ To support the process of analyzing the above requirements, it is useful to graphically show the boundaries, and to determine the boundary values and the significant values at either side of the boundaries.
BOUNDARY VALUE TESTING
DESIGNING THE TEST CASES

▪ The next step is to design the test cases by drawing up a table showing the test case ID, the values about and on the boundary to be input for the test, the boundary it tests, and the expected output.

Test Case ID | Hotel Charge | Boundary Tested | Expected Output
1            | -1           | 0               | Error Message
2            | 0            | 0               | Error Message
3            | 1            | 0               | OK
4            | 69           | 70              | OK
5            | 70           | 70              | OK
6            | 71           | 70              | Error Message
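A minimal pytest sketch of the boundary-value cases above, reusing the hypothetical validate_hotel_charge() stand-in from the previous sketch.

```python
import pytest

def validate_hotel_charge(amount):
    """Hypothetical claim validator: 'OK' for 0 < amount <= 70, else 'Error Message'."""
    if 0 < amount <= 70:
        return "OK"
    return "Error Message"

# Values on, just below, and just above each boundary (0 and 70).
@pytest.mark.parametrize("charge, expected", [
    (-1, "Error Message"),  # just below boundary 0
    (0,  "Error Message"),  # on boundary 0 (claims must be greater than $0)
    (1,  "OK"),             # just above boundary 0
    (69, "OK"),             # just below boundary 70
    (70, "OK"),             # on boundary 70 (upper limit is inclusive)
    (71, "Error Message"),  # just above boundary 70
])
def test_boundary_values(charge, expected):
    assert validate_hotel_charge(charge) == expected
```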

OMISSION TESTING
Omission Testing (also called Missing Case Testing):

▪ Exposes defects caused by inputting cases (scenarios) the developer forgot to handle or did not anticipate.

▪ Studies show that many client-reported defects are caused by "faults of omission":

❑ A study by Sherman on a released Microsoft product reported that 30% of client-reported defects were caused by missing cases.

❑ Other studies show that an average of 22 to 54% of all client-reported defects are caused by missing cases.
NULL TESTING

▪ Null Testing: (a specific case of Omission Testing, but triggers defects extremely often)

▪ Exposes defects triggered by no data or missing data.

▪ Often triggers defects because developers create programs to act upon data; they don't think of the case where the project may not contain specific data types.

❑ Example: X, Y coordinates missing for drawing various shapes in a graphics editor.

❑ Example: Blank file names (see the sketch below).
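A minimal sketch of a null-case test in pytest, assuming a hypothetical save_file() helper that should reject blank or missing file names rather than crash.

```python
import pytest

def save_file(filename, data):
    """Hypothetical save routine; raises ValueError on a blank or missing name."""
    if not filename or not filename.strip():
        raise ValueError("file name must not be blank")
    # ... write data to disk here ...
    return True

@pytest.mark.parametrize("bad_name", [None, "", "   "])
def test_blank_file_names_are_rejected(bad_name):
    with pytest.raises(ValueError):
        save_file(bad_name, data=b"payload")
```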


VOLUME TESTING
▪ Volume testing is done against the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limitations of the system.

▪ The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods.

▪ Such systems can be transaction processing systems capturing real-time sales, or could be database updates and/or data retrieval.
LOAD TESTING
▪ Exposes defects triggered by peak bursts of activity.

❑ Example: Using automation software to simulate 500 users logging into a web site and performing end-user activities at the same time.

❑ Example: Typing at 120 words per minute for 3 hours into a word
processor.
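A minimal sketch of the first example, assuming a hypothetical login() activity. Real load tests would normally drive a deployed system with a dedicated load tool; this only illustrates the idea of many simultaneous users.

```python
from concurrent.futures import ThreadPoolExecutor

def login(user_id):
    """Hypothetical end-user activity to be exercised under load."""
    return f"session-{user_id}"

def test_500_concurrent_logins():
    # Run 500 simulated users at the same time.
    with ThreadPoolExecutor(max_workers=500) as pool:
        results = list(pool.map(login, range(500)))
    # Every simulated user should get a session back.
    assert len(results) == 500
    assert all(r.startswith("session-") for r in results)
```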
STRESS TESTING

▪ In stress testing you continually put excessive load on the system until the system crashes.

▪ A test environment is established with many testing stations. At each station, a script is exercising the system.

▪ More and more stations are added, all simultaneously hammering on the system, until the system breaks.

▪ The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected to be present at a customer site.
PERFORMANCE TESTING
▪ Exposes defects related to tasks taking too long.

▪ Definition of “taking too long” should be:

❑ Slower than specifications listed in the Requirements or Specification Document.

❑ Slower than the previous release performing the same task on the same data and machine.

❑ Slower than a client would reasonably expect. For example, if it takes 3 minutes to display a normal image, clients will not use the product.

▪ Performance testing is designed to test the run-time performance of the software.
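A minimal sketch of a performance check, assuming a hypothetical render_image() operation and an illustrative 3-second budget (the budget would normally come from the requirements document, as noted above).

```python
import time

def render_image(path):
    """Hypothetical operation whose run time is being measured."""
    time.sleep(0.1)  # stand-in for real rendering work

def test_image_renders_within_budget():
    start = time.perf_counter()
    render_image("normal_image.png")
    elapsed = time.perf_counter() - start
    assert elapsed < 3.0, f"rendering took {elapsed:.2f}s, budget is 3.0s"
```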
RESOURCE TESTING
▪ In resource testing you have to check whether an AUT (Application Under Test) utilizes more resources (e.g., memory) than it should.

❑ Example: If a program of 5,000 lines of code should utilize 5 KB of memory, then in resource testing it must be checked whether the AUT is using the amount of memory it should use or more than it should.
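A minimal sketch of a resource (memory) check using Python's tracemalloc module, with a hypothetical build_index() routine and an illustrative 5 KB budget echoing the example above.

```python
import tracemalloc

def build_index(items):
    """Hypothetical routine whose memory use is being measured."""
    return {item: len(item) for item in items}

def test_memory_stays_within_budget():
    tracemalloc.start()
    build_index(["a", "b", "c"])
    _current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    assert peak < 5 * 1024, f"peak memory {peak} bytes exceeds 5 KB budget"
```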
REQUIREMENTS TESTING
▪ Requirements or Specification Testing

▪ Exposes defects in the program design/implementation by comparing the program to every word in the Requirements Document or the Functional Specification Document. It is important that these documents are kept up to date.

▪ Landmark project groups are using two software tools to help capture requirements and test cases and then trace the test cases back to the requirements. These tools are Caliber Requirement Management and Test Director.
BUTTON PRESS TESTING
▪ Button Press Testing: (Landmark testing term, not industry
standard)

▪ Exposes functionality defects by methodically pressing every widget (pull down menu, pop ups, drop down lists, buttons, icons, etc.) in the program.

▪ Landmark programs have thousands of widgets, so it is not guaranteed that every widget will be used during regular testing. Therefore, a Button Press Test should be performed at least once during Alpha testing to ensure every widget works.

▪ At a minimum, the tester:

❑ looks for a crash

❑ examines results to be sure widget functioned correctly


COMPATIBILITY TESTING

▪ Exposes defects related to using files output by one version of the software in another version of the software.

▪ Most Landmark applications are designed to be "forwards" compatible, meaning files created in a previous release of the software can be used in the version currently under test.

▪ They are not designed to be "backwards" compatible, meaning a file output in the version under test will not work in a currently released version.
CONFIGURATION TESTING

▪ Configuration testing is also called "hardware compatibility testing" or portability testing. During this testing, the tester checks whether the software build supports different hardware technologies or not, e.g., printers, scanners, etc.

▪ Exposes defects triggered by different computer environments.


INSTALLATION TESTING
▪ Installation Testing refers to testing of the steps of installation process.

▪ Installation testing is a kind of quality assurance work in the software industry that focuses on what customers will need to do to install and set up the new software successfully.
SECURITY TESTING
▪ Security Testing refers to testing how well the system protects against
unauthorized internal or external access.

▪ Security Testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
SECURITY TESTING
▪ During security testing, the tester plays the role of the individual who
desires to penetrate the system.

▪ The tester may attempt to acquire passwords, may attack the system to break down any defenses that have been constructed, may overwhelm the system thereby denying service to others, may purposely cause system errors, or may browse through insecure data.
INTEGRATION TESTING
▪ Integration Testing is a systematic technique for constructing a program structure while at the same time conducting tests to uncover errors associated with interfacing.

▪ There are two main approaches to integration testing:

❑ BIG BANG INTEGRATION

❑ INCREMENTAL INTEGRATION
INTEGRATION TESTING
BIG BANG INTEGRATION

▪ There is often a tendency to attempt non-incremental integration; that is, to construct the program using a big bang approach.

▪ All components are combined in advance. The entire program is tested as a whole. A set of errors is encountered.

▪ Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
INTEGRATION TESTING
INCREMENTAL INTEGRATION

▪ In incremental integration the program is constructed and tested in small increments, where errors are easier to isolate and correct and interfaces are more likely to be tested completely.

▪ There are two approaches to incremental integration:

❑ TOP DOWN INTEGRATION

❑ BOTTOM UP INTEGRATION
INTEGRATION TESTING
TOP DOWN INTEGRATION

▪ Top down integration is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main module are incorporated into the structure in either a depth-first or breadth-first manner.
INTEGRATION TESTING
TOP DOWN INTEGRATION

Top down integration is performed in a series of steps:

1. The main control module is available and stubs are substituted for all components directly subordinate to the main module (a stub sketch follows these steps).

2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual components.

3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the actual component.

5. Regression testing may be conducted to make sure that new errors have not been introduced.
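A minimal sketch of step 1, using hypothetical module names: a stub stands in for a subordinate tax-calculation component so the main control module can be tested before that component is integrated.

```python
class TaxServiceStub:
    """Stub for a subordinate module that has not been integrated yet."""
    def tax_for(self, amount):
        return 0.0  # canned answer, just enough for the caller to run

class ExpenseProcessor:
    """Main control module under test; depends on a tax service."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, amount):
        return amount + self.tax_service.tax_for(amount)

def test_processor_with_stubbed_tax_service():
    # The real tax service is later substituted for the stub, one module at a time.
    processor = ExpenseProcessor(TaxServiceStub())
    assert processor.total(50) == 50.0
```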
INTEGRATION TESTING

▪ In the depth-first approach, all modules on a control path are integrated first. See the fig. on the right. Here the sequence of integration would be (M1, M2, M3), M4, M5, M6, M7, and M8.

▪ In the breadth-first approach, all modules directly subordinate at each level are integrated together. Using breadth-first for this fig., the sequence of integration would be (M1, M2, M8), (M3, M6), M4, M7, and M5.
INTEGRATION TESTING
BOTTOM UP INTEGRATION

▪ Bottom up integration, as the name implies, begins constructing and testing with atomic modules. Because components are integrated in a bottom up fashion, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
INTEGRATION TESTING
BOTTOM UP INTEGRATION

▪ Bottom up integration is performed in a series of steps:

1. Low level components are combined into clusters.

2. A driver (a control program for testing) is written to coordinate test case input and output (a driver sketch follows these steps).

3. The cluster is tested.

4. Drivers are removed and clusters are combined moving upward in the program structure.
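A minimal sketch of steps 1-3, using hypothetical low-level functions: a driver feeds test-case input to the cluster and checks its output before the real calling modules exist.

```python
def parse_amount(text):
    """Low-level component: convert a claim string such as '$50' to a number."""
    return float(text.strip().lstrip("$"))

def validate_amount(amount):
    """Low-level component: apply the $0 / $70 claim rules."""
    return "OK" if 0 < amount <= 70 else "Error Message"

def driver():
    """Driver coordinating test-case input and output for the cluster."""
    cases = [("$50", "OK"), ("$89", "Error Message"), ("$0", "Error Message")]
    for text, expected in cases:
        result = validate_amount(parse_amount(text))
        assert result == expected, f"{text}: got {result}, expected {expected}"

if __name__ == "__main__":
    driver()
    print("cluster passed all driver cases")
```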
DOCUMENTATION TESTING

▪ Exposes defects in the content and access of on-line user manuals (Help files) and in the content of training manuals.

▪ The Testing Group tests that all Help files appear on the screen when selected.

▪ On-line documentation is a Landmark requirement for product release.

▪ Documentation testing can be approached in two phases:

❑ 1st phase is Review and Inspection, which examines the documents for editorial clarity.

❑ 2nd phase is Live Test, which uses the documentation in conjunction with the use of the actual program.
SMOKE TESTING

▪ A test of new or repaired equipment by turning it on. If it smokes... guess what... it doesn't work! The term was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine if there were any leaks.

▪ A common practice at Microsoft and some other software companies is the "daily build and smoke test" process. Every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.
SMOKE TESTING
▪ Smoke testing is also known as build verification testing: a relatively small suite of tests is used to qualify a new build. Normally, the tester is asking whether any components are so obviously or badly broken that further testing is pointless, or whether the critical fixes that are the primary intent of the new build didn't work. The typical result of a failed smoke test is rejection of the build, not just a new set of bug reports.

▪ Smoke tests are designed to confirm that changes in the code function as
expected and do not destabilize an entire build.
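A minimal sketch of a smoke (build verification) suite in pytest, assuming a hypothetical claims_app package. The checks are deliberately shallow: does the build import, start, and answer a trivial request at all?

```python
import pytest

# Skip the whole suite if the hypothetical application package is not importable.
claims_app = pytest.importorskip("claims_app")

def test_application_starts():
    app = claims_app.create_app()  # hypothetical entry point
    assert app is not None

def test_trivial_claim_is_accepted():
    app = claims_app.create_app()
    assert app.validate_claim(50) == "OK"  # hypothetical trivial request
```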
USABILITY TESTING
▪ Exposes operations that are difficult, awkward, or inconvenient for users.

▪ Testing for “User-friendliness”

▪ Clearly this is subjective and will depend on the targeted end user

▪ User interviews, surveys, video recording of user sessions, and other techniques can be used.
Regression Testing
▪ Exposes defects in code that should not have changed.

▪ Re-executes some or all existing test cases to exercise code that was tested in a previous release or previous test cycle.

▪ Performed when previously tested code has been re-linked, such as when:

❑ Ported to a new operating system

❑ A fix has been made to a specific part of the code.

▪ Studies show that:

❑ The probability of changing the program correctly on the first try is only
50% if the change involves 10 or fewer lines of code.

❑ The probability of changing the program correctly on the first try is only
20% if the change involves around 50 lines of code.
Progressive VS Regressive Testing

▪ When testing new code, you are performing “progressive testing.”

▪ When testing a program to determine if a change has introduced errors in the unchanged code, you are performing "regression testing."

▪ All black box test design methods apply to both progressive and regressive testing. Eventually, all your "progressive" tests should become "regression" tests.

▪ The Testing Group performs a lot of Regression Testing because most Landmark development projects are adding enhancements (new functionality) to existing programs. Therefore, the existing code (code that did not change) must be regression tested.
EXPLORATORY SOFTWARE TESTING

▪ Exploratory testing is a method of manual testing.

▪ Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The testing is dependent on the tester's skill of inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be.

▪ When performing exploratory testing, there are no exact expected results; it is the tester that decides what will be verified, critically investigating the correctness of the result.
EXPLORATORY SOFTWARE TESTING
▪ This testing is also known as ad-hoc testing and is done in order to learn/explore the application.

▪ The main advantage of exploratory testing is that less preparation is needed and important bugs are found fast.
RECOVERY TESTING
▪ Recovery testing is a system test that forces the system to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, then re-initialization, checkpointing mechanisms, and data recovery are evaluated for correctness. If recovery requires human intervention, the MTTR (mean time to repair) is evaluated to determine whether it is within acceptable limits.
RECOVERY TESTING

▪ Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash, hardware failure, or other catastrophic problem.

▪ The type or extent of recovery is specified in the requirement specifications.

STATE TRANSITION TESTING

▪ Exposes defects triggered by moving from one program state to another.

❑ Example: In the case of ATM machine software, consider the various operations of the ATM such as "Withdraw Cash", "Balance Inquiry", and "Transfer Cash" as different states; the defects that arise from moving from the Menu Selection state to the Withdraw Cash state fall under State Transition Testing.

▪ A state transition model has four basic parts:

❑ The states that the software may occupy (open/closed or funded/insufficient funds);

❑ The transitions from one state to another (not all transitions are
allowed);

❑ The events that cause a transition (withdrawing money, closing a file);

❑ The actions that result from a transition (an error message, or being
given your cash).
STATE TRANSITION TESTING

Electronic clock example

▪ A simple electronic clock has four modes: display time, change time, display date, and change date.

▪ The change mode button switches between display time and display date.

▪ The reset button switches from display time to adjust time or from display date to adjust date.

▪ The set button returns from adjust time to display time or from adjust date to display date (a transition-table sketch follows).
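A minimal sketch of the clock's transition table in Python. The state names follow the modes listed above; "change time" and "change date" are the adjust modes reached via the reset button.

```python
import pytest

# Allowed transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("display time", "change mode"): "display date",
    ("display date", "change mode"): "display time",
    ("display time", "reset"):       "change time",
    ("display date", "reset"):       "change date",
    ("change time",  "set"):         "display time",
    ("change date",  "set"):         "display date",
}

def next_state(state, event):
    """Return the next state, or raise ValueError if the transition is not allowed."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"transition not allowed: {event!r} in state {state!r}")

def test_valid_transitions():
    assert next_state("display time", "change mode") == "display date"
    assert next_state("change date", "set") == "display date"

def test_invalid_transition_is_rejected():
    # Change mode is not defined while adjusting the time in this model.
    with pytest.raises(ValueError):
        next_state("change time", "change mode")
```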
VALIDATION TESTING

▪ Acceptance Testing

▪ Alpha Testing

▪ Beta Testing
ACCEPTANCE TESTING
▪It is virtually impossible for a software developer to foresee how the
customer will really use a program

▪When custom software is built for one customer, a series of acceptance tests
are conducted to enable the customer to validate all requirements

▪Conducted by the end user rather than software engineers

▪An acceptance test can range from an informal test drive to a planned and
systematically executed series of tests
ACCEPTANCE TESTING

▪ If the software is developed as a product to be used by many customers, it is impractical to perform acceptance tests with each one.

▪Most software product builders use a process called alpha and beta
testing to uncover errors
ALPHA TESTING
▪ In this type of testing, the users are invited at the development center
where they use the application and the developers note every particular
input or action carried out by the user. Any type of abnormal behavior of the
system is noted.

▪ Alpha tests are conducted in a controlled environment


BETA TESTING

▪ The beta test is conducted at end-user sites. Unlike alpha testing, the developer is generally not present.

▪ Therefore the beta test is a live application of the software in an environment that cannot be controlled by the developer.

▪ In this type of testing, the software is handed over to the user in order to find out if the software meets the user's expectations and works as it is expected to.

▪ In software development, beta testing (also called application testing or end-user testing) is a phase of software development in which the software is tested in the "real world" by the intended audience.

▪ The end user records all problems that are encountered during beta testing and reports these to the developer at regular intervals.

▪ As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product.
