
Manual Testing

Table of Contents
1. Introduction
• What is testing?
• Need for testing
• The benefits of testing
• Role of a tester
2. Software Development Life Cycle
• Waterfall Model
• V-Model
• W-Model
• Spiral Model
• Incremental Model
• Iterative Model
• Prototype Model
• RAD Model
• Agile Model
• Big Bang Model
3. Verification & Validation Process
• Verification
1. Inspections
2. Walkthroughs
3. Buddy Checks
• Validation
1. Code Validation
2. Integration Validation
3. Functional Validation
4. User Acceptance Testing/System Validation
4. Testing Methods
• Static Testing
• Dynamic Testing
• Manual Testing
• Automated Testing
5. Testing Strategy
• Black Box Testing
• White Box Testing
• Gray Box Testing
6. Testing Techniques
• Black Box Testing Techniques
1. Equivalence Partition Method
2. Boundary Value Analysis
3. Error Guessing
• White Box Testing Techniques
1. Data-Flow Analysis
2. Code-Based Fault Injection
3. Abuse Case
4. Code-Coverage Analysis
7. Terms Involved in Testing
8. Levels of Testing
9. Essential Test Elements for Software Testing
• Test Strategy
• Test Plan
• Test Cases
• Test Data
• Test Environment
10. Software Testing Life Cycle
• Test Requirements
• Test Planning
• Test Environment Setup
• Test Design
• Test Automation
• Test Execution and Defect Tracking
• Test Reports and Acceptance
11. Bug
• Bug Life Cycle
• Severity
• Priority
• Guidelines on Writing Bug Description
• Content of Bug
• Defect Tracking
• How to Report a Bug
12. Management Process
• Project Management
• Software Risk Management
• Configuration Management
• Software Quality Management
13. Traceability Matrix
14. Test Deliverables
15. Test Metrics
16. Testing Considerations for Various Applications
• GUI Testing
• Application Testing
• Application Programming Interface Testing
• Middleware Testing
• Database Testing
• Website/Page Testing
17. GUI Testing Checklist
• GUI Compliance Standards
• Testers Screen Validation Checklist
• Validation Testing Standard Action
18. Quality Standards
• ISO
• SEI CMM
• SIX SIGMA

What is testing?
Software Testing is the process of executing an application with the intention of finding
errors. The main goal of software testing is to improve quality and to satisfy customer needs.

Need for testing?
The main intention of software testing is to find bugs in the project. If the project is
released without testing and fails at any time, the consequences of that failure can be huge
(both in cost and human effort).

What are the benefits of testing?


• By conducting testing, one can improve the quality of the product and ensure that
customer requirements are met.
• An error-free product can be delivered.
• Maintenance costs can be reduced.

Role of a tester:
• A tester is a person who tries to find out all possible errors/bugs in the system with the
help of various inputs to it.
• A tester plays an important part in finding out the problems with the system and helps in
improving its quality.
• A tester has to understand the limits beyond which the system can break or behave
abnormally.
• The more valid bugs a tester finds, the better the tester!

Software Development Life Cycle (SDLC):


SDLC describes phases of the software cycle and the order in which these phases are
executed. The various phases of SDLC are,
• Requirement Analysis
• Design
• Coding
• Testing
• Maintenance
Requirement analysis:
• Business requirements are gathered in this phase.
• This phase is the main focus of the Project Manager.
• Meetings with managers and users are held in order to determine the
requirements.
• Who is going to use the System?
• How will they use the system?
• What data should be input into the system?

These are the general questions that get answered during the requirements gathering
phase. The outcome is a list of the functionality the system should provide, describing the
functions the system must perform.
Design:
This is produced from the results of the requirements phase.
• The architecture (hardware and software), communications, and software design are the
deliverables of the design phase.
• This phase considers the overall structure of the software and defines the strategy for its
development.
• The requirement analysis and design phases are considered the most crucial part of the SDLC.
• Any flaw in this phase may prove very expensive for further stages of the life cycle.
• In simple words, this phase defines the logical structure of the product.
Coding:
• This phase involves translating the design into code.
• Desired programs are created using a conventional programming language and with the
help of programming tools like Compilers, Interpreters, Debuggers.
• The Code is generated using various high level programming languages like C, C++,
Pascal, Java, .Net, etc.
Testing:
• Apart from requirement analysis, testing is another crucial stage of the SDLC.
• It decides the effectiveness and functionality of the product.
• During testing, the implementation is tested against the requirements to make sure that
the product is actually solving the needs addressed and gathered during the requirements
phase.
• Unit tests and system/acceptance tests are done during this phase.
• Unit tests act on a specific component of the system, while system tests act on the
system as a whole.
Maintenance:
• This phase is an inevitable (unavoidable) need.
• The software undergoes various changes once it is delivered to the client.
• Software development should be flexible enough to incorporate required changes with
time and according to changing business needs.
• In addition, changes in the system could directly affect the software operations.
• Therefore, the software should be developed in order to accommodate changes that could
happen during the post-implementation period.
Each and every stage of the SDLC carries its own importance and plays a key role in the success
of any software development project.

SDLC MODELS:
Waterfall Model:
This is the basic model of all sequential models.

(Waterfall model diagram: Requirement Analysis, Design, Implementation, Testing and
Maintenance, performed one after the other.)

• It can also be called the linear sequential model or classical model.
• This model was used for application development in the late 1970s.
• It is the most economical model used in software development.
• The main aspects of this model are:
1. One task should be completed before starting the next task.
2. If there is an error at one stage, the whole process has to be started from the beginning,
which makes it a very time-consuming process.

Adv:
1. It is very simple and easy to implement.
2. It is well suited for small projects.
3. Testing is inherent (essential) to each of the phases.
4. This model is rigid (not flexible), which makes it easy to manage.
5. Each of the phases has certain deliverables, and a review process is done immediately after a
particular phase is over.
Disadvantage:
1. It is high risk.
2. It is not suited for complex or long projects, or for projects where the requirements change.
3. There is no guarantee that one phase of the model is perfect before moving to the next phase.
4. If any of the phases is delayed, it automatically affects the remaining phases; the project
duration increases and so does the cost of the project.
5. This model can only be used for developing applications whose requirements are completely
known up front.
6. The deliverable software is produced late in the life cycle.

V-MODEL:
The V-Model can be considered an extension of the waterfall model.
• Instead of moving in a linear way, the process steps are bent upward after the coding
phase, to form the typical V shape.
• This model represents the relationship between each phase of the development life cycle
and its associated phase of testing.
• The V can also stand for the terms Verification and Validation.
• The descending arm represents Verification.
• The ascending arm represents Validation.

Adv:
1. Simple and easy to use.
2. Each phase has specific deliverables.
3. Higher chance of success over the waterfall model due to the development of test plans early
on during the life cycle.
4. Works well for small projects where requirements are easily understood.
5. The main advantage of this model is that development and testing are done simultaneously.
6. So the application can be developed faster.
Disadvantage:
1. Very rigid, like the waterfall model.
2. Offers little flexibility.
3. Adjusting scope is difficult.
4. Expensive.
5. More resources are required.
6. Doesn't provide a clear path for problems found during the test phases.
7. Software is developed during the implementation phase, so no early prototypes (an early
sample or model built to test a concept) of the software are produced.

SPIRAL MODEL:
• This model combines the best of both the top-down and bottom-up approaches and is
specifically risk-driven.
• This model is also a combination of the classic waterfall model and risk analysis.
• It is iterative, but each iteration is designed to reduce the risk at that particular stage of the
project.
• This model improves on the waterfall model in that it emphasizes risk management, while
the waterfall model emphasizes the project management aspects.
• It has four phases:
1. Planning
2. Risk Analysis

3. Engineering
4. Evaluation

Adv:
1. It has strong support for Risk analysis
2. Well suited for complex and large projects
3. Deliverables produced early in the SDLC.
4. This model can be used where requirements change frequently.
5. Uses prototyping as a risk reduction technique and can reduce risks in the SDLC process
considerably.
Disadvantage:
1. It is high in cost, and risk analysis is also very difficult.
2. Not suited for small projects.
3. Needs considerable Risk Assessment.

INCREMENTAL MODEL:
• This method divides the project into modules, which are then developed and tested in parallel.
• These modules are divided up into smaller, easily managed iterations.
• Each iteration passes through all four phases (requirements, design, implementation,
testing).
• It allows a full SDLC of prototypes to be completed and tested before moving to the next level.
• Here, functionality is produced and delivered to the customer incrementally.
• Starting from the existing situation, the project proceeds towards the desired solution in a
number of steps.
• At each of these steps, the waterfall model is followed.

Adv:
1. It is flexible and easy to manage.
2. Risk management and testing is easy.
3. Deliverables are produced early in the SDLC in each iteration.
Disadvantage:
1. Each phase of an iteration is rigid and does not overlap with the others.
2. Not all of the information is gathered at the beginning, which creates problems at later
stages of the design and development cycle.

ITERATIVE MODEL:
• This model addresses many problems associated with the waterfall model.
• Here analysis is done the same way as it is done in the waterfall method.
• Once this analysis is over, each requirement is categorized based on its
priority. The priorities are:
i. High
ii. Medium
iii. Low

Adv:
1. Faster coding, testing and design phases.
2. Facilitates the support for changes within the life cycle.
Disadvantage:
1. More time is spent in review and analysis.
2. There are a lot of steps that need to be followed in this model.
3. A delay in one phase can have a detrimental effect on the software as a whole.

How to select the right SDLC Model?
The choice depends on:

1. The scope of the project
2. The project budget
3. The organizational environment
4. The available resources

Verification and Validation (V&V) Process:

What is verification?
Verification is defined as “Are we building the product right?”, i.e. is the software
product being developed in the right way?
1. Inspection
a. This involves a team of about 3-6 people, led by a leader, which formally
reviews the documents and work product during various phases of the product
development life cycle.
b. Bugs that are detected during the inspection are communicated to the next level
in order to take care of them.
2. Formal Review
a. A review conducted as per the planned schedule.
3. Informal Review
a. A review not conducted as per a planned schedule.
4. Walkthroughs
a. Same as inspection without the formal preparation of any presentation or
documentation.
b. The material is introduced to all the participants in order to make them
familiar with it.
c. Walkthroughs can help in finding potential bugs; they are also used for
knowledge sharing and communication purposes.
5. Buddy Checks
a. This is the simplest form of review activity, used to find bugs in a work
product during verification.
b. Here, one person goes through the documents prepared by another person in
order to find out whether that person has made any mistakes.

Adv:
1. Defects are found in the early stages, which saves time and cost for the project.
2. It makes the team members more careful and confident when working on the related
applications.

What is Validation?
Validation is defined as “Are we building the right product?”, i.e. the product should
satisfy all the functional requirements set by the user.
1. Code Validation/Testing:
a. Developers and Testers do the code validation
b. Unit Code Validation or Unit Testing is a type of testing, developers conduct in
order to find out bugs in code module developed by them.
2. Integration Validation/Testing:
a. This test helps in finding out if there is any defect in the interface between
different modules.
3. Functional Validation/Testing:
a. This is used to find out whether the system meets the functional requirements.
b. This testing does not deal with the internal coding of the project; instead, it
checks whether the system behaves as per expectations.
4. User Acceptance testing or System Validation:
a. The product is tested in a real-time scenario.
b. The product is validated to find out if it works according to the system
specifications and satisfies all the user requirements.

TEST METHODS:

1. Static Testing:
a. Static testing is testing the application without executing it, by conducting
reviews.
b. This is the Verification part of the V&V process.
c. It is not detailed testing; it checks the code, algorithm or documents.
d. It is mainly syntax checking of the code.
2. Dynamic Testing
a. Dynamic testing is testing the application by executing it and supplying
inputs to it.
b. This is the Validation part of the V&V process.
c. Unit, Integration, System and Acceptance tests are a few dynamic testing
methodologies.
3. Manual Testing
a. This is the oldest and most rigorous type of software testing.
b. Here the tester performs test operations on the software manually, without the
help of test automation.
c. A manual tester should have certain qualities:
i. To be patient
ii. Observant
iii. Speculative
iv. Creative
v. Innovative
vi. Open-minded
vii. Resourceful and
viii. Skillful

d. A manual tester should perform the following steps for manual testing,
i. Understand the functionality of the program
ii. Prepare the Test Environment
iii. Execute the Test Case(s) manually
iv. Verify the actual result
v. Record the result as Pass or Fail
vi. Make a summary report of the Pass or Fail test cases
vii. Publish the report
viii. Record any new defects uncovered during test case execution.

4. Automated Testing
a. Automated testing is the automation of test cases so that tests may be run
multiple times with minimal effort. To do this, the test cases themselves are
written as code.
Adv:
i. The human effort can be reduced.
ii. Consistency in testing is maintained in case of automation.
iii. Time and money are saved, but only in the long term.
iv. It suits regression testing, which ensures the system is meeting all
requirements.
b. Automated tests should follow these guidelines from The Test Automation
Manifesto:
i. Concise-As simple as possible
ii. Self Checking-Test reports its own results; no human interpretation
iii. Repeatable-It can run many times in a row without human
intervention.

iv. Robust-It produces same result now and forever. Not affected by
changes in the external environment.
v. Sufficient-Verifies all requirements of the software being tested.
vi. Necessary-contributes to the specification of desired behavior.
vii. Clear-Every statement is easy to understand.
viii. Efficient-Tests run in a reasonable amount of time.
ix. Specific- Failure points to a specific piece of broken functionality.
x. Independent-Test can be run by itself
xi. Maintainable-Easy to understand and modify and extend.
xii. Traceable-To & from the code it tests and to & from the requirements.
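To make these guidelines concrete, the following is a minimal sketch of an automated test written
with Python's standard unittest module. The function under test, apply_discount, is a hypothetical
example invented for illustration; it is not part of any tool mentioned above. The test is
self-checking, repeatable and independent: it reports its own result and can be run any number of
times without human intervention.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount to a price."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100.0, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Self-checking: each assertion reports pass/fail without human interpretation.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    # Specific: a failure here points at the boundary handling, nothing else.
    def test_zero_and_full_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    # Necessary: documents the specified behaviour for invalid input.
    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()   # Repeatable: can be run many times in a row.
```

Because every run produces the same pass/fail report, the same script can be reused unchanged
for regression testing after every build.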

Testing Strategy:
• Black Box Testing Strategy
• White Box Testing Strategy
• Gray Box Testing Strategy

Black Box Testing Strategy:


1. Black Box Testing is not a type of testing; instead, it is a testing strategy, which does not
need any knowledge of the internal design or code.
2. As the name “BLACK BOX” suggests, no knowledge of the internal logic or code structure
is required.
3. The types of testing under this strategy are totally based on the testing requirements and
functionality of the work product/software application.
4. It can also be called “Opaque Testing”, “Functional/Behavioral Testing” or
“Closed Box Testing”.
5. The various testing types that fall under the Black Box Testing Strategy can be divided
into two types:
a. Testing methods where the user is not required.
i. Functional testing
1. Software is tested for the functional requirements.
2. The Tests are written in order to check if the application behaves
as expected.
ii. Stress testing
1. The application is tested against heavy load, such as complex numerical
values, a large number of inputs, a large number of queries, etc., which
checks the stress/load the application can withstand.
iii. Load testing
1. The application is tested against heavy loads or input data, such as testing a
website, in order to find out where the application fails and where its
performance degrades.
iv. Ad-hoc testing
1. Testing is done without any formal Test plan or Test Case
creation.
2. Helps in deciding the scope and duration of the various other kinds of
testing, and it also helps the tester learn about the application before
starting any other testing.
v. Exploratory testing
1. Similar to the Ad-hoc testing and is done in order to
learn/explore the application.
vi. Usability testing

1. Also called “Testing for User Friendliness”.
2. This testing is done when the user interface of the application is an
important consideration and needs to be suited to the specific
type of user.
vii. Smoke testing
1. Smoke testing is used to check the testability of the
application.
2. It is also called 'Build Verification Testing or Link Testing'.
3. It checks whether the application is ready for further major
testing and working, without dealing with the finer details.
viii. Sanity testing
1. Sanity testing checks for the behavior of the system.
2. This is also called as “Narrow Regression Testing”.
ix. Recovery testing
1. Recovery testing is very necessary to check how fast the
system is able to recover against any hardware failure,
catastrophic problems or any type of system crash.
x. Volume testing
1. This testing is done, when huge amount of data is
processed through the application.
b. Testing methods where the user plays a role/is required.
i. User acceptance testing
1. It is performed to verify that the product is acceptable to the
customer and fulfills the specified requirements of
that customer.
2. This testing includes Alpha and Beta testing.
a. Alpha testing-Alpha testing is performed at the
developer's site by the customer in a closed
environment. This testing is done after system
testing.
b. Beta testing-This type of software testing is done at
the customer's site by the customer in the open
environment. The presence of the developer, while
performing these tests, is not mandatory. This is
considered to be the last step in the software
development life cycle as the product is almost
ready.

White Box Testing Strategy:


It is the process of giving input to the system and checking how the
system processes that input to generate the output. It is mandatory for the tester to have
knowledge of the source code.
1. Unit testing
a. Testing is done at the developer's site to check whether a particular
piece/unit of code is working fine.
b. Unit testing deals with testing the unit as a whole.
2. Static and dynamic analysis

a. In static analysis, it is required to go through the code in order to find out
any possible defect in the code.
b. In dynamic analysis, the code is executed and analyzed for its
output.
3. Statement Coverage
a. Testing assures that the code is executed in such a way that every
statement of the application is executed at least once.
4. Decision Coverage
a. Each decision in the code is executed at least once, to judge both its true
and false outcomes (a worked example follows this list).
5. Condition Coverage
a. Each and every condition is executed by making it true and false, in each
of the ways at least once.
6. Path Coverage
a. Each and every path within the code is executed at least once to get full
path coverage, which is one of the important parts of the white box testing.
7. Integration testing
a. Integration testing is performed when various modules are integrated with
each other to form a sub-system or a system.
b. This mostly focuses on the design and construction of the software
architecture.
i. Bottom-up Integration Testing - The lowest-level components are
tested first; they are then used to facilitate the testing of higher-level
components using 'Drivers'.
ii. Top-Down Integration Testing - This is the opposite of the bottom-up
approach: the top-level modules are tested first, and the branches of
each module are tested step by step using 'Stubs' until the lowest-level
modules are reached.
8. Security testing
a. Security testing confirms how well a system protects itself against
unauthorized internal or external access, or against willful damage to its
code.
b. Security testing assures that the program is accessed by the authorized
personnel only.
9. Mutation testing
a. The application is tested for the code that was modified after fixing a
particular bug/defect.
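As an illustration of statement, decision and path coverage, consider the small sketch below.
The classify function and its thresholds are hypothetical examples chosen only to show how a
handful of inputs can satisfy the different coverage goals; they are not taken from any particular
application.

```python
def classify(score):
    """Return a pass/fail label with a distinction flag."""
    if score >= 75:          # decision D1
        label = "pass with distinction"
    elif score >= 40:        # decision D2
        label = "pass"
    else:
        label = "fail"
    return label

# Statement coverage: every line runs at least once -> scores 80, 50, 20 are enough.
# Decision coverage: D1 and D2 must each evaluate to both True and False
#                    -> 80 (D1 True), 50 (D1 False, D2 True), 20 (D1 False, D2 False).
# Path coverage: here the three paths coincide with the three decision outcomes,
#                so the same three inputs cover all paths.
assert classify(80) == "pass with distinction"
assert classify(50) == "pass"
assert classify(20) == "fail"
```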

Gray Box Testing Strategy:


1. Gray/grey box testing is a software testing technique that uses a combination of
black box testing and white box testing.
2. Gray/grey box testing is not black box testing, because the tester does know some
of the internal workings of the software under test.
3. In gray/grey box testing, the tester applies a limited number of test cases to the
internal workings of the software under test.

4. In the remaining part of the gray/grey box testing, one takes a black box approach
in applying inputs to the software under test and observing the outputs.
Gray/grey box testing is a powerful idea.
5. The concept is simple; if one knows something about how the product works on
the inside, one can test it better, even from the outside.
6. Gray/grey box testing is not to be confused with white box testing; i.e. a testing
approach that attempts to cover the internals of the product in detail.
7. Gray/grey box testing is a test strategy based partly on internals.
8. The testing approach is known as gray/grey box testing, when one does have
some knowledge, but not the full knowledge of the internals of the product one is
testing.
9. In gray/grey box testing, just as in black box testing, you test from the outside of a
product, but you make better-informed testing choices because you know how the
underlying software components operate and interact.

Testing Techniques:
Black Box Testing Techniques:
Boundary value analysis and equivalence partitioning are both test case
design strategies in black box testing; a short worked example of both follows the list below.
1. Equivalence Partition Method
a. Equivalence partitioning is a black box testing method that divides the
input domain of a program into classes of data from which test cases can
be derived.
b. How is this partitioning performed while testing
i. If an input condition specifies a range, one valid and two
invalid classes are defined.
ii. If an input condition requires a specific value, one valid and two
invalid equivalence classes are defined.
iii. If an input condition specifies a member of a set, one valid and
one invalid equivalence class is defined.
iv. If an input condition is Boolean, one valid and one invalid class is
defined.
2. Boundary Value Analysis
a. Many systems have a tendency to fail at boundaries.
b. So testing the boundary values of an application is important.
c. Boundary Value Analysis (BVA) is a functional testing technique
in which the extreme boundary values are chosen.
d. Boundary values include maximum, minimum, just inside/outside
boundaries, typical values, and error values.
e. Extends equivalence partitioning
f. Test both sides of each boundary
g. Look at output boundaries for test cases too
h. Test min, min-1, max, max+1, typical values
i. BVA techniques:
i. Number of variables: for n variables, BVA yields 4n + 1 test cases.
ii. Kinds of ranges: generalizing ranges depends on the nature or type of the
variables.
j. Advantages of Boundary Value Analysis:
i. Robustness testing – Boundary Value Analysis plus values
that go beyond the limits (Min - 1, Min, Min + 1, Nom, Max - 1, Max, Max + 1).
ii. Forces attention to exception handling.
k. Limitations of Boundary Value Analysis:
i. Boundary value testing is efficient only for variables with fixed
boundaries.
3. Error Guessing
a. This is purely based on previous experience and judgment of tester.
b. Error Guessing is the art of guessing where errors can be hidden.
c. There are no specific tools for this technique; the tester writes test cases that
cover the application paths based on his or her experience.
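The sketch below shows how the two techniques above might be applied to a single hypothetical
input field: an age that must lie between 18 and 60. The field and its limits are illustrative
assumptions, not taken from any particular application.

```python
# Equivalence partitioning for an "age" field whose valid range is 18-60:
#   one valid class (18-60) and two invalid classes (< 18 and > 60),
#   so one representative value is picked from each class.
equivalence_values = [35,    # valid class
                      10,    # invalid: below the range
                      70]    # invalid: above the range

# Boundary value analysis for the same single variable (n = 1):
#   4n + 1 = 5 test values -> min, min+1, nominal, max-1, max.
boundary_values = [18, 19, 35, 59, 60]

# Robustness testing extends this with values just beyond the limits.
robustness_values = [17, 18, 19, 35, 59, 60, 61]

def is_valid_age(age):
    # Hypothetical rule under test.
    return 18 <= age <= 60

for value in equivalence_values + boundary_values + robustness_values:
    print(value, "->", "accepted" if is_valid_age(value) else "rejected")
```

Listed side by side, the three sets make it easy to see that boundary value analysis simply
concentrates the equivalence-class representatives at the edges of the valid partition.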

White Box Testing Techniques:


1. Data-Flow Analysis
a. This can be used to increase program understanding and to develop test cases
based on data flow within the program.
b. The data-flow testing technique is based on investigating the ways values are
associated with variables and the ways that these associations affect the
execution of the program.
c. Data-flow analysis focuses on occurrences of variables, following paths from the
definition (or initialization) of a variable to its uses.
2. Code-Based Fault Injection
a. The fault injection technique perturbs program states by injecting faults into the
software source code to force changes into the state of the program as it
executes (a small sketch of this idea follows this list).
b. This technique can be used to force error conditions to exercise the error
handling code, change execution paths, input unexpected (or abnormal) data,
change return values, etc.
3. Abuse Case
a. Abuse cases help security testers view the software under test in the same light as
attackers do.
b. Abuse cases capture the non-normative behavior of the system.
c. Although abuse cases are described more as a design analysis technique than as a
white box technique, the same technique can be used to develop innovative and
effective test cases mirroring the way attackers would view the system.
d. With access to the source code, a tester is in a better position to quickly see
where the weak spots are compared to an outside attacker.
e. The abuse case can also be applied to interactions between components within
the system to capture abnormal behavior, should a component misbehave.
f. This technique can also be used to validate design decisions and assumptions.
g. The simplest, most practical method for creating abuse cases is usually through a
process of informed brainstorming, involving security, reliability and subject
matter expertise.
h. Known attack patterns form a rich source for developing abuse cases.
4. Code-Coverage Analysis
a. This is an important type of test effectiveness measurement.

b. This is a way of determining which code statements or paths have been exercised
during testing.
c. With respect to testing, this helps in identifying area of code not exercised by a
set of test cases.
d. During ad-hoc testing, coverage analysis can greatly reduce the time to determine
the code paths exercised and thus improve understanding of code behavior.
e. There are various methods of coverage measurement, such as path coverage/path
testing, statement coverage, multiple condition coverage and function coverage.
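As a small illustration of forcing error conditions, the sketch below uses Python's unittest.mock
to make a hypothetical dependency (load_config) fail, so that the error-handling path of the code
under test is exercised. This is a lightweight stand-in for true code-based fault injection: the
fault is injected through a mock at test time rather than by editing the source, and all function
names here are assumptions invented for the example.

```python
from unittest import mock

def load_config(path):
    """Hypothetical dependency: reads configuration from disk."""
    with open(path) as handle:
        return handle.read()

def start_service(config_path):
    """Code under test: must fall back to defaults if the config cannot be read."""
    try:
        config = load_config(config_path)
    except OSError:
        config = "defaults"          # error-handling branch we want to exercise
    return config

# Fault injection: replace load_config with a version that always raises,
# forcing execution down the exception-handling path.
with mock.patch(__name__ + ".load_config", side_effect=OSError("disk failure")):
    assert start_service("app.cfg") == "defaults"
print("error-handling path exercised")
```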

LEVELS OF TESTING
• Unit Testing
• Integration Testing
• System Testing
• System Integration Testing
• Acceptance Testing

ESSENTIAL TEST ELEMENTS FOR SOFTWARE TESTING


Five essential test elements are required for successful software testing. If any
one of these five is missing, the test effort will be affected. Exploring these five essentials can
help improve the effectiveness and efficiency of any software testing program. The five elements
are:
• Test strategy
o A test strategy basically describes which types of testing to perform and the
proposed sequence in which to perform them.
o It also describes the optimum amount of effort to put into each test objective to
make testing most effective.
o A test strategy is based on the requirements of the customers.
o The test strategy should be created at the middle of the design phase, as soon as
the requirements have settled down.
• Test plan
o This is simply a part of a project plan that deals with testing tasks.
o It contains the details of the testers, starting date, ending date, features to be
tested, features not to be tested, risk management, dependencies etc.
o It provides a complete list of all the things that need to be done for testing,
including all preparation work during all the phases before testing.
o The details of the testing plan can be filled in, starting as soon as the test strategy
is completed.
o Both the test strategy and testing plan are subject to change as the project
evolves.
o If modification is necessary, first the modifications are done in the test strategy
and then the testing plan.
• Test cases
o Test cases are prepared based on the strategy, which outlines how much of each
type of testing to do.
o Test cases are developed based on prioritized requirements and acceptance
criteria for the software, keeping in mind the customer’s emphasis on quality
dimensions and the project’s latest risk assessment of what could go wrong.
o Except for a small amount of ad-hoc testing, all test cases should be prepared in
advance of the start of testing.

o There are many approaches for developing test cases.
o This is the activity performed in parallel with software development.
o Steps to be taken first:
 The requirements and business rules need to be known well enough to
predict exactly what the expected results should be.
 Without expected results to compare the actual results against, it is
impossible to say whether a test has passed or failed.
o A good test case checks to make sure requirements are being met and has a good
chance of uncovering defects.
• Test data
o Along with the steps performed to execute test cases, there is also a need to
systematically come up with test data to use.
o This often means sets of names, addresses, product orders, or whatever other
information the system uses. Test data development is usually done
simultaneously with test case development.
• Test environment
o Obviously a place and the right equipment will be needed to do the testing.
o Test environments may be scaled down versions of the real thing, but all the
parts need to be there for the system to actually run.
o Building a test environment usually involves setting aside separate regions on
mainframe computers and/or servers, networks and PCs that can be dedicated to
the test effort and that can be reset to restart testing as often as needed.
o Steps to set up the environment are part of the testing plan and need to be
completed before testing begins.
Note:
• It’s essential to make sure that all these five elements are present to improve software
testing.
• Many testers struggle with inadequate resources, undocumented requirements and lack of
involvement with the development process early in the SDLC.
• Pushing all five of the essentials and proper timing is one way to significantly improve
the effectiveness of testing as an essential part of software engineering.

SOFTWARE TESTING LIFE CYCLE


This life cycle is used for standard applications that are built from various custom
technologies and follow the normal or standard testing approach. The various phases in STLC
are:
• Test requirements
o Requirements specification documents
o Functional specification documents
o Design specification documents
o Use case documents
o Test traceability matrix for test coverage
• Test planning
o Test scope, test environment
o Different test phase and test methodologies
o Manual and Automation testing
o Defect Management, Configuration Management, risk Management, etc.
o Evaluation and identification – Test, Defect tracking tools.

• Test environment setup
o Test Bed installation and configuration
o Network connectivity
o All the software/tools Installation and configuration
o Coordination with Vendors and others
• Test design
o Test traceability Matrix and test coverage
o Test scenarios Identification & Test case preparation
o Test data and Test scripts preparation
o Test case reviews and Approval
o Base lining under configuration management
• Test automation
o Automation requirement identification
o Tool evaluation and Identification
o Designing or Identifying framework and scripting
o Script Integration, Review and Approval
o Base lining under Configuration Management
• Test execution and defect tracking
o Executing test cases
o Testing test scripts
o Capture, review and analyze Test Results
o Raising defects and tracking them to closure
• Test reports and acceptance
o Test summary reports
o Test metrics and process Improvements made
o Build release
o Receiving acceptance

BUG
• It can be defined as abnormal behavior of the software.
• No software exists without a bug.
• The elimination of bugs depends upon the efficiency of the testing done on the
software.
• A bug is a specific concern about the quality of the Application Under Test (AUT).
BUG LIFE CYCLE
• In Software Development Process, the bug has a life cycle.
• The bug should go through the life cycle to be closed.
• A specific life cycle ensures that the process is standardized.
• The bug attains different states in the life cycle.
• The life cycle of the bug can be represented diagrammatically; its states are described below.

The different states of bug can be summarized as follows:
1. New
a. When the bug is posted for the first time, its state will be “NEW”.
b. This means that the bug is not yet approved.
2. Open
a. After a tester has posted a bug, the lead of the tester approves that the bug is
genuine and changes the state as “OPEN”.
3. Assign
a. Once the lead changes the state as “OPEN”, he assigns the bug to corresponding
developer or developer team.
b. The state of the bug now is changed to “ASSIGN”.
4. Test
a. Once the developer fixes the bug, he has to assign the bug to the testing team for
the next round of testing.
b. Before he releases the software with bug fixed, he changes the state of bug to
“TEST”.
c. It specifies that the bug has been fixed and is released to testing team.

5. Deferred
a. A bug changed to “DEFERRED” is expected to be fixed in one of the next
releases.
b. There can be many reasons for changing a bug to this state.
c. Some of them are: the priority of the bug may be low, there may be a lack of time
before the release, or the bug may not have a major effect on the software.
6. Rejected
a. If the developer feels that the bug is not genuine, he rejects the bug. Then the
state of the bug is changed to “Rejected”.
7. Duplicate
a. If the bug is reported twice, or two bugs describe the same issue, then one
bug's status is changed to “DUPLICATE”.
8. Verified
a. Once the bug is fixed and the status is changed to “TEST”, the tester tests the
bug.
b. If the bug is not present in the software, he approves that the bug is fixed and
changes the status to “VERIFIED”.
9. Reopened
a. If the bug still exists even after the bug is fixed by developer, the tester changes
the status to “REOPENED”.

b. The bug traverses the life cycle once again.
10. Closed
a. Once the bug is fixed, it is tested by the tester.
b. If the tester feels that the bug no longer exists in the software, he changes the
status of the bug to “CLOSED”.
c. This state means that the bug is fixed, tested and approved.
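The states above form a simple workflow. The sketch below encodes one possible set of allowed
transitions as a dictionary; the exact transitions vary between organizations and defect-tracking
tools, so this is an illustrative assumption rather than a fixed rule.

```python
# One possible bug-state workflow, expressed as allowed transitions.
ALLOWED_TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":      {"ASSIGN"},
    "ASSIGN":    {"TEST", "DEFERRED"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},
    "DEFERRED":  {"ASSIGN"},
    "REJECTED":  set(),
    "DUPLICATE": set(),
    "CLOSED":    set(),
}

def change_state(current, new):
    """Move a bug to a new state, enforcing the workflow above."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

state = "NEW"
for step in ("OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
    state = change_state(state, step)
print("final state:", state)   # final state: CLOSED
```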

SEVERITY OF BUG
This indicates the impact of the bug, i.e. how severely the particular bug affects the
application. Moreover, severity is based on technical considerations.
The severity levels used during the product test phases include:
• Critical/Show Stopper
o An item that prevents further testing of the product or function under test
can be classified as Critical Bug.
o No workaround is possible for such bugs.
o Examples of this include a missing menu option or security permission
required to access a function under test.
• Major/High
o A defect where something does not function as expected/designed, or which
causes other functionality to fail to meet requirements, can be classified as a
Major Bug.
o A workaround can be provided for such bugs.
o Examples of this include inaccurate calculations; the wrong field being
updated, etc.
• Average/Medium
o The defects which do not conform to standards and conventions can be
classified as Medium Bugs.
o Easy workarounds exist to achieve functionality objectives.
o Examples include matching visual and text links which lead to different
end points.
• Minor/Low
o Cosmetic defects which do not affect the functionality of the system can
be classified as Minor Bugs.

PRIORITY OF BUG
Bug priority is the order given to bugs for fixing them. It suggests to the developer the order
in which the bugs have to be fixed. This field is used internally for job-time estimation and should
not be set by anyone except the bug assignee. The available priorities are:
• High
o The bug is to be rectified immediately.
• Medium
o These bugs are given lower priority.
• Low
o These bugs are handled after the high and medium priority bugs.

Guidelines on writing Bug Description:

A bug can be expressed as “result followed by the action”; that is, the
unexpected behavior occurring when a particular action takes place is given as the bug
description.
1. Be specific. State the expected behavior which did not occur (for example, a
pop-up did not appear) and the behavior which occurred instead.
2. Use present tense.
3. Don’t use unnecessary words.
4. Don’t add exclamation points. End sentences with a period.
5. DON’T USE ALL CAPS. Format words in upper and lower case (mixed case).
6. Mention steps to reproduce the bug compulsorily.
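For illustration only (a hypothetical application and steps), a description written to these
guidelines might read: “Error message does not appear when an invalid date is entered in the
Date of Birth field; the form is submitted instead. Steps to reproduce: 1. Open the registration
page. 2. Enter 31/02/2020 in the Date of Birth field. 3. Click Submit.”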

CONTENT OF A BUG
When a tester finds a defect, he/she needs to report a bug and enter certain fields, which
helps in uniquely identifying the bug reported by the tester. The contents of the bug report are as
given below:
• Project
• Subject
• Description
• Summary
• Detected By
• Assigned to
• Test Lead
• Detected in Version
• Expected Date of Closure
• Actual Date of Closure
• Priority
• Severity
• Status
• Bug Id
• Attachment
• Effort
• Environment
• Test case Failed

DEFECT TRACKING
A defect tracking record typically captures the following fields: Defect Id, Summary, Severity,
Priority, Assigned To, Status, Subject, Detected By, Detected On and Closed On.

How To Report A Bug:

• As already discussed, because of the importance of software testing in any software
development project, it becomes necessary to log a defect in a proper way, track the
defect, and keep a log of defects for future reference.
• As a tester tests an application, if he/she finds any defect, the life cycle of the defect
starts, and it becomes very important to communicate the defect to the developers in order
to get it fixed, keep track of the current status of the defect, and find out whether any such
(similar) defect was found in earlier rounds of testing.
• For this purpose, manually created documents were previously used and circulated to
everyone associated with the software project (developers, testers); nowadays many
BUG REPORTING TOOLS are available, which help in tracking and managing bugs in
an effective way.
• It’s a good practice to take screen shots of execution of every step during software
testing.
• At the time of reporting a bug, all the mandatory fields from the contents of bug are filled
and detailed description of the bug is given along with the expected and actual results.
• The screen shots taken at the time of execution of test case are attached to the bug for
reference by the developer.

Traceability Matrix:
It is a table that correlates any two baselined documents that require a many-to-many
relationship, to determine the completeness of the relationship. It is often used to relate
high-level requirements (sometimes known as marketing requirements) and detailed requirements
of the software product to the matching parts of the high-level design, detailed design, test plan
and test cases.
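A minimal illustration of a requirements-to-test-cases traceability matrix (the requirement and
test case identifiers below are hypothetical):

Requirement ID   Requirement description           Test Case ID(s)
REQ-001          User can log in                   TC-001, TC-002
REQ-002          User can reset a password         TC-003
REQ-003          Invalid login attempts are locked TC-004, TC-005

Reading along a row shows which test cases cover a requirement; an empty Test Case column
immediately exposes a requirement with no test coverage.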

Testing Considerations for various applications


• GUI Testing
o Communication aspects to be tested here include tool tips and status bar
information.

• Application Testing
• Application Programming Interface Testing
• Middleware Testing
• Database Testing
• Website/Page Testing

Terms Involved in Testing:


1. Acceptance Testing
a. Acceptance testing (also known as user acceptance testing) is a type of
testing carried out in order to verify if the product is developed as per the
standards and specified criteria and meets all the requirements specified by the
customer.
b. This type of testing is generally carried out by a user/customer where the
product is developed externally by another party.
c. Falls under “Black Box Testing” methodology where the user is not very
much interested in internal working/coding of the system, but evaluates

the overall functioning of the system and compares it with the
requirements specified by them.
2. Accessibility Testing
a. Testing that determines if software will be usable by people with
disabilities.
3. Ad hoc Testing
a. A testing where the tester tries to break the software by randomly trying
functionality of software.
4. Agile Testing
a. It is a software testing practice that follows the principles of agile software
development.
b. Agile testing does not emphasize testing procedures and focuses on
ongoing testing against newly developed code until quality software from
an end customer's perspective results.
c. Agile testing is built upon the philosophy that testers need to adapt to
rapid deployment cycles and changes in testing patterns.
5. Alpha Testing
a. Alpha testing is conducted at the developer's site, in a controlled
environment, by the end user of the software.
6. Application Binary Interface (ABI)
a. A specification defining requirements for portability of applications in binary
forms across different system platforms and environments.
7. Application Programming Interface (API)
a. A formalized set of software calls and routines that can be referenced by an
application program in order to access supporting system or network services.
8. Automated Software Quality (ASQ)
a. The use of software tools, such as automated testing tools, to improve software
quality.
9. Automated Testing
a. Testing employing software tools which execute tests without manual
intervention. Can be applied in GUI, performance, API, etc testing.
b. The use of software to control the execution of tests, the comparison of actual
outcomes to predicted outcomes, the setting up of test preconditions and other
test control and test reporting functions.
10. Backus-Naur Form
a. A Meta language used to formally describe the syntax of a language.
11. Basic Block
a. A sequence of one or more consecutive, executable statements containing no
branches.
12. Basic Path Testing
a. A white box test case design technique that uses the algorithmic flow of the
program to design tests.
13. Basis Set
a. The set of tests derived using basis path testing.
14. Baseline
a. The point at which deliverable produced during the software engineering process
is put under formal change control.
15. Benchmark Testing

a. Tests that use representative sets of programs and data designed to evaluate the
performance of computer hardware and software in a given configuration.
16. Beta testing
a. Testing the application after the installation at the client place.
17. Binary Portability Testing
a. Testing an executable application for portability across system platforms and
environments, usually for conformation to an ABI specification.
18. Black Box Testing
a. Testing is based on an analysis of the specification of a piece of software without
reference to its internal workings.
b. The goal is to test how well the component conforms to the published
requirements for the component.
19. Bottom Up Testing
a. An approach to the integration testing where the lowest level components are
tested first then used to facilitate the testing of higher level components.
b. The process is repeated until the component at the top of the hierarchy is tested.
20. Boundary Testing
a. Tests which focus on the boundary or limit conditions of the software being
tested. (Some of these tests are stress tests.)
21. Boundary Value Analysis
a. In boundary value analysis, test cases are generated using the extremes of the
input domain, e.g. maximum, minimum, just inside/outside boundaries, typical
values and error values.
b. BVA is similar to Equivalence Partitioning but focuses on “corner cases”.
22. Branch Testing
a. Testing in which all branches in the program source code are tested at least once.
23. Breadth Testing
a. A test suite that exercises the full functionality of a product but does not test
features in detail.
24. Bug
a. A fault in a program which causes the program to perform in an unintended or
unanticipated manner.
25. CAST - Computer Aided Software Testing
26. Capture/Replay Tool
a. A test tool that records test input as it is sent to the software under test.
b. The input cases stored can then be used to reproduce the test at a later
time.
c. Most commonly applied to GUI test tools.

27. CMM
a. The Capability Maturity Model for Software (CMM or SW-CMM) is a
model for judging the maturity of the software processes of an
organization and for identifying the key practices that are required to
increase the maturity of these processes.
28. Cause Effect Graph
a. A graphical representation of inputs and the associated outputs effects
which can be used to design test cases.
29. Code Complete

a. Phase of development where functionality is implemented in entirety; bug
fixes are all that are left.
b. All functions found in the Functional Specifications have been
implemented.
30. Code Coverage
a. An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been
executed and therefore may require additional attention.
31. Code Inspection
a. A formal testing technique where the programmer reviews source code
with a group who ask questions analyzing the program logic, analyzing the
code with respect to a checklist of historically common programming
errors, and analyzing its compliance with coding standards.
32. Code Walkthrough
a. A formal testing technique where source code is traced by a group with a
small set of test cases, while the state of program variables is manually
monitored, to analyze the programmer's logic and assumptions.
33. Coding
a. The generation of source code.
34. Compatibility Testing
a. In compatibility testing, we test that the software is compatible with the other
elements of a system.
35. Component
a. A minimal software item for which a separate specification is available.
36. Component Testing
a. Testing of individual software components (Unit Testing).
37. Concurrency Testing
a. Multi-user testing geared towards determining the effects of accessing the
same application code, module or database records. Identifies and
measures the level of locking, deadlocking and use of single-threaded
code and locking semaphores.
38. Conformance Testing
a. The process of testing that an implementation conforms to the
specification on which it is based. Usually applied to testing conformance
to a formal standard.

39. Context Driven Testing
a. The context-driven school of software testing is a flavor of Agile Testing
that advocates continuous and creative evaluation of testing opportunities
in light of the potential information revealed and the value of that
information to the organization right now.
40. Conversion Testing
a. Testing of programs or procedures used to convert data from existing
systems for use in replacement systems.
41. Cyclomatic Complexity

a. A measure of the logical complexity of an algorithm, used in white-box
testing.
42. Data Dictionary
a. A database that contains definitions of all data items defined during
analysis.
43. Data Flow Diagram
a. A modeling notation that represents a functional decomposition of a
system.
44. Data Driven testing
a. Testing in which the action of a test case is parameterized by externally
defined data values, maintained as a file or spreadsheet. A common
technique in Automated Testing.
45. Debugging
a. The process of finding and removing the causes of software failures.
46. Defect
a. Nonconformance to requirements or functional / program specification.
47. Defect Injection
a. The deliberate insertion of known defects into a program, typically in order to
evaluate the effectiveness of testing (see also Mutation Testing).
48. Defect efficiency
a.
49. Dependency Testing
a. Examines an application's requirements for pre-existing software, initial
states and configuration in order to maintain proper functionality.
50. Depth Testing
a. A test that exercises a feature of a product in full detail.
51. Dynamic testing
a. Testing software through executing it. See also Static Testing.
52. Emulator
a. A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system.
53. Endurance Testing
a. Checks for memory leaks or other problems that may occur with
prolonged execution.
54. End – to – End Testing
a. Testing a complete application environment in a situation that mimics
real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or
systems if appropriate.

55. Equivalence Class


a. A portion of a component's input or output domains for which the
component's behavior is assumed to be the same from the component's
specification.
56. Equivalence Partitioning
a. A test case design technique for a component in which test cases are
designed to execute representatives from equivalence classes.
57. Exhaustive Testing

a. Testing which covers all combinations of input values and preconditions
for an element of the software under test.
58. Functional Decomposition
a. A technique used during planning, analysis and design; creates a
functional hierarchy for the software.
59. Functional Specification
a. A document that describes in detail the characteristics of the product with
regard to its intended features.
60. Functional Testing
a. See also Black Box Testing.
b. Testing the features and operational behavior of a product to ensure they
correspond to its specifications.
c. Testing that ignores the internal mechanism of a system or component and
focuses solely on the outputs generated in response to selected inputs and
execution conditions.
61. Glass Box Testing
a. A synonym for White Box Testing.
62. Gorilla Testing
a. Testing one particular module, functionality heavily.
63. Gray Box Testing
a. A combination of Black Box and White Box testing methodologies:
testing a piece of software against its specification but using some
knowledge of its internal workings.
64. High Order Tests
a. Black-box tests conducted once the software has been integrated.
65. Independent Test Group (ITG)
a. A group of people whose primary responsibility is software testing.
66. Inspection
a. A group review quality improvement process for written material.
b. It consists of two aspects; product (document itself) improvement and
process improvement (of both document production and inspection).
67. Integration Testing
a. Testing of combined parts of an application to determine if they function
together correctly.
b. Usually performed after unit and functional testing.
c. This type of testing is especially relevant to client/server and distributed
systems.

68. Installation Testing


a. Confirms that the application under test installs, configures, upgrades and
uninstalls correctly on the intended platforms and environments.
69. Localization Testing
a. Testing which verifies that software behaves correctly after it has been
adapted (localized) for a specific locality, language or region.
70. Loop Testing

a. A white box testing technique that exercises program loops.
71. Metric
a. A standard of measurement. Software metrics are the statistics describing
the structure or content of a program.
b. A metric should be a real objective measurement of something such as
number of bugs per lines of code.
72. Monkey Testing
a. Testing a system or an Application on the fly, i.e. just few tests here and
there to ensure the system or an application does not crash out.
73. Mutation Testing
a. Mutation testing is a method for determining if a set of test data or test
cases is useful, by deliberately introducing various code changes ('bugs')
and retesting with the original test data/cases to determine if the 'bugs' are
detected. Proper implementation requires large computational resources.
74. Negative Testing
a. Testing aimed at showing software does not work. Also known as "test to
fail". See also Positive Testing.
75. N+1 Testing
a. A variation of Regression Testing. Testing conducted with multiple cycles
in which errors found in test cycle N are resolved and the solution is
retested in test cycle N+1.
b. The cycles are typically repeated until the solution reaches a steady state
and there are no errors.
c. See also Regression Testing.
76. Path Testing
a. Testing in which all paths in the program source code are tested at least
once.
77. Performance Testing
a. Testing conducted to evaluate the compliance of a system or component
with specified performance requirements. Often this is performed using an
automated test tool to simulate a large number of users. Also known as "Load
Testing".
78. Positive Testing
a. Testing aimed at showing software works.
b. Also known as "test to pass".
c. See also Negative Testing.

79. Quality Assurance


a. All those planned or systematic actions necessary to provide adequate
confidence that a product or service is of the type and quality needed and
expected by the customer.
80. Quality Audit
a. A systematic and independent examination to determine whether quality
activities and related results comply with planned arrangements and

whether these arrangements are implemented effectively and are suitable
to achieve objectives.
81. Quality Circle
a. A group of individuals with related interests that meet at regular intervals
to consider problems or other matters related to the quality of outputs of a
process and to the correction of problems or to the improvement of
quality.
82. Quality Control
a. The operational techniques and the activities used to fulfill and verify
requirements of quality.
83. Quality Management
a. That aspect of the overall management function that determines and
implements the quality policy.
84. Quality Policy
a. The overall intentions and direction of an organization as regards quality
as formally expressed by top management.
85. Quality System
a. The organizational structure, responsibilities, procedures, processes, and
resources for implementing quality management.
86. Race Condition
a. A cause of concurrency problems.
b. Multiple accesses to a shared resource, at least one of which is a write,
with no mechanism used by either to moderate simultaneous access.
87. Ramp Testing
a. Continuously raising an input signal until the system breaks down.
88. Recovery Testing
a. Confirms that the program recovers from expected or unexpected events
without loss of data or functionality. Events can include shortage of disk
space, unexpected loss of communication, or power out conditions.
89. Regression Testing
a. Regression testing checks that changes in the code have not affected the
existing working functionality.
90. Release Candidate
a. A pre-release version, which contains the desired functionality of the final
version, but which needs to be tested for bugs (which ideally should be
removed before the final version is released).
91. Retesting
a. Retesting is testing the functionality of the application again, typically after a
defect has been fixed.

92. Sanity Testing


a. Brief test of major functional elements of a piece of software to determine
if it’s basically operational.
93. Scalability Testing
a. Performance testing focused on ensuring the application under test
gracefully handles increases in work load.
94. Security Testing
a. Testing which confirms that the program can restrict access to authorized
personnel and that the authorized personnel can access the functions
available to their security level.
95. Smoke Testing
a. A quick-and-dirty test that the major functions of a piece of software
work. Originated in the hardware testing practice of turning on a new
piece of hardware for the first time and considering it a success if it does
not catch on fire.
96. Soak Testing
a. Running a system at high load for a prolonged period of time. For
example, running several times more transactions in an entire day (or
night) than would be expected in a busy day, to identify any performance
problems that appear after a large number of transactions have been
executed.
97. Software Requirements Specification
a. A deliverable that describes all data, functional and behavioral
requirements, all constraints, and all validation requirements for software.
98. Software Testing
a. A set of activities conducted with the intent of finding errors in software.
99. Static Analysis
a. Analysis of a program carried out without executing the program.
100. Static Analyzer
a. A tool that carries out static analysis.
101. Static Testing
a. Analysis of a program carried out without executing the program.
102. Storage Testing
a. Testing that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected
termination resulting from lack of space.
b. This is external storage as opposed to internal storage.
103. Stress Testing
a. Stress testing is a form of testing that is used to determine the stability of a
given system or entity. It involves testing beyond normal operational
capacity, often to a breaking point, in order to observe the results.
104. Structural Testing
a. Testing based on an analysis of internal workings and structure of a piece
of software.
b. See also White Box Testing.
105. System Testing
a. Testing that attempts to discover defects that are properties of the entire
system rather than of its individual components.
106. Testability
a. The degree to which a system or component facilitates the establishment
of test criteria and the performance of tests to determine whether those
criteria have been met.
107. Testing
a. The process of exercising software to verify that it satisfies specified
requirements and to detect errors.
b. The process of analyzing a software item to detect the differences between
existing and required conditions (that is, bugs), and to evaluate the
features of the software item (Ref. IEEE STD 829).
c. The process of operating a system or component under specified
conditions, observing or recording the results, and making an evaluation of
some aspect of the system or component.
108. Test Automation
a. See Automated Testing.
109. Test Bed
a. An execution environment configured for testing.
b. May consist of specific hardware, OS, network topology, configuration of
the product under test, other application or system software, etc.
c. The Test Plan for a project should enumerate the test beds(s) to be used.
110. Test Case
a. Test Case is a commonly used term for a specific test. This is usually the
smallest unit of testing.
b. A Test Case will consist of information such as the requirement being tested, test
steps, verification steps, prerequisites, outputs, test environment, etc.
c. A set of inputs, execution preconditions, and expected outcomes
developed for a particular objective, such as to exercise a particular
program path or to verify compliance with a specific requirement.
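d. An illustrative written-out test case (all IDs, steps and values are invented):

   Test Case ID    : TC_LOGIN_001
   Requirement     : REQ-05 (user login)
   Prerequisites   : A valid user account exists; the login page is open
   Test Steps      : 1. Enter a valid user name and password
                     2. Click the Login button
   Expected Output : The home page is displayed for the logged-in user
   Test Environment: e.g. Windows 10, Chrome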
111. Test Driven Development
a. Testing methodology associated with Agile Programming in which every
chunk of code is covered by unit tests, which must all pass all the time, in
an effort to eliminate unit-level and regression bugs during development.
b. Practitioners of TDD write many tests, often producing roughly as many
lines of test code as production code.
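c. A minimal TDD sketch (Python; the add() function is hypothetical): the unit test is written
first and fails, and only then is just enough production code written to make it pass.

   import unittest

   # Step 1 (test first): written before add() exists, so the test initially fails.
   class AddTests(unittest.TestCase):
       def test_adds_two_numbers(self):
           self.assertEqual(add(2, 3), 5)

   # Step 2: write just enough production code to make the test pass, then refactor.
   def add(a, b):
       return a + b

   if __name__ == "__main__":
       unittest.main()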
112. Test Driver
a. A program or test tool used to execute a test.
b. Also known as a Test Harness.
113. Test Environment
a. The hardware and software environment in which tests will be run, and
any other software with which the software under test interacts when
under test including stubs and test drivers.
114. Test First Design
a. Test-first design is one of the mandatory practices of Extreme
Programming (XP).
b. It requires that programmers do not write any production code until they
have first written a unit test.

115. Test Harness
a. A program or test tool used to execute a test. Also known as a Test Driver.
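b. A toy sketch of a test driver/harness (Python; the test functions are placeholders): it
collects functions whose names start with "test_", runs them, and reports pass/fail.

   def test_addition():
       assert 1 + 1 == 2

   def test_upper_case():
       assert "abc".upper() == "ABC"

   def run_all(namespace):
       # Minimal harness: discover, execute and report the test functions.
       passed = failed = 0
       for name in sorted(namespace):
           func = namespace[name]
           if name.startswith("test_") and callable(func):
               try:
                   func()
                   passed += 1
                   print("PASS", name)
               except AssertionError:
                   failed += 1
                   print("FAIL", name)
       print("passed=%d failed=%d" % (passed, failed))

   if __name__ == "__main__":
       run_all(dict(globals()))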
116. Test Plan
a. A document describing the scope, approach, resources, and schedule of
intended testing activities.
b. It identifies test items, the features to be tested, the testing tasks, who will
do each task, and any risks requiring contingency planning. Ref IEEE
STD 829.
117. Test Procedure
a. A document providing detailed instructions for the execution of one or
more test cases.
118. Test Scenario
a. Definition of a set of test cases or test scripts and the sequence in which
they are to be executed.
119. Test Script
a. Commonly used to refer to the instructions for a particular test that will be
carried out by an automated test tool.
120. Test Specification
a. A document specifying the test approach for a software feature or
combination of features and the inputs, predicted results and execution
conditions for the associated tests.
121. Test Suite
a. A collection of tests used to validate the behavior of a product.
b. The scope of a Test Suite varies from organization to organization.
c. For example, there may be several Test Suites for a particular product.
d. In most cases, however, a Test Suite is a high-level concept, grouping
together hundreds or thousands of tests related by what they are intended
to test.
122. Test Tools
a. Computer programs used in the testing of a system, a component of the
system, or its documentation.
123. Thread Testing
a. A variation of top-down testing where the progressive integration of
components follows the implementation of subsets of the requirements, as
opposed to the integration of components by successively lower levels.
124. Top Down Testing
a. An approach to integration testing where the component at the top of the
component hierarchy is tested first, with lower level components being
simulated by stubs.
b. Tested components are then used to test lower level components.
c. The process is repeated until the lowest level components have been
tested.
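d. A small sketch (Python; the order_total()/get_tax_rate() names are invented): the
top-level component is tested first while the lower-level component it depends on is
replaced by a stub.

   import unittest
   from unittest import mock

   def get_tax_rate(region):
       # Lower-level component: not yet implemented/integrated.
       raise NotImplementedError

   def order_total(net_amount, region):
       # Top-level component under test; calls down into get_tax_rate().
       return net_amount * (1 + get_tax_rate(region))

   class TopDownTests(unittest.TestCase):
       def test_order_total_with_stubbed_tax_service(self):
           # The stub stands in for the lower-level component until it is ready.
           with mock.patch(__name__ + ".get_tax_rate", return_value=0.10):
               self.assertAlmostEqual(order_total(100, "EU"), 110.0)

   if __name__ == "__main__":
       unittest.main()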
125. Total Quality Management (TQM)
a. A company commitment to develop a process that achieves high quality
product and customer satisfaction.

126. Traceability Matrix
a. A document showing the relationship between Test Requirements and
Test Cases.
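b. A small illustrative example (the requirement and test case IDs are invented):

   Requirement    Test Cases
   REQ-001        TC-001, TC-002
   REQ-002        TC-003
   REQ-003        TC-004, TC-005, TC-006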
127. Usability Testing
a. Testing that evaluates how easy, efficient and intuitive the software is for
its end users (user friendliness).
128. User Acceptance Testing
a. Testing performed by the end user (or customer) to confirm that the system
meets the agreed requirements.
b. It is considered an essential step before the system is finally accepted
and put into production.
129. Unit Testing
a. Testing of individual software components.
130. Validation
a. The process of evaluating software at the end of the software development
process to ensure compliance with software requirements.
b. The techniques for validation are testing, inspection and reviewing.
131. Verification
a. The process of determining whether or not the products of a given phase
of the software development cycle meet the implementation steps and can
be traced to the incoming objectives established during the previous phase.
b. The techniques for verification are inspections, walkthroughs and reviews.
132. Volume Testing
a. Testing in which the system is subjected to large volumes of data to verify
that it can handle them without failure.
133. Walkthrough
a. A review of requirements, designs or code characterized by the author of
the material under review guiding the progression of the review.
134. White Box Testing
a. Testing based on an analysis of internal workings and structure of a piece
of software.
b. Includes techniques such as Branch Testing and Path Testing.
c. Also known as Structural Testing and Glass Box Testing.
d. Contrast with Black Box Testing.
135. Work Flow Testing
a. Scripted end-to-end testing that duplicates specific workflows expected to
be used by the end user.