
International Software Testing

Qualifications Board (ISTQB®)


Certified Tester – Foundation Level

Fundamentals of Testing
Why is Testing Necessary?
Software systems are an integral part of life, from business applications (e.g., banking) to
consumer products (e.g., cars). Most people have had an experience with software that did
not work as expected. Software that does not work correctly can lead to many problems,
including loss of money, time or business reputation, and could even cause injury or death.

• Imagine if self-parking cars had never been tested. People could easily be injured or killed as a result, and the company could be sued. If this happened, it could also destroy consumer confidence in assistive technology as a whole.
• This is why it is important to identify the root cause of a defect when one is found or known to exist, and to consider all possible effects on all types of users and the potential outcomes.
• Testing is always necessary, regardless of the deliverable. You wouldn't want your financial institution to push out the latest release of your online banking website without having properly tested for security, usability, validation, and verification.
How does testing contribute to higher quality?
Testing is a large component of quality assurance, because it requires testers both to think preventatively (keeping issues from arising in the first place) and to consider all the functional outcomes an end user may experience. This results in a much more enjoyable experience for those who end up using the software. Anyone who has ever downloaded an app only to discover that it doesn't work as expected can appreciate why testing is necessary.
Causes of bugs / failures
• Miscommunication of requirements introduces errors into the code
• Lack of experience with coding practices: bad code / human error
• Lack of version control (aka revision control)
• Environmental factors
• Bad or insufficient
• Unrealistic time schedules
Testing Vocabulary
A human being can make an error (mistake), which produces a defect (fault, bug) in the program
code, or in a document. If a defect in code is executed, the system may fail to do what it should do
(or do something it shouldn’t), causing a failure. Defects in software, systems or documents may
result in failures, but not all defects do so.
Put simply: a mistake made while coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; and a build that does not meet the requirements is a failure.
• Error: A mistake made in the code by a developer.
• Mistake: The entering of an incorrect value in the code (see reasons for Error).
• Defect: A mistake or error in the code (found by a tester). Can also be caused by network traffic, malfunctioning hardware, inadequate electricity supply, etc.
• Fault: An outside environmental factor that causes a failure, such as a solar flare, magnetic or electronic fields, or an incorrect step, process, or data definition.
• Bug: A defect that has been accepted by the development team.
• Failure: The software is not behaving as set forth in the requirements. This can be due to a fault,
incorrect use by the user, or intentional as a result of trying to make a failure occur (testing).
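The error → defect → failure chain above can be illustrated with a short, entirely hypothetical Python sketch: the developer's mistake (error) leaves a defect in the code, and executing that defect produces a failure at a boundary value.

```python
def apply_discount(price, threshold=100):
    """Requirement: orders of `threshold` or more get a 10% discount."""
    if price > threshold:  # DEFECT: the developer's mistake -- should be >=
        return round(price * 0.9, 2)
    return price

actual = apply_discount(100)  # executing the defective branch...
print(actual)                 # 100 -- a failure: the requirement expects 90.0
print(apply_discount(150))    # 135.0 -- away from the boundary, all looks fine
```

Note that the defect exists whether or not it is ever executed; only execution at the boundary turns it into an observable failure.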
What is Testing?

Test activities exist before and after test execution. These activities include planning and
control, choosing test conditions, designing and executing test cases, checking results,
evaluating exit criteria, reporting on the testing process and system under test, and
finalizing or completing closure activities after a test phase has been completed. Testing
also includes reviewing documents (including source code) and conducting static analysis.
Both dynamic testing and static testing can be used as a means for achieving similar
objectives, and will provide information that can be used to improve both the system being
tested and the development and testing processes.

Testing Objectives:
• Defect finding.
• Gaining confidence regarding level of quality.
• Providing information for decision making.
• Preventing defects.
Testing vs. Debugging
Testers test and developers debug.

TESTING
• Testers test but do not fix.
• Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure.

DEBUGGING
• Developers debug.
• Debugging is the development activity that finds, analyzes and removes the cause of the failure.
Static Testing vs. Dynamic Testing

Static testing:
• Performed without executing the program
• A verification process
• About the prevention of defects
• Gives an assessment of the code and documentation
• Involves checklists and a process to be followed
• Can be performed before compilation
• The cost of finding and fixing defects is low
• Return on investment is high, because the process is involved at an early stage
• More review comments are highly recommended for good quality
• Requires many meetings

Dynamic testing:
• Performed by executing the program
• A validation process
• About finding and fixing defects
• Reveals bugs and bottlenecks in the software system
• Involves test cases for execution
• Performed after compilation
• The cost of finding and fixing defects is high
• Return on investment is low, because the process comes after the development phase
• Finding more defects is highly recommended for good quality
• Requires comparatively fewer meetings
Static vs. Dynamic Testing

STATIC
• Code is not run. Mainly documentation review. Everything that occurs before the code is run.

DYNAMIC
• Code is run. There is interaction occurring between user actions and the software.
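The distinction can be sketched in Python (the snippet under test and all names here are hypothetical): static testing inspects the source without running it, while dynamic testing must execute it.

```python
import ast

# A hypothetical snippet under test.
SOURCE = '''
def divide(a, b):
    return a / b
'''

# Static testing: examine the code without running it.
tree = ast.parse(SOURCE)
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(functions)  # learned purely by inspection, not execution

# Dynamic testing: execute the code and observe its behavior.
namespace = {}
exec(SOURCE, namespace)
print(namespace['divide'](10, 4))  # learned only by running the code
```

A static review of this snippet could also flag the potential division-by-zero defect before the program is ever run.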
The Seven Testing Principles

Principles:
A number of testing principles have been suggested over the past 40
years and offer general guidelines common for all testing.
Principle 1 – Testing shows presence of defects

• Testing can show that defects are present, but cannot prove that
there are no defects. Testing reduces the probability of undiscovered
defects remaining in the software but, even if no defects are found, it
is not a proof of correctness.
Principle 2 – Exhaustive testing is impossible

• Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
Principle 3 – Early testing

• To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.
Principle 4 – Defect clustering

• Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during prerelease testing, or is responsible for most of the operational failures.
Principle 5 – Pesticide paradox

• If the same tests are repeated over and over again, eventually the
same set of test cases will no longer find any new defects. To
overcome this “pesticide paradox”, test cases need to be regularly
reviewed and revised, and new and different tests need to be written
to exercise different parts of the software or system to find
potentially more defects.
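A minimal Python sketch of the pesticide paradox, with a hypothetical validation function: the fixed test set keeps passing run after run, and only a revised suite with new boundary values exposes the remaining defect.

```python
def is_valid_age(age):
    # Hypothetical function with a latent defect: the requirement says ages
    # 0..130 inclusive are valid, but this check wrongly excludes 0.
    return 0 < age <= 130

# The same fixed tests, repeated over and over, keep passing -- and keep
# finding nothing new.
fixed_cases = [(25, True), (131, False), (-1, False)]
print(all(is_valid_age(a) == expected for a, expected in fixed_cases))  # True

# Reviewing and revising the suite with fresh boundary cases exposes the
# defect the old set could never find.
revised_cases = fixed_cases + [(0, True), (130, True)]
failures = [a for a, expected in revised_cases if is_valid_age(a) != expected]
print(failures)  # [0]
```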
Principle 6 – Testing is context dependent

• Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Principle 7 – Absence-of-errors fallacy

• Finding and fixing defects does not help if the system built is unusable
and does not fulfill the users’ needs and expectations.
The Practical Impossibility of Testing All Possible Scenarios
• Even simple applications require an impractically large number of tests to verify all possible scenarios and data combinations. This is why we use the help of methodological tools such as equivalence partitioning and model-based testing, but even this is not enough.
At the end of the day, most teams will use a risk-based testing approach to create a subset of scenarios and data sets, and then use the escaping defects found in the field after the initial release to calibrate and patch any holes that may have been left in the suite.
This means that, at the end of the day, we don't test everything.
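Equivalence partitioning, mentioned above, can be sketched in Python. The shipping-fee rule and the partition boundaries here are hypothetical; the point is that one representative value per class stands in for every member of that class.

```python
def shipping_fee(total):
    # Hypothetical rule: negative totals are invalid, orders under 50 pay
    # a flat 5.00 fee, and orders of 50 or more ship free.
    if total < 0:
        raise ValueError("order total cannot be negative")
    return 0.0 if total >= 50 else 5.0

# Three equivalence classes; one representative each is enough, because
# every member of a class is expected to behave the same way.
partitions = {
    "invalid (< 0)": -10,
    "paid shipping (0..49.99)": 20,
    "free shipping (>= 50)": 80,
}
for name, representative in partitions.items():
    try:
        print(name, "->", shipping_fee(representative))
    except ValueError as exc:
        print(name, "->", exc)
```

Adding boundary values (e.g. 0, 49.99, 50) on top of the representatives is the usual next refinement.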
The Pesticide Paradox
• Almost 20 years ago Boris Beizer stated what became known as the
Pesticide Paradox:
“Every method you use to prevent or find bugs leaves a residue of
subtler bugs against which those methods are ineffectual.”
Further understanding of the Pesticide Paradox

The functionality of the application changes over time.
• If we introduce new features to the product, it may seem trivial that we need to write tests for them. It is less trivial to remember that we also need to modify the tests for the existing features, even if they are only slightly modified by the new additions.
Life Cycle of a Bug or Defect
New: When a defect is logged and posted for the first time, its state is given as "new".

Assigned: After the tester has posted the bug, the tester's lead approves that the bug is genuine and assigns it to the corresponding developer and development team. Its state is given as "assigned".

Open: At this state the developer has started analyzing and working on the defect fix.

Fixed: When the developer makes the necessary code changes and verifies them, he or she can mark the bug status as "fixed" and the bug is passed to the testing team.

Pending retest: After fixing the defect, the developer has given that particular code to the tester for retesting. Here the testing is pending on the tester's end, hence the status "pending retest".

Retest: At this stage the tester retests the changed code the developer has given to him, to check whether the defect is fixed or not.

Verified: The tester tests the bug again after it has been fixed by the developer. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to "verified".

Reopened: If the bug still exists even after being fixed by the developer, the tester changes the status to "reopened". The bug goes through the life cycle once again.

Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to "closed". This state means that the bug is fixed, tested and approved.

Duplicate: If the bug is reported twice, or two bugs describe the same issue, one bug's status is changed to "duplicate".

Rejected: If the developer feels that the bug is not genuine, he rejects it. The state of the bug is then changed to "rejected".

Deferred: A bug changed to the deferred state is expected to be fixed in a later release. Many factors can lead to this state: the bug's priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

Not a bug: The state is given as "not a bug" if there is no change in the functionality of the application. For example, if the customer asks for a change in the look and feel of the application, such as changing the color of some text, that is not a bug but just a change in the application's appearance.
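The life cycle above can be modelled as a simple state machine. This Python sketch is hypothetical, but the state names and transitions follow the descriptions in the text.

```python
# Allowed transitions, taken from the life-cycle descriptions above.
ALLOWED = {
    "new": {"assigned", "rejected", "duplicate", "deferred", "not a bug"},
    "assigned": {"open"},
    "open": {"fixed"},
    "fixed": {"pending retest"},
    "pending retest": {"retest"},
    "retest": {"verified", "reopened"},
    "verified": {"closed"},
    "reopened": {"assigned"},
}

class Defect:
    def __init__(self):
        self.state = "new"

    def move(self, new_state):
        # Refuse any transition the life cycle does not permit.
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state!r} to {new_state!r}")
        self.state = new_state

bug = Defect()
for step in ["assigned", "open", "fixed", "pending retest",
             "retest", "verified", "closed"]:
    bug.move(step)
print(bug.state)  # closed
```

Encoding the transitions explicitly makes illegal jumps (e.g. "new" straight to "closed") fail loudly, which is how most defect-tracking tools enforce the workflow.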
Code of Ethics
• Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons, to ensure that this information is not put to inappropriate use. Recognizing the ACM and IEEE codes of ethics for engineers, the ISTQB states the following code of ethics:
• PUBLIC - Certified software testers shall act consistently with the public interest
• CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best
interests of their client and employer, consistent with the public interest
• PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the
products and systems they test) meet the highest professional standards possible
• JUDGMENT- Certified software testers shall maintain integrity and independence in their
professional judgment
• MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an
ethical approach to the management of software testing
• PROFESSION - Certified software testers shall advance the integrity and reputation of the
profession consistent with the public interest
The Psychology of Testing
Terms: Error guessing, independence
The mindset to be used while testing and reviewing is
different from that used while developing software. With
the right mindset developers are able to test their own
code, but separation of this responsibility to a tester is
typically done to help focus effort and provide additional
benefits, such as an independent view by trained and
professional testing resources. Independent testing may
be carried out at any level of testing.
There are several levels of independence in software testing, listed here from the lowest to the highest:
I. Tests by the person who wrote the item.
II. Tests by another person within the same team, such as another programmer.
III. Tests by a person from a different group, such as an independent test team.
IV. Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.
Error guessing
• A testing technique that makes use of a tester's experience and instincts to identify defects that may not be easy to find using more formal techniques. It is typically applied after those more structured techniques.
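A hypothetical Python sketch of error guessing: experience suggests that doubled spaces often break string handling, a case a formal partition of "ordinary names" might miss.

```python
def initials(full_name):
    # Hypothetical helper with a latent defect: split(" ") keeps empty
    # parts, so a doubled space crashes on part[0].
    return "".join(part[0].upper() for part in full_name.split(" "))

print(initials("ada lovelace"))  # AL -- the ordinary case works fine

# A tester's experience says: try doubled spaces, empty strings, etc.
try:
    initials("ada  lovelace")    # the guessed "double space" input
except IndexError:
    print("defect found by error guessing")
```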
Fundamental Test Process
Terms: Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test
condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test
suite, test summary report, testware.

The most visible part of testing is test execution. But to be effective and efficient, test plans should
also include time to be spent on planning the tests, designing test cases, preparing for execution
and evaluating results.

The fundamental test process consists of the following main activities:


• Test planning and control
• Test analysis and design
• Test implementation and execution
• Evaluating exit criteria and reporting
• Test closure activities
Confirmation Testing, re-testing: Testing that occurs after a bug is fixed to ensure that it is indeed fixed

Exit Criteria: Criteria used to determine whether a test activity has been completed. Exit criteria can apply to all test activities from planning to execution, and can vary depending on risk assessment, project or function, and context.

Incident: A disparity between the expected and actual results

Regression Testing: Testing that occurs after a new version of the software is released or altered – changes in code after testing has been completed, after which the same or modified tests should be performed. Regression testing is also performed as a result of patches, updates, or new software that the existing software must interface with.

Test Basis: Basis for a test.  This typically applies to documentation and is simply the information needed to build a
Test Case.

Test Condition: Specifications that must be followed by a tester. The criteria with which a test can go forward.
Example – verifying if user can log on.

Test Coverage: A measure of the testing being performed by test cases. It is an attempt to measure if the test cases
are truly “exercising” the code and an attempt to quantify how much the code is being exercised if so.
Test Data: Documented information used to test software, for example the expected behavior of a function or system.

Test Execution: A comparison of expected and actual results.

Test Log: The file containing information from the execution of a test: who performed the test, time and date, actual result, and test result (pass/fail).

Test Plan: A document describing the scope, approach, resources and schedule of intended test activities.

Test Procedures: Instructions for carrying out the test. Sequence of actions to be followed for the test to be performed.

Test Policy: “High Level”. An organization’s philosophy or how they measure success. How testing is defined by an
organization.

Test Suite: Commonly “Validation Suite”. A collection of test cases used to verify that the software behaves as per the
specifications. Test suites can be used to group similar test cases together.

Test Summary Report: A document that summarizes the results of all testing procedures for a particular testing cycle of a
project. Example: Number of test cases executed, passed or failed; number of test cases, build release and number.

Testware: Testing tools. This includes test scripts, test cases, test plans, documentation or additional software, such as
automation software that is used.
Test Planning and Control
Test planning is the activity of
defining the objectives of testing and
the specification of test activities in
order to meet the objectives and
mission.
Test Planning
• To determine the scope of the testing, the risks and identify the
objectives.
• To implement the Test Policy and the test strategy.
• To determine the required test resources like people, PCs,
environmental resources, etc.
• To schedule test analysis and design tasks, test implementation, Test
Execution and evaluation.
• To use Coverage Criteria to determine what the Exit Criteria will be.
Test Control
• To measure and analyze the results of reviews and
testing.
• To monitor and document progress, Test Coverage
and Exit Criteria.
• To provide information on testing.
• To make decisions.
• To initiate corrective actions.
Test Analysis and Design
Test analysis and design is the activity during which general testing
objectives are transformed into tangible test conditions and test cases.
The test analysis and design activity has the following major tasks:
• Reviewing the Test Basis (such as requirements, software integrity level (risk level), architecture, design, and interface specifications).
• Identify test conditions
• Design the tests
• Evaluate testability of the requirements
• Design the test environment and determine which tools are needed
Test Implementation
• To develop and prioritize our test cases and create Test Data for those tests.
• To create Test Procedures.
• To create Test Suites from the previous Test Cases.
• To verify the environment.
Test Execution
• Follow Test Procedures, execute test cases as well as
Testing Suites
• Execute, or re-test previously failed tests
• Log the outcome
• Compare expected and actual results
• Determine if there is any disparity between expected
and actual results and determine if an Incident needs
to be reported
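The execution steps above can be sketched in Python; the suite, the function under test, and the log format are all hypothetical.

```python
def fahrenheit(celsius):
    # Hypothetical function under test.
    return celsius * 9 / 5 + 32

test_suite = [
    {"id": "TC-1", "input": 0, "expected": 32.0},
    {"id": "TC-2", "input": 100, "expected": 212.0},
    {"id": "TC-3", "input": -40, "expected": -40.0},
]

test_log, incidents = [], []
for case in test_suite:
    actual = fahrenheit(case["input"])
    result = "pass" if actual == case["expected"] else "fail"
    # Log the outcome of every execution, pass or fail.
    test_log.append({"id": case["id"], "actual": actual, "result": result})
    if result == "fail":
        # A disparity between expected and actual results -> report an incident.
        incidents.append(case["id"])

print([entry["result"] for entry in test_log])  # ['pass', 'pass', 'pass']
print(incidents)                                # []
```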
Evaluating Exit Criteria and Reporting
Evaluating Exit Criteria is the activity where test execution is assessed
against the defined objectives. This should be done for each test level.
Evaluating exit criteria has the following major tasks:
• Checking Test Logs against the Exit Criteria specified in test planning.
• Assessing if more tests are needed or if the Exit Criteria specified
should be changed.
• Writing a Test Summary report for stakeholders.
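Checking a test log against exit criteria can be sketched as follows; the log contents and the 95% pass-rate criterion are hypothetical examples of criteria that would have been specified in test planning.

```python
# A hypothetical test log produced during execution.
test_log = [
    {"id": "TC-1", "result": "pass"},
    {"id": "TC-2", "result": "pass"},
    {"id": "TC-3", "result": "fail"},
    {"id": "TC-4", "result": "pass"},
]

# Example exit criteria from planning: at least a 95% pass rate and no
# open failures.
pass_rate = sum(entry["result"] == "pass" for entry in test_log) / len(test_log)
open_failures = [entry["id"] for entry in test_log if entry["result"] == "fail"]

exit_criteria_met = pass_rate >= 0.95 and not open_failures
print(round(pass_rate, 2))  # 0.75
print(exit_criteria_met)    # False -- more testing, or a decision, is needed
```

When the criteria are not met, the team either runs more tests or formally decides to change the criteria, exactly as described above.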
Test Closure Activities
Test closure activities collect data from completed test activities to consolidate
experience, testware, facts and numbers. Test closure activities occur at project
milestones such as when a software system is released, a test project is completed
(or cancelled), a milestone has been achieved, or a maintenance release has been
completed.
Test closure activities include the following major tasks:
• Making sure that all incidents have been resolved.
• Handing over deliverables to stakeholders.
• Finalizing and archiving testware, the test environment and the test infrastructure for later reuse.
• Maintaining new testware for further support and use.
• Evaluating how the testing results can be used to make future projects more efficient.
