
Software Testing Life Cycle:

Step 1. Requirement Analysis

During this phase, test team studies the requirements from a testing point of view to identify the
testable requirements.

The QA team may interact with various stakeholders (Client, Business Analyst, Technical Leads,
System Architects etc) to understand the requirements in detail.

Requirements can be either functional (defining what the software must do) or non-functional
(defining system attributes such as performance, security and availability).

Activities

Identify types of tests to be performed.

Gather details about testing priorities and focus.

Prepare Requirement Traceability Matrix (RTM).

Identify test environment details where testing is supposed to be carried out.

Automation feasibility analysis (if required).


Deliverables

RTM (a minimal sketch follows this list)

Automation feasibility report. (if applicable)
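To make the RTM deliverable concrete, here is a minimal sketch in Python; the requirement and test-case IDs are hypothetical examples, not taken from any particular project.

```python
# Minimal Requirement Traceability Matrix (RTM) sketch: each requirement is
# mapped to the test cases that verify it, so coverage gaps become visible.
# All requirement and test-case IDs below are hypothetical.

rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g. login with valid/invalid credentials
    "REQ-002": ["TC-003"],             # e.g. password reset flow
    "REQ-003": [],                     # not yet covered -> a gap to address
}

def uncovered_requirements(matrix):
    """Return the requirements that no test case traces back to."""
    return [req for req, cases in matrix.items() if not cases]

print(uncovered_requirements(rtm))   # ['REQ-003']
```

In practice the same structure is usually kept in a spreadsheet or test-management tool and updated with execution status during test execution.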

Step 2: Test Planning

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager
determines the effort and cost estimates for the project and prepares and finalizes the test plan.

Activities

Preparation of test plan/strategy document for various types of testing

Test tool selection

Test effort estimation

Resource planning and determining roles and responsibilities.

Training requirement

Deliverables

Test plan /strategy document.

Effort estimation document.

Step 3: Test Case Development


This phase involves the creation, verification and rework of test cases and test scripts. Test data is
identified or created, then reviewed and reworked as well.

Activities

Create test cases, automation scripts (if applicable)

Review and baseline test cases and scripts

Create test data (if the test environment is available)

Deliverables

Test cases/scripts

Test data
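As an illustration of this phase's deliverables, the sketch below shows how a documented test case might be automated as a pytest-style script; the `login` function is a stand-in defined inline so the example is self-contained, not part of any real product.

```python
# Hypothetical automated scripts for two documented test cases (TC-001, TC-002).
import pytest

def login(username, password):
    """Stand-in for the application code under test (assumption for this sketch)."""
    if password != "correct-password":
        raise ValueError("invalid credentials")
    return {"user": username, "authenticated": True}

def test_tc001_login_with_valid_credentials():
    # TC-001: a registered user can log in with valid credentials (test data inline).
    session = login("alice", "correct-password")
    assert session["authenticated"]

def test_tc002_login_with_wrong_password():
    # TC-002: login is rejected when the password is wrong.
    with pytest.raises(ValueError):
        login("alice", "wrong-password")
```

Once reviewed and baselined, such scripts become part of the suite executed in Step 5.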


Step 4: Test Environment Setup

The test environment defines the software and hardware conditions under which a work product is
tested.

Test environment set-up is one of the critical aspects of the testing process and can be done in
parallel with the test case development stage. The test team may not be involved in this activity if the
customer or development team provides the test environment; in that case, the test team is required
to do a readiness check (smoke test) of the given environment.

Activities

Understand the required architecture, environment set-up and prepare hardware and
software requirement list for the Test Environment.

Set up the test environment and test data

Perform smoke test on the build

Deliverables

Environment ready with test data set up

Smoke Test Results.
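A readiness (smoke) check of the environment often amounts to a few quick probes. The sketch below assumes a web application reachable over HTTP at a hypothetical base URL; the checks would be adapted to whatever the environment actually exposes.

```python
# Minimal smoke-test sketch: verify the deployed build responds before detailed
# test execution starts. The base URL and /health path are hypothetical.
import sys
import urllib.request

BASE_URL = "http://test-env.example.com"   # placeholder test-environment URL

def smoke_check(path="/health"):
    """Return True if the endpoint answers with HTTP 200 within 5 seconds."""
    try:
        with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    ok = smoke_check()
    print("Smoke test", "PASSED" if ok else "FAILED")
    sys.exit(0 if ok else 1)   # a non-zero exit flags an unusable environment
```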

Step 5: Test Execution

During this phase the test team carries out testing based on the test plans and the test cases
prepared. Bugs are reported back to the development team for correction, and retesting is
performed.

Activities

Execute tests as per plan

Document test results, and log defects for failed cases

Map defects to test cases in RTM

Retest the defect fixes

Track the defects to closure

Deliverables

Completed RTM with execution status

Test cases updated with results

Defect reports
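To show how these deliverables fit together, the sketch below records a pass/fail status per executed test case and collects the failures so defects can be logged and mapped back to the RTM; all IDs and outcomes are hypothetical.

```python
# Sketch of recording execution results and flagging failures for defect logging.
execution_results = {
    "TC-001": "Pass",
    "TC-002": "Fail",      # a defect will be raised and mapped to this case
    "TC-003": "Blocked",   # e.g. environment issue
}

failed_cases = [tc for tc, status in execution_results.items() if status == "Fail"]
for tc in failed_cases:
    print(f"Log a defect for {tc} and map it to the requirement in the RTM")
```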

Step 6: Test Cycle Closure


The testing team meets to discuss and analyze the testing artifacts and to identify strategies that
should be adopted in future cycles, taking lessons from the current test cycle. The idea is to remove
process bottlenecks from future test cycles and to share best practices with similar projects in
future.

Activities

Evaluate cycle completion criteria based on time, test coverage, cost, critical business
objectives and software quality

Prepare test metrics based on the above parameters.

Document the lessons learned from the project

Prepare Test closure report

Report the quality of the work product to the customer, both qualitatively and quantitatively.

Analyze test results to find the defect distribution by type and severity.

Deliverables

Test Closure report

Test metrics
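The test metrics named above usually reduce to simple ratios over the cycle's results. A minimal sketch with hypothetical counts:

```python
# Common closure-phase metrics computed from hypothetical cycle counts.
total_cases = 120
executed = 110
passed = 95
defects_found = 25
size_kloc = 12.5     # assumed size of the work product in KLOC

execution_rate = executed / total_cases * 100   # % of planned cases that were run
pass_rate = passed / executed * 100             # % of executed cases that passed
defect_density = defects_found / size_kloc      # defects per KLOC

print(f"Execution rate: {execution_rate:.1f}%")
print(f"Pass rate:      {pass_rate:.1f}%")
print(f"Defect density: {defect_density:.2f} defects/KLOC")
```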

Qualities of a Good Tester


1. Be Skeptical: Don't assume that the build handed over by developers is bug-free or of acceptable
quality. Question everything. Accept the build only after you have tested it and found it defect-free.
Don't take anyone's word for it, whatever designation they hold; apply your own knowledge and try to
find errors. Follow this principle until the last testing cycle.
2. Don't Compromise on Quality: Don't compromise after a certain testing stage; keep testing until
you have produced a quality product. Quality is the yardstick software testers work towards in order
to make testing more effective. Compromising at any level leads to a defective product, so don't do it
in any situation.
3. Ensure End User Satisfaction: Always think about what will make the end user happy and how
they can use the product with ease. Don't stop at testing the stated requirements. End users can only
be happy when you deliver an error-free product.


4. Think from the User's Perspective: Every product is developed for customers, and customers may
or may not be technical people. If you don't consider scenarios from their perspective, you will miss
many important bugs, so put yourself in their shoes. Know your end users first: their age, education
and even their location can matter greatly when using the product. Prepare your test scenarios and
test data accordingly. After all, a project is successful only if the end user is able to use the
application successfully.
5. Prioritize Tests: First identify the important tests, then prioritize execution based on test
importance (a small prioritization sketch follows this list). Never execute test cases sequentially
without deciding priority. Prioritizing ensures that all your important test cases get executed early and
that you won't have to cut them at the last stage of the release cycle due to time pressure. Also
consider defect history while estimating test effort: in most cases the defect count is high at the
beginning and decreases towards the end of the test cycle.
6. Never Promise 100% Coverage: Claiming 100% coverage on paper is easy, but in practice it is
impossible. So never promise total test coverage to anyone, including clients. In business there is a
philosophy of "under-promise and over-deliver". Don't aim for 100% coverage; focus on the quality of
your tests instead.
7. Be Open to Suggestions: Listen to everyone, even if you are an authority on the project with
in-depth knowledge of it. There is always scope for improvement, and getting suggestions from
fellow software testers is a good idea. Everyone's feedback on improving the quality of the project
helps in releasing bug-free software.
8. Start Early: Don't wait until you get the first build for testing. Start analyzing requirements and
preparing test cases, the test plan and the test strategy documents in the early design phase. Starting
testing early helps you visualize the complete project scope, so planning can be done accordingly.
Many defects can be detected in the early design and analysis phases, saving a huge amount of time
and money. Early requirement analysis also helps you question design decisions.
9. Identify and Manage Risks: Risks are associated with every project. Risk management is a three-
step process: risk identification, analysis and mitigation. Incorporate a risk-driven testing process and
prioritize software testing based on risk evaluation.
10. Do Market Research: Don't think that your responsibility is only to validate the software against
the set of requirements. Be proactive: research your product's market and provide suggestions to
improve it. This research will also help you understand your product and its market.
11. Develop Good Analytical Skills: These are a must for requirement analysis, and they are also
helpful for understanding customer feedback while defining the test strategy. Question everything
around you. This triggers the analysis process and helps you resolve many complex problems.
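Point 5 above can be made concrete with a simple ordering rule: run the highest-priority cases first and, within the same priority, the ones that historically exposed more defects. A small sketch with hypothetical data:

```python
# Priority-driven test ordering (see point 5). Priority 1 is highest; defect
# history is used as a tie-breaker. All values are hypothetical.
test_cases = [
    {"id": "TC-010", "priority": 2, "past_defects": 1},
    {"id": "TC-004", "priority": 1, "past_defects": 0},
    {"id": "TC-007", "priority": 1, "past_defects": 3},
]

ordered = sorted(test_cases, key=lambda tc: (tc["priority"], -tc["past_defects"]))
print([tc["id"] for tc in ordered])   # ['TC-007', 'TC-004', 'TC-010']
```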

V-Model

The V-Model stands for the Verification and Validation model. Just like the waterfall model, the
V-shaped life cycle is a sequential path of execution of processes: each phase must be completed
before the next phase begins. Testing of the product is planned in parallel with a corresponding
phase of development.
The various phases of the V-model are as follows:
Requirements documents such as the BRS and SRS begin the life cycle, just as in the waterfall
model. However, in this model a system test plan is created before development starts. The test plan
focuses on meeting the functionality specified during requirements gathering.


The high-level design (HLD) phase focuses on system architecture and design. It provides an
overview of the solution, platform, system, product and service/process. An integration test plan is
also created in this phase in order to test the ability of the pieces of the software system to work
together.
The low-level design (LLD) phase is where the actual software components are designed. It
defines the actual logic for each component of the system. The class diagram, with all the methods
and the relations between classes, belongs to the LLD. Component tests are created in this
phase as well.
The implementation phase is, again, where all coding takes place. Once coding is complete, the
path of execution continues up the right side of the V where the test plans developed earlier are
now put to use.
Coding: This is the bottom of the V-shaped model, where the module design is converted into code
by the developers.
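The correspondence between the left-side development phases and the right-side test levels described above can be summarised as a simple mapping; this is only a sketch of the pairing as presented here, not an exhaustive model.

```python
# V-model pairing: each development phase has a test plan/level prepared in
# parallel with it, executed later on the right side of the V.
v_model = {
    "Requirements (BRS/SRS)": "System test plan / system testing",
    "High-level design (HLD)": "Integration test plan / integration testing",
    "Low-level design (LLD)": "Component (unit) tests",
    "Coding": "Code and unit-test execution (bottom of the V)",
}

for phase, test_level in v_model.items():
    print(f"{phase:26} -> {test_level}")
```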
Advantages of V-model:

Simple and easy to use.

Testing activities such as planning and test design happen well before coding. This saves a
lot of time and gives a higher chance of success than the waterfall model.

Proactive defect tracking, i.e. defects are found at an early stage.

Avoids the downward flow of the defects.

Works well for small projects where requirements are easily understood.

Disadvantages of V-model:


Very rigid and least flexible.

Software is developed during the implementation phase, so no early prototypes of the software are
produced.

If any changes happen midway, then the test documents, along with the requirement documents,
have to be updated.

When to use the V-model:

The V-shaped model should be used for small to medium sized projects where
requirements are clearly defined and fixed.

The V-Shaped model should be chosen when ample technical resources are available with
needed technical expertise.

High customer confidence is required when choosing the V-shaped model approach, since no
prototypes are produced and there is a very high risk of failing to meet customer expectations.

Test Coverage
Test coverage measures the amount of testing performed by a set of tests. Wherever we can count
things and tell whether or not each of those things has been tested by some test, we can measure
coverage; this is known as test coverage.
The basic coverage measure is the percentage of coverage items exercised (used) by at least one
test, where a coverage item is whatever we have been able to count (a worked sketch appears at the
end of this section).

There is a danger in using a coverage measure: 100% coverage does not mean 100% tested.
Coverage techniques measure only one dimension of a multi-dimensional concept. Two different
test cases may achieve exactly the same coverage, but the input data of one may find an error that
the input data of the other doesn't.
Benefits of code coverage measurement:

It helps create additional test cases to increase coverage

It helps in finding areas of a program not exercised by a set of test cases

It helps in determining a quantitative measure of code coverage, which indirectly measures the
quality of the application or product.
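As promised above, the basic coverage measure boils down to a ratio: the number of coverage items exercised by at least one test divided by the total number of items. A minimal sketch with hypothetical statement-level data (in practice a tool such as coverage.py reports this automatically):

```python
# Basic coverage measure: items exercised by at least one test / total items.
# The "items" here are hypothetical statement identifiers.
total_items = {"s1", "s2", "s3", "s4", "s5"}
exercised = {"s1", "s2", "s4"}        # statements hit by the current test suite

coverage = len(exercised & total_items) / len(total_items) * 100
print(f"Statement coverage: {coverage:.0f}%")   # 60%
```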

Defect Report
After discovering a defect (bug), testers generate a formal defect report. The purpose of a defect report is to
state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
DEFECT REPORT TEMPLATE
In general, a defect report can consist of the following elements; a filled-in example appears after the list.

ID: Unique identifier given to the defect. (Usually automated.)
Project: Project name.
Product: Product name.
Release Version: Release version of the product (e.g. 1.2.3).
Module: Specific module of the product where the defect was detected.
Detected Build Version: Build version of the product where the defect was detected (e.g. 1.2.3.5).
Summary: Summary of the defect. Keep this clear and concise.
Description: Detailed description of the defect. Describe as much as possible without repeating anything or using complex words. Keep it simple but comprehensive.
Steps to Replicate: Step-by-step description of the way to reproduce the defect. Number the steps.
Actual Result: The actual result you received when you followed the steps.
Expected Results: The expected results.
Attachments: Any additional information such as screenshots and logs.
Remarks: Any additional comments on the defect.
Defect Severity: Severity of the defect. (See Defect Severity.)
Defect Priority: Priority of the defect. (See Defect Priority.)
Reported By: The name of the person who reported the defect.
Assigned To: The name of the person assigned to analyze/fix the defect.
Status: The status of the defect. (See Defect Life Cycle.)
Fixed Build Version: Build version of the product where the defect was fixed (e.g. 1.2.3.9).
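The template above maps naturally onto a structured record. The sketch below uses a plain Python dictionary with hypothetical values; most defect trackers capture the same fields through a form.

```python
# A filled-in defect report following the template above. All values are
# hypothetical examples.
defect_report = {
    "id": "DEF-1024",                       # usually auto-generated by the tracker
    "project": "Online Store",
    "product": "Checkout Service",
    "release_version": "1.2.3",
    "module": "Payment",
    "detected_build_version": "1.2.3.5",
    "summary": "Payment fails for cart totals above 1000",
    "steps_to_replicate": [
        "1. Add items worth more than 1000 to the cart",
        "2. Proceed to checkout and confirm the payment",
    ],
    "actual_result": "Error page is shown and no order is created",
    "expected_result": "Payment succeeds and an order confirmation is shown",
    "severity": "Major",
    "priority": "High",
    "reported_by": "QA tester",
    "assigned_to": "Developer",
    "status": "New",
}
```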

REPORTING DEFECTS EFFECTIVELY


It is essential that you report defects effectively so that time and effort are not unnecessarily wasted in
trying to understand and reproduce the defect. Here are some guidelines:

Be specific:

Specify the exact action: Do not say something like "Select ButtonB". Do you mean "Click
ButtonB", "Press ALT+B" or "Focus on ButtonB and press ENTER"? Of course, if the defect
can be reproduced in all three ways, it's okay to use a generic term such as "Select", but
bear in mind that you might just get a fix for the "Click ButtonB" scenario.

Do not use vague pronouns: Do not say something like "In ApplicationA, open X, Y, and Z,
and then close it." What does "it" stand for: Z, Y, X or ApplicationA?

Be detailed:

Provide more information. In other words, do not be lazy. Developers may or may not use all
the information you provide but they sure do not want to beg you for any information you
have missed.

Be objective:

Do not make subjective statements like "This is a lousy application" or "You fixed it real
bad."

Stick to the facts and avoid emotional language.

Reproduce the defect:



Do not be impatient and file a defect report as soon as you uncover a defect. Replicate it at
least once more to be sure.

Review the report:



Do not hit "Submit" as soon as you write the report. Review it at least once and remove any
typos.

Defect Tracking

Defect tracking means:

Keeping track of all the defects that have been discovered.

Keeping track of all the steps required to validate, correct, and take preventive action for a defect.

It is necessary in order to:

Not lose any reported defects.

Coordinate defect resolution.

Ensure coders don't waste time on non-defects (features masquerading as defects), on fixing something that isn't broken, or on chasing down a badly reported defect.

Control defect correction activity and ensure the right defects are being worked on.
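Much of defect tracking comes down to enforcing a known set of status transitions so that no defect is lost on its way to closure. A minimal sketch with an assumed, simplified set of states:

```python
# Simplified defect-status workflow. The states and allowed transitions are an
# assumed, trimmed-down version of a typical defect life cycle.
ALLOWED_TRANSITIONS = {
    "New": {"Assigned", "Rejected"},      # Rejected = not a defect / works as designed
    "Assigned": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Closed", "Reopened"},
    "Reopened": {"Assigned"},
}

def move(defect, new_status):
    """Change a defect's status only along an allowed transition."""
    allowed = ALLOWED_TRANSITIONS.get(defect["status"], set())
    if new_status not in allowed:
        raise ValueError(f"Illegal transition: {defect['status']} -> {new_status}")
    defect["status"] = new_status

bug = {"id": "DEF-1024", "status": "New"}
for step in ("Assigned", "Fixed", "Retest", "Closed"):
    move(bug, step)
print(bug)   # {'id': 'DEF-1024', 'status': 'Closed'}
```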

Software Quality Control


Software Quality Control is the set of procedures used by organizations[1] to ensure that a software product
will meet its quality goals at the best value to the customer,[2] and to continually improve the organization's
ability to produce software products in the future.[1]


Software quality control refers to specified functional requirements as well as non-functional requirements
such as supportability, performance and usability.[2] It also refers to the ability of software to perform well in
unforeseeable scenarios and to keep a relatively low defect rate.
Software development requires quality control.
These specified procedures and outlined requirements lead to the idea of Verification and Validation and of
software testing.
It is distinct from software quality assurance, which encompasses processes and standards for the ongoing
maintenance of high-quality products (e.g. software deliverables, documentation and processes) and is aimed at
avoiding defects, whereas software quality control is the validation of artifacts' compliance against established
criteria, aimed at finding defects.

Software Quality Assurance


Software quality assurance (SQA) consists of a means of monitoring the software engineering processes and
methods used to ensure quality. The methods by which this is accomplished are many and varied, and may
include ensuring conformance to one or more standards, such as ISO 9000 or a model such as CMMI.
SQA encompasses the entire software development process, which includes processes such as requirements
definition, software design, coding, source code control, code reviews, software configuration
management, testing, release management, and product integration. SQA is organized into goals,
commitments, abilities, activities, measurements, and verifications.

Quality Assurance vs Quality Control


Quality Assurance (QA) is a process that focuses on providing assurance that quality requirements will be achieved. Quality Control (QC) is a process that focuses on fulfilling the quality requirements.

QA aims to prevent defects. QC aims to identify and fix defects.

QA is a technique of managing quality. QC is a method of verifying quality.

QA does not involve executing the program. QC always involves executing the program.

All team members are responsible for QA. The testing team is responsible for QC.

QA example: verification. QC example: validation.

QA means planning for doing a process. QC means action for executing the planned process.

The statistical technique used in QA is known as Statistical Process Control (SPC). The statistical technique used in QC is known as Statistical Quality Control (SQC).

QA makes sure you are doing the right things. QC makes sure the results of what you have done are what you expected.

QA defines the standards and methodologies to be followed in order to meet customer requirements. QC ensures that those standards are followed while working on the product.

QA is the process to create the deliverables. QC is the process to verify the deliverables.

QA is responsible for the full software development life cycle. QC is responsible for the software testing life cycle.

In QA, processes are planned to avoid defects. QC deals with discovering defects and correcting them while making the product.

QA detects weaknesses. QC detects defects.

QA is process oriented. QC is product oriented.

QA is a failure prevention system. QC is a failure detection system.

Test Plan and Strategies


A test plan is a document detailing the objectives, target market, internal beta team, and processes for a
specific beta test for a software or hardware product. The plan typically contains a detailed understanding of
the eventual workflow.


A test plan documents the strategy that will be used to verify and ensure that a product or system meets its
design specifications and other requirements. A test plan is usually prepared by or with significant input
from test engineers.
Depending on the product and the responsibility of the organization to which the test plan applies, a test plan
may include a strategy for one or more of the following:

Design Verification or Compliance test - to be performed during the development or approval stages
of the product, typically on a small sample of units.

Manufacturing or Production test - to be performed during preparation or assembly of the product in
an ongoing manner for purposes of performance verification and quality control.

Acceptance or Commissioning test - to be performed at the time of delivery or installation of the
product.

Service and Repair test - to be performed as required over the service life of the product.

Regression test - to be performed on an existing operational product, to verify that existing
functionality didn't get broken when other aspects of the environment are changed (e.g., upgrading the
platform on which an existing application runs).

A complex system may have a high level test plan to address the overall requirements and supporting test plans
to address the design details of subsystems and components.
Test plan document formats can be as varied as the products and organizations to which they apply. There are
three major elements that should be described in the test plan: Test Coverage, Test Methods, and Test
Responsibilities. These are also used in a formal test strategy.
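As a rough illustration of those three elements, the core of a test plan could be outlined as structured data; the keys below follow the elements just named, while the contents are hypothetical filler.

```python
# Skeleton of the three major test-plan elements, with hypothetical content.
test_plan = {
    "test_coverage": ["REQ-001", "REQ-002", "REQ-003"],      # what will be tested
    "test_methods": ["functional", "regression", "performance"],
    "test_responsibilities": {
        "functional and regression": "QA team",
        "performance": "performance engineer",
    },
}

for element, content in test_plan.items():
    print(element, "->", content)
```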

Test Strategies
A test strategy is an outline that describes the testing approach of the software development cycle. It is created
to inform project managers, testers, and developers about some key issues of the testing process. This includes
the testing objective, methods of testing new functions, total time and resources required for the project, and
the testing environment.
Test strategies describe how the product risks of the stakeholders are mitigated at the test-level, which types of
testing are to be performed, and which entry and exit criteria apply. They are created based on development
design documents. System design documents are primarily used and occasionally, conceptual design
documents may be referred to. Design documents describe the functionality of the software to be enabled in


the upcoming release. For every stage of development design, a corresponding test strategy should be created
to test the new feature sets.

