MANUAL TESTING
What is software testing ?

Testing is executing a program with the intention of finding defects.

Fault: A condition that causes the software to fail to perform its required function.

Error: The difference between the actual output & the expected output.

Failure: The inability of a system or component to perform a required function according to its specification.

WHY S/W TESTING ?

• To discover defects.
• To prevent users from detecting problems.
• To prove that the s/w has no defects.
• To ensure that the product works as the user expected.
• To learn about the reliability of the software.
• To stay in business.
• To avoid being sued by customers.
• To detect defects early, which helps in reducing the cost of fixing those defects.

WHY EXACTLY IS TESTING DIFFERENT FROM QA/QC?

Testing is the process of creating, implementing & evaluating tests. Testing measures
software quality.

Testing can find faults; when they are removed, software quality is improved.

Simply: Testing means "Quality control".

Quality control measures the quality of a product.
Quality assurance measures the quality of processes used to create a quality product.

Quality Control: the process of inspections, walkthroughs & reviews.

Inspection: An inspection is more formalized than a 'walkthrough', typically with a group of people including a moderator, reader & a recorder to take notes. The subject of the inspection is typically a document such as a requirements specification or a test plan, and the purpose is to find problems and see what is missing, not to fix anything. The primary purpose of an inspection is to detect defects at different stages during a project.
 

Walkthrough: An informal meeting. The motto of the meeting is defined, but the members come without any preparation. The author describes the work product in an informal meeting to his peers or superiors to get feedback, or to inform or explain their work product.

Reviews: Review means re-verification. Reviews have been found to be extremely effective for detecting defects, improving productivity & lowering costs. They provide good check points for the management to study the progress of a particular project. Reviews are also a good tool for ensuring quality control. In short, they have been found to be extremely useful by a diverse set of people and have found their way into the standard management & quality control practice of many institutions. Their use continues to grow.

Quality Assurance:
Quality assurance measures the quality of the processes used to create a quality product. Software QA involves the entire s/w development process: monitoring & improving the process, making sure that any agreed-upon standards & procedures are followed, and ensuring that problems are found and dealt with.

AREAS OF TESTING:

1. Black box testing
2. White box testing
3. Grey box testing

1. Black Box Testing

Black box testing is also called functionality testing. In this testing, testers are asked to test the correctness of the functionality with the help of inputs & outputs.
Black box testing is not based on any knowledge of internal design or code.
Tests are based on requirements & functionality.
Approach
Equivalence Class
Boundary Value Analysis
Error Guessing

Equivalence Class
• For each piece of the specification, generate one or more equivalence classes.
• Label the classes as "valid" or "invalid".
• Generate one test case for each invalid equivalence class.
• Generate test cases that cover as many valid equivalence classes as possible.
Eg: In LIC there are different types of policies:

Policy type   Age
1             0-5 years
2             6-12 years
3             13-21 years
4             21-40 years
5             40-60 years

Here we test each & every point.

Suppose the range is 0-5: we write test cases for 0, 1, 2, 3, 4 & 5.

Here we work out who comes under which policy & write TCs for the valid & invalid classes.
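Below is a minimal sketch (Python, for illustration only) of how the valid and invalid classes above could be expressed in code; the policy table and the policy_for_age helper are assumptions made for this example:

# Hypothetical sketch: LIC age ranges as equivalence classes,
# one representative value per valid class plus invalid values.
VALID_CLASSES = {          # policy type -> (min_age, max_age)
    1: (0, 5),
    2: (6, 12),
    3: (13, 21),
    4: (21, 40),
    5: (40, 60),
}
INVALID_VALUES = [-1, 61]  # ages outside every valid class

def policy_for_age(age):
    """Return the first policy type whose range covers the age."""
    for policy, (lo, hi) in VALID_CLASSES.items():
        if lo <= age <= hi:
            return policy
    return None            # falls in an invalid equivalence class

# One test case per class: a representative value from each valid range,
# and one test case for each invalid value.
for policy, (lo, hi) in VALID_CLASSES.items():
    assert policy_for_age((lo + hi) // 2) is not None
for age in INVALID_VALUES:
    assert policy_for_age(age) is None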

Boundary Value Analysis

• Generate test cases for the boundary values.


• Minimum value, minimum value+1 , minimum value-1
• Maximum value, Maximum value +1 , Maximum value –1

Eg: In LIC,

When the user applies for type-5 insurance, the system asks for the age of the customer. Here the age limit is greater than 40 yrs. & less than 60 yrs.

Here we just test the boundary values.

40-60

Minimum = 40        Maximum = 60
Minimum + 1 = 41    Maximum + 1 = 61
Minimum – 1 = 39    Maximum – 1 = 59

Here we write test cases for these values only.
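A minimal sketch (Python, illustrative only) of generating the six boundary values above; the function name is an assumption:

# Hypothetical sketch: boundary-value test data for the type-5 policy (age 40-60).
def boundary_values(minimum, maximum):
    """Return the classic boundary-value analysis inputs."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(40, 60))   # [39, 40, 41, 59, 60, 61]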

Error Guessing:

Generate test cases that go against the specification.

Eg:
Type-5 policy.
It accepts ages only in the range 40-60, but here we write test cases against that, like 30, 20, 70 & 65.
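A small sketch (Python, illustrative only) of turning those guesses into checks; the accepts_age validator is an assumption standing in for the real one, and a few extra guesses (non-numeric input) have been added:

# Hypothetical sketch: error-guessing inputs for the age field, chosen against
# the 40-60 specification.
ERROR_GUESSES = [30, 20, 70, 65, -5, None, "forty"]

def accepts_age(age):
    # stand-in validator assumed to accept only whole-number ages 40-60
    return isinstance(age, int) and 40 <= age <= 60

for value in ERROR_GUESSES:
    assert not accepts_age(value)   # every guessed value should be rejected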

WHITE BOX TESTING:

White box testing is also called Structural Testing. White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code: statements, branches, paths, conditions & loops.
Structure = 1 Entry + 1 Exit with certain constraints, conditions and loops.
Why do we go for White Box Testing?

To find defects that black box testing alone cannot reveal, by examining the internal logic.
Approach
Basis Path Testing
• Cyclomatic Complexity (McCabe complexity)

Structure Testing
• Condition Testing
• Dataflow Testing
• Loop Testing
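As a rough illustration of basis path testing, consider this small hypothetical function; counting its simple decisions gives the cyclomatic complexity and hence the number of independent paths to cover (a sketch only, in Python):

# Hypothetical example for basis path testing.
def grade(score):
    if score < 0 or score > 100:   # two simple conditions
        raise ValueError("score out of range")
    if score >= 60:                # one more condition
        return "pass"
    return "fail"

# 3 simple decisions -> cyclomatic complexity V(G) = 3 + 1 = 4,
# so basis path testing needs up to 4 independent paths, e.g.:
assert grade(50) == "fail"
assert grade(75) == "pass"
for bad in (-1, 101):              # one path per out-of-range condition
    try:
        grade(bad)
    except ValueError:
        pass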

GREY BOX TESTING

This is just a combination of both black box and white box testing. The tester should have knowledge of both the internals and externals of the function.

The tester should have good knowledge of white box testing & complete knowledge of black box testing.

Grey box testing is especially important for web & internet applications, because the internet is built around loosely integrated components that connect via relatively well-defined interfaces.
PHASES OF TESTING – V MODEL

(Diagram: each development work product on the left is verified, and validated by the corresponding test level on the right)

BRS      <->  Acceptance Test
SRS      <->  System Test
Design   <->  Integration Test
Build    <->  Unit Test

V – MODEL

'V' stands for verification & validation. It is a suitable model for large-scale companies to maintain the testing process. This model defines a co-existence relation between the development process and the testing process.

Drawback → Cost & Time.

PHASES ARE

1) Unit Testing
2) Integration Testing
3) System Testing
4) User Acceptance Testing

1) Unit Testing

The main goal is to test the internal logic of the module. In unit testing the tester is supposed to check each and every micro function. All field-level validations are expected to be tested at this stage of testing. In most cases the developer will do this.

• In unit testing, both black box & white box testing are conducted by developers.
• Depends on LLD
• Follows white box testing techniques.
• Basic path testing
• Loop coverage
• Program technique testing
Approach:
i. Equivalence Class
ii. Boundary value analysis
iii. Error guessing

2) Integration Testing:

The primary objective of integration testing is to discover errors in the interfaces between modules / sub-systems, e.g. between the app server and the database server.

In this, many unit-tested modules are combined into sub-systems. The goal here is to see if the modules can be integrated properly. It follows white box testing techniques to verify the coupling of corresponding modules.
Approach
i. Top-down approach: this is used for new systems.
ii. Bottom-up approach: this is used for existing systems.

Top-down Approach

Testing the main module without the sub modules in place is called the top-down approach. The temporary programs used in place of the sub modules are called stubs.

Bottom-up approach:

Testing the sub modules without the main module in place is called the bottom-up approach. The temporary programs used in place of the main module are called drivers.
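A minimal sketch (Python, illustrative only) of a stub and a driver; the module and function names are assumptions made for this example:

# Hypothetical top-down example: the main module is tested while a sub module
# is replaced by a stub returning canned data.
def interest_rate_stub(policy_type):
    return 0.05                       # stub for the unfinished rate sub module

def premium(amount, policy_type, rate_lookup=interest_rate_stub):
    # main module under test; the real sub module can be plugged in later
    return amount * (1 + rate_lookup(policy_type))

assert premium(1000, 5) == 1050.0

# Hypothetical bottom-up example: a driver exercises a finished sub module
# before the main module that will call it exists.
def discount(age):
    return 0.1 if age >= 60 else 0.0  # completed sub module

def driver():
    # temporary driver standing in for the missing main module
    for age, expected in [(59, 0.0), (60, 0.1)]:
        assert discount(age) == expected

driver()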
3) System Testing:

The primary objective of system testing is to discover errors when the system is tested as a whole. System testing is also called End-to-End testing. The tester is expected to test from login to logout, covering various business functionalities. It is conducted by test engineers and depends on the SRS.

Follows black box testing techniques.


The main goal is to see if the s/w meets its requirements.

Approach:
• Identify the end-to-end business life cycle.
• Design the test data.
• Optimize the end-to-end business life cycle.

4) Acceptance testing:

Acceptance testing is done to get acceptance from the client. The client uses the system against the business requirements, testing with the real-life data of the client on the client side.

Approach:
• Building a team with real-time users, functional users and developers.
• Execution of business test cases.
 

WHAT IS A TEST CASE ?

• A test case is a description of what is to be tested, what data is to be used, and what actions are to be done to check the actual result against the expected result.

• A test case is simply a test with formal steps and instructions.

• Test cases are valuable because they are repeatable, reproducible under the
same/different environments.

• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly.

WHAT ARE THE ITEMS OF A TEST CASE ?

Test case items are:


• Test case number (unique number)
• Pre-condition (the assertion (declaration) about the input condition is called the pre-condition)
• Description (what data is to be used, what data is to be provided & what data is to be inserted)
• Expected output (the assertion about the expected final state of the program is called the post-condition)
• Actual output (whatever the system displays)
• Status (pass/fail)
• Remarks
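A minimal sketch (Python, illustrative only) of how these items could be captured as a structured record; the field values used here are hypothetical:

# Hypothetical test case record using the items listed above.
from dataclasses import dataclass

@dataclass
class TestCase:
    tc_number: str          # unique number
    pre_condition: str      # assertion about the input condition
    description: str        # data to use and actions to perform
    expected_output: str    # assertion about the expected final state (post-condition)
    actual_output: str = "" # whatever the system displays, filled in during execution
    status: str = ""        # pass / fail
    remarks: str = ""

tc = TestCase(
    tc_number="TC_LOGIN_001",
    pre_condition="User account exists and is active",
    description="Enter a valid user name and password, click Login",
    expected_output="Home page is displayed",
)
tc.actual_output = "Home page is displayed"
tc.status = "pass" if tc.actual_output == tc.expected_output else "fail"
print(tc.status)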

CAN THESE TEST CASES BE REUSED ?

Yes, test cases can be reused.


Test cases developed for functionality testing can be used for 
integration/system/regression testing and performance testing with few modifications.

WHAT ARE THE CHARACTERISTICS OF A GOOD TEST CASE ?

A good test case should have the following:


• TC should start with "what you are testing".
• TC should be independent.
• TC should not contain "if" statements.
• TC should be uniform.
Eg: <Action Buttons>, "Links".
 

ARE THERE ANY ISSUES TO BE CONSIDERED ?

Yes, there are a few issues…


• All the TCs should be traceable.
• There should not be too many duplicate test cases.
• Outdated test cases should be cleared off.
• All the test cases should be executable.

FURRPSC MODEL: (Types of Testing)

F → Functionality Testing
U → Usability Testing
R → Reliability Testing
R → Regression Testing
P → Performance Testing
S → Scalability Testing
C → Compatibility Testing

1) Functionality Testing

To confirm that all the requirements are covered. Functional requirements specify which output should be produced from the given input. They describe the relationship between the input and the output of the system.
A major part of black box testing is called functional testing.
Eg:
Here we test…,
• Input domain (whether it takes the right input values or not)
• Error handling (whether the application reports wrong data or not)
• URL checking (only for web applications: whether all links work correctly or not)

  Testing Approach :
• Equivalence class
• Boundary value analysis
• Error guessing

2) Usability Testing:

To test the ease (comfort, facility) and user-friendliness of the system.

Approach:
Qualitative and quantitative Heuristic Check List.

Classifications of checking:
• Accessibility
• Clarity of communication
• Consistency
• Navigation
• Design & maintenance
• Visual representation

Qualitative approach

i. Each and every function should be available from all the pages of the site.
ii. The user should be able to submit a request within 4-5 actions.
iii. A confirmation message should be displayed for each submit.

Quantitative approach:

The average rating of 10 different people should be considered as the final result.

Eg:
Some people may feel the system is more user friendly if the submit button is on the left side of the screen. At the same time, others may feel it is better if the submit button is placed on the right side.
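A small sketch (Python, illustrative only) of the quantitative approach; the ratings and the 1-5 scale are assumptions:

# Hypothetical sketch: average the usability ratings of 10 different users
# and take the mean as the final result.
ratings = [4, 3, 5, 4, 4, 2, 5, 3, 4, 4]    # one rating per user, 1-5 scale
final_result = sum(ratings) / len(ratings)
print(round(final_result, 1))                # 3.8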

3) Reliability Testing

Defines how well the software meets its requirements.

The objective is to find the mean time between failures / the time available under a specific load pattern, and the mean time for recovery.

Eg:
23 hours/day availability & 1 hour for recovery (system).
City bank has 4 servers in each region; every 6 hrs. it will change servers.

Approach

RRT ( Ration Real time tool)

4) Regression Testing

To check that the new functionalities have been incorporated correctly without breaking the existing functionalities.

Approach: Automation Tool.

The bugs need to be communicated and assigned to developers who can fix them. After the problem is resolved, the fixes should be re-tested, and a determination made regarding the requirements for regression testing, to check that the fixes did not create problems elsewhere.
 

5) Performance Testing

The primary objective of performance testing is "to demonstrate that the system functions to specifications with acceptable response times while processing the required transaction volume on a production-sized database."
Objectives:
• Assessing the system capacity for growth.
• Identifying weak points in the architecture.
• Detect obscure bugs in the software.

Performance parameters:

• Request – response time
• Transactions per second
• Turnaround time
• Page download time
• Throughput
Approach:

Classification of performance testing:
• Load test
• Volume test
• Stress test
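The parameters listed above can be derived from raw request timings; a minimal sketch follows (Python, with hypothetical sample data):

# Hypothetical sketch: computing request-response time and throughput
# (transactions per second) from recorded start/end times of transactions.
transactions = [
    (0.00, 0.45),   # (start seconds, end seconds) per transaction
    (0.10, 0.80),
    (0.50, 1.20),
    (1.00, 1.90),
]
response_times = [end - start for start, end in transactions]
avg_response = sum(response_times) / len(response_times)
elapsed = max(end for _, end in transactions) - min(start for start, _ in transactions)
throughput = len(transactions) / elapsed     # transactions per second

print(f"average response time: {avg_response:.2f} s")
print(f"throughput: {throughput:.2f} transactions/s")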

Stress testing:

Finding the break point of the application: the maximum number of users that the application can handle at the same time.

Approach:

RCQE
• Repeatedly working on the same functionality.
• Critical Query Execution.
• Emulating peak load.

Volume testing:

Execution of our application under huge amounts of resources is called volume testing. We may use this test to find out the threshold point.

Approach: Data Profile.
 

Load testing:

Testing with the load that the customer expects (not all at the same time). The load is increased continuously until the customer's required load is reached: gradually increasing the load on the application and checking the performance.

Approach: Load profile.

6) Scalability testing:

To find the maximum number of users the system can handle (the customer will give the maximum number).

Approach: performance tools.

Classification:
• Network scalability
• Server scalability
• Application scalability
7) Compatibility testing:

How a product will perform over a wide range of hardware, software & network configurations, and to isolate the specific problems.

Approach: ET Approach.

Environment Selection :
• Understanding the end users application environment.
• Importance of selecting both old browser & new browser.
• Selection of the operating system.

Test Bed Creation:
• Partition of the hard disk.
• Whether our application runs on all customer-expected platforms or not.
• Platforms means the required system software to run our application, such as operating systems, compilers, interpreters, browsers, etc.

What is the software life cycle ?


 

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration testing, maintenance, updates, re-testing, phase-out, and other aspects.

When should we start designing test cases / testing?

V model is the most suitable way to follow for deciding when to start writing test
cases and conduct testing.
Testing limitations: ?

• We can only test against system requirements.
  o May not detect errors in the requirements.
  o Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
• Exhaustive (total) testing is impossible in the present scenario.
• Time and budget constraints normally require very careful planning of the testing effort.
• Compromise between thoroughness and budget.
• Test results are used to make business decisions for release dates.

Test stop criteria:


• Maximum number of test cases successfully executed.
• Uncover minimum number of defects (16/1000 stm).
• Statement coverage.
• Testing uneconomical.
• Reliability model.

Tester responsibilities :
• Follow the test plans, scripts etc, as documented.
• Report faults objectively and factually.
• Check tests are correct before reporting s/w faults.
• Assess risk objectively.
• Prioritize what you report.
• Communicate the truth.

When should we prioritize tests?

We can't test everything. There is never enough time to do all the testing you would like, so what tests should you do?
Prioritize tests so that, whenever you stop testing, you have done the best testing in the time available.
 

Tips :
• Possible ranking criteria (all risk based)
• Test where a failure would be most severe
• Test where failures would be most visible.
• Take the help of the customer to understand what is most important to him.
• What is most critical to the customer's business.
• Areas changed most often.
• Areas with most problems in the past.
• Most complex areas, or technically critical.

Software :

Software is a collection / set of instructions, programs & documents.

Software development life cycle (SDLC) :

Before starting the analysis we first check the feasibility of the project/work/system. If we feel it is feasible then we will go to the SDLC phases.

In feasibility we will see the below functions.

• Finance feasibility
• Cost feasibility
• Resource feasibility
• Ability to accept

SDLC includes 4 phases :

Analysis

Design

Coding

Testing
 

Analysis:
i. Requirements analysis is done to understand the problem the software system is to solve.
ii. Understanding the requirements of the system is a major task.
iii. Analysis is about identifying what is needed from the system.
iv. The main goal of the requirements specification is to produce the SRS document.
v. Once understood, the requirements must be specified in the document.

Design:
i. The purpose of the design is to plan a solution to the problem specified by the requirements document.
ii. The first step of this phase is moving from the problem domain to the solution domain.
iii. The output of this phase is the design document.
iv. This document is similar to a blueprint.

Coding:
i. Once the design is complete, most of the major decisions about the system have been made.
ii. The goal of the coding phase is to translate the design.
iii. The coding affects both testing & maintenance. Well-written code can reduce the testing & maintenance effort, because the testing and maintenance costs of s/w are much higher than the coding cost. So the goal of coding should be to reduce the testing & maintenance effort.
 
Testing:
i. Testing is the major quality control measure used during s/w development. Its basic function is to detect errors in the s/w.
ii. After coding, computer programs are available that can be executed for testing purposes; different levels of testing are used.
iii. The starting point of testing is unit testing. A module is tested separately. This is done by the coder himself, simultaneously along with the coding of the module.
iv. After this, modules are gradually integrated into subsystems, which are then integrated to form the entire system. We do integration tests.
v. System testing: the system is tested against the requirements to see if all the requirements specified by the documents are met.
vi. Acceptance testing: on the client side, on the real-life data of the client.

TYPES OF SOFTWARE MODELS

1. Water Fall Model:

It includes all phases of SDLC. This is the simplest process model.
 

O/P in water fall model:

Requirements document, project plan, system design document, detailed design document, test plan and test reports, final code, software manuals, review reports.

Drawback:
Once the requirements are frozen they cannot be changed, i.e. changes cannot be done after the requirements are frozen.

Uses:
It is well suited for routine types of projects where the requirements are well understood, & for small projects.

2. Prototype Model:

In this model the requirements are not frozen before design or coding can proceed. The prototype is developed based on the currently known requirements. It is a sample of how the actual system will look.

(Diagram: Requirement Analysis → Design → Code → Test)

3. Iterative Model:

In this model we can make changes at any level, but all four phases of SDLC will take place again.

It is like a continuous model.

(Diagram: repeated cycles of Analysis → Design → Code → Test)
 

4. Spiral Model:

In this model the system is divided into modules and each module follows the phases of SDLC. It is a good & successful model.

(Diagram: each module passes through Analysis → Design → Code → Test in a spiral)
 

TEST LIFE CYCLE(TLC)

TLC PHASES:

System study

Scope/Approach/Estimation

Test Plan Design

Test Case Design

Test Case Review

Test Case Execution

Defect Handling

GAP Analysis

1. System study:

We will study the particular s/w or project/system.

• Domain:
There may be different types of domains like banking, finance, insurance, marketing, real-time, ERP, SIEBEL, manufacturing etc.
2. Software:

Front End / Back End / Process.

Front End: GUI, VB, D2K.
Back end: Oracle, Sybase, SQL Server, MS Access, DB2.
Process: Languages. Eg: C, C++, Java, etc.
 

• Hardware: servers, internet, intranet applications.

• Functional Point/LOC:
  No. of lines written for a micro function.
  1 F.P = 10 lines of code.

• No. of pages of the software/system
• No. of resources of the software/system
• No. of days to be taken to develop the software/system
• No. of modules in the software/system
• Pick one priority → High / Medium / Low.

3. Scope/Approach/Estimation:

Scope:
  What is to be tested.
  What is not to be tested.

Eg: (Matrix: each module listed against the U / I / S / A test levels, marking which levels apply)

Approach: Test Life Cycle (all the phases of TLC)

Estimation:
LOC (lines of code) / F.P (functional point) / Resource.
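Using the 1 F.P = 10 lines-of-code rule of thumb stated earlier, a rough estimation sketch (Python; the functional point count is a hypothetical figure):

# Hypothetical sketch: functional-point based size estimate.
LOC_PER_FP = 10              # 1 F.P = 10 lines of code
functional_points = 120      # assumed count for the system being estimated
estimated_loc = functional_points * LOC_PER_FP
print(estimated_loc)         # 1200 lines of code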

4. Test Plan Design:
• About the client/company (details of the company)
• Reference documents → used to design the documents.
• Summary of the APP → overview.
• Each testing:
  o Definition
  o Technique
  o Start criteria
  o Stop criteria
• Resources → includes roles/responsibilities
• Defects
• Schedules → risks / contingencies / mitigation (how much we can recover)
• Deliverables → to whom.

5. Test case Design: (heart of testing)

• A test case is a description of what is to be tested, what data is to be used and what actions are to be done to check the actual result against the expected result.
• A test case is simply a test with formal steps and instructions.
• Test cases are valuable because they are repeatable, reproducible under the same/different environments and easy to improve upon with feedback.

6. Test case items are:

• TC no.
• Pre-condition
• Description
• Expected output
• Actual output
• Status
• Remarks

7. Test Case Review:

Review means re-verification of the test case. These are included in the review format.

First Time Right (FTR)

TYPES OF REVIEWS:
• Peer – peer review → same level
• Team lead review
• Team Manager review

REVIEW PROCESS:

• Take a demo of the functionality.
• Go through the use case / functional specification.
• Go through the TCs & find out the gap between Test Cases vs. Use Cases.
• Submit the review report.

8. Test Case Execution:

Test case execution mainly includes 3 things.

i. Input:
• Test cases
• Test data
• Review comments
• SRS
• BRS
• System availability
• Data availability
• Database
• Review doc

ii. Process: Test it.

iii. Output:
• Raise the defect
• Take a screen shot & save it.

9. Defect Handling:

Identify the following things in defect handling.

• Defect No./Id
• Description
• Origin TC id
• Severity
  o Critical
  o Major
  o Medium
  o Minor
  o Cosmetic
• Priority
  o High
  o Medium
  o Low
• Status

Following is the flow of defect handling:

• Raise the defect
• Review it internally
• Submit to developer

We have to declare the severity of the defect & after that declare the priority. According to the priority, we will test the defect.
10. GAP Analysis:

Finding the difference between the client requirement & the application developed.

Deliverables:
• Test plan
• Test scenarios
• Defect reports

BRS vs. SRS.
SRS vs. Test Case.
TC vs. Defect.
Defect is open / closed.

TEST PLAN DESIGN :

What is a Test Plan: A software project test plan is a document that describes the objectives, scope, approach & focus of a software testing effort. The completed document will help people outside the test group understand the "why & how" of product validation.

WHAT IS A DEFECT:

In computer technology, a defect is a coding error in a computer program. It is defined by saying that "A software error is present when the program does not do what its end user reasonably expects it to do."

WHO CAN REPORT A DEFECT:


Anyone who has been involved in the software development lifecycle and who is using the software can report a defect. In most cases defects are reported by the testing team.
 

A short list of people expected to report bugs.


• Testers / QA engineers.
• Developers.
• Technical support.
• End users.
• Sales and marketing engineers.

TYPES OF DEFECTS:
• Cosmetic flaw
• Data corruption
• Data loss
• Documentation issue
• Incorrect operation
• Installation problem
• Missing feature
• Slow performance
• Unexpected behavior
• Unfriendly behavior

HOW TO DECIDE THE SEVERITY OF THE DEFECT:

High: A defect occurred due to the inability of a key function to perform. The problem causes the system to hang or the user to be dropped out of the system. Response time / turnaround time: the defect should be responded to within 24 hrs & the situation should be resolved before test exit.

Medium: A defect occurred that severely restricts the system, such as the inability to use a major function of the system. There is no acceptable workaround, but the problem does not inhibit the testing of other functions. Response time / turnaround time: a response or action plan should be provided within 3 working days.

Low: A defect occurred that places a minor restriction on a function that is not critical. There is an acceptable workaround for the defect. Response time / turnaround time: a response or action plan should be provided within 5 working days.

DEFECT SEVERITY vs. DEFECT PRIORITY:

Severity: How much the defect is affecting the application.

Priority: Relative importance of the defect; how fast the developer has to take up the defect.

• The general rule for fixing defects depends on the severity: all high-severity defects should be fixed first.
• This may not be the same in all cases; sometimes, even though the severity of a bug is high, it may not be taken up as high priority.
• At the same time, a low-severity bug may be considered high priority.

What kind of testing should be considered ?

1. BLACK BOX TESTING:

Not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

2. WHITE BOX TESTING:

Based on knowledge of the internal logic of an application's code. These tests are based on coverage of code statements, branches, paths, conditions, loops, etc.

3. INTEGRATION TESTING:

Testing of combined parts of an application to determine whether they function together correctly.

4. FUNCTIONAL TESTING:

Black box type of testing. This type of testing should be done by testers. This does not mean that the programmers should not check that their code works before releasing it.

5. REGRESSION TESTING:

It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

6. SYSTEM TESTING:

Black box type testing that is based on overall requirements specifications. Covers all combined parts of a system.

7. ACCEPTANCE TESTING:

Final testing based on specifications of the end-user or customer, or based on use by end-users / customers over some limited period of time.
 

8. RECOVERY TESTING:

Testing how well a system recovers from crashes, hardware failures or other catastrophic (sudden calamity) problems.

9. SECURITY TESTING:

How well the system protects against unauthorized internal or external access.

10. COMPATIBILITY TESTING:

Testing how well software performs in a particular hardware / software / network etc. environment.

11. ALPHA TESTING:

Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

12. BETA TESTING:

Testing when development and testing are essentially completed and final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers or testers.

13. SANITY TESTING:

This is done before detailed testing: to check whether the application is stable, i.e. whether the build released by the development team is good enough to conduct complete testing or not.

14. SMOKE TESTING:

After testing, checking whether the major & medium or critical functions are closed or not.

15. MONKEY TESTING:

Testing like a monkey, with no proper approach: taking any function and testing it. Coverage of the main activities during testing is called monkey testing (e.g. if given one day for testing).

16. MUTANT TESTING:

We intentionally inject a defect into the application and test whether it is detected.


17. BIG BANG TESTING: (Informal testing)

A single stage of testing after completion of the entire coding is called Big Bang testing (no reviews, i.e. direct system testing).

18. BIG BANG THEORY:

An approach for integration, used when checking the errors between modules or sub-modules.

19. AD-HOC TESTING:

Testing done in a shortcut way, not following the sequential order mentioned in the test cases or test plan.

20. PATH TESTING:

To check every possible condition with at least one navigation of the flow.

SOFTWARE QUALITY:

• Meet customer requirements.
• Meet customer expectations.
• Possible cost.
• Time to market.

BRS:
It specifies the needs of the customer.
Total business logic documents.
SRS:
It specifies the functional specifications to develop.
HLD:
High level design document.
 

It specifies interconnection of modules.


LLD:
It specifies Internal logic of sub-modules.

TESTING TEAM:

Quality Control

Quality Analyst

Test Manager

Test Lead

Test Engineers

REVIEWS DURING ANALYSIS:


• Conducted by business analysts
• Verifies completeness and correctness in BRS & SRS
• Are they the right requirements?
• Are they complete?
• Are they reasonable?
• Are they achievable?
• Are they testable?

REVIEWS DURING DESIGN:


• Conducted by designers
• Verifies completeness and correctness in HLD & LLD
• Is the design good?
• Is the design complete?
• Is the design possible?
• Does the design meet the requirements?

WHY DOES S/W HAVE BUGS:


• Programming errors -- programmers, like anyone else, can make mistakes.
• Changing requirements.
• Poorly documented code -- it is tough to maintain and modify code that is badly written or poorly documented; the result is bugs.
• Software development tools: visual tools, class libraries, compilers, scripting tools etc. often introduce their own bugs or are poorly documented, resulting in added bugs.

WHAT IS VERIFICATION & VALIDATION:

VERIFICATION:
Typically involves reviews and meetings to evaluate (estimate, calculate) documents, plans, code, requirements and specifications. This can be done with check lists, issue lists, walkthrough & inspection meetings.

VALIDATION:
Typically involves actual testing and takes place after verifications are completed.

SEVERITY:

Relative impact on the system, i.e. how far the application is affected by this defect (low, medium, high, critical).

PRIORITY:

Relative importance of the defect (i.e. giving preference to the defect: low, medium, high).

Which life cycle method is followed in your organization:

Now we are using the V model, and we also include some other methods like prototype and spiral in a single application.

What is software Quality:

Quality s/w is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

SEI:
Software Engineering Institute.
Initiated by the U.S. defense department to help improve software development
 processes.

CMM:

Capability Maturity Model, developed by the SEI. It is a model of 5 levels of organizational maturity that determine effectiveness in delivering quality software.

ANSI --- American National Standards Institute

Will automated testing tools make testing easier?

Possibly. For a small project, the time needed to learn and implement them may not be worth it. For larger projects, or ongoing long-term projects, they can be valuable.

WEB TEST TOOL:

To check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.

WHAT MAKES A GOOD TEST ENGINEER ?

A good test engineer has a “test to break “ attitude (approach, manner) an ability to
take the point of view of the customer, a strong desire for quality and attention to details.

WHAT'S THE ROLE OF DOCUMENTATION IN QA?

Critical. QA practices should be documented such that they are repeatable: specifications, designs, business rules, inspection reports, configurations, code changes, test plans, etc.

WHATS A TEST CASE ?

A test case is a document that describes an input action or event and an expected
response, to determine if a feature of an application is working correctly.

HOW CAN IT BE KNOWN WHEN TO STOP TESTING?

This can be difficult to determine. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines etc.)
• Test cases completed with a certain percentage passed
• Test budget depleted (used up)
• Bug rate falls below a certain level
• Beta or alpha testing period ends.

WHAT CAN BE DONE IF REQUIREMENTS ARE CHANGING CONTINUOUSLY:

• Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
• The project's initial schedule should allow for some extra time to accommodate the possibility of changes.
• Focus less on detailed test plans and test cases and more on ad-hoc testing.

WHAT IS THE DIFFERENCE BETWEEN A PRODUCT AND A PROJECT ?


PRODUCT:

Developing a product without interaction with the client before the product release.

PROJECT:

Developing a product based on the client's needs or requirements.
WHAT IS A TEST PROCEDURE ?

Execution of one or more test cases.

WHAT ARE THE DEFECT PARAMETERS ?

There are 5 parameters.


• Source
• Error Description
• Status
• Priority
• Severity

WHAT IS A TRACEABILITY MATRIX?


To map the test requirement and the test case ID, whether it is fulfilling the coverage
or not.
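A minimal sketch (Python, illustrative only) of a traceability matrix as a requirement-to-test-case mapping used to flag coverage gaps; all IDs are hypothetical:

# Hypothetical traceability matrix: requirement id -> list of test case ids.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                  # no test case yet -> coverage gap
}

uncovered = [req for req, tcs in traceability.items() if not tcs]
print("Requirements without coverage:", uncovered)   # ['REQ-003']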

WHAT IS TEST STRATEGY ?

Applying types of testing techniques to uncover the maximum number of bugs.

WHAT IS CONFIGURATION MANAGEMENT ?

It is version control. It covers the processes used to control, co-ordinate and track the requirements, documentation, the problems faced, change requests, designs and the tools to be used, the changes made, and who made the changes.

WHEN DO YOU START WRITING TEST CASES?

Once the requirements are frozen, we begin writing test cases.

TESTING TECHNIQUE:

Way of executing and preparing the test cases.

TESTING METHODOLOGIES:

Way of developing the test.


 

WHAT'S THE DIFFERENCE BETWEEN IST & UAT?

Particulars   IST                              UAT
Acronym       Integration System Testing       User Acceptance Test
Base line     Functional Specification Doc's   Business Requirements
Location      Off site                         On site
Data          Simulated                        Live data
Purpose       Validation & Verification        User needs
