
Certified Tester Foundation Level Syllabus

Released Version 2011

International Software Testing Qualifications Board


Copyright Notice
This document may be copied in its entirety, or extracts made, if the source is acknowledged.
Copyright Notice © International Software Testing Qualifications Board (hereinafter called ISTQB).
ISTQB is a registered trademark of the International Software Testing Qualifications Board.
Copyright © 2011 the authors for the update 2011 (Thomas Müller (chair), Debra Friedenberg, and the ISTQB WG Foundation Level)
Copyright © 2010 the authors for the update 2010 (Thomas Müller (chair), Armin Beer, Martin Klonk, Rahul Verma)
Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra Friedenberg and Erik van Veenendaal)
Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).
All rights reserved.
The authors hereby transfer the copyright to the International Software Testing Qualifications Board (ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have agreed to the following conditions of use:
1) Any individual or training company may use this syllabus as the basis for a training course if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus and provided that any advertisement of such a training course may mention the syllabus only after submission for official accreditation of the training materials to an ISTQB recognized National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or other derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or its translation) to other parties.



Revision History

ISTQB 2011 (effective 01-Apr-2011): Certified Tester Foundation Level Syllabus Maintenance Release (see Appendix E, Release Notes)
ISTQB 2010 (effective 30-Mar-2010): Certified Tester Foundation Level Syllabus Maintenance Release (see Appendix E, Release Notes)
ISTQB 2007 (01-May-2007): Certified Tester Foundation Level Syllabus Maintenance Release
ISTQB 2005 (01-July-2005): Certified Tester Foundation Level Syllabus
ASQF V2.2 (July-2003): ASQF Syllabus Foundation Level Version 2.2 "Lehrplan Grundlagen des Software-testens"
ISEB V2.0 (25-Feb-1999): ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999



Table of Contents

Acknowledgements
Introduction to this Syllabus
    Purpose of this Document
    The Certified Tester Foundation Level in Software Testing
    Learning Objectives/Cognitive Level of Knowledge
    The Examination
    Accreditation
    Level of Detail
    How this Syllabus is Organized
1. Fundamentals of Testing (K2)
    1.1 Why is Testing Necessary (K2)
        1.1.1 Software Systems Context (K1)
        1.1.2 Causes of Software Defects (K2)
        1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)
        1.1.4 Testing and Quality (K2)
        1.1.5 How Much Testing is Enough? (K2)
    1.2 What is Testing? (K2)
    1.3 Seven Testing Principles (K2)
    1.4 Fundamental Test Process (K1)
        1.4.1 Test Planning and Control (K1)
        1.4.2 Test Analysis and Design (K1)
        1.4.3 Test Implementation and Execution (K1)
        1.4.4 Evaluating Exit Criteria and Reporting (K1)
        1.4.5 Test Closure Activities (K1)
    1.5 The Psychology of Testing (K2)
    1.6 Code of Ethics
2. Testing Throughout the Software Life Cycle (K2)
    2.1 Software Development Models (K2)
        2.1.1 V-model (Sequential Development Model) (K2)
        2.1.2 Iterative-incremental Development Models (K2)
        2.1.3 Testing within a Life Cycle Model (K2)
    2.2 Test Levels (K2)
        2.2.1 Component Testing (K2)
        2.2.2 Integration Testing (K2)
        2.2.3 System Testing (K2)
        2.2.4 Acceptance Testing (K2)
    2.3 Test Types (K2)
        2.3.1 Testing of Function (Functional Testing) (K2)
        2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
        2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)
        2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)
    2.4 Maintenance Testing (K2)
3. Static Techniques (K2)
    3.1 Static Techniques and the Test Process (K2)
    3.2 Review Process (K2)
        3.2.1 Activities of a Formal Review (K1)
        3.2.2 Roles and Responsibilities (K1)
        3.2.3 Types of Reviews (K2)
        3.2.4 Success Factors for Reviews (K2)
    3.3 Static Analysis by Tools (K2)
4. Test Design Techniques (K4)
    4.1 The Test Development Process (K3)
    4.2 Categories of Test Design Techniques (K2)
    4.3 Specification-based or Black-box Techniques (K3)
        4.3.1 Equivalence Partitioning (K3)
        4.3.2 Boundary Value Analysis (K3)
        4.3.3 Decision Table Testing (K3)
        4.3.4 State Transition Testing (K3)
        4.3.5 Use Case Testing (K2)
    4.4 Structure-based or White-box Techniques (K4)
        4.4.1 Statement Testing and Coverage (K4)
        4.4.2 Decision Testing and Coverage (K4)
        4.4.3 Other Structure-based Techniques (K1)
    4.5 Experience-based Techniques (K2)
    4.6 Choosing Test Techniques (K2)
5. Test Management (K3)
    5.1 Test Organization (K2)
        5.1.1 Test Organization and Independence (K2)
        5.1.2 Tasks of the Test Leader and Tester (K1)
    5.2 Test Planning and Estimation (K3)
        5.2.1 Test Planning (K2)
        5.2.2 Test Planning Activities (K3)
        5.2.3 Entry Criteria (K2)
        5.2.4 Exit Criteria (K2)
        5.2.5 Test Estimation (K2)
        5.2.6 Test Strategy, Test Approach (K2)
    5.3 Test Progress Monitoring and Control (K2)
        5.3.1 Test Progress Monitoring (K1)
        5.3.2 Test Reporting (K2)
        5.3.3 Test Control (K2)
    5.4 Configuration Management (K2)
    5.5 Risk and Testing (K2)
        5.5.1 Project Risks (K2)
        5.5.2 Product Risks (K2)
    5.6 Incident Management (K3)
6. Tool Support for Testing (K2)
    6.1 Types of Test Tools (K2)
        6.1.1 Tool Support for Testing (K2)
        6.1.2 Test Tool Classification (K2)
        6.1.3 Tool Support for Management of Testing and Tests (K1)
        6.1.4 Tool Support for Static Testing (K1)
        6.1.5 Tool Support for Test Specification (K1)
        6.1.6 Tool Support for Test Execution and Logging (K1)
        6.1.7 Tool Support for Performance and Monitoring (K1)
        6.1.8 Tool Support for Specific Testing Needs (K1)
    6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
        6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
        6.2.2 Special Considerations for Some Types of Tools (K1)
    6.3 Introducing a Tool into an Organization (K1)
7. References
    Standards
    Books
8. Appendix A - Syllabus Background
    History of this Document
    Objectives of the Foundation Certificate Qualification
    Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
    Entry Requirements for this Qualification
    Background and History of the Foundation Certificate in Software Testing
9. Appendix B - Learning Objectives/Cognitive Level of Knowledge
    Level 1: Remember (K1)
    Level 2: Understand (K2)
    Level 3: Apply (K3)
    Level 4: Analyze (K4)
10. Appendix C - Rules Applied to the ISTQB Foundation Syllabus
    10.1.1 General Rules
    10.1.2 Current Content
    10.1.3 Learning Objectives
    10.1.4 Overall Structure
11. Appendix D - Notice to Training Providers
12. Appendix E - Release Notes
    Release 2010
    Release 2011
13. Index



Acknowledgements

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquier, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal) and all National Boards for the suggestions for the current version of the syllabus.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2010): Thomas Müller (chair), Rahul Verma, Martin Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pedersen, Debra Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie Ulrich, Pete Williams, Erik van Veenendaal) and all National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2007): Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2005): Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal and the review team and all National Boards for their suggestions.



Introduction to this Syllabus

Purpose of this Document

This syllabus forms the basis for the International Software Testing Qualification at the Foundation Level. The International Software Testing Qualifications Board (ISTQB) provides it to the National Boards for them to accredit the training providers and to derive examination questions in their local language. Training providers will determine appropriate teaching methods and produce courseware for accreditation. The syllabus will help candidates in their preparation for the examination.
Information on the history and background of the syllabus can be found in Appendix A.

The Certified Tester Foundation Level in Software Testing

The Foundation Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This Foundation Level qualification is also appropriate for anyone who wants a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants. Holders of the Foundation Certificate will be able to go on to a higher-level software testing qualification.

Learning Objectives/Cognitive Level of Knowledge

Learning objectives are indicated for each section in this syllabus and classified as follows:
o K1: remember
o K2: understand
o K3: apply
o K4: analyze

Further details and examples of learning objectives are given in Appendix B.
All terms listed under "Terms" just below chapter headings shall be remembered (K1), even if not explicitly mentioned in the learning objectives.

The Examination

The Foundation Level Certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable.
The format of the examination is multiple choice.
Exams may be taken as part of an accredited training course or taken independently (e.g., at an examination center or in a public exam). Completion of an accredited training course is not a prerequisite for the exam.

Accreditation

An ISTQB National Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to have an ISTQB examination as part of the course.
Further guidance for training providers is given in Appendix D.



Level of Detail

The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the Foundation Level
o A list of information to teach, including a description, and references to additional sources if required
o Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
o A list of terms that students must be able to recall and understand
o A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.

How this Syllabus is Organized

There are six major chapters. The top-level heading for each chapter shows the highest level of learning objectives that is covered within the chapter and specifies the time for the chapter. For example:

2. Testing Throughout the Software Life Cycle (K2)                    115 minutes

This heading shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2 (but not K3), and it is intended to take 115 minutes to teach the material in the chapter.
Within each chapter there are a number of sections. Each section also has the learning objectives and the amount of time required. Subsections that do not have a time given are included within the time for the section.



1. Fundamentals of Testing (K2)                                        55 minutes

Learning Objectives for Fundamentals of Testing

The objectives identify what you will be able to do following the completion of each module.

1.1 Why is Testing Necessary? (K2)

LO-1.1.1 Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company (K2)
LO-1.1.2 Distinguish between the root cause of a defect and its effects (K2)
LO-1.1.3 Give reasons why testing is necessary by giving examples (K2)
LO-1.1.4 Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality (K2)
LO-1.1.5 Explain and compare the terms error, defect, fault, failure, and the corresponding terms mistake and bug, using examples (K2)

1.2 What is Testing? (K2)

LO-1.2.1 Recall the common objectives of testing (K1)
LO-1.2.2 Provide examples for the objectives of testing in different phases of the software life cycle (K2)
LO-1.2.3 Differentiate testing from debugging (K2)

1.3 Seven Testing Principles (K2)

LO-1.3.1 Explain the seven principles in testing (K2)

1.4 Fundamental Test Process (K1)

LO-1.4.1 Recall the five fundamental test activities and respective tasks from planning to closure (K1)

1.5 The Psychology of Testing (K2)

LO-1.5.1 Recall the psychological factors that influence the success of testing (K1)
LO-1.5.2 Contrast the mindset of a tester and of a developer (K2)



1.1 Why is Testing Necessary (K2)                                      20 minutes

Terms
Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

1.1.2 Causes of Software Defects (K2)

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so. (The code sketch at the end of this section gives a small example of the chain.)
Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.
Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.
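To make the error → defect → failure chain concrete, here is a minimal sketch. It is an illustration only: the discount rule and the function name are assumptions, not syllabus content. A programmer's mistake introduces a defect into the code, and a failure is observed only when the defective statement is executed with an input that triggers it.

```python
def discount_rate(quantity: int) -> float:
    """Intended rule: orders of 100 items or more get a 10% discount."""
    # Defect: the programmer's error (thinking "more than" instead of
    # "at least") produced the wrong comparison operator below.
    if quantity > 100:        # should be: quantity >= 100
        return 0.10
    return 0.0

print(discount_rate(150))    # 0.1 -- defective line executed, result still correct
print(discount_rate(50))     # 0.0 -- correct, defect not triggered
print(discount_rate(100))    # 0.0 -- FAILURE: should be 0.1
```

All three calls execute the defective statement, but only the boundary input exposes a failure, which is why defects may, but need not, result in failures.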

1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.
Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

1.1.4 Testing and Quality (K2)

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see "Software Engineering - Software Product Quality" (ISO 9126).
Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.
Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.
Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).


1.1.5 How Much Testing is Enough? (K2)

Deciding how much testing is enough should take account of the level of risk, including technical, safety, and business risks, and project constraints such as time and budget. Risk is discussed further in Chapter 5.
Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.



1.2 What is Testing? (K2)                                              30 minutes

Terms
Debugging, requirement, review, test case, testing, test objective

Background

A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.
Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.
Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.
Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making
o Preventing defects

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.
Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.
Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for these activities is usually testers test and developers debug.
The process of testing and the testing activities are explained in Section 1.4.



1.3 Seven Testing Principles (K2)                                      35 minutes

Terms
Exhaustive testing

Principles

A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1: Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2: Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts. (A short arithmetic sketch at the end of this section illustrates why.)

Principle 3: Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4: Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5: Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6: Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7: Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
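To put a rough number on Principle 2, consider the following arithmetic sketch. It is an assumed example, not part of the syllabus text: a function taking two independent 32-bit integer inputs, with one (optimistically fast) test executed per input combination.

```python
# Exhaustive testing would need one test per input combination.
combinations = (2 ** 32) ** 2            # two 32-bit inputs: about 1.8e19

seconds_per_test = 1e-6                  # optimistic: one microsecond per test
seconds_per_year = 60 * 60 * 24 * 365
years = combinations * seconds_per_test / seconds_per_year

print(f"{combinations:.2e} combinations, about {years:,.0f} years to run")
# About 585,000 years of non-stop execution -- hence risk analysis and
# priorities, not exhaustive testing, must focus the testing effort.
```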



1.4 Fundamental Test Process (K1)                                      35 minutes

Terms
Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background

The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.
The fundamental test process consists of the following main activities:
o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

1.4.1 Test Planning and Control (K1)

Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.
Test planning and control tasks are defined in Chapter 5 of this syllabus.

1.4.2 Test Analysis and Design (K1)

Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.
The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level¹ (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases (a small sketch follows the footnote below)
¹ The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.
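As promised in the traceability task above, here is a small illustrative sketch (the requirement and test case identifiers are hypothetical): bi-directional traceability can be kept as two linked mappings, so that a change on either side reveals what is affected on the other.

```python
# Forward traceability: test basis items (requirements) -> designed test cases.
req_to_tests = {
    "REQ-7 (discount for large orders)": ["TC-21", "TC-22"],
    "REQ-8 (reject negative quantities)": ["TC-23"],
}

# Derive the backward direction: test case -> requirements it covers.
test_to_reqs: dict[str, list[str]] = {}
for req, test_cases in req_to_tests.items():
    for tc in test_cases:
        test_to_reqs.setdefault(tc, []).append(req)

# If REQ-7 changes, req_to_tests shows which test cases to revisit;
# if TC-23 fails, test_to_reqs shows which requirement is at risk.
print(test_to_reqs["TC-23"])             # ['REQ-8 (reject negative quantities)']
```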



1.4.3 Test Implementation and Execution (K1)

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.
Test implementation and execution has the following major tasks:
o Finalizing, implementing and prioritizing test cases (including the identification of test data)
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts (a minimal sketch follows this list)
o Creating test suites from the test procedures for efficient test execution
o Verifying that the test environment has been set up correctly
o Verifying and updating bi-directional traceability between the test basis and test cases
o Executing test procedures either manually or by using test execution tools, according to the planned sequence
o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
o Comparing actual results with expected results
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
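As an illustrative sketch only (the syllabus does not prescribe any tool; the function under test and the test names are assumptions), the example below uses Python's standard unittest module to show test cases combined into a suite, executed in a planned sequence, with outcomes logged and actual results compared with expected results.

```python
import unittest

def discount_rate(quantity: int) -> float:
    # Hypothetical unit under test: 10% discount for 100 items or more.
    return 0.10 if quantity >= 100 else 0.0

class DiscountTests(unittest.TestCase):
    """Each test compares an actual result with an expected result."""

    def test_small_order_gets_no_discount(self):
        self.assertEqual(discount_rate(50), 0.0)

    def test_boundary_order_gets_discount(self):
        self.assertEqual(discount_rate(100), 0.10)

if __name__ == "__main__":
    # Build a suite from the test cases and run it; the runner logs the
    # outcome of each test, and any failure becomes an incident to analyze.
    suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
```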

1.4.4 Evaluating Exit Criteria and Reporting (K1)

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).
Evaluating exit criteria has the following major tasks:
o Checking test logs against the exit criteria specified in test planning (see the sketch after this list)
o Assessing if more tests are needed or if the exit criteria specified should be changed
o Writing a test summary report for stakeholders
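As an illustrative sketch only (the metric names and thresholds below are assumptions, not syllabus requirements), checking test logs against exit criteria can be as simple as comparing measured values with the thresholds fixed during test planning.

```python
# Assumed exit criteria from test planning (hypothetical values).
exit_criteria = {"statement_coverage": 0.80, "open_critical_incidents": 0}

# Metrics gathered from the test logs of the current test level.
measured = {"statement_coverage": 0.84, "open_critical_incidents": 1}

criteria_met = (
    measured["statement_coverage"] >= exit_criteria["statement_coverage"]
    and measured["open_critical_incidents"] <= exit_criteria["open_critical_incidents"]
)

# If the criteria are not met, either more tests are needed or the criteria
# themselves must be formally changed; the outcome feeds the summary report.
print("Exit criteria met" if criteria_met else "Exit criteria NOT met")
```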

1.4.5 Test Closure Activities (K1)

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.



Test closure activities include the following major tasks:
o Checking which planned deliverables have been delivered
o Closing incident reports or raising change records for any that remain open
o Documenting the acceptance of the system
o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
o Handing over the testware to the maintenance organization
o Analyzing lessons learned to determine changes needed for future releases and projects
o Using the information gathered to improve test maturity



1.5 The Psychology of Testing (K2)                                     25 minutes

Terms
Error guessing, independence

Background

The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.
A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined, as shown here from low to high:
o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.
Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.
If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.
The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.
Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:



o Start with collaboration rather than battles; remind everyone of the common goal of better quality systems
o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it; for example, write objective and factual incident reports and review findings
o Try to understand how the other person feels and why they react as they do
o Confirm that the other person has understood what you have said and vice versa



1.6 Code of Ethics                                                     10 minutes

Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons, to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC - Certified software testers shall act consistently with the public interest
CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest
PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible
JUDGMENT - Certified software testers shall maintain integrity and independence in their professional judgment
MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing
PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest
COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers
SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

References

1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988



2. Testing Throughout the Software Life Cycle (K2)                    115 minutes

Learning Objectives for Testing Throughout the Software Life Cycle

The objectives identify what you will be able to do following the completion of each module.

2.1 Software Development Models (K2)

LO-2.1.1 Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types (K2)
LO-2.1.2 Recognize the fact that software development models must be adapted to the context of project and product characteristics (K1)
LO-2.1.3 Recall characteristics of good testing that are applicable to any life cycle model (K1)

2.2 Test Levels (K2)

LO-2.2.1 Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g., functional or structural) and related work products, people who test, types of defects and failures to be identified (K2)

2.3 Test Types (K2)

LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example (K2)
LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)
LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements (K2)
LO-2.3.4 Identify and describe test types based on the analysis of a software system's structure or architecture (K2)
LO-2.3.5 Describe the purpose of confirmation testing and regression testing (K2)

2.4 Maintenance Testing (K2)

LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)
LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)



2.1 Software Development Models (K2)                                   20 minutes

Terms
Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background

Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

2.1.1 V-model (Sequential Development Model) (K2)

Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.
The four levels used in this syllabus are:
o Component (unit) testing
o Integration testing
o System testing
o Acceptance testing

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.
Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or "Software life cycle processes" (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

2.1.2 Iterative-incremental Development Models (K2)

Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

2.1.3 Testing within a Life Cycle Model (K2)

In any life cycle model, there are several characteristics of good testing:
o For every development activity there is a corresponding testing activity
o Each test level has test objectives specific to that level
o The analysis and design of tests for a given test level should begin during the corresponding development activity
o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle
Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g.,
integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).


2.2 Test Levels (K2)

40 minutes

Terms
Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background
For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities. Testing a system's configuration data shall be considered during test planning.

2.2.1 Component Testing (K2)

Test basis:
o Component requirements
o Detailed design
o Code
Typical test objects:
o Components
o Programs
o Data conversion / migration programs
o Database modules
Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.
Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing the defects.
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
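As an illustration of the test-first cycle described above (a minimal sketch, not from the syllabus; the `is_leap_year` function is hypothetical), the component tests below would be written before the code they exercise, using Python's standard unittest framework:

```python
import unittest

def is_leap_year(year):
    # Minimal implementation written only to make the tests below pass
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # In a test-first approach these cases exist before is_leap_year() does
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2012))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```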


2.2.2 Integration Testing (K2)

Test basis:
o Software and system design
o Architecture
o Workflows
o Use cases
Typical test objects:
o Subsystems
o Database implementation
o Infrastructure
o Interfaces
o System configuration and configuration data
Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems. There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:
1. Component integration testing tests the interactions between software components and is done after component testing
2. System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface. This might be considered as a risk. Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.
The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.
Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than "big bang".
Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual module, as that was done during component testing. Both functional and structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.
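To make incremental integration concrete, the sketch below (hypothetical `PaymentService` and tax component names, using Python's standard `unittest.mock`) shows a component integration test in which a not-yet-built collaborator is replaced by a stub, so the interface between the two components can be exercised early:

```python
from unittest.mock import Mock

class PaymentService:
    """Component under integration; depends on a separate tax component."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, net_amount):
        # The interaction being tested: PaymentService -> tax component
        return net_amount + self.tax_service.tax_for(net_amount)

# The tax component is not built yet, so a stub stands in for it
tax_stub = Mock()
tax_stub.tax_for.return_value = 19.0

service = PaymentService(tax_stub)
assert service.total(100.0) == 119.0            # communication across the interface
tax_stub.tax_for.assert_called_once_with(100.0)  # the expected call was made
```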


2.2.3 System Testing (K2)

Test basis:
o System and software requirement specification
o Use cases
o Functional specification
o Risk analysis reports
Typical test objects:
o System, user and operation manuals
o System configuration and configuration data
System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.
System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).
An independent test team often carries out system testing.

2.2.4 Acceptance Testing (K2)

Test basis:
o User requirements
o System requirements
o Use cases
o Business processes
o Risk analysis reports
Typical test objects:
o Business processes on fully integrated system
o Operational and maintenance processes
o User procedures
o Forms
o Reports
o Configuration data
Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system's readiness for deployment and
use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.
Acceptance testing may occur at various times in the life cycle, for example:
o A COTS software product may be acceptance tested when it is installed or integrated
o Acceptance testing of the usability of a component may be done during component testing
o Acceptance testing of a new functional enhancement may come before system testing
Typical forms of acceptance testing include the following:

User acceptance testing
Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
o Testing of backup/restore
o Disaster recovery
o User management
o Maintenance tasks
o Data load and migration tasks
o Periodic checks of security vulnerabilities

Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.

Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization's site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.
Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.


2.3 Test Types (K2)

40 minutes

Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing

Background
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing. A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)
A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).

2.3.1 Testing of Function (Functional Testing) (K2)

The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does.
Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).
A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)

Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of "how" the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in 'Software Engineering - Software Product Quality' (ISO 9126).
Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.

2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)

Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed to increase coverage. Coverage techniques are covered in Chapter 4.
At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).

2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation. Debugging (locating and fixing a defect) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
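Because regression suites are re-run on every change, they are usually automated. A minimal sketch (the `discount` function and defect number are hypothetical, using Python's standard unittest): the confirmation test for a fixed defect is kept in the suite so the fix stays verified on all future runs:

```python
import unittest

def discount(price, percent):
    # Fixed defect: percent used to be applied as an absolute amount
    return round(price * (1 - percent / 100.0), 2)

class DiscountRegressionSuite(unittest.TestCase):
    def test_defect_1234_percent_applied_as_percentage(self):
        # Confirmation test for the original defect, retained for regression
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()
```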


2.4 Maintenance Testing (K2)

15 minutes

Terms
Impact analysis, maintenance testing

Background
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.
Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.
Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.
Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types.
Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.
Maintenance testing can be difficult if specifications are out of date or missing, or testers with domain knowledge are not available.

References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE STD 829-1998
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE STD 829-1998


3. Static Techniques (K2)

60 minutes

Learning Objectives for Static Techniques
The objectives identify what you will be able to do following the completion of each module.

3.1 Static Techniques and the Test Process (K2)
LO-3.1.1 Recognize software work products that can be examined by the different static techniques (K1)
LO-3.1.2 Describe the importance and value of considering static techniques for the assessment of software work products (K2)
LO-3.1.3 Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle (K2)

3.2 Review Process (K2)
LO-3.2.1 Recall the activities, roles and responsibilities of a typical formal review (K1)
LO-3.2.2 Explain the differences between different types of reviews: informal review, technical review, walkthrough and inspection (K2)
LO-3.2.3 Explain the factors for successful performance of reviews (K2)

3.3 Static Analysis by Tools (K2)
LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to reviews and dynamic testing (K1)
LO-3.3.2 Describe, using examples, the typical benefits of static analysis (K2)
LO-3.3.3 List typical code and design defects that may be identified by static analysis tools (K1)


3.1 Static Techniques and the Test Process (K2)

15 minutes

Terms
Dynamic testing, static testing

Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.
Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.
A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.
Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.
Reviews, static analysis and dynamic testing have the same objective: identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.
Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.


3.2 Review Process (K2)

25 minutes

Terms
Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background
The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.
The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).

3.2.1 Activities of a Formal Review (K1)

A typical formal review has the following main activities:
1. Planning
o Defining the review criteria
o Selecting the personnel
o Allocating roles
o Defining the entry and exit criteria for more formal review types (e.g., inspections)
o Selecting which parts of documents to review
o Checking entry criteria (for more formal review types)
2. Kick-off
o Distributing documents
o Explaining the objectives, process and documents to the participants
3. Individual preparation
o Preparing for the review meeting by reviewing the document(s)
o Noting potential defects, questions and comments
4. Examination/evaluation/recording of results (review meeting)
o Discussing or logging, with documented results or minutes (for more formal review types)
o Noting defects, making recommendations regarding handling the defects, making decisions about the defects
o Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
5. Rework
o Fixing defects found (typically done by the author)
o Recording updated status of defects (in formal reviews)
6. Follow-up
o Checking that defects have been addressed
o Gathering metrics
o Checking on exit criteria (for more formal review types)

3.2.2 Roles and Responsibilities (K1)

A typical formal review will include the roles below:
o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
o Author: the writer or person with chief responsibility for the document(s) to be reviewed.
o Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems may help to uncover previously undetected issues.

3.2.3 Types of Reviews (K2)

A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers
o Main purpose: inexpensive way to get some benefit

Walkthrough
o Meeting led by the author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
o Optional pre-meeting preparation of reviewers
o Optional preparation of a review report including list of findings
o Optional scribe (who is not the author)
o May vary in practice from quite informal to very formal
o Main purposes: learning, gaining understanding, finding defects

Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards

Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader
o Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a "peer review".

3.2.4 Success Factors for Reviews (K2)

Success factors for reviews include:
o Each review has clear predefined objectives
o The right people for the review objectives are involved
o Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
o There is an emphasis on learning and process improvement


3.3 Static Analysis by Tools (K2)

20 minutes

Terms
Compiler, complexity, control flow, data flow, static analysis

Background
The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.
The value of static analysis is:
o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
o Identification of defects not easily found by dynamic testing
o Detecting dependencies and inconsistencies in software models such as links
o Improved maintainability of code and design
o Prevention of defects, if lessons are learned in development
Typical defects discovered by static analysis tools include:
o Referencing a variable with an undefined value
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
o Syntax violations of code and software models
Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking-in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.
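As a concrete illustration (a contrived fragment, not from the syllabus), the code below contains several of the defect types listed above, all of which a typical static analysis tool can flag without running the code:

```python
def classify(value):
    threshold = 10          # unused variable: flagged by data-flow analysis
    if value > 5:
        return "high"
    else:
        return "low"
    print("done")           # unreachable (dead) code: both branches return

def count_to_three():
    count = 0
    while count < 3:
        pass                # count is never incremented: potentially infinite loop
```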

References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 van Veenendaal, 2004


4. Test Design Techniques (K4)

285 minutes

Learning Objectives for Test Design Techniques
The objectives identify what you will be able to do following the completion of each module.

4.1 The Test Development Process (K3)
LO-4.1.1 Differentiate between a test design specification, test case specification and test procedure specification (K2)
LO-4.1.2 Compare the terms test condition, test case and test procedure (K2)
LO-4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results (K2)
LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers (K3)

4.2 Categories of Test Design Techniques (K2)
LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each (K1)
LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)

4.3 Specification-based or Black-box Techniques (K3)
LO-4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables (K3)
LO-4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured (K2)
LO-4.3.3 Explain the concept of use case testing and its benefits (K2)

4.4 Structure-based or White-box Techniques (K4)
LO-4.4.1 Describe the concept and value of code coverage (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)
LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)
LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)

4.5 Experience-based Techniques (K2)
LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects (K1)
LO-4.5.2 Compare experience-based techniques with specification-based testing techniques (K2)

4.6 Choosing Test Techniques (K2)
LO-4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics (K2)


4.1 The Test Development Process (K3)

15 minutes

Terms
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability

Background
The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.
During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).
Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks (see Chapter 5 for more on risk analysis).
During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover a certain test objective(s) or test condition(s). The 'Standard for Software Test Documentation' (IEEE STD 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.
Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.
During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification (IEEE STD 829-1998). The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).
The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
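As a hedged illustration of how these artifacts relate (all identifiers, requirement numbers and field names below are hypothetical, not prescribed by IEEE STD 829-1998), a test condition, a test case derived from it with expected results and traceability, and the case's place in an execution schedule could be recorded like this:

```python
# Hypothetical test condition, traced back to a requirement
test_condition = {
    "id": "COND-07",
    "description": "Login rejects an incorrect password",
    "traces_to": "REQ-AUTH-3",
}

# Test case derived from the condition, with expected results defined up front
test_case = {
    "id": "TC-042",
    "condition": "COND-07",
    "preconditions": "User 'alice' exists and is not locked out",
    "inputs": {"user": "alice", "password": "wrong-secret"},
    "expected_result": "Login refused with an error message; no session created",
}

# The execution schedule defines the order the procedures run in,
# taking dependencies and prioritization into account
execution_schedule = ["TC-001", "TC-042", "TC-043"]
```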


4.2 Categories of Test Design Techniques (K2)

15 minutes

Terms
Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique

Background
The purpose of a test design technique is to identify test conditions, test cases, and test data.
It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also be combined with experience-based techniques to leverage the experience of developers, testers and users to determine what should be tested.
Some techniques fall clearly into a single category; others have elements of more than one category.
This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.
Common characteristics of specification-based test design techniques include:
o Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
o Test cases can be derived systematically from these models
Common characteristics of structure-based test design techniques include:
o Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
o The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage
Common characteristics of experience-based test design techniques include:
o The knowledge and experience of people are used to derive the test cases
o The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
o Knowledge about likely defects and their distribution is another source of information


4.3 Specification-based or Black-box Techniques (K3)

150 minutes

Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing

4.3.1 Equivalence Partitioning (K3)

In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data, i.e., values that should be accepted, and invalid data, i.e., values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). Tests can be designed to cover all valid and invalid partitions. Equivalence partitioning is applicable at all levels of testing.
Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
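For instance (a hypothetical example, not from the syllabus), if a field accepts integer ages from 18 to 65, three partitions follow: below 18 (invalid), 18 to 65 (valid), and above 65 (invalid). One representative value per partition gives a minimal test set:

```python
def accepts_age(age):
    # Hypothetical system under test: the valid partition is 18..65
    return 18 <= age <= 65

# One representative value per equivalence partition
assert accepts_age(40) is True    # valid partition: 18..65
assert accepts_age(10) is False   # invalid partition: below 18
assert accepts_age(70) is False   # invalid partition: above 65
```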

4.3.2 Boundary Value Analysis (K3)

Behavior at the edge of each equivalence partition is more likely to be incorrect than behavior within the partition, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.
Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.
This technique is often considered as an extension of equivalence partitioning or other black-box test design techniques. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g., time out, transactional speed requirements) or table ranges (e.g., table size is 256*256).
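Continuing the hypothetical age example from the previous section, the boundaries of the valid partition 18 to 65 yield the test values 17, 18, 65 and 66:

```python
def accepts_age(age):
    # Same hypothetical system under test as in the partitioning example
    return 18 <= age <= 65

# Valid boundary values (the edges of the valid partition)
assert accepts_age(18) is True
assert accepts_age(65) is True
# Invalid boundary values (just outside the valid partition)
assert accepts_age(17) is False
assert accepts_age(66) is False
```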

4.3.3 Decision Table Testing (K3)

Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they must be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions, which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column in the table, which typically involves covering all combinations of triggering conditions.
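A small hypothetical example (names and rules invented for illustration): a discount rule with two Boolean conditions, "is member" and "order over 100", produces a four-column decision table, and one test per column covers every rule:

```python
def discount_percent(is_member, order_over_100):
    # Hypothetical business rules captured in a decision table:
    # Rule 1: member and >100 -> 15   Rule 2: member only -> 10
    # Rule 3: >100 only       -> 5    Rule 4: neither     -> 0
    if is_member and order_over_100:
        return 15
    if is_member:
        return 10
    if order_over_100:
        return 5
    return 0

# One test per decision table column (all combinations of conditions)
assert discount_percent(True, True) == 15
assert discount_percent(True, False) == 10
assert discount_percent(False, True) == 5
assert discount_percent(False, False) == 0
```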


The strength of decision table testing is that it creates combinations of conditions that otherwise might not have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.

4.3.4 State Transition Testing (K3)

A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown with a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number.
A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid.
Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).
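A minimal sketch (a hypothetical two-state account, invented for illustration) of a state table together with tests that exercise every valid transition and one invalid transition:

```python
# Hypothetical state table: (current_state, event) -> next_state
TRANSITIONS = {
    ("closed", "open"): "open",
    ("open", "deposit"): "open",
    ("open", "close"): "closed",
}

def next_state(state, event):
    # Invalid transitions are rejected rather than silently ignored
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition: {key}")
    return TRANSITIONS[key]

# Exercise every valid transition in the table
assert next_state("closed", "open") == "open"
assert next_state("open", "deposit") == "open"
assert next_state("open", "close") == "closed"

# Test one invalid transition highlighted by the state table
try:
    next_state("closed", "deposit")
    assert False, "expected the invalid transition to be rejected"
except ValueError:
    pass
```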

4.3.5 Use Case Testing (K2)

Tests can be derived from use cases. A use case describes interactions between actors (users or systems), which produce a result of value to a system user or the customer. Use cases may be described at the abstract level (business use case, technology-free, business process level) or at the system level (system use case on the system functionality level). Each use case has preconditions which need to be met for the use case to work successfully. Each use case terminates with postconditions which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e., most likely) scenario and alternative scenarios.
Use cases describe the "process flows" through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see. Designing test cases from use cases may be combined with other specification-based test techniques.


4.4 Structure-based or White-box Techniques (K4)

60 minutes

Terms
Code coverage, decision coverage, statement coverage, structure-based testing

Background
Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:
o Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths
o Integration level: the structure may be a call tree (a diagram in which modules call other modules)
o System level: the structure may be a menu structure, business process or web page structure
In this section, three code-related structural test design techniques for code coverage, based on statements, branches and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.

4.4.1 Statement Testing and Coverage (K4)

In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.

Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.
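A short worked example (a Python sketch; the function and the test values are hypothetical, not from this syllabus) shows the calculation:

# A minimal sketch of statement coverage for a hypothetical function
# containing six executable statements.
def grant_discount(age, is_member):
    discount = 0                      # statement 1
    if age >= 65:                     # statement 2
        discount = 10                 # statement 3
    if is_member:                     # statement 4
        discount += 5                 # statement 5
    return discount                   # statement 6

# grant_discount(70, True) exercises all six statements: 6/6 = 100%.
# grant_discount(30, False) alone skips statements 3 and 5: 4/6 = 67%.
assert grant_discount(70, True) == 15
assert grant_discount(30, False) == 0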

4.4.2 Decision Testing and Coverage (K4)

Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g., the True and False options of an IF statement) that have been exercised by a test case suite. The decision testing technique derives test cases to execute specific decision outcomes. Branches originate from decision points in the code and show the transfer of control to different locations in the code.

Decision coverage is determined by the number of all decision outcomes covered by (designed or executed) test cases divided by the number of all possible decision outcomes in the code under test.

Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
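The following minimal sketch (hypothetical Python function, not from this syllabus) shows why decision coverage is stronger than statement coverage:

# A single test can reach 100% statement coverage while leaving a
# decision outcome unexercised.
def cap_value(x, limit):
    if x > limit:        # one decision, two outcomes: True and False
        x = limit
    return x

# cap_value(10, 5) executes every statement (100% statement coverage)
# but only the True outcome: decision coverage is 1/2 = 50%.
assert cap_value(10, 5) == 5

# Adding cap_value(3, 5) exercises the False outcome as well,
# giving 2/2 = 100% decision coverage.
assert cap_value(3, 5) == 3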

4.4.3 Other Structure-based Techniques (K1)

There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.

The concept of coverage can also be applied at other test levels. For example, at the integration level the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.

Tool support is useful for the structural testing of code.

4.5 Experience-based Techniques (K2)

Terms
Exploratory testing, (fault) attack

30 minutes

Background
Experience-based testing is where tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the testers' experience.

A commonly used experience-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible defects and to design tests that attack these defects. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails (a minimal sketch follows at the end of this section).

Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.
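The following minimal sketch (Python; the attack values, the function under test and its expected behavior are all hypothetical) illustrates a structured fault attack: defect-prone inputs are enumerated and each one becomes a test.

# A minimal sketch of a structured error-guessing ("fault attack") list:
# common defect-prone inputs are enumerated and each one becomes a test.
ATTACK_VALUES = [
    "",                               # empty input
    " ",                              # whitespace only
    "a" * 10_000,                     # very long input
    "Robert'); DROP TABLE users;--",  # injection-style input
    None,                             # missing value
]

def normalize_username(value):
    """Hypothetical function under test: trims and lowercases a username."""
    if not value or not value.strip():
        raise ValueError("username required")
    return value.strip().lower()

for value in ATTACK_VALUES:
    try:
        result = normalize_username(value)
        # The attack did not provoke a failure; log the outcome for review.
        print(f"accepted: {value!r} -> {result!r}")
    except ValueError:
        print(f"rejected as expected: {value!r}")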


4.6 Choosing Test Techniques (K2)

Terms
No specific terms.

15 minutes

Background
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objectives, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience with types of defects found.

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.

When creating test cases, testers generally use a combination of test techniques including process, rule and data-driven techniques to ensure adequate coverage of the object under test.

References
4.1 Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
4.2 Beizer, 1990, Copeland, 2004
4.3.1 Copeland, 2004, Myers, 1979
4.3.2 Copeland, 2004, Myers, 1979
4.3.3 Beizer, 1990, Copeland, 2004
4.3.4 Beizer, 1990, Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990, Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990, Copeland, 2004


5. Test Management (K3)

170 minutes

Learning Objectives for Test Management

The objectives identify what you will be able to do following the completion of each module.

5.1 Test Organization (K2)

LO-5.1.1 Recognize the importance of independent testing (K1)
LO-5.1.2 Explain the benefits and drawbacks of independent testing within an organization (K2)
LO-5.1.3 Recognize the different team members to be considered for the creation of a test team (K1)
LO-5.1.4 Recall the tasks of a typical test leader and tester (K1)

5.2 Test Planning and Estimation (K3)

LO-5.2.1 Recognize the different levels and objectives of test planning (K1)
LO-5.2.2 Summarize the purpose and content of the test plan, test design specification and test procedure documents according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K2)
LO-5.2.3 Differentiate between conceptually different test approaches, such as analytical, model-based, methodical, process/standard compliant, dynamic/heuristic, consultative and regression-averse (K2)
LO-5.2.4 Differentiate between the subject of test planning for a system and scheduling test execution (K2)
LO-5.2.5 Write a test execution schedule for a given set of test cases, considering prioritization, and technical and logical dependencies (K3)
LO-5.2.6 List test preparation and execution activities that should be considered during test planning (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach (K2)
LO-5.2.9 Recognize/justify adequate entry and exit criteria for specific test levels and groups of test cases (e.g., for integration testing, acceptance testing or test cases for usability testing) (K2)
5.3 Test Progress Monitoring and Control (K2)

LO-5.3.1 Recall common metrics used for monitoring test preparation and execution (K1)
LO-5.3.2 Explain and compare test metrics for test reporting and test control (e.g., defects found and fixed, and tests passed and failed) related to purpose and use (K2)
LO-5.3.3 Summarize the purpose and content of the test summary report document according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K2)

5.4 Configuration Management (K2)

LO-5.4.1 Summarize how configuration management supports testing (K2)

5.5 Risk and Testing (K2)

LO-5.5.1 Describe a risk as a possible problem that would threaten the achievement of one or more stakeholders' project objectives (K2)
LO-5.5.2 Remember that the level of risk is determined by likelihood (of happening) and impact (harm resulting if it does happen) (K1)
LO-5.5.3 Distinguish between the project and product risks (K2)
LO-5.5.4 Recognize typical product and project risks (K1)
LO-5.5.5 Describe, using examples, how risk analysis and risk management may be used for test planning (K2)


5.6 Incident Management (K3)

LO-5.6.1 Recognize the content of an incident report according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K1)
LO-5.6.2 Write an incident report covering the observation of a failure during testing (K3)


5.1 Test Organization (K2)

Terms
Tester, test leader, test manager

30 minutes

5.1.1 Test Organization and Independence (K2)

The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following:
o No independent testers; developers test their own code
o Independent testers within the development teams
o Independent test team or group within the organization, reporting to project management or executive management
o Independent testers from the business organization or user community
o Independent test specialists for specific test types such as usability testers, security testers or certification testers (who certify a software product against standards and regulations)
o Independent testers outsourced or external to the organization

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.

The benefits of independence include:
o Independent testers see other and different defects, and are unbiased
o An independent tester can verify assumptions people made during specification and implementation of the system

Drawbacks include:
o Isolation from the development team (if treated as totally independent)
o Developers may lose a sense of responsibility for quality
o Independent testers may be seen as a bottleneck or blamed for delays in release

Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

5.1.2 Tasks of the Test Leader and Tester (K1)

In this syllabus two test positions are covered, test leader and tester. The activities and tasks performed by people in these two roles depend on the project and product context, the people in the roles, and the organization.

Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group. In larger projects two positions may exist: test leader and test manager. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.

Typical test leader tasks may include:
o Coordinate the test strategy and plan with project managers and others
o Write or review a test strategy for the project, and test policy for the organization
o Contribute the testing perspective to other project activities, such as integration planning
o Plan the tests, considering the context and understanding the test objectives and risks, including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels and cycles, and planning incident management
o Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria
o Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems
o Set up adequate configuration management of testware for traceability
o Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
o Decide what should be automated, to what degree, and how
o Select tools to support testing and organize any training in tool use for testers
o Decide about the implementation of the test environment
o Write test summary reports based on the information gathered during testing

Typical tester tasks may include:
o Review and contribute to test plans
o Analyze, review and assess user requirements, specifications and models for testability
o Create test specifications
o Set up the test environment (often coordinating with system administration and network management)
o Prepare and acquire test data
o Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
o Use test administration or management tools and test monitoring tools as required
o Automate tests (may be supported by a developer or a test automation expert)
o Measure performance of components and systems (if applicable)
o Review tests developed by others

People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.


5.2 Test Planning and Estimation (K3)

Terms
Test approach, test strategy

40 minutes

5.2.1 Test Planning (K2)

This section covers the purpose of test planning within development and implementation projects, and for maintenance activities. Planning may be documented in a master test plan and in separate test plans for test levels such as system testing and acceptance testing. The outline of a test-planning document is covered by the Standard for Software Test Documentation (IEEE Std 829-1998).

Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan.

Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

5.2.2 Test Planning Activities (K3)

Test planning activities for an entire system or part of a system may include:
o Determining the scope and risks and identifying the objectives of testing
o Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
o Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
o Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
o Scheduling test analysis and design activities
o Scheduling test implementation, execution and evaluation
o Assigning resources for the different activities defined
o Defining the amount, level of detail, structure and templates for the test documentation
o Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
o Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

5.2.3 Entry Criteria (K2)

Entry criteria define when to start testing such as at the beginning of a test level or when a set of tests is ready for execution.

Typically entry criteria may cover the following:
o Test environment availability and readiness
o Test tool readiness in the test environment
o Testable code availability
o Test data availability

5.2.4 Exit Criteria (K2)

Exit criteria define when to stop testing such as at the end of a test level or when a set of tests has achieved a specific goal.

Typically exit criteria may cover the following:
o Thoroughness measures, such as coverage of code, functionality or risk
o Estimates of defect density or reliability measures
o Cost
o Residual risks, such as defects not fixed or lack of test coverage in certain areas
o Schedules such as those based on time to market
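As an illustration (a minimal Python sketch with hypothetical metric names, values and thresholds, not part of this syllabus), exit criteria can be expressed as simple measurable checks evaluated from collected test metrics:

# A minimal sketch of evaluating exit criteria from collected metrics.
metrics = {
    "decision_coverage": 0.92,      # fraction of decision outcomes exercised
    "open_critical_defects": 0,     # unresolved critical defects
    "tests_passed_ratio": 0.97,     # passed / executed
}

exit_criteria = [
    ("decision coverage >= 90%", metrics["decision_coverage"] >= 0.90),
    ("no open critical defects", metrics["open_critical_defects"] == 0),
    ("at least 95% of tests passed", metrics["tests_passed_ratio"] >= 0.95),
]

for name, met in exit_criteria:
    print(f"{'MET ' if met else 'MISS'}  {name}")

can_stop_testing = all(met for _, met in exit_criteria)
print("Exit criteria satisfied:", can_stop_testing)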

5.2.5 Test Estimation (K2)

Two approaches for the estimation of test effort are:
o The metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values (see the sketch at the end of this section)
o The expert-based approach: estimating the tasks based on estimates made by the owner of the tasks or by experts

Once the test effort is estimated, resources can be identified and a schedule can be drawn up.

The testing effort may depend on a number of factors, including:
o Characteristics of the product: the quality of the specification and other information used for test models (i.e., the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation
o Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure
o The outcome of testing: the number of defects and the amount of rework required
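The following minimal sketch of the metrics-based approach (all numbers and the adjustment factor are hypothetical) extrapolates effort for a new project from a historical ratio of tester-hours per executed test case:

# A minimal sketch of metrics-based test estimation.
historical = {"test_cases_executed": 400, "tester_hours_spent": 1200}
hours_per_test_case = (historical["tester_hours_spent"]
                       / historical["test_cases_executed"])

# New project: test analysis identified 550 test cases.
planned_test_cases = 550
estimated_hours = planned_test_cases * hours_per_test_case

# Adjust for known differences, e.g., a less stable test environment
# (the kind of judgment an expert-based estimate would also capture).
environment_risk_factor = 1.15
print(f"Estimated effort: {estimated_hours * environment_risk_factor:.0f} tester-hours")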

5.2.6 Test Strategy, Test Approach (K2)

The test approach is the implementation of the test strategy for a specific project. The test approach is defined and refined in the test plans and test designs. It typically includes the decisions made based on the (test) project's goal and risk assessment. It is the starting point for planning the test process, for selecting the test design techniques and test types to be applied, and for defining the entry and exit criteria.

The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system (e.g., custom built vs. COTS), test objectives, and regulations.

Typical approaches include:
o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk
o Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)
o Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based
o Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies
o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks
o Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team
o Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites

Different approaches may be combined, for example, a risk-based dynamic approach.


5.3 Test Progress Monitoring and Control (K2)

Terms
Defect density, failure rate, test control, test monitoring, test summary report

20 minutes

5.3.1 Test Progress Monitoring (K1)

The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:
o Percentage of work done in test case preparation (or percentage of planned test cases prepared)
o Percentage of work done in test environment preparation
o Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)
o Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)
o Test coverage of requirements, risks or code
o Subjective confidence of testers in the product
o Dates of test milestones
o Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
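A minimal sketch (hypothetical counts, not from this syllabus) of computing a few of these metrics from raw results collected during a test cycle:

# A minimal sketch of common test progress metrics from hypothetical counts.
planned_test_cases = 200
prepared_test_cases = 160
results = {"passed": 110, "failed": 15, "blocked": 5}   # executed so far
defects_found, defects_fixed = 42, 30
kloc = 12.5  # size of the code under test, in thousands of lines

executed = sum(results.values())
print(f"Preparation progress: {prepared_test_cases / planned_test_cases:.0%}")
print(f"Execution progress:   {executed / planned_test_cases:.0%}")
print(f"Pass rate:            {results['passed'] / executed:.0%}")
print(f"Defect fix rate:      {defects_fixed / defects_found:.0%}")
print(f"Defect density:       {defects_found / kloc:.1f} defects/KLOC")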

5.3.2 Test Reporting (K2)

Test reporting is concerned with summarizing information about the testing endeavor, including:
o What happened during a period of testing, such as dates when exit criteria were met
o Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

The outline of a test summary report is given in the Standard for Software Test Documentation (IEEE Std 829-1998).

Metrics should be collected during and at the end of a test level in order to assess:
o The adequacy of the test objectives for that test level
o The adequacy of the test approaches taken
o The effectiveness of the testing with respect to the objectives

5.3.3 Test Control (K2)

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions include:
o Making decisions based on information from test monitoring
o Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
o Changing the test schedule due to availability or unavailability of a test environment
o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build


5.4 Configuration Management (K2)

Terms
Configuration management, version control

10 minutes

Background
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.

For testing, configuration management may involve ensuring the following:
o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
o All identified documents and software items are referenced unambiguously in test documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).

During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.


5.5 Risk and Testing (K2)

Terms
Product risk, project risk, risk, risk-based testing

30 minutes

Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).

5.5.1 Project Risks (K2)

Project risks are the risks that surround the project's capability to deliver its objectives, such as:
o Organizational factors:
  - Skill, training and staff shortages
  - Personnel issues
  - Political issues, such as:
    - Problems with testers communicating their needs and test results
    - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
  - Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
o Technical issues:
  - Problems in defining the right requirements
  - The extent to which requirements cannot be met given existing constraints
  - Test environment not ready on time
  - Late data conversion, migration planning and development and testing of data conversion/migration tools
  - Low quality of the design, code, configuration data, test data and tests
o Supplier issues:
  - Failure of a third party
  - Contractual issues

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The Standard for Software Test Documentation (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.

5.5.2 Product Risks (K2)

Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:
o Failure-prone software delivered
o The potential that the software/hardware could cause harm to an individual or company
o Poor software characteristics (e.g., functionality, reliability, usability and performance)
o Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)
o Software that does not perform its intended functions

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.


Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
o Determine the test techniques to be employed
o Determine the extent of testing to be carried out
o Prioritize testing in an attempt to find the critical defects as early as possible
o Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.

To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
o Assess (and reassess on a regular basis) what can go wrong (risks)
o Determine what risks are important to deal with
o Implement actions to deal with those risks

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
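A minimal sketch (hypothetical risk items and a simple 1-5 scale, not prescribed by this syllabus) of using likelihood and impact to prioritize test effort:

# A minimal sketch of risk-based test prioritization. Risk level is
# modeled as likelihood x impact on a 1-5 scale; items are hypothetical.
risks = [
    {"item": "payment calculation", "likelihood": 4, "impact": 5},
    {"item": "report layout",       "likelihood": 3, "impact": 1},
    {"item": "data migration",      "likelihood": 2, "impact": 5},
]

for r in risks:
    r["level"] = r["likelihood"] * r["impact"]

# Test the highest-risk areas first and most thoroughly.
for r in sorted(risks, key=lambda r: r["level"], reverse=True):
    depth = "thorough" if r["level"] >= 10 else "basic"
    print(f"risk {r['level']:>2}  {r['item']:<20} -> {depth} testing")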


5.6 Incident Management (K3)

Terms
Incident logging, incident management, incident report

40 minutes

Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.

Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as "Help" or installation guides.

Incident reports have the following objectives:
o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
o Provide ideas for test process improvement

Details of the incident report may include:
o Date of issue, issuing organization, and author
o Expected and actual results
o Identification of the test item (configuration item) and environment
o Software or system life cycle process in which the incident was observed
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Urgency/priority to fix
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed
o References, including the identity of the test case specification that revealed the problem

The structure of an incident report is also covered in the Standard for Software Test Documentation (IEEE Std 829-1998).
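As an illustration (all field values are hypothetical; the field names simply mirror the details listed above, not a mandated schema), an incident report can be captured as a structured record whose status field drives the life cycle from discovery to closure:

# A minimal sketch of an incident report as a structured record.
incident_report = {
    "id": "INC-0042",
    "date": "2011-03-31",
    "author": "J. Tester",
    "test_item": "billing-service v1.4.2 (build 871)",
    "environment": "system test, staging server",
    "expected_result": "invoice total 100.00",
    "actual_result": "invoice total 99.99",
    "description": "Rounding error when VAT is applied to three line items; "
                   "reproducible with test case TC-BILL-017. Log attached.",
    "severity": "major",       # impact on the system
    "priority": "high",        # urgency to fix
    "status": "open",          # open -> fixed awaiting re-test -> closed
    "references": ["TC-BILL-017"],
}

print(f"{incident_report['id']} [{incident_report['status']}] "
      f"{incident_report['description'][:50]}...")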


References
5.1.1 Black, 2001, Hetzel, 1988
5.1.2 Black, 2001, Hetzel, 1988
5.2.5 Black, 2001, Craig, 2002, IEEE Std 829-1998, Kaner, 2002
5.3.3 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001, IEEE Std 829-1998
5.6 Black, 2001, IEEE Std 829-1998


6. Tool Support for Testing (K2)

80 minutes

Learning Objectives for Tool Support for Testing

The objectives identify what you will be able to do following the completion of each module.

6.1 Types of Test Tools (K2)

LO-6.1.1 Classify different types of test tools according to their purpose and to the activities of the fundamental test process and the software life cycle (K2)
LO-6.1.2 Intentionally skipped
LO-6.1.3 Explain the term test tool and the purpose of tool support for testing (K2)

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)

LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for testing (K2)
LO-6.2.2 Remember special considerations for test execution tools, static analysis, and test management tools (K1)

6.3 Introducing a Tool into an Organization (K1)

LO-6.3.1 State the main principles of introducing a tool into an organization (K1)
LO-6.3.2 State the goals of a proof-of-concept for tool evaluation and a piloting phase for tool implementation (K1)
LO-6.3.3 Recognize that factors other than simply acquiring a tool are required for good tool support (K1)



6.1 Types of Test Tools (K2)

Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool

45 minutes

6.1.1 Tool Support for Testing (K2)

Test tools can be used for one or more activities that support testing. These include:
1. Tools that are directly used in testing such as test execution tools, test data generation tools and result comparison tools
2. Tools that help in managing the testing process such as those used to manage tests, test results, data, requirements, incidents, defects, etc., and for reporting and monitoring test execution
3. Tools that are used in reconnaissance, or, in simple terms: exploration (e.g., tools that monitor file activity for an application)
4. Any tool that aids in testing (a spreadsheet is also a test tool in this meaning)

Tool support for testing can have one or more of the following purposes depending on the context:
o Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)

The term "test frameworks" is also frequently used in the industry, in at least three meanings:
o Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses as well)
o A type of design of test automation (e.g., data-driven, keyword-driven)
o Overall process of execution of testing

For the purpose of this syllabus, the term "test frameworks" is used in its first two meanings, as described in Section 6.1.6.

6.1.2 Test Tool Classification (K2)

There are a number of tools that support different aspects of testing. Tools can be classified based on several criteria such as purpose, commercial / free / open-source / shareware, technology used and so forth. Tools are classified in this syllabus according to the testing activities that they support.

Some tools clearly support one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be bundled into one package.

Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.

Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with "(D)" in the list below.

6.1.3 Tool Support for Management of Testing and Tests (K1)

Management tools apply to all test activities over the entire software life cycle.

Test Management Tools
These tools provide interfaces for executing tests, tracking defects and managing requirements, along with support for quantitative analysis and reporting of the test objects. They also support tracing the test objects to requirement specifications and might have an independent version control capability or an interface to an external one.

Requirements Management Tools
These tools store requirement statements, store the attributes for the requirements (including priority), provide unique identifiers and support tracing the requirements to individual tests. These tools may also help with identifying inconsistent or missing requirements.

Incident Management Tools (Defect Tracking Tools)
These tools store and manage incident reports, i.e., defects, failures, change requests or perceived problems and anomalies, and help in managing the life cycle of incidents, optionally with support for statistical analysis.

Configuration Management Tools
Although not strictly test tools, these are necessary for storage and version management of testware and related software, especially when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers, etc.

6.1.4 Tool Support for Static Testing (K1)

Static testing tools provide a cost effective way of finding more defects at an earlier stage in the development process.

Review Tools
These tools assist with review processes, checklists and review guidelines, and are used to store and communicate review comments and report on defects and effort. They can be of further help by providing aid for online reviews for large or geographically dispersed teams.

Static Analysis Tools (D)
These tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding) and analysis of structures and dependencies. They can also help in planning or risk analysis by providing metrics for the code (e.g., complexity).

Modeling Tools (D)
These tools are used to validate software models (e.g., physical data model (PDM) for a relational database) by enumerating inconsistencies and finding defects. These tools can often aid in generating some test cases based on the model.

6.1.5 Tool Support for Test Specification (K1)

Test Design Tools
These tools are used to generate test inputs or executable tests and/or test oracles from requirements, graphical user interfaces, design models (state, data or object) or code.


Test Data Preparation Tools
Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests, ensuring security through data anonymity.

6.1.6 Tool Support for Test Execution and Logging (K1)

Test Execution Tools
These tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language, and usually provide a test log for each test run. They can also be used to record tests, and usually support scripting languages or GUI-based configuration for parameterization of data and other customization in the tests.

Test Harness/Unit Test Framework Tools (D)
A unit test harness or framework facilitates the testing of components or parts of a system by simulating the environment in which that test object will run, through the provision of mock objects as stubs or drivers (a minimal sketch follows at the end of this section).

Test Comparators
Test comparators determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated.

Coverage Measurement Tools (D)
These tools, through intrusive or non-intrusive means, measure the percentage of specific types of code structures that have been exercised (e.g., statements, branches or decisions, and module or function calls) by a set of tests.

Security Testing Tools
These tools are used to evaluate the security characteristics of software. This includes evaluating the ability of the software to protect data confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Security tools are mostly focused on a particular technology, platform, and purpose.
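Following up on the test harness description above, here is a minimal sketch using Python's standard unittest module (the component under test, the stub and the rates are hypothetical): the stub replaces a real service so the component can be tested in isolation.

# A minimal sketch of a unit test harness with a stub.
import unittest

def fetch_exchange_rate_stub(currency):
    """Stub replacing a real rate service: returns canned test data."""
    return {"EUR": 0.9, "GBP": 0.8}[currency]

def convert(amount, currency, rate_source):
    """Hypothetical component under test: converts an amount using a rate source."""
    return round(amount * rate_source(currency), 2)

class ConvertTest(unittest.TestCase):
    def test_eur_conversion(self):
        # The stub simulates the environment so the test runs in isolation.
        self.assertEqual(convert(100, "EUR", fetch_exchange_rate_stub), 90.0)

    def test_unknown_currency_fails(self):
        with self.assertRaises(KeyError):
            convert(100, "JPY", fetch_exchange_rate_stub)

if __name__ == "__main__":
    unittest.main()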

6.1.7 Tool Support for Performance and Monitoring (K1)

Dynamic Analysis Tools (D)
Dynamic analysis tools find defects that are evident only when software is executing, such as time dependencies or memory leaks. They are typically used in component and component integration testing, and when testing middleware.

Performance Testing/Load Testing/Stress Testing Tools
Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions in terms of number of concurrent users, their ramp-up pattern, frequency and relative percentage of transactions. The simulation of load is achieved by means of creating virtual users carrying out a selected set of transactions, spread across various test machines commonly known as load generators.

Monitoring Tools
Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.

6.1.8 Tool Support for Specific Testing Needs (K1)

Data Quality Assessment
Data is at the center of some projects such as data conversion/migration projects and applications like data warehouses, and its attributes can vary in terms of criticality and volume. In such contexts, tools need to be employed for data quality assessment to review and verify the data conversion and migration rules to ensure that the processed data is correct, complete and complies with a pre-defined context-specific standard.

Other testing tools exist for usability testing.


6.2 Effective Use of Tools: Potential Benefits and Risks (K2)

Terms
Data-driven testing, keyword-driven testing, scripting language

20 minutes

6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)

Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool may require additional effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks.

Potential benefits of using tools include:
o Repetitive work is reduced (e.g., running regression tests, re-entering the same test data, and checking against coding standards)
o Greater consistency and repeatability (e.g., tests executed by a tool in the same order with the same frequency, and tests derived from requirements)
o Objective assessment (e.g., static measures, coverage)
o Ease of access to information about tests or testing (e.g., statistics and graphs about test progress, incident rates and performance)

Risks of using tools include:
o Unrealistic expectations for the tool (including functionality and ease of use)
o Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
o Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool (replacement for test design, or use of automated testing where manual testing would be better)
o Neglecting version control of test assets within the tool
o Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, version control tools, incident management tools, defect tracking tools and tools from multiple vendors
o Risk of the tool vendor going out of business, retiring the tool, or selling the tool to a different vendor
o Poor response from the vendor for support, upgrades, and defect fixes
o Risk of suspension of an open-source / free tool project
o Unforeseen risks, such as the inability to support a new platform

6.2.2 Special Considerations for Some Types of Tools (K1)

Test Execution Tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.

Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of automated test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur.

A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create the test data for these predefined scripts.

There are other techniques employed in data-driven techniques, where instead of hard-coded data combinations placed in a spreadsheet, data is generated using algorithms based on configurable parameters at run time and supplied to the application. For example, a tool may use an algorithm which generates a random user ID, and for repeatability in pattern, a seed is employed for controlling randomness.

In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), and test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested (a sketch contrasting the two approaches follows at the end of this section).

Technical expertise in the scripting language is needed for all approaches (either by testers or by specialists in test automation).

Regardless of the scripting technique used, the expected results for each test need to be stored for later comparison.

Static Analysis Tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code may generate a large quantity of messages. Warning messages do not stop the code from being translated into an executable program, but ideally should be addressed so that maintenance of the code is easier in the future. A gradual implementation of the analysis tool with initial filters to exclude some messages is an effective approach.

Test Management Tools
Test management tools need to interface with other tools or spreadsheets in order to produce useful information in a format that fits the needs of the organization.
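The following minimal sketch (Python; the actions, credentials and rows are hypothetical, and a real tool would read the rows from a spreadsheet rather than from literals) contrasts the data-driven and keyword-driven approaches described above:

# Data-driven: one generic script, many data rows (inputs plus expected result).
login_data = [
    ("alice", "secret1", True),
    ("alice", "wrong-password", False),
    ("", "secret1", False),
]

def do_login(user, password):
    """Hypothetical action under test."""
    return user == "alice" and password == "secret1"

for user, password, expected in login_data:
    assert do_login(user, password) == expected

# Keyword-driven: rows name an action word plus its arguments; the
# framework maps each keyword to an implementation.
keywords = {"login": do_login, "logout": lambda: True}
test_rows = [
    ("login", ("alice", "secret1")),
    ("logout", ()),
]
for action, args in test_rows:
    assert keywords[action](*args)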


6.3 Introdu ucing a T Tool into an Org o ganizatio on (K1)


Terms
No specific terms.

15 minut tes

Backgr round
The main considerat tions in selec cting a tool fo an organiz or zation include e: o Asse essment of o organizationa maturity, st al trengths and weaknesses and identif d fication of oppo ortunities for an improved test proces supported by tools d ss o Evaluation again clear requ nst uirements an objective criteria nd c o A pro oof-of-conce by using a test tool during the eva ept, aluation phas to establis whether it se sh t perfo orms effectiv vely with the software und test and within the cu der urrent infrastr ructure or to identify changes needed to th infrastruc hat cture to effec ctively use th tool he o Evaluation of the vendor (inc e cluding trainin support and commerc aspects) or service support ng, a cial ) s supp pliers in case of non-com e mmercial tools s o Identification of internal requi irements for coaching an mentoring in the use o the tool nd of o Evaluation of training needs considering the current test teams te automatio skills est on o Estim mation of a c cost-benefit r ratio based o a concrete business ca on e ase Introducing the selec cted tool into an organiza ation starts with a pilot pro w oject, which has the follow wing objective es: o Lear more deta about the t rn ail tool o Evaluate how the tool fits with existing pr e rocesses and practices, a determin what would d and ne need to change d o Deci on standard ways of using, mana ide aging, storing and maintaining the too and the tes g ol st asse (e.g., dec ets ciding on nam ming convent tions for files and tests, c s creating libraries and defining the m modularity of test suites) f o Asse whether the benefits will be achie ess eved at reaso onable cost Success factors for the deployme of the too within an organization include: s ent ol o o Rolling out the to to the res of the orga ool st anization incr rementally o Adap pting and improving proc cesses to fit w the use of the tool with o Prov viding training and coaching/mentorin for new us g ng sers o Defin ning usage g guidelines o Implementing a w to gathe usage information from the actual u way er m use o Monitoring tool u and bene use efits o Prov viding suppor for the test team for a g rt t given tool o Gath hering lesson learned fro all teams ns om s

References
6.2.2 Buwalda, 2001; Fewster, 1999
6.3 Fewster, 1999


7. References

Standards
ISTQB Glossary of Terms used in Software Testing, Version 2.1
[CMMI] Chrissis, M.B., Konrad, M. and Shrum, S. (2004) CMMI, Guidelines for Process Integration and Product Improvement, Addison Wesley: Reading, MA
See Section 2.1
[IEEE Std 829-1998] IEEE Std 829 (1998) IEEE Standard for Software Test Documentation
See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
[IEEE 1028] IEEE Std 1028 (2008) IEEE Standard for Software Reviews and Audits
See Section 3.2
[IEEE 12207] IEEE 12207/ISO/IEC 12207-2008, Software life cycle processes
See Section 2.1
[ISO 9126] ISO/IEC 9126-1:2001, Software Engineering – Software Product Quality
See Section 2.3

Books
[Beizer, 1990] Beizer, B. (1990) Software Testing Techniques (2nd edition), Van Nostrand Reinhold: Boston
See Sections 1.2, 1.3, 2.3, 4.2, 4.3, 4.4, 4.6
[Black, 2001] Black, R. (2001) Managing the Testing Process (3rd edition), John Wiley & Sons: New York
See Sections 1.1, 1.2, 1.4, 1.5, 2.3, 2.4, 5.1, 5.2, 5.3, 5.5, 5.6
[Buwalda, 2001] Buwalda, H. et al. (2001) Integrated Test Design and Automation, Addison Wesley: Reading, MA
See Section 6.2
[Copeland, 2004] Copeland, L. (2004) A Practitioner's Guide to Software Test Design, Artech House: Norwood, MA
See Sections 2.2, 2.3, 4.2, 4.3, 4.4, 4.6
[Craig, 2002] Craig, Rick D. and Jaskiel, Stefan P. (2002) Systematic Software Testing, Artech House: Norwood, MA
See Sections 1.4.5, 2.1.3, 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewster, 1999] Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley: Reading, MA
See Sections 6.2, 6.3
[Gilb, 1993] Gilb, Tom and Graham, Dorothy (1993) Software Inspection, Addison Wesley: Reading, MA
See Sections 3.2.2, 3.2.4
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide to Software Testing, QED: Wellesley, MA
See Sections 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 4.1, 5.1, 5.3
[Kaner, 2002] Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing, John Wiley & Sons: New York
See Sections 1.1, 4.5, 5.2

[Myers, 1979] Myers, Glenford J. (1979) The Art of Software Testing, John Wiley & Sons: New York
See Sections 1.2, 1.3, 2.2, 4.3
[van Veenendaal, 2004] van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 6, 8, 10), UTN Publishers: The Netherlands
See Sections 3.2, 3.3


8. Appendix A – Syllabus Background

History of this Document


This document was prepared between 2004 and 2011 by a Working Group comprised of members appointed by the International Software Testing Qualifications Board (ISTQB). It was initially reviewed by a selected review panel, and then by representatives drawn from the international software testing community. The rules used in the production of this document are shown in Appendix C.

This document is the syllabus for the International Foundation Certificate in Software Testing, the first-level international qualification approved by the ISTQB (www.istqb.org).

Objectives of the Foundation Certificate Qualification


o To gain recognition for testing as an essential and professional software engineering specialization
o To provide a standard framework for the development of testers' careers
o To enable professionally qualified testers to be recognized by employers, customers and peers, and to raise the profile of testers
o To promote consistent and good testing practices within all software engineering disciplines
o To identify testing topics that are relevant and of value to industry
o To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy
o To provide an opportunity for testers and those with an interest in testing to acquire an internationally recognized qualification in the subject

Objectives of the International Qualification (adapted from the ISTQB meeting at Sollentuna, November 2001)
o To be able to compare testing skills across different countries
o To enable testers to move across country borders more easily
o To enable multinational/international projects to have a common understanding of testing issues
o To increase the number of qualified testers worldwide
o To have more impact/value as an internationally-based initiative than from any country-specific approach
o To develop a common international body of understanding and knowledge about testing through the syllabus and terminology, and to increase the level of knowledge about testing for all participants
o To promote testing as a profession in more countries
o To enable testers to gain a recognized qualification in their native language
o To enable sharing of knowledge and resources across countries
o To provide international recognition of testers and this qualification due to participation from many countries

Entry Requirements for this Qualification


The entry criterion for taking the ISTQB Foundation Certificate in Software Testing examination is that candidates have an interest in software testing. However, it is strongly recommended that candidates also:
o Have at least a minimal background in either software development or software testing, such as six months of experience as a system or user acceptance tester or as a software developer
o Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized National Boards)

Background and History of the Foundation Certificate in Software Testing


The independent certification of software testers began in the UK with the British Computer Society's Information Systems Examination Board (ISEB), when a Software Testing Board was set up in 1998 (www.bcs.org.uk/iseb). In 2002, ASQF in Germany began to support a German tester qualification scheme (www.asqf.de). This syllabus is based on the ISEB and ASQF syllabi; it includes reorganized, updated and additional content, and the emphasis is directed at topics that will provide the most practical help to testers.

An existing Foundation Certificate in Software Testing (e.g., from ISEB, ASQF or an ISTQB-recognized National Board) awarded before this International Certificate was released will be deemed to be equivalent to the International Certificate. The Foundation Certificate does not expire and does not need to be renewed. The date it was awarded is shown on the Certificate.

Within each participating country, local aspects are controlled by a national ISTQB-recognized Software Testing Board. Duties of National Boards are specified by the ISTQB, but are implemented within each country. The duties of the country boards are expected to include accreditation of training providers and the setting of exams.


9. Appendix B – Learning Objectives/Cognitive Level of Knowledge


The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus will be examined according to the learning objective for it.

Level 1: Remember (K1)


The candidate will recognize, remember and recall a term or concept.
Keywords: Remember, retrieve, recall, recognize, know
Example
Can recognize the definition of "failure" as:
o Non-delivery of service to an end user or any other stakeholder, or
o Actual deviation of the component or system from its expected delivery, service or result

Level 2: Understand (K2)


The candidate can select the reasons or explanations for statements related to the topic, and can summarize, compare, classify, categorize and give examples for the testing concept.
Keywords: Summarize, generalize, abstract, classify, compare, map, contrast, exemplify, interpret, translate, represent, infer, conclude, categorize, construct models
Examples
Can explain the reason why tests should be designed as early as possible:
o To find defects when they are cheaper to remove
o To find the most important defects first
Can explain the similarities and differences between integration and system testing:
o Similarities: testing more than one component, and can test non-functional aspects
o Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing

Level 3: Apply (K3)


The candidate can select the correct application of a concept or technique and apply it to a given context.
Keywords: Implement, execute, use, follow a procedure, apply a procedure
Examples
o Can identify boundary values for valid and invalid partitions
o Can select test cases from a given state transition diagram in order to cover all transitions
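The first example can be made concrete with a short Python sketch. It is illustrative only and not part of the syllabus; the input field and its 1..100 range are hypothetical. The boundary values fall out of the equivalence partitions for the field:

# Hypothetical input field accepting integers from 1 to 100 inclusive.
VALID_MIN, VALID_MAX = 1, 100

# Three equivalence partitions: below range (invalid), in range (valid),
# above range (invalid). Boundary values lie at and just outside the edges.
boundary_values = [
    VALID_MIN - 1,  # 0, invalid: just below the lower boundary
    VALID_MIN,      # 1, valid: lower boundary
    VALID_MAX,      # 100, valid: upper boundary
    VALID_MAX + 1,  # 101, invalid: just above the upper boundary
]

for value in boundary_values:
    expected = "accept" if VALID_MIN <= value <= VALID_MAX else "reject"
    print("input", value, "->", expected)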

Level 4: Analyze (K4)


The candidate can separate information related to a procedure or technique into its constituent parts for better understanding, and can distinguish between facts and inferences. Typical application is to analyze a document, software or project situation and propose appropriate actions to solve a problem or task.
Keywords: Analyze, organize, find coherence, integrate, outline, parse, structure, attribute, deconstruct, differentiate, discriminate, distinguish, focus, select

Examples
o Analyze product risks and propose preventive and corrective mitigation activities
o Describe which portions of an incident report are factual and which are inferred from results

Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon


10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus
The rules listed here were used in the development and review of this syllabus. (A TAG is shown after each rule as a shorthand abbreviation of the rule.)

10.1.1 General Rules


SG1. The syllabus should be understandable and absorbable by people with zero to six months (or more) experience in testing. (6-MONTH)
SG2. The syllabus should be practical rather than theoretical. (PRACTICAL)
SG3. The syllabus should be clear and unambiguous to its intended readers. (CLEAR)
SG4. The syllabus should be understandable to people from different countries, and easily translatable into different languages. (TRANSLATABLE)
SG5. The syllabus should use American English. (AMERICAN-ENGLISH)

10.1.2 Current Content


SC1. The syllabus should include recent testing concepts and should reflect current best practices in software testing where this is generally agreed. The syllabus is subject to review every three to five years. (RECENT)
SC2. The syllabus should minimize time-related issues, such as current market conditions, to enable it to have a shelf life of three to five years. (SHELF-LIFE)

10.1.3 Learning Objectives


LO1. Learning objectives should distinguish between items to be recognized/remembered (cognitive level K1), items the candidate should understand conceptually (K2), items the candidate should be able to practice/use (K3), and items the candidate should be able to use to analyze a document, software or project situation in context (K4). (KNOWLEDGE-LEVEL)
LO2. The description of the content should be consistent with the learning objectives. (LO-CONSISTENT)
LO3. To illustrate the learning objectives, sample exam questions for each major section should be issued along with the syllabus. (LO-EXAM)

10.1.4 Overall Structure


ST1. The structure of the syllabus should be clear and allow cross-referencing to and from other parts, from exam questions and from other relevant documents. (CROSS-REF)
ST2. Overlap between sections of the syllabus should be minimized. (OVERLAP)
ST3. Each section of the syllabus should have the same structure. (STRUCTURE-CONSISTENT)
ST4. The syllabus should contain version, date of issue and page number on every page. (VERSION)
ST5. The syllabus should include a guideline for the amount of time to be spent in each section (to reflect the relative importance of each topic). (TIME-SPENT)

References
SR1. Sources and references will be given for concepts in the syllabus to help training providers find out more information about the topic. (REFS)
SR2. Where there are no readily identified and clear sources, more detail should be provided in the syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus. (NON-REF DETAIL)


Sources of Information


Terms used in this syllabus are defined in the ISTQB Glossary of Terms used in Software Testing. A version of the Glossary is available from ISTQB.

A list of recommended books on software testing is also issued in parallel with this syllabus. The main book list is part of the References section.


11. Appendix D – Notice to Training Providers

Each major subject heading in the syllabus is assigned an allocated time in minutes. The purpose of this is both to give guidance on the relative proportion of time to be allocated to each section of an accredited course, and to give an approximate minimum time for the teaching of each section. Training providers may spend more time than is indicated, and candidates may spend more time again in reading and research. A course curriculum does not have to follow the same order as the syllabus.

The syllabus contains references to established standards, which must be used in the preparation of training material. Each standard used must be the version quoted in the current version of this syllabus. Other publications, templates or standards not referenced in this syllabus may also be used and referenced, but will not be examined.

All K3 and K4 Learning Objectives require a practical exercise to be included in the training materials.


12. Appendix E – Release Notes

Release 2010
1. Changes to Learning Objectives (LOs) include some clarification:
a. Wording changed for the following LOs (content and level of LO remain unchanged): LO-1.2.2, LO-1.3.1, LO-1.4.1, LO-1.5.1, LO-2.1.1, LO-2.1.3, LO-2.4.2, LO-4.1.3, LO-4.2.1, LO-4.2.2, LO-4.3.1, LO-4.3.2, LO-4.3.3, LO-4.4.1, LO-4.4.2, LO-4.4.3, LO-4.6.1, LO-5.1.2, LO-5.2.2, LO-5.3.2, LO-5.3.3, LO-5.5.2, LO-5.6.1, LO-6.1.1, LO-6.2.2, LO-6.3.2
b. LO-1.1.5 has been reworded and upgraded to K2, because a comparison of defect-related terms can be expected
c. LO-1.2.3 (K2) has been added; the content was already covered in the 2007 syllabus
d. LO-3.1.3 (K2) now combines the content of LO-3.1.3 and LO-3.1.4
e. LO-3.1.4 has been removed from the 2010 syllabus, as it is partially redundant with LO-3.1.3
f. LO-3.2.1 has been reworded for consistency with the 2010 syllabus content
g. LO-3.3.2 has been modified, and its level has been changed from K1 to K2, for consistency with LO-3.1.2
h. LO-4.4.4 has been modified for clarity and has been changed from a K3 to a K4. Reason: LO-4.4.4 had already been written in a K4 manner
i. LO-6.1.2 (K1) was dropped from the 2010 syllabus and was replaced with LO-6.1.3 (K2). There is no LO-6.1.2 in the 2010 syllabus
2. Consistent use of "test approach" according to the definition in the glossary. The term "test strategy" will not be required as a term to recall
3. Chapter 1.4 now contains the concept of traceability between test basis and test cases
4. Chapter 2.x now contains test objects and test basis
5. "Re-testing" is now the main term in the glossary instead of "confirmation testing"
6. The aspect of data quality and testing has been added at several locations in the syllabus: data quality and risk in Chapters 2.2, 5.5, 6.1.8
7. Chapter 5.2.3: Entry Criteria added as a new subchapter. Reason: consistency with Exit Criteria (entry criteria added to LO-5.2.9)
8. Consistent use of the terms "test strategy" and "test approach" with their definition in the glossary
9. Chapter 6.1 shortened, because the tool descriptions were too large for a 45-minute lesson
10. IEEE Std 829:2008 has been released. This version of the syllabus does not yet consider this new edition. Section 5.2 refers to the document Master Test Plan. The content of the Master Test Plan is covered by the concept that the document Test Plan covers different levels of planning: test plans for the test levels can be created, as well as a test plan on the project level covering multiple test levels. The latter is named Master Test Plan in this syllabus and in the ISTQB Glossary
11. Code of Ethics has been moved from the CTAL to CTFL

Release 2011
Changes made with the maintenance release 2011:
1. General: "Working Party" replaced by "Working Group"
2. Replaced "post-conditions" with "postconditions" in order to be consistent with the ISTQB Glossary 2.1
3. First occurrence: "ISTQB" replaced by "ISTQB®"
4. Introduction to this Syllabus: descriptions of Cognitive Levels of Knowledge removed, because this was redundant with Appendix B

5. Section 1.6: because the intent was not to define a Learning Objective for the Code of Ethics, the cognitive level for the section has been removed
6. Sections 2.2.1, 2.2.2, 2.2.3, 2.2.4 and 3.2.3: fixed formatting issues in lists
7. Section 2.2.2: the word "failure" was not correct for "isolate failures to a specific component". Therefore replaced with "defect" in that sentence
8. Section 2.3: corrected formatting of the bullet list of test objectives related to test terms in section Test Types (K2)
9. Section 2.3.4: updated description of debugging to be consistent with Version 2.1 of the ISTQB Glossary
10. Section 2.4: removed the word "extensive" from "includes extensive regression testing", because the "extensive" depends on the change (size, risks, value, etc.), as written in the next sentence
11. Section 3.2: the word "including" has been removed to clarify the sentence
12. Section 3.2.1: because the activities of a formal review had been incorrectly formatted, the review process had 12 main activities instead of six, as intended. It has been changed back to six, which makes this section compliant with the Syllabus 2007 and the ISTQB Advanced Level Syllabus 2007
13. Section 4: the word "developed" replaced by "defined", because test cases get defined and not developed
14. Section 4.2: text changed to clarify how black-box and white-box testing could be used in conjunction with experience-based techniques
15. Section 4.3.5: text changed from "..between actors, including users and the system.." to "between actors (users or systems), ..."
16. Section 4.3.5: "alternative path" replaced by "alternative scenario"
17. Section 4.4.2: in order to clarify the term "branch testing" in the text of Section 4.4, a sentence describing the focus of branch testing has been changed
18. Section 4.5, Section 5.2.6: the term "experienced-based testing" has been replaced by the correct term "experience-based"
19. Section 6.1: heading "6.1.1 Understanding the Meaning and Purpose of Tool Support for Testing (K2)" replaced by "6.1.1 Tool Support for Testing (K2)"
20. Section 7 / Books: the 3rd edition of [Black, 2001] listed, replacing the 2nd edition
21. Appendix D: chapters requiring exercises have been replaced by the generic requirement that all Learning Objectives K3 and higher require exercises. This is a requirement specified in the ISTQB Accreditation Process (Version 1.26)
22. Appendix E: the changed learning objectives between Versions 2007 and 2010 are now correctly listed


13. Index
action word, alpha testing, architecture, archiving, automation
benefits of independence, benefits of using tool, beta testing, black-box technique, black-box test design technique, black-box testing, bottom-up, boundary value analysis, bug
captured script, checklists, choosing test techniques, code coverage, commercial off-the-shelf (COTS), compiler, complexity, component integration testing, component testing, configuration management, configuration management tool, confirmation testing, contract acceptance testing, control flow, coverage, coverage tool, custom-developed software
data flow, data-driven approach, data-driven testing, debugging, debugging tool, decision coverage, decision table testing, decision testing, defect, defect density, defect tracking tool, development, development model, drawbacks of independence, driver, dynamic analysis tool, dynamic testing
emergency change, enhancement, entry criteria, equivalence partitioning, error, error guessing, exhaustive testing, exit criteria, expected result, experience-based technique, experience-based test design technique, exploratory testing
factory acceptance testing, failure, failure rate, fault, fault attack, field testing, follow-up, formal review, functional requirement, functional specification, functional task, functional test, functional testing, functionality
impact analysis, incident, incident logging, incident management, incident management tool, incident report, independence, informal review, inspection, inspection leader, integration, integration testing, interoperability testing, introducing a tool into an organization, ISO 9126, iterative-incremental development model
keyword-driven approach, keyword-driven testing, kick-off
learning objective, load testing, load testing tool
maintainability testing, maintenance testing, management tool, maturity, metric, mistake, modelling tool, moderator, monitoring tool
non-functional requirement, non-functional testing
objectives for testing, off-the-shelf, operational acceptance testing, operational test
patch, peer review, performance testing, performance testing tool, pesticide paradox, portability testing, probe effect, procedure, product risk, project risk, prototyping
quality
rapid application development (RAD), Rational Unified Process (RUP), recorder, regression testing, regulation acceptance testing, reliability, reliability testing, requirement, requirements management tool, requirements specification, responsibilities, re-testing (see confirmation testing), review, review tool, reviewer, risk, risk-based approach, risk-based testing, risks of using tool, robustness testing, roles, root cause
scribe, scripting language, security, security testing, security tool, simulators, site acceptance testing, software development, software development model, special considerations for some types of tool, specification-based technique, specification-based testing, stakeholders, state transition testing, statement coverage, statement testing, static analysis, static analysis tool, static technique, static testing, stress testing, stress testing tool, structural testing, structure-based technique, structure-based test design technique, structure-based testing, stub, success factors, system integration testing, system testing
technical review, test analysis, test approach, test basis, test case, test case specification, test closure, test condition, test control, test coverage, test data, test data preparation tool, test design, test design specification, test design technique, test design tool, Test Development Process, test effort, test environment, test estimation, test execution, test execution schedule, test execution tool, test harness, test implementation, test leader, test leader tasks, test level, test log, test management, test management tool, test manager, test monitoring, test objective, test oracle, test organization, test plan, test planning, test planning activities, test procedure, test procedure specification, test progress monitoring, test report, test reporting, test script, test strategy, test suite, test summary report, test tool classification, test type, test-driven development, test-first approach, tester, tester tasks, testing and quality, testing principles, testware, tool support for management of testing and tests, tool support for performance and monitoring, tool support for static testing, tool support for test execution and logging, tool support for test specification, tool support for testing, top-down, traceability, transaction processing sequences, types of test tool
unit test framework, unit test framework tool, upgrades, usability, usability testing, use case testing, use cases, user acceptance testing
validation, verification, version control, V-model
walkthrough, white-box test design technique, white-box testing
