
Certified Tester
Foundation Level Syllabus

Released Version 2011

International Software Testing Qualifications Board


Copyright Notice

This document may be copied in its entirety, or extracts made, if the source is acknowledged.

Copyright © International Software Testing Qualifications Board (hereinafter called ISTQB). ISTQB is a registered trademark of the International Software Testing Qualifications Board.

Copyright © 2011 the authors for the update 2011 (Thomas Müller (chair), Debra Friedenberg, and the ISTQB WG Foundation Level)
Copyright © 2010 the authors for the update 2010 (Thomas Müller (chair), Armin Beer, Martin Klonk, Rahul Verma)
Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra Friedenberg and Erik van Veenendaal)
Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).

All rights reserved.

The authors hereby transfer the copyright to the International Software Testing Qualifications Board (ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have agreed to the following conditions of use:
1) Any individual or training company may use this syllabus as the basis for a training course if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus, and provided that any advertisement of such a training course may mention the syllabus only after submission for official accreditation of the training materials to an ISTQB recognized National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or other derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or its translation) to other parties.

Version 2011
International Software Testing Qualifications Board
31-Mar-2011


Revision History

Version      Date Effective   Remarks
ISTQB 2011   01-Apr-2011      Certified Tester Foundation Level Syllabus Maintenance Release; see Appendix E, Release Notes
ISTQB 2010   30-Mar-2010      Certified Tester Foundation Level Syllabus Maintenance Release; see Appendix E, Release Notes
ISTQB 2007   01-May-2007      Certified Tester Foundation Level Syllabus Maintenance Release
ISTQB 2005   01-Jul-2005      Certified Tester Foundation Level Syllabus
ASQF V2.2    Jul-2003         ASQF Syllabus Foundation Level Version 2.2 ("Lehrplan Grundlagen des Software-testens")
ISEB V2.0    25-Feb-1999      ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999



Table of Contents

Acknowledgements
Introduction to this Syllabus
    Purpose of this Document
    The Certified Tester Foundation Level in Software Testing
    Learning Objectives/Cognitive Level of Knowledge
    The Examination
    Accreditation
    Level of Detail
    How this Syllabus is Organized
1. Fundamentals of Testing (K2)
    1.1 Why is Testing Necessary (K2)
        1.1.1 Software Systems Context (K1)
        1.1.2 Causes of Software Defects (K2)
        1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)
        1.1.4 Testing and Quality (K2)
        1.1.5 How Much Testing is Enough? (K2)
    1.2 What is Testing? (K2)
    1.3 Seven Testing Principles (K2)
    1.4 Fundamental Test Process (K1)
        1.4.1 Test Planning and Control (K1)
        1.4.2 Test Analysis and Design (K1)
        1.4.3 Test Implementation and Execution (K1)
        1.4.4 Evaluating Exit Criteria and Reporting (K1)
        1.4.5 Test Closure Activities (K1)
    1.5 The Psychology of Testing (K2)
    1.6 Code of Ethics
2. Testing Throughout the Software Life Cycle (K2)
    2.1 Software Development Models (K2)
        2.1.1 V-model (Sequential Development Model) (K2)
        2.1.2 Iterative-incremental Development Models (K2)
        2.1.3 Testing within a Life Cycle Model (K2)
    2.2 Test Levels (K2)
        2.2.1 Component Testing (K2)
        2.2.2 Integration Testing (K2)
        2.2.3 System Testing (K2)
        2.2.4 Acceptance Testing (K2)
    2.3 Test Types (K2)
        2.3.1 Testing of Function (Functional Testing) (K2)
        2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
        2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)
        2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)
    2.4 Maintenance Testing (K2)
3. Static Techniques (K2)
    3.1 Static Techniques and the Test Process (K2)
    3.2 Review Process (K2)
        3.2.1 Activities of a Formal Review (K1)
        3.2.2 Roles and Responsibilities (K1)
        3.2.3 Types of Reviews (K2)
        3.2.4 Success Factors for Reviews (K2)
    3.3 Static Analysis by Tools (K2)
4. Test Design Techniques (K4)
    4.1 The Test Development Process (K3)
    4.2 Categories of Test Design Techniques (K2)
    4.3 Specification-based or Black-box Techniques (K3)
        4.3.1 Equivalence Partitioning (K3)
        4.3.2 Boundary Value Analysis (K3)
        4.3.3 Decision Table Testing (K3)
        4.3.4 State Transition Testing (K3)
        4.3.5 Use Case Testing (K2)
    4.4 Structure-based or White-box Techniques (K4)
        4.4.1 Statement Testing and Coverage (K4)
        4.4.2 Decision Testing and Coverage (K4)
        4.4.3 Other Structure-based Techniques (K1)
    4.5 Experience-based Techniques (K2)
    4.6 Choosing Test Techniques (K2)
5. Test Management (K3)
    5.1 Test Organization (K2)
        5.1.1 Test Organization and Independence (K2)
        5.1.2 Tasks of the Test Leader and Tester (K1)
    5.2 Test Planning and Estimation (K3)
        5.2.1 Test Planning (K2)
        5.2.2 Test Planning Activities (K3)
        5.2.3 Entry Criteria (K2)
        5.2.4 Exit Criteria (K2)
        5.2.5 Test Estimation (K2)
        5.2.6 Test Strategy, Test Approach (K2)
    5.3 Test Progress Monitoring and Control (K2)
        5.3.1 Test Progress Monitoring (K1)
        5.3.2 Test Reporting (K2)
        5.3.3 Test Control (K2)
    5.4 Configuration Management (K2)
    5.5 Risk and Testing (K2)
        5.5.1 Project Risks (K2)
        5.5.2 Product Risks (K2)
    5.6 Incident Management (K3)
6. Tool Support for Testing (K2)
    6.1 Types of Test Tools (K2)
        6.1.1 Tool Support for Testing (K2)
        6.1.2 Test Tool Classification (K2)
        6.1.3 Tool Support for Management of Testing and Tests (K1)
        6.1.4 Tool Support for Static Testing (K1)
        6.1.5 Tool Support for Test Specification (K1)
        6.1.6 Tool Support for Test Execution and Logging (K1)
        6.1.7 Tool Support for Performance and Monitoring (K1)
        6.1.8 Tool Support for Specific Testing Needs (K1)
    6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
        6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
        6.2.2 Special Considerations for Some Types of Tools (K1)
    6.3 Introducing a Tool into an Organization (K1)
7. References
    Standards
    Books
8. Appendix A Syllabus Background
    History of this Document
    Objectives of the Foundation Certificate Qualification
    Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
    Entry Requirements for this Qualification
    Background and History of the Foundation Certificate in Software Testing
9. Appendix B Learning Objectives/Cognitive Level of Knowledge
    Level 1: Remember (K1)
    Level 2: Understand (K2)
    Level 3: Apply (K3)
    Level 4: Analyze (K4)
10. Appendix C Rules Applied to the ISTQB Foundation Syllabus
    10.1.1 General Rules
    10.1.2 Current Content
    10.1.3 Learning Objectives
    10.1.4 Overall Structure
11. Appendix D Notice to Training Providers
12. Appendix E Release Notes
    Release 2010
    Release 2011
13. Index



Acknowledgements

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquier, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal) and all National Boards for the suggestions for the current version of the syllabus.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2010): Thomas Müller (chair), Rahul Verma, Martin Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pederson, Debra Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie Ulrich, Pete Williams, Erik van Veenendaal) and all National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2007): Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2005): Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal, and the review team and all National Boards for their suggestions.



Introduction to this Syllabus

Purpose of this Document

This syllabus forms the basis for the International Software Testing Qualification at the Foundation Level. The International Software Testing Qualifications Board (ISTQB) provides it to the National Boards for them to accredit the training providers and to derive examination questions in their local language. Training providers will determine appropriate teaching methods and produce courseware for accreditation. The syllabus will help candidates in their preparation for the examination. Information on the history and background of the syllabus can be found in Appendix A.

The Certified Tester Foundation Level in Software Testing

The Foundation Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This Foundation Level qualification is also appropriate for anyone who wants a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants. Holders of the Foundation Certificate will be able to go on to a higher-level software testing qualification.

Learning Objectives/Cognitive Level of Knowledge

Learning objectives are indicated for each section in this syllabus and classified as follows:
o K1: remember
o K2: understand
o K3: apply
o K4: analyze

Further details and examples of learning objectives are given in Appendix B.
All terms listed under "Terms" just below chapter headings shall be remembered (K1), even if not explicitly mentioned in the learning objectives.

The Examination

The Foundation Level Certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable.
The format of the examination is multiple choice. Exams may be taken as part of an accredited training course or taken independently (e.g., at an examination center or in a public exam). Completion of an accredited training course is not a prerequisite for the exam.

Accreditation

An ISTQB National Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to have an ISTQB examination as part of the course.
Further guidance for training providers is given in Appendix D.



Level of Detail

The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the Foundation Level
o A list of information to teach, including a description, and references to additional sources if required
o Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
o A list of terms that students must be able to recall and understand
o A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.

How this Syllabus is Organized

There are six major chapters. The top-level heading for each chapter shows the highest level of learning objectives that is covered within the chapter and specifies the time for the chapter. For example:

2. Testing Throughout the Software Life Cycle (K2)    115 minutes

This heading shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2 (but not K3), and it is intended to take 115 minutes to teach the material in the chapter. Within each chapter there are a number of sections. Each section also has the learning objectives and the amount of time required. Subsections that do not have a time given are included within the time for the section.



1. Fundamentals of Testing (K2)    155 minutes

Learning Objectives for Fundamentals of Testing

The objectives identify what you will be able to do following the completion of each module.

1.1 Why is Testing Necessary? (K2)

LO-1.1.1 Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company (K2)
LO-1.1.2 Distinguish between the root cause of a defect and its effects (K2)
LO-1.1.3 Give reasons why testing is necessary by giving examples (K2)
LO-1.1.4 Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality (K2)
LO-1.1.5 Explain and compare the terms error, defect, fault, failure, and the corresponding terms mistake and bug, using examples (K2)

1.2 What is Testing? (K2)

LO-1.2.1 Recall the common objectives of testing (K1)
LO-1.2.2 Provide examples for the objectives of testing in different phases of the software life cycle (K2)
LO-1.2.3 Differentiate testing from debugging (K2)

1.3 Seven Testing Principles (K2)

LO-1.3.1 Explain the seven principles in testing (K2)

1.4 Fundamental Test Process (K1)

LO-1.4.1 Recall the five fundamental test activities and respective tasks from planning to closure (K1)

1.5 The Psychology of Testing (K2)

LO-1.5.1 Recall the psychological factors that influence the success of testing (K1)
LO-1.5.2 Contrast the mindset of a tester and of a developer (K2)



1.1 Why is Testing Necessary (K2)    20 minutes

Terms
Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)

Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

1.1.2 Causes of Software Defects (K2)

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.
Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.
Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.
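The error-defect-failure chain can be sketched in code. The following minimal example is illustrative only (it is not part of the syllabus); the function name and the specific mistake are invented for the purpose of the illustration. It shows a defect that sits silently in the code and only produces a failure when executed with certain inputs:

```python
# Illustrative sketch: a programmer's error (mistake) introduces a
# defect (fault, bug) in the code; the failure appears only when the
# defective code is executed with inputs that expose it.

def average(values):
    """Intended to return the arithmetic mean of a non-empty list."""
    total = 0
    for v in values:
        total += v
    # Defect: the error was hard-coding the divisor instead of
    # using len(values). The defect causes no failure by itself...
    return total / 2

# ...until the code is executed with a list whose length is not 2:
print(average([4, 6]))     # 5.0 -- defect present, yet no failure observed
print(average([1, 2, 3]))  # 3.0 -- failure: the correct mean is 2.0
```

This also illustrates why "not all defects result in failures": every run with a two-element list masks the defect, which is one reason testing with varied inputs is necessary.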

1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.
Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

1.1.4 Testing and Quality (K2)

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see Software Engineering - Software Product Quality (ISO 9126).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).
Version 2011
International Software Testing Qualifications Board
Page 11 of 78
31-Mar-2011


1.1.5 How Much Testing is Enough? (K2)

Deciding how much testing is enough should take account of the level of risk, including technical, safety, and business risks, and project constraints such as time and budget. Risk is discussed further in Chapter 5.

Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.


1.2 What is Testing? (K2)    30 minutes

Terms
Debugging, requirement, review, test case, testing, test objective

Background
A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.

Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.

Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making
o Preventing defects

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.

Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for these activities is usually that testers test and developers debug.

The process of testing and the testing activities are explained in Section 1.4.
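As a minimal, hypothetical illustration of dynamic testing (the discount function and its expected result are invented for this sketch): a test executes the software and compares the actual result with the expected one. If the assertion fails, testing has shown a failure; debugging, a separate development activity, would then locate and remove its cause.

```python
# Hypothetical sketch: dynamic testing executes the software and compares
# actual results with expected results taken from the specification.

def discount(price, percent):
    """Component under test: reduce price by the given percentage."""
    return price - price * percent / 100

def test_discount():
    # If this assertion fails, testing has shown a failure. Finding and
    # removing the defect that caused it is debugging (a developer task),
    # and re-running the test afterwards is confirmation testing.
    assert discount(200, 10) == 180

test_discount()
```

The split of responsibilities is visible in the sketch: the test only reveals whether expected and actual results match; it says nothing about where in the code the cause lies.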


1.3 Seven Testing Principles (K2)    35 minutes

Terms
Exhaustive testing

Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1 - Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 - Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.

Principle 3 - Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 - Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5 - Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6 - Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 - Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
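Principle 2 can be made concrete with back-of-the-envelope arithmetic. The numbers below (a function with two 32-bit integer inputs, an assumed rate of one million tests per second) are illustrative assumptions, not figures from the syllabus:

```python
# Back-of-the-envelope illustration of Principle 2: even a tiny function
# taking two 32-bit integer inputs has far too many input combinations
# to test exhaustively.

inputs_per_parameter = 2 ** 32            # possible values of one 32-bit int
combinations = inputs_per_parameter ** 2  # two independent parameters

tests_per_second = 1_000_000              # assumed (optimistic) execution rate
seconds_per_year = 60 * 60 * 24 * 365

years_needed = combinations / tests_per_second / seconds_per_year
print(f"{combinations:.3e} combinations, roughly {years_needed:,.0f} years to run")
```

Even at a million tests per second, running all 2^64 combinations would take hundreds of thousands of years, which is why risk analysis and prioritization are used to select a feasible subset instead.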


1.4 Fundamental Test Process (K1)    35 minutes

Terms
Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background
The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:
o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

1.4.1 Test Planning and Control (K1)

Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.

Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.

Test planning and control tasks are defined in Chapter 5 of this syllabus.

1.4.2 Test Analysis and Design (K1)

Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.

The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level¹ (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases
¹ The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.


1.4.3 Test Implementation and Execution (K1)

Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:
o Finalizing, implementing and prioritizing test cases (including the identification of test data)
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
o Creating test suites from the test procedures for efficient test execution
o Verifying that the test environment has been set up correctly
o Verifying and updating bi-directional traceability between the test basis and test cases
o Executing test procedures either manually or by using test execution tools, according to the planned sequence
o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
o Comparing actual results with expected results
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
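The "execute, log, compare" tasks above can be sketched as a tiny automated harness. All names (`run_suite`, the test-case tuples, the version string) are hypothetical, invented for this illustration:

```python
# Minimal sketch of executing test cases in the planned sequence,
# comparing actual with expected results, and logging each outcome
# together with the identity and version of the software under test.

def run_suite(test_cases, version):
    """test_cases: list of (test_id, function, args, expected_result)."""
    log = []
    for test_id, func, args, expected in test_cases:
        actual = func(*args)
        outcome = "PASS" if actual == expected else "FAIL"
        # a FAIL would be reported as an incident and analyzed for its cause
        log.append({"id": test_id, "version": version,
                    "expected": expected, "actual": actual,
                    "outcome": outcome})
    return log

results = run_suite([("TC-01", abs, (-5,), 5),
                     ("TC-02", abs, (3,), 3)], version="1.0.0")
```

Recording the version alongside each outcome is what later makes confirmation and regression runs comparable: the same test log format can show whether a previously failing test now passes on the fixed build.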

1.4.4 Evaluating Exit Criteria and Reporting (K1)

Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).

Evaluating exit criteria has the following major tasks:
o Checking test logs against the exit criteria specified in test planning
o Assessing if more tests are needed or if the exit criteria specified should be changed
o Writing a test summary report for stakeholders
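Checking test logs against the exit criteria from test planning can be sketched as a simple rule check. The criteria names and thresholds below are illustrative assumptions only, not values prescribed by the syllabus:

```python
# Hypothetical sketch of checking test results against exit criteria
# that were specified during test planning.

exit_criteria = {"min_pass_rate": 0.95, "max_open_critical_incidents": 0}

def exit_criteria_met(passed, failed, open_critical_incidents):
    total = passed + failed
    pass_rate = passed / total if total else 0.0
    return (pass_rate >= exit_criteria["min_pass_rate"]
            and open_critical_incidents
                <= exit_criteria["max_open_critical_incidents"])

print(exit_criteria_met(passed=98, failed=2, open_critical_incidents=0))
print(exit_criteria_met(passed=90, failed=10, open_critical_incidents=1))
```

A check like this only answers whether the criteria are met; deciding whether more tests are needed, or whether the criteria themselves should change, remains a judgment reported to stakeholders in the test summary report.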

1.4.5 Test Closure Activities (K1)

Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.


Test closure activities include the following major tasks:
o Checking which planned deliverables have been delivered
o Closing incident reports or raising change records for any that remain open
o Documenting the acceptance of the system
o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
o Handing over the testware to the maintenance organization
o Analyzing lessons learned to determine changes needed for future releases and projects
o Using the information gathered to improve test maturity


1.5 The Psychology of Testing (K2)    25 minutes

Terms
Error guessing, independence

Background
The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:
o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.

The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.

Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:


o Start with collaboration rather than battles - remind everyone of the common goal of better quality systems
o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it, for example, write objective and factual incident reports and review findings
o Try to understand how the other person feels and why they react as they do
o Confirm that the other person has understood what you have said and vice versa


1.6 Code of Ethics    10 minutes

Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC - Certified software testers shall act consistently with the public interest
CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest
PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible
JUDGMENT - Certified software testers shall maintain integrity and independence in their professional judgment
MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing
PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest
COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers
SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

References
1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988


2. Testing Throughout the Software Life Cycle (K2)    115 minutes

Learning Objectives for Testing Throughout the Software Life Cycle
The objectives identify what you will be able to do following the completion of each module.

2.1 Software Development Models (K2)
LO-2.1.1 Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types (K2)
LO-2.1.2 Recognize the fact that software development models must be adapted to the context of project and product characteristics (K1)
LO-2.1.3 Recall characteristics of good testing that are applicable to any life cycle model (K1)

2.2 Test Levels (K2)
LO-2.2.1 Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g., functional or structural) and related work products, people who test, types of defects and failures to be identified (K2)

2.3 Test Types (K2)
LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example (K2)
LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)
LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements (K2)
LO-2.3.4 Identify and describe test types based on the analysis of a software system's structure or architecture (K2)
LO-2.3.5 Describe the purpose of confirmation testing and regression testing (K2)

2.4 Maintenance Testing (K2)
LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)
LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)


2.1 Software Development Models (K2)    20 minutes

Terms
Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background
Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

2.1.1 V-model (Sequential Development Model) (K2)

Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.

The four levels used in this syllabus are:
o Component (unit) testing
o Integration testing
o System testing
o Acceptance testing

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.

Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or "Software life cycle processes" (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

2.1.2 Iterative-incremental Development Models (K2)

Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

2.1.3 Testing within a Life Cycle Model (K2)

In any life cycle model, there are several characteristics of good testing:
o For every development activity there is a corresponding testing activity
o Each test level has test objectives specific to that level
o The analysis and design of tests for a given test level should begin during the corresponding development activity
o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g.,

integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).


2.2 Test Levels (K2)    40 minutes

Terms
Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background
For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.

Testing a system's configuration data shall be considered during test planning.

2.2.1 Component Testing (K2)

Test basis:
o Component requirements
o Detailed design
o Code

Typical test objects:
o Components
o Programs
o Data conversion / migration programs
o Database modules

Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.

Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
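A component test written with a unit test framework might look like the following sketch, which uses Python's built-in unittest module; the leap-year component and the specification it is tested against are invented for illustration:

```python
# Hypothetical component test using a unit test framework (Python's
# built-in unittest). Test cases are derived from the component's
# specification of the leap-year rule.

import unittest

def leap_year(year):
    """Component under test: year is a leap year if divisible by 4,
    except centuries, which must be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearComponentTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_four_hundred_years_is_leap(self):
        self.assertTrue(leap_year(2000))

# Run the component tests programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearComponentTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a test-first (test-driven) style, the three test methods would be written before `leap_year` itself, and the implementation iterated until the suite passes.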


2.2.2 Integration Testing (K2)

Test bas sis: o Softw ware and sys stem design o Arch hitecture o Workflows o Use cases Typical test t objects: o Subs systems o Data abase implem mentation o Infra astructure o Inter rfaces o Syst tem configura ation and configuration data d Integratio on testing tests interfaces between components, interactions with differen nt parts of a system, such as the operating sy ystem, file sy ystem and ha ardware, and interfaces b between syst tems. There may be more than t one lev vel of integrat tion testing and a it may be e carried out on test objec cts of varying size s as follow ws: 1. Com mponent integ gration testin ng tests the in nteractions between b softw ware compo onents and is s done after r component testing 2. Syst tem integratio on testing tests the intera actions betwe een different t systems or between hard dware and so oftware and may m be done e after system m testing. In this case, the developing g orga anization may y control only y one side of f the interface. This migh ht be conside ered as a risk k. Business proces sses impleme ented as wor rkflows may involve a ser ries of system ms. Cross-platform issue es may be si ignificant. The grea ater the scop pe of integrat tion, the more difficult it becomes b to is solate defect ts to a specif fic compone ent or system m, which may y lead to incr reased risk and a additiona al time for tro oubleshooting g. Systema atic integratio on strategies may be bas sed on the sy ystem archite ecture (such as top-down n and bottom-u up), functiona al tasks, tran nsaction proc cessing sequ uences, or so ome other as spect of the system s or compo onents. In or rder to ease fault isolation n and detect t defects earl ly, integration n should nor rmally be increm mental rathe er than big bang. Testing of o specific no on-functional l characterist tics (e.g., performance) may m be includ ded in integr ration testing as a well as fun nctional testin ng. 
At each stage of inte egration, teste ers concentr rate solely on n the integrat tion itself. Fo or example, if f they are integ grating modu ule A with mo odule B they are intereste ed in testing the commun nication betw ween the modu ules, not the functionality y of the indivi idual module e as that was s done during g component t testing. Both B function nal and struc ctural approaches may be e used. Ideally, testers t should understand d the archite ecture and inf fluence integ gration plann ning. If integra ation tests are e planned before compon nents or syste ems are built, those com mponents can n be built in th he order req quired for mo ost efficient testing. t
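The focus on the communication between modules, rather than on each module's internals, can be sketched as follows. All names here (a key-value store and a report builder) are invented for illustration; this is a minimal sketch of a component integration test, not part of the syllabus.

```python
# Hypothetical components: a key-value store and a report builder that
# depends on it. The tests exercise only the interface between the two;
# each component's own logic was already covered by component testing.

class InMemoryStore:
    """Stand-in for a storage component."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows.get(key)   # None signals "not found" at the interface


class ReportBuilder:
    """Component that talks to the store through its interface."""
    def __init__(self, store):
        self._store = store

    def build(self, key):
        value = self._store.load(key)
        return "missing" if value is None else "value={}".format(value)


def test_report_reads_what_store_saved():
    # Targets the communication between the modules only.
    store = InMemoryStore()
    store.save("total", 42)
    assert ReportBuilder(store).build("total") == "value=42"


def test_report_handles_missing_key():
    # Checks the agreed "not found" convention across the interface.
    assert ReportBuilder(InMemoryStore()).build("absent") == "missing"


test_report_reads_what_store_saved()
test_report_handles_missing_key()
```

Either component could pass its own component tests and still fail these checks, for example if the two sides disagree about how "not found" is signaled; that interface mismatch is exactly what component integration testing is meant to catch.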

Version 2011, 31-Mar-2011

Certif fied Teste er


Founda ation Level Sy yllabus

International Software Te esting Q Qualifications s Board

2.2.3 System Testing (K2)

Test basis:
o System and software requirement specification
o Use cases
o Functional specification
o Risk analysis reports

Typical test objects:
o System, user and operation manuals
o System configuration and configuration data

System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.

System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).

An independent test team often carries out system testing.
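A decision table built from business rules can be turned directly into test cases, one per column. The discount rule below is invented for illustration; a minimal sketch, not part of the syllabus:

```python
# Made-up business rule: membership status and order size decide the
# discount. Each row of DECISION_TABLE corresponds to one column of a
# decision table (a combination of conditions and its expected action),
# and becomes one test case.

def discount(is_member, order_total):
    """Hypothetical system behavior under test."""
    if is_member and order_total >= 100:
        return 20
    if is_member:
        return 10
    if order_total >= 100:
        return 5
    return 0

DECISION_TABLE = [
    # is_member, order_total, expected_discount
    (True,  150, 20),
    (True,   50, 10),
    (False, 150,  5),
    (False,  50,  0),
]

for is_member, total, expected in DECISION_TABLE:
    assert discount(is_member, total) == expected, (is_member, total)
```

Because the table enumerates every combination of the two conditions, it makes gaps in the business rules visible before any test is executed.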

2.2.4 Acceptance Testing (K2)

Test basis:
o User requirements
o System requirements
o Use cases
o Business processes
o Risk analysis reports

Typical test objects:
o Business processes on fully integrated system
o Operational and maintenance processes
o User procedures
o Forms
o Reports
o Configuration data

Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.

The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system's readiness for deployment and use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur at various times in the life cycle, for example:
o A COTS software product may be acceptance tested when it is installed or integrated
o Acceptance testing of the usability of a component may be done during component testing
o Acceptance testing of a new functional enhancement may come before system testing

Typical forms of acceptance testing include the following:

User acceptance testing
Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
o Testing of backup/restore
o Disaster recovery
o User management
o Maintenance tasks
o Data load and migration tasks
o Periodic checks of security vulnerabilities

Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.

Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization's site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.

Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.
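An acceptance test is typically phrased as a business scenario run against the fully integrated system, with the goal of building confidence rather than hunting for defects. The order workflow below is entirely invented; a minimal sketch of the idea:

```python
# Hypothetical fully integrated system under acceptance test: a tiny
# order workflow. The scenario checks that a typical business process
# (place an order, ship it, see the status) runs end to end.

class OrderSystem:
    def __init__(self):
        self.orders = {}
        self._next_id = 1

    def place_order(self, item, quantity):
        order_id = self._next_id
        self._next_id += 1
        self.orders[order_id] = {"item": item,
                                 "quantity": quantity,
                                 "status": "placed"}
        return order_id

    def ship_order(self, order_id):
        self.orders[order_id]["status"] = "shipped"

    def status(self, order_id):
        return self.orders[order_id]["status"]


# Scenario: a business user places an order and later sees it shipped.
system = OrderSystem()
order_id = system.place_order("widget", quantity=3)
assert system.status(order_id) == "placed"    # confirmation shown to the user
system.ship_order(order_id)
assert system.status(order_id) == "shipped"   # process completed as expected
```

In practice such scenarios are often written in business language (given/when/then) and executed or witnessed by the users themselves rather than by developers.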

2.3 Test Types (K2)
40 minutes

Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing

Background
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.

A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)

A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).

2.3.1 Testing of Function (Functional Testing) (K2)

The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does.

Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).

Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).

A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)

Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of "how" the system works.

Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in "Software Engineering - Software Product Quality" (ISO 9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.
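A non-functional requirement quantified on a scale, such as a response time, can be checked with a measurement and a threshold. The operation and the 0.5-second requirement below are invented for illustration; real performance testing uses dedicated tooling and realistic load, so this is only a minimal sketch of the idea:

```python
import time

# Measure the response time of an operation and compare it against a
# quantified requirement. Both the operation and the 0.5 s threshold
# are hypothetical.

def operation_under_test():
    return sum(range(100000))

def measure_response_time(func):
    start = time.perf_counter()
    func()
    return time.perf_counter() - start

elapsed = measure_response_time(operation_under_test)
assert elapsed < 0.5, "response time requirement not met: %.3f s" % elapsed
```

The essential point is that the requirement is expressed as a number on a scale, so the test can pass or fail objectively rather than by inspection.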

2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)

Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.

Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed to increase coverage. Coverage techniques are covered in Chapter 4.

At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy. Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).
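The difference between covering statements and covering decision outcomes can be seen on a single if-statement. The function below is an invented example; a minimal sketch of the distinction:

```python
# One decision (the if), with no else branch.

def apply_cap(value, cap):
    if value > cap:      # the only decision
        value = cap      # statement inside the branch
    return value

# Test 1 alone executes every statement (100% statement coverage) but
# exercises only the True outcome of the decision (50% decision coverage).
assert apply_cap(10, 5) == 5      # decision outcome: True

# Adding Test 2 exercises the False outcome as well, reaching 100%
# decision coverage.
assert apply_cap(3, 5) == 3       # decision outcome: False
```

This is why 100% statement coverage does not imply 100% decision coverage: a branch with no else can be "fully" executed without the decision ever evaluating to False.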

2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation. Debugging (locating and fixing a defect) is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.

Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.

Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
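An automated regression suite is, at its core, a fixed set of repeatable checks with expected results captured from previously working behavior, rerun unchanged after every modification. The function and data below are invented for illustration; a minimal sketch:

```python
# Hypothetical function under maintenance: trims whitespace and
# title-cases a name.

def normalize(name):
    return " ".join(name.split()).title()

# Regression suite: (input, expected) pairs recorded while the software
# was known to work. Rerun after each change to detect defects
# introduced or uncovered by the change.
REGRESSION_SUITE = [
    ("ada lovelace", "Ada Lovelace"),
    ("  grace   hopper ", "Grace Hopper"),
    ("ALAN TURING", "Alan Turing"),
]

def run_regression_suite():
    """Return the list of failing cases; empty list means no regressions."""
    return [(given, expected, normalize(given))
            for given, expected in REGRESSION_SUITE
            if normalize(given) != expected]

assert run_regression_suite() == []   # previously passing tests still pass
```

Because the suite is deterministic and cheap to rerun, it can be executed on every change, which is exactly the property that makes regression testing such a strong candidate for automation.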

2.4 Maintenance Testing (K2)
15 minutes

Terms
Impact analysis, maintenance testing

Background
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.

Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.

Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.

In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types.

Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.

Maintenance testing can be difficult if specifications are out of date or missing, or testers with domain knowledge are not available.

References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE STD 829-1998
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE STD 829-1998

3. Static Techniques (K2)
60 minutes

Learning Objectives for Static Techniques
The objectives identify what you will be able to do following the completion of each module.

3.1 Static Techniques and the Test Process (K2)
LO-3.1.1 Recognize software work products that can be examined by the different static techniques (K1)
LO-3.1.2 Describe the importance and value of considering static techniques for the assessment of software work products (K2)
LO-3.1.3 Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle (K2)

3.2 Review Process (K2)
LO-3.2.1 Recall the activities, roles and responsibilities of a typical formal review (K1)
LO-3.2.2 Explain the differences between different types of reviews: informal review, technical review, walkthrough and inspection (K2)
LO-3.2.3 Explain the factors for successful performance of reviews (K2)

3.3 Static Analysis by Tools (K2)
LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to reviews and dynamic testing (K1)
LO-3.3.2 Describe, using examples, the typical benefits of static analysis (K2)
LO-3.3.3 List typical code and design defects that may be identified by static analysis tools (K1)

3.1 Static Techniques and the Test Process (K2)
15 minutes

Terms
Dynamic testing, static testing

Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.

Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.

A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.

Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.

Reviews, static analysis and dynamic testing have the same objective: identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.

Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.

3.2 Review Process (K2)
25 minutes

Terms
Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background
The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.

The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).

3.2.1 Activities of a Formal Review (K1)

A typical formal review has the following main activities:
1. Planning
   - Defining the review criteria
   - Selecting the personnel
   - Allocating roles
   - Defining the entry and exit criteria for more formal review types (e.g., inspections)
   - Selecting which parts of documents to review
   - Checking entry criteria (for more formal review types)
2. Kick-off
   - Distributing documents
   - Explaining the objectives, process and documents to the participants
3. Individual preparation
   - Preparing for the review meeting by reviewing the document(s)
   - Noting potential defects, questions and comments
4. Examination/evaluation/recording of results (review meeting)
   - Discussing or logging, with documented results or minutes (for more formal review types)
   - Noting defects, making recommendations regarding handling the defects, making decisions about the defects
   - Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
5. Rework
   - Fixing defects found (typically done by the author)
   - Recording updated status of defects (in formal reviews)
6. Follow-up
   - Checking that defects have been addressed
   - Gathering metrics
   - Checking on exit criteria (for more formal review types)

3.2.2 Roles and Responsibilities (K1)

A typical formal review will include the roles below:
o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
o Author: the writer or person with chief responsibility for the document(s) to be reviewed.
o Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems may help to uncover previously undetected issues.

3.2.3 Types of Reviews (K2)

A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers
o Main purpose: inexpensive way to get some benefit

Walkthrough
o Meeting led by author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
   - Optional pre-meeting preparation of reviewers
   - Optional preparation of a review report including list of findings
o Optional scribe (who is not the author)
o May vary in practice from quite informal to very formal
o Main purposes: learning, gaining understanding, finding defects

Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards
Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader
o Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a "peer review".

3.2.4 Success Factors for Reviews (K2)

Success factors for reviews include:
o Each review has clear predefined objectives
o The right people for the review objectives are involved
o Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
o There is an emphasis on learning and process improvement

3.3 Static Analysis by Tools (K2)
20 minutes

Terms
Compiler, complexity, control flow, data flow, static analysis

Background
The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.

The value of static analysis is:
o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
o Identification of defects not easily found by dynamic testing
o Detecting dependencies and inconsistencies in software models such as links
o Improved maintainability of code and design
o Prevention of defects, if lessons are learned in development

Typical defects discovered by static analysis tools include:
o Referencing a variable with an undefined value
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
o Syntax violations of code and software models

Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking-in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.

Compilers may offer some support for static analysis, including the calculation of metrics.
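The core idea of static analysis, examining code without executing it, can be sketched in a few lines using Python's standard `ast` module. This toy analyzer flags one of the typical defects listed above (variables that are assigned but never read); real tools do far more, including control-flow and data-flow analysis. The code sample and its defect are invented for illustration:

```python
import ast

# Code to be analyzed; it is parsed, never executed. The "unused"
# variable is a deliberately planted defect.
SOURCE = """
def total(prices):
    unused = 0
    result = sum(prices)
    return result
"""

def unused_variables(source):
    """Flag names that are assigned (Store) but never read (Load)."""
    tree = ast.parse(source)
    assigned, read = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                read.add(node.id)
    return sorted(assigned - read)

assert unused_variables(SOURCE) == ["unused"]
```

Note that the defect is reported before any test is run, and independently of whether `total` would ever fail at runtime: static analysis finds the defect itself, not a failure caused by it.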

References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 van Veenendaal, 2004

4. Test Design Techniques (K4)
285 minutes

Learning Objectives for Test Design Techniques
The objectives identify what you will be able to do following the completion of each module.

4.1 The Test Development Process (K3)
LO-4.1.1 Differentiate between a test design specification, test case specification and test procedure specification (K2)
LO-4.1.2 Compare the terms test condition, test case and test procedure (K2)
LO-4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results (K2)
LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers (K3)

4.2 Categories of Test Design Techniques (K2)
LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each (K1)
LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)

4.3 Specification-based or Black-box Techniques (K3)
LO-4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables (K3)
LO-4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured (K2)
LO-4.3.3 Explain the concept of use case testing and its benefits (K2)

4.4 Structure-based or White-box Techniques (K4)
LO-4.4.1 Describe the concept and value of code coverage (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)
LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)
LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)

4.5 Experience-based Techniques (K2)
LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects (K1)
LO-4.5.2 Compare experience-based techniques with specification-based testing techniques (K2)

4.6 Choosing Test Techniques (K2)
LO-4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics (K2)


4.1 The Test Development Process (K3) - 15 minutes

Terms
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability

Background
The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.

During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).

Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks (see Chapter 5 for more on risk analysis).

During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover a certain test objective(s) or test condition(s). The Standard for Software Test Documentation (IEEE STD 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.

Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.

During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification (IEEE STD 829-1998). The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).

The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
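The artifacts described above can be sketched in code. The following is a minimal, hypothetical model of a test case and a test procedure; the field names and sample cases are illustrative, not mandated by IEEE 829.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    test_condition: str            # traceability back to the condition/requirement
    preconditions: list
    input_values: dict
    expected_result: str           # defined before execution, per the text above
    postconditions: list = field(default_factory=list)

# A test procedure is an ordered sequence of test cases.
procedure = [
    TestCase("TC-01", "REQ-7: reject negative amounts",
             ["account exists"], {"amount": -1}, "error message shown"),
    TestCase("TC-02", "REQ-7: accept valid amounts",
             ["account exists"], {"amount": 100}, "balance updated"),
]

# Traceability: which test conditions does this procedure cover?
covered = {tc.test_condition for tc in procedure}
assert len(covered) == 2
```

A test execution schedule would then order several such procedures, taking priorities and dependencies into account.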


4.2 Categories of Test Design Techniques (K2) - 15 minutes

Terms
Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique

Background
The purpose of a test design technique is to identify test conditions, test cases, and test data.

It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also be combined with experience-based techniques to leverage the experience of developers, testers and users to determine what should be tested.

Some techniques fall clearly into a single category; others have elements of more than one category. This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.

Common characteristics of specification-based test design techniques include:
o Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
o Test cases can be derived systematically from these models

Common characteristics of structure-based test design techniques include:
o Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
o The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage

Common characteristics of experience-based test design techniques include:
o The knowledge and experience of people are used to derive the test cases
o The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
o Knowledge about likely defects and their distribution is another source of information


4.3 Specification-based or Black-box Techniques (K3) - 150 minutes

Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing

4.3.1 Equivalence Partitioning (K3)
In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data, i.e., values that should be accepted, and invalid data, i.e., values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). Tests can be designed to cover all valid and invalid partitions. Equivalence partitioning is applicable at all levels of testing.

Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
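A minimal sketch of the technique, assuming an invented field that accepts integers from 1 to 100: the input domain splits into one valid and two invalid partitions, and one representative value per partition suffices.

```python
# Partitioning the input domain of a field accepting integers 1..100
# (the range is an assumed example, not from the syllabus).
def classify(value):
    if value < 1:
        return "invalid_low"      # invalid partition: below the range
    if value > 100:
        return "invalid_high"     # invalid partition: above the range
    return "valid"                # valid partition: 1..100

# One representative per partition: all members of a partition are
# expected to be processed in the same way.
representatives = {"invalid_low": -5, "valid": 50, "invalid_high": 200}

for partition, value in representatives.items():
    assert classify(value) == partition
```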

4.3.2 Boundary Value Analysis (K3)
Behavior at the edge of each equivalence partition is more likely to be incorrect than behavior within the partition, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.

Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.

This technique is often considered as an extension of equivalence partitioning or other black-box test design techniques. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g., time out, transactional speed requirements) or table ranges (e.g., table size is 256*256).
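Continuing the assumed 1..100 example, the boundary values of the valid partition and of its neighboring invalid partitions can be sketched as:

```python
# Boundary values for the assumed valid partition [1, 100].
LOW, HIGH = 1, 100

def is_valid(value):
    return LOW <= value <= HIGH

valid_boundaries = [LOW, HIGH]            # 1 and 100: valid boundary values
invalid_boundaries = [LOW - 1, HIGH + 1]  # 0 and 101: invalid boundary values

# One test per boundary value, valid and invalid.
assert all(is_valid(v) for v in valid_boundaries)
assert not any(is_valid(v) for v in invalid_boundaries)
```

An off-by-one defect such as writing `LOW < value` instead of `LOW <= value` would be caught by the test for the boundary value 1, but not by a mid-partition value like 50.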

4.3.3 Decision Table Testing (K3)
Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they must be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column in the table, which typically involves covering all combinations of triggering conditions.


The strength of decision table testing is that it creates combinations of conditions that otherwise might not have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
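A small sketch with an invented business rule (free shipping for registered customers over an order threshold): each entry below corresponds to one column of a decision table, and one test per column gives the coverage standard described above.

```python
from itertools import product

# Hypothetical business rule for illustration only.
def free_shipping(registered, total_over_100):
    return registered and total_over_100

# Each entry is one decision-table column: a unique combination of
# the two Boolean conditions plus the expected action.
decision_table = [
    # (registered, total_over_100) -> free shipping granted?
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

# At least one test per column.
for conditions, expected in decision_table:
    assert free_shipping(*conditions) == expected

# Sanity check: every combination of the two conditions appears once.
assert {c for c, _ in decision_table} == set(product([True, False], repeat=2))
```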

4.3.4 State Transition Testing (K3)
A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown with a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number.

A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid.

Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.

State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).
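The state table idea can be sketched as a lookup of (state, event) pairs; the ATM-card states and events below are invented for illustration. The table both drives the tests and exposes invalid transitions.

```python
# State table for a simplified, hypothetical card session.
TRANSITIONS = {
    ("idle",         "insert_card"): "awaiting_pin",
    ("awaiting_pin", "pin_ok"):      "authenticated",
    ("awaiting_pin", "pin_bad"):     "idle",
    ("authenticated", "eject"):      "idle",
}

def step(state, event):
    """Return the next state, or None for an invalid transition."""
    return TRANSITIONS.get((state, event))

# One test sequence that exercises every transition exactly once.
state = "idle"
for event in ["insert_card", "pin_bad", "insert_card", "pin_ok", "eject"]:
    state = step(state, event)
    assert state is not None
assert state == "idle"

# The table also highlights invalid transitions worth testing:
assert step("idle", "pin_ok") is None
```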

4.3.5 Use Case Testing (K2)
Tests can be derived from use cases. A use case describes interactions between actors (users or systems), which produce a result of value to a system user or the customer. Use cases may be described at the abstract level (business use case, technology-free, business process level) or at the system level (system use case on the system functionality level). Each use case has preconditions which need to be met for the use case to work successfully. Each use case terminates with postconditions which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e., most likely) scenario and alternative scenarios.

Use cases describe the process flows through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see. Designing test cases from use cases may be combined with other specification-based test techniques.


4.4 Structure-based or White-box Techniques (K4) - 60 minutes

Terms
Code coverage, decision coverage, statement coverage, structure-based testing

Background
Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:
o Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths
o Integration level: the structure may be a call tree (a diagram in which modules call other modules)
o System level: the structure may be a menu structure, business process or web page structure

In this section, three code-related structural test design techniques for code coverage, based on statements, branches and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.

4.4.1 Statement Testing and Coverage (K4)
In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.

Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.
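The calculation can be sketched for a tiny invented function, counting its executable statements by hand:

```python
# Statement coverage = executed statements / all executable statements.
def grade(score):
    result = "fail"        # statement 1
    if score >= 50:        # statement 2
        result = "pass"    # statement 3
    return result          # statement 4

# A single test with score=30 skips the if-body, so it executes
# statements 1, 2 and 4 only: coverage is 3/4 = 75%.
assert grade(30) == "fail"
assert 3 / 4 == 0.75

# Adding a test with score=70 also executes statement 3,
# raising statement coverage to 4/4 = 100%.
assert grade(70) == "pass"
```

In practice a coverage tool tracks the executed statements rather than counting by hand.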

4.4.2 Decision Testing and Coverage (K4)
Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g., the True and False options of an IF statement) that have been exercised by a test case suite. The decision testing technique derives test cases to execute specific decision outcomes. Branches originate from decision points in the code and show the transfer of control to different locations in the code.

Decision coverage is determined by the number of all decision outcomes covered by (designed or executed) test cases divided by the number of all possible decision outcomes in the code under test.

Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
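Why statement coverage does not imply decision coverage can be shown with an invented function whose `if` has no `else` branch:

```python
# One decision with two outcomes (True, False); invented example.
def apply_discount(price, member):
    if member:                 # decision point
        price = price * 0.9    # only reached on the True outcome
    return price

# One test with member=True executes every statement (100% statement
# coverage) but exercises only the True outcome: 1/2 = 50% decision
# coverage, because the False path is never taken.
assert apply_discount(100, True) == 90.0

# A second test exercising the False outcome brings decision coverage
# to 2/2 = 100%; the converse always holds, since covering both
# outcomes forces execution of the statements on each branch.
assert apply_discount(100, False) == 100
```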

4.4.3 Other Structure-based Techniques (K1)
There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.

The concept of coverage can also be applied at other test levels. For example, at the integration level the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.

Tool support is useful for the structural testing of code.

4.5 Experience-based Techniques (K2) - 30 minutes

Terms
Exploratory testing, (fault) attack

Background
Experience-based testing is where tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the tester's experience.

A commonly used experience-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible defects and to design tests that attack these defects. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails.

Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.
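A structured fault attack can be sketched as an enumerated list of likely bad inputs, each paired with one test; the parsing function and the attack list are invented examples of the kind of defects testers commonly guess at.

```python
# Hypothetical function under attack: parses a quantity field.
def parse_quantity(text):
    """Parse a quantity; raises ValueError on bad input."""
    value = int(text.strip())
    if not 1 <= value <= 999:
        raise ValueError("out of range")
    return value

# Attack list built from common failure knowledge: empty input,
# whitespace only, non-numeric text, zero, out-of-range extreme.
attacks = ["", "   ", "abc", "0", "100000"]

for bad_input in attacks:
    try:
        parse_quantity(bad_input)
        rejected = False
    except ValueError:
        rejected = True
    assert rejected, f"attack not rejected: {bad_input!r}"

# A sanity test confirms normal input still works.
assert parse_quantity(" 42 ") == 42
```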


4.6 Choosing Test Techniques (K2) - 15 minutes

Terms
No specific terms.

Background
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience with types of defects found.

Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.

When creating test cases, testers generally use a combination of test techniques including process, rule and data-driven techniques to ensure adequate coverage of the object under test.

References
4.1 Craig, 2002, Hetzel, 1988, IEEE STD 829-1998
4.2 Beizer, 1990, Copeland, 2004
4.3.1 Copeland, 2004, Myers, 1979
4.3.2 Copeland, 2004, Myers, 1979
4.3.3 Beizer, 1990, Copeland, 2004
4.3.4 Beizer, 1990, Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990, Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990, Copeland, 2004


5. Test Management (K3) - 170 minutes

Learning Objectives for Test Management
The objectives identify what you will be able to do following the completion of each module.

5.1 Test Organization (K2)
LO-5.1.1 Recognize the importance of independent testing (K1)
LO-5.1.2 Explain the benefits and drawbacks of independent testing within an organization (K2)
LO-5.1.3 Recognize the different team members to be considered for the creation of a test team (K1)
LO-5.1.4 Recall the tasks of a typical test leader and tester (K1)

5.2 Test Planning and Estimation (K3)
LO-5.2.1 Recognize the different levels and objectives of test planning (K1)
LO-5.2.2 Summarize the purpose and content of the test plan, test design specification and test procedure documents according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K2)
LO-5.2.3 Differentiate between conceptually different test approaches, such as analytical, model-based, methodical, process/standard compliant, dynamic/heuristic, consultative and regression-averse (K2)
LO-5.2.4 Differentiate between the subject of test planning for a system and scheduling test execution (K2)
LO-5.2.5 Write a test execution schedule for a given set of test cases, considering prioritization, and technical and logical dependencies (K3)
LO-5.2.6 List test preparation and execution activities that should be considered during test planning (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach (K2)
LO-5.2.9 Recognize/justify adequate entry and exit criteria for specific test levels and groups of test cases (e.g., for integration testing, acceptance testing or test cases for usability testing) (K2)

5.3 Test Progress Monitoring and Control (K2)
LO-5.3.1 Recall common metrics used for monitoring test preparation and execution (K1)
LO-5.3.2 Explain and compare test metrics for test reporting and test control (e.g., defects found and fixed, and tests passed and failed) related to purpose and use (K2)
LO-5.3.3 Summarize the purpose and content of the test summary report document according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K2)

5.4 Configuration Management (K2)
LO-5.4.1 Summarize how configuration management supports testing (K2)

5.5 Risk and Testing (K2)
LO-5.5.1 Describe a risk as a possible problem that would threaten the achievement of one or more stakeholders' project objectives (K2)
LO-5.5.2 Remember that the level of risk is determined by likelihood (of happening) and impact (harm resulting if it does happen) (K1)
LO-5.5.3 Distinguish between the project and product risks (K2)
LO-5.5.4 Recognize typical product and project risks (K1)
LO-5.5.5 Describe, using examples, how risk analysis and risk management may be used for test planning (K2)


5.6 Incident Management (K3)
LO-5.6.1 Recognize the content of an incident report according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K1)
LO-5.6.2 Write an incident report covering the observation of a failure during testing (K3)


5.1 Test Organization (K2) - 30 minutes

Terms
Tester, test leader, test manager

5.1.1 Test Organization and Independence (K2)
The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following:
o No independent testers; developers test their own code
o Independent testers within the development teams
o Independent test team or group within the organization, reporting to project management or executive management
o Independent testers from the business organization or user community
o Independent test specialists for specific test types such as usability testers, security testers or certification testers (who certify a software product against standards and regulations)
o Independent testers outsourced or external to the organization

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.

The benefits of independence include:
o Independent testers see other and different defects, and are unbiased
o An independent tester can verify assumptions people made during specification and implementation of the system

Drawbacks include:
o Isolation from the development team (if treated as totally independent)
o Developers may lose a sense of responsibility for quality
o Independent testers may be seen as a bottleneck or blamed for delays in release

Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

5.1.2 Tasks of the Test Leader and Tester (K1)
In this syllabus two test positions are covered, test leader and tester. The activities and tasks performed by people in these two roles depend on the project and product context, the people in the roles, and the organization.

Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group. In larger projects two positions may exist: test leader and test manager. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.

Typical test leader tasks may include:
o Coordinate the test strategy and plan with project managers and others
o Write or review a test strategy for the project, and test policy for the organization

o Contribute the testing perspective to other project activities, such as integration planning
o Plan the tests, considering the context and understanding the test objectives and risks, including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management
o Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria
o Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems
o Set up adequate configuration management of testware for traceability
o Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
o Decide what should be automated, to what degree, and how
o Select tools to support testing and organize any training in tool use for testers
o Decide about the implementation of the test environment
o Write test summary reports based on the information gathered during testing

Typical tester tasks may include:
o Review and contribute to test plans
o Analyze, review and assess user requirements, specifications and models for testability
o Create test specifications
o Set up the test environment (often coordinating with system administration and network management)
o Prepare and acquire test data
o Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
o Use test administration or management tools and test monitoring tools as required
o Automate tests (may be supported by a developer or a test automation expert)
o Measure performance of components and systems (if applicable)
o Review tests developed by others

People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.

Version 2011
International Software Testing Qualifications Board

Page 48 of 78

31-Mar-2011

Certified Tester
Foundation Level Syllabus
International Software Testing Qualifications Board

5.2 Test Planning and Estimation (K3)

40 minutes

Terms
Test approach, test strategy

5.2.1 Test Planning (K2)

This section covers the purpose of test planning within development and implementation projects, and for maintenance activities. Planning may be documented in a master test plan and in separate test plans for test levels such as system testing and acceptance testing. The outline of a test planning document is covered by the Standard for Software Test Documentation (IEEE Std 829-1998).

Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan. Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

5.2.2 Test Planning Activities (K3)

Test planning activities for an entire system or part of a system may include:
o Determining the scope and risks and identifying the objectives of testing
o Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
o Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
o Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
o Scheduling test analysis and design activities
o Scheduling test implementation, execution and evaluation
o Assigning resources for the different activities defined
o Defining the amount, level of detail, structure and templates for the test documentation
o Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
o Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

5.2.3 Entry Criteria (K2)

Entry criteria define when to start testing, such as at the beginning of a test level or when a set of tests is ready for execution.
Typically entry criteria may cover the following:
o Test environment availability and readiness
o Test tool readiness in the test environment
o Testable code availability
o Test data availability

5.2.4 Exit Criteria (K2)

Exit criteria define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal.

Typically exit criteria may cover the following:
o Thoroughness measures, such as coverage of code, functionality or risk
o Estimates of defect density or reliability measures
o Cost
o Residual risks, such as defects not fixed or lack of test coverage in certain areas
o Schedules such as those based on time to market
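Exit criteria like these lend themselves to a mechanical check at the end of a test level. A minimal sketch in Python (the threshold values and metric names are invented for illustration; the syllabus does not prescribe any):

```python
# Sketch: evaluating exit criteria for a test level.
# All threshold values and metric names are hypothetical examples,
# not prescribed by the syllabus.

def exit_criteria_met(metrics: dict) -> list:
    """Return the list of exit criteria that are NOT yet satisfied."""
    criteria = {
        "code coverage >= 80%": metrics["code_coverage"] >= 0.80,
        "defect density <= 0.5 defects/KLOC": metrics["defect_density"] <= 0.5,
        "no open critical defects": metrics["open_critical_defects"] == 0,
    }
    return [name for name, satisfied in criteria.items() if not satisfied]

failed = exit_criteria_met({
    "code_coverage": 0.85,
    "defect_density": 0.7,          # still too many defects per KLOC
    "open_critical_defects": 0,
})
print(failed)  # ['defect density <= 0.5 defects/KLOC']
```

In practice such checks feed the test summary report rather than stopping testing automatically; the unmet criteria are what the test leader reports against.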

5.2.5 Test Estimation (K2)

Two approaches for the estimation of test effort are:
o The metrics-based approach: estimating the testing effort based on metrics of former or similar projects or based on typical values
o The expert-based approach: estimating the tasks based on estimates made by the owner of the tasks or by experts

Once the test effort is estimated, resources can be identified and a schedule can be drawn up.

The testing effort may depend on a number of factors, including:
o Characteristics of the product: the quality of the specification and other information used for test models (i.e., the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation
o Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure
o The outcome of testing: the number of defects and the amount of rework required
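The metrics-based approach can be as simple as extrapolating from the effort per test case observed on former projects. A minimal sketch (all project figures are invented for illustration):

```python
# Sketch of the metrics-based approach: extrapolate test effort from
# historical projects. The figures are invented examples.

# (test cases executed, person-hours of test effort) from past projects
history = [(400, 320), (250, 210), (600, 470)]

# Average effort per test case observed historically
hours_per_case = sum(h for _, h in history) / sum(c for c, _ in history)

planned_cases = 500
estimate = planned_cases * hours_per_case
print(f"Estimated test effort: {estimate:.0f} person-hours")
```

Real metrics-based estimation would adjust such a baseline for the product and process factors listed above (complexity, team skills, time pressure).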

5.2.6 Test Strategy, Test Approach (K2)

The test approach is the implementation of the test strategy for a specific project. The test approach is defined and refined in the test plans and test designs. It typically includes the decisions made based on the (test) project's goal and risk assessment. It is the starting point for planning the test process, for selecting the test design techniques and test types to be applied, and for defining the entry and exit criteria.
The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system (e.g., custom built vs. COTS), test objectives, and regulations.

Typical approaches include:
o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk
o Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)
o Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based
o Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies
o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks
o Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team
o Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites

Different approaches may be combined, for example, a risk-based dynamic approach.


5.3 Test Progress Monitoring and Control (K2)

20 minutes

Terms
Defect density, failure rate, test control, test monitoring, test summary report

5.3.1 Test Progress Monitoring (K1)

The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:
o Percentage of work done in test case preparation (or percentage of planned test cases prepared)
o Percentage of work done in test environment preparation
o Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)
o Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)
o Test coverage of requirements, risks or code
o Subjective confidence of testers in the product
o Dates of test milestones
o Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
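Several of these metrics can be derived directly from raw counts in the test log. A minimal sketch (the field names and numbers are illustrative only):

```python
# Sketch: deriving common progress metrics from raw counts.
# Field names and numbers are illustrative, not from the syllabus.

executed, passed, failed, not_run = 180, 150, 30, 20
defects_found, kloc = 45, 60  # defects and size in thousand lines of code

total_planned = executed + not_run
execution_progress = executed / total_planned   # share of planned tests run
pass_rate = passed / executed                   # share of executed tests passing
defect_density = defects_found / kloc           # defects per KLOC

print(f"executed {execution_progress:.0%}, pass rate {pass_rate:.1%}, "
      f"defect density {defect_density:.2f}/KLOC")
```

A test management tool typically computes and charts exactly these ratios over time to show trends against the plan.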

5.3.2 Test Reporting (K2)

Test reporting is concerned with summarizing information about the testing endeavor, including:
o What happened during a period of testing, such as dates when exit criteria were met
o Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software

The outline of a test summary report is given in the Standard for Software Test Documentation (IEEE Std 829-1998).

Metrics should be collected during and at the end of a test level in order to assess:
o The adequacy of the test objectives for that test level
o The adequacy of the test approaches taken
o The effectiveness of the testing with respect to the objectives

5.3.3 Test Control (K2)

Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
Examples of test control actions include:
o Making decisions based on information from test monitoring
o Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
o Changing the test schedule due to availability or unavailability of a test environment
o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build


5.4 Configuration Management (K2)

10 minutes

Terms
Configuration management, version control

Background
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
For testing, configuration management may involve ensuring the following:
o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
o All identified documents and software items are referenced unambiguously in test documentation

For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).
During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.


5.5 Risk and Testing (K2)

30 minutes

Terms
Product risk, project risk, risk, risk-based testing

Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).

5.5.1 Project Risks (K2)

Project risks are the risks that surround the project's capability to deliver its objectives, such as:
o Organizational factors:
  - Skill, training and staff shortages
  - Personnel issues
  - Political issues, such as:
    - Problems with testers communicating their needs and test results
    - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
  - Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
o Technical issues:
  - Problems in defining the right requirements
  - The extent to which requirements cannot be met given existing constraints
  - Test environment not ready on time
  - Late data conversion, migration planning and development and testing of data conversion/migration tools
  - Low quality of the design, code, configuration data, test data and tests
o Supplier issues:
  - Failure of a third party
  - Contractual issues

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The Standard for Software Test Documentation (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.

5.5.2 Product Risks (K2)

Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:
o Failure-prone software delivered
o The potential that the software/hardware could cause harm to an individual or company
o Poor software characteristics (e.g., functionality, reliability, usability and performance)
o Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)
o Software that does not perform its intended functions

Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.


Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.
A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
o Determine the test techniques to be employed
o Determine the extent of testing to be carried out
o Prioritize testing in an attempt to find the critical defects as early as possible
o Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.
To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
o Assess (and reassess on a regular basis) what can go wrong (risks)
o Determine what risks are important to deal with
o Implement actions to deal with those risks

In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
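A common way to operationalize risk-based prioritization is a likelihood x impact score per risk area: the product of the two determines where testing starts and where it goes deepest. A minimal sketch (the 1-5 scales and the risk items are invented examples; the syllabus does not mandate a particular scoring scheme):

```python
# Sketch: ranking product risks by likelihood x impact to prioritize testing.
# The 1-5 scales and the risk items are invented examples.

risks = [
    # (risk area, likelihood 1-5, impact 1-5)
    ("payment processing", 4, 5),
    ("report layout", 3, 2),
    ("user login", 2, 5),
]

# Higher score = test earlier and more thoroughly
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for area, likelihood, impact in ranked:
    print(f"{area}: risk score {likelihood * impact}")
```

The scores would normally be assigned in a stakeholder workshop and reassessed as testing reduces or reveals risk.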


5.6 Incident Management (K3)

40 minutes

Terms
Incident logging, incident management, incident report

Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as Help or installation guides.

Incident reports have the following objectives:
o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
o Provide ideas for test process improvement

Details of the incident report may include:
o Date of issue, issuing organization, and author
o Expected and actual results
o Identification of the test item (configuration item) and environment
o Software or system life cycle process in which the incident was observed
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Urgency/priority to fix
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed
o References, including the identity of the test case specification that revealed the problem

The structure of an incident report is also covered in the Standard for Software Test Documentation (IEEE Std 829-1998).
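The incident report fields listed above map naturally onto a structured record, with status transitions accumulating in the change history. A minimal sketch modeling a subset of them (the field names and status values are illustrative, not mandated by IEEE Std 829):

```python
# Sketch: modeling a subset of the incident report fields as a record.
# Field names and the status values are illustrative examples.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    author: str
    issued: date
    test_item: str              # configuration item under test
    expected_result: str
    actual_result: str
    severity: str               # impact on the system
    priority: str               # urgency to fix
    status: str = "open"        # e.g., open, deferred, fixed, closed
    change_history: list = field(default_factory=list)

    def transition(self, new_status: str, note: str) -> None:
        """Record a status change in the incident's change history."""
        self.change_history.append((self.status, new_status, note))
        self.status = new_status

report = IncidentReport("tester A", date(2011, 3, 31), "login module",
                        "error message shown", "application crashes",
                        severity="high", priority="urgent")
report.transition("fixed awaiting re-test", "fix delivered in build 42")
print(report.status)  # fixed awaiting re-test
```

An incident management tool stores such records and can aggregate them for the statistical analysis mentioned in Section 6.1.3.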


References
5.1.1 Black, 2001, Hetzel, 1988
5.1.2 Black, 2001, Hetzel, 1988
5.2.5 Black, 2001, Craig, 2002, IEEE Std 829-1998, Kaner, 2002
5.3.3 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001, IEEE Std 829-1998
5.6 Black, 2001, IEEE Std 829-1998


6. Tool Support for Testing (K2)

80 minutes

Learning Objectives for Tool Support for Testing
The objectives identify what you will be able to do following the completion of each module.

6.1 Types of Test Tools (K2)
LO-6.1.1 Classify different types of test tools according to their purpose and to the activities of the fundamental test process and the software life cycle (K2)
LO-6.1.3 Explain the term test tool and the purpose of tool support for testing (K2)

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for testing (K2)
LO-6.2.2 Remember special considerations for test execution tools, static analysis, and test management tools (K1)

6.3 Introducing a Tool into an Organization (K1)
LO-6.3.1 State the main principles of introducing a tool into an organization (K1)
LO-6.3.2 State the goals of a proof-of-concept for tool evaluation and piloting phase for tool implementation (K1)
LO-6.3.3 Recognize that factors other than simply acquiring a tool are required for good tool support (K1)

LO-6.1.2 Intentionally skipped



6.1 Types of Test Tools (K2)

45 minutes

Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool

6.1.1 Tool Support for Testing (K2)

Test tools can be used for one or more activities that support testing. These include:
1. Tools that are directly used in testing such as test execution tools, test data generation tools and result comparison tools
2. Tools that help in managing the testing process such as those used to manage tests, test results, data, requirements, incidents, defects, etc., and for reporting and monitoring test execution
3. Tools that are used in reconnaissance, or, in simple terms: exploration (e.g., tools that monitor file activity for an application)
4. Any tool that aids in testing (a spreadsheet is also a test tool in this meaning)

Tool support for testing can have one or more of the following purposes depending on the context:
o Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)

The term "test frameworks" is also frequently used in the industry, in at least three meanings:
o Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses as well)
o A type of design of test automation (e.g., data-driven, keyword-driven)
o Overall process of execution of testing

For the purpose of this syllabus, the term "test frameworks" is used in its first two meanings, as described in Section 6.1.6.

6.1.2 Test Tool Classification (K2)

There are a number of tools that support different aspects of testing. Tools can be classified based on several criteria such as purpose, commercial / free / open-source / shareware, technology used and so forth. Tools are classified in this syllabus according to the testing activities that they support.
Some tools clearly support one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be bundled into one package.
Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.

Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with "(D)" in the list below.

6.1.3 Tool Support for Management of Testing and Tests (K1)

Management tools apply to all test activities over the entire software life cycle.

Test Management Tools
These tools provide interfaces for executing tests, tracking defects and managing requirements, along with support for quantitative analysis and reporting of the test objects. They also support tracing the test objects to requirement specifications and might have an independent version control capability or an interface to an external one.

Requirements Management Tools
These tools store requirement statements, store the attributes for the requirements (including priority), provide unique identifiers and support tracing the requirements to individual tests. These tools may also help with identifying inconsistent or missing requirements.

Incident Management Tools (Defect Tracking Tools)
These tools store and manage incident reports, i.e., defects, failures, change requests or perceived problems and anomalies, and help in managing the life cycle of incidents, optionally with support for statistical analysis.

Configuration Management Tools
Although not strictly test tools, these are necessary for storage and version management of testware and related software, especially when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers, etc.

6.1.4 Tool Support for Static Testing (K1)

Static testing tools provide a cost effective way of finding more defects at an earlier stage in the development process.

Review Tools
These tools assist with review processes, checklists and review guidelines, and are used to store and communicate review comments and report on defects and effort. They can be of further help by providing aid for online reviews for large or geographically dispersed teams.

Static Analysis Tools (D)
These tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding) and analysis of structures and dependencies. They can also help in planning or risk analysis by providing metrics for the code (e.g., complexity).

Modeling Tools (D)
These tools are used to validate software models (e.g., a physical data model (PDM) for a relational database) by enumerating inconsistencies and finding defects. These tools can often aid in generating some test cases based on the model.
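At its core, a static analysis tool scans source text without executing it and flags deviations from a standard. A deliberately tiny sketch of that idea (the two rules here are invented examples of coding-standard checks, nothing like the depth of a real tool):

```python
# Sketch of what a (very) small static analysis check does: scan source
# text without executing it and flag coding-standard violations.
# The rules (line length, tab characters) are invented examples.

def check_coding_standard(source: str, max_len: int = 79) -> list:
    """Return (line_number, message) pairs for each violation found."""
    findings = []
    for n, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            findings.append((n, f"line exceeds {max_len} characters"))
        if "\t" in line:
            findings.append((n, "tab character used for indentation"))
    return findings

code = "def f():\n\treturn 1\n"
print(check_coding_standard(code))  # [(2, 'tab character used for indentation')]
```

Real static analysis tools work on parsed structure rather than raw text, which is what lets them compute metrics such as complexity and analyze dependencies.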

6.1.5 Tool Support for Test Specification (K1)

Test Design Tools
These tools are used to generate test inputs or executable tests and/or test oracles from requirements, graphical user interfaces, design models (state, data or object) or code.


Test Data Preparation Tools
Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests, to ensure security through data anonymity.

6.1.6 Tool Support for Test Execution and Logging (K1)

Test Execution Tools
These tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language, and usually provide a test log for each test run. They can also be used to record tests, and usually support scripting languages or GUI-based configuration for parameterization of data and other customization in the tests.

Test Harness/Unit Test Framework Tools (D)
A unit test harness or framework facilitates the testing of components or parts of a system by simulating the environment in which that test object will run, through the provision of mock objects as stubs or drivers.

Test Comparators
Test comparators determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated.

Coverage Measurement Tools (D)
These tools, through intrusive or non-intrusive means, measure the percentage of specific types of code structures that have been exercised (e.g., statements, branches or decisions, and module or function calls) by a set of tests.

Security Testing Tools
These tools are used to evaluate the security characteristics of software. This includes evaluating the ability of the software to protect data confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Security tools are mostly focused on a particular technology, platform, and purpose.
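The stub/driver idea behind unit test harnesses can be illustrated with Python's built-in unittest framework: the component under test receives a stub in place of its real dependency, so it can be exercised in isolation. The component (price_with_tax) and the stub are invented examples:

```python
# Sketch: a unit test harness replacing a real dependency with a stub.
# Uses Python's built-in unittest framework; the component under test
# and the stub are invented examples.
import unittest

def price_with_tax(net_price, rate_lookup):
    """Component under test: applies the tax rate supplied by a service."""
    return round(net_price * (1 + rate_lookup()), 2)

class PriceTest(unittest.TestCase):
    def test_applies_stubbed_rate(self):
        # The stub stands in for the real tax-rate service, so the
        # component can be tested without that service being available.
        stub_rate = lambda: 0.20
        self.assertEqual(price_with_tax(100.0, stub_rate), 120.0)

# Run the test case programmatically (unittest.main() would also work
# when this file is executed as a script)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PriceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

The framework here plays both harness roles at once: the test method is the driver, and the lambda is the stub.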

6.1.7 Tool Support for Performance and Monitoring (K1)

Dynamic Analysis Tools (D)
Dynamic analysis tools find defects that are evident only when software is executing, such as time dependencies or memory leaks. They are typically used in component and component integration testing, and when testing middleware.

Performance Testing/Load Testing/Stress Testing Tools
Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions in terms of number of concurrent users, their ramp-up pattern, and the frequency and relative percentage of transactions. The simulation of load is achieved by means of creating virtual users carrying out a selected set of transactions, spread across various test machines commonly known as load generators.

Monitoring Tools
Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.
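The virtual-user mechanism of load testing tools can be sketched with one thread per simulated user, each repeatedly executing a transaction and recording its response time for later reporting. The transaction here is a stand-in (a short sleep), not a call to a real system under test:

```python
# Sketch: the "virtual user" idea behind load testing tools. Each thread
# plays one user repeatedly executing a transaction; response times are
# collected for reporting. The transaction itself is a stand-in.
import threading, time, random

VIRTUAL_USERS = 5
TRANSACTIONS_PER_USER = 3
response_times = []
lock = threading.Lock()

def transaction():
    """Stand-in for one business transaction against the system under test."""
    time.sleep(random.uniform(0.01, 0.03))

def virtual_user():
    for _ in range(TRANSACTIONS_PER_USER):
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        with lock:
            response_times.append(elapsed)

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(response_times)} transactions, "
      f"avg response {sum(response_times)/len(response_times)*1000:.1f} ms")
```

Real load tools add what this sketch omits: ramp-up patterns, transaction mixes, and distribution of the virtual users across separate load-generator machines.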

6.1.8 Tool Support for Specific Testing Needs (K1)

Data Quality Assessment
Data is at the center of some projects, such as data conversion/migration projects and applications like data warehouses, and its attributes can vary in terms of criticality and volume. In such contexts, tools need to be employed for data quality assessment to review and verify the data conversion and
Version 2011 2
Internationa al Software Testing Qualifications Q Board

Page 60 of 78

31-Ma ar-2011

Certif fied Teste er


Founda ation Level Sy yllabus

International Software Te esting Q Qualifications s Board

migration n rules to ensure that the e processed data is corre ect, complete e and complie es with a pre edefined context-spec c cific standard d. Other tes sting tools ex xist for usabi ility testing.
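A minimal sketch of such data quality checks, using an in-memory SQLite database as a stand-in for the source and target stores; the table, the rows and the "email must contain @" rule are invented for illustration.

```python
import sqlite3

# In-memory stand-ins for the source system and the migration target.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
rows = [(1, "a@example.com"), (2, "b@example.com")]
src.executemany("INSERT INTO customers VALUES (?, ?)", rows)
tgt.executemany("INSERT INTO customers VALUES (?, ?)", rows)

def check_completeness(src, tgt):
    """Completeness: every source row arrived in the target."""
    count = "SELECT COUNT(*) FROM customers"
    return src.execute(count).fetchone()[0] == tgt.execute(count).fetchone()[0]

def check_correctness(tgt):
    """Correctness against a predefined rule: every email contains '@'."""
    bad = tgt.execute("SELECT COUNT(*) FROM customers "
                      "WHERE email NOT LIKE '%@%'").fetchone()[0]
    return bad == 0

print("complete:", check_completeness(src, tgt))  # prints: complete: True
print("correct:", check_correctness(tgt))
```

Real assessment tools run many such rules (formats, ranges, referential integrity) over large volumes, but the pattern is the same: compare the migrated data against the source and against predefined standards.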

Version 2011 2
Internationa al Software Testing Qualifications Q Board

Page 61 of 78

31-Ma ar-2011

Certif fied Teste er


Founda ation Level Sy yllabus

International Software Te esting Q Qualifications s Board

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
Terms
Data-driven testing, keyword-driven testing, scripting language

20 minutes

6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)

Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool may require additional effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks.

Potential benefits of using tools include:
o Repetitive work is reduced (e.g., running regression tests, re-entering the same test data, and checking against coding standards)
o Greater consistency and repeatability (e.g., tests executed by a tool in the same order with the same frequency, and tests derived from requirements)
o Objective assessment (e.g., static measures, coverage)
o Ease of access to information about tests or testing (e.g., statistics and graphs about test progress, incident rates and performance)

Risks of using tools include:
o Unrealistic expectations for the tool (including functionality and ease of use)
o Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
o Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool (replacement for test design, or use of automated testing where manual testing would be better)
o Neglecting version control of test assets within the tool
o Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, version control tools, incident management tools, defect tracking tools, and tools from multiple vendors
o Risk of the tool vendor going out of business, retiring the tool, or selling the tool to a different vendor
o Poor response from the vendor for support, upgrades, and defect fixes
o Risk of suspension of an open-source / free tool project
o Unforeseen risks, such as the inability to support a new platform

6.2.2 Special Considerations for Some Types of Tools (K1)

Test Execution Tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.
Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of automated test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur.

A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create the test data for these predefined scripts.
There are other techniques employed in data-driven approaches where, instead of hard-coded data combinations placed in a spreadsheet, data is generated at run time using algorithms based on configurable parameters and supplied to the application. For example, a tool may use an algorithm that generates a random user ID, and a seed is employed for controlling the randomness so that the pattern is repeatable.
In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), and test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested.
Technical expertise in the scripting language is needed for all approaches (either by testers or by specialists in test automation).
Regardless of the scripting technique used, the expected results for each test need to be stored for later comparison.

Static Analysis Tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code may generate a large quantity of messages. Warning messages do not stop the code from being translated into an executable program, but ideally should be addressed so that maintenance of the code is easier in the future. A gradual implementation of the analysis tool, with initial filters to exclude some messages, is an effective approach.
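The keyword-driven approach and seeded data generation described above can be sketched as follows. The keywords, the toy `app` state and the user-ID format are all invented for illustration; real tools read the table from a spreadsheet rather than a Python list.

```python
import random

# Toy application state standing in for the system under test.
app = {"logged_in": False, "cart": []}

# Keyword (action word) implementations, written by automation specialists.
def login(user):
    app["logged_in"] = True

def add_item(item):
    app["cart"].append(item)

def assert_cart_size(expected):
    assert len(app["cart"]) == int(expected), app["cart"]

KEYWORDS = {"login": login, "add_item": add_item,
            "assert_cart_size": assert_cart_size}

# The "spreadsheet": testers write rows of keyword plus test data,
# without needing to know the scripting language.
test_table = [
    ("login", "alice"),
    ("add_item", "book"),
    ("add_item", "pen"),
    ("assert_cart_size", "2"),
]

for keyword, data in test_table:
    KEYWORDS[keyword](data)

# Run-time data generation: a fixed seed makes the random user ID repeatable.
rng = random.Random(42)
user_id = f"user-{rng.randint(1000, 9999)}"
print("generated:", user_id)
```

Swapping the `test_table` rows for different data turns the same script into a data-driven test; the generic script and the test data stay separate.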
Test Management Tools
Test management tools need to interface with other tools or spreadsheets in order to produce useful information in a format that fits the needs of the organization.


6.3 Introducing a Tool into an Organization (K1)


Terms
No specific terms.

15 minutes

Background
The main considerations in selecting a tool for an organization include:
o Assessment of organizational maturity, strengths and weaknesses, and identification of opportunities for an improved test process supported by tools
o Evaluation against clear requirements and objective criteria
o A proof-of-concept, by using a test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure, or to identify changes needed to that infrastructure to effectively use the tool
o Evaluation of the vendor (including training, support and commercial aspects), or of service support suppliers in the case of non-commercial tools
o Identification of internal requirements for coaching and mentoring in the use of the tool
o Evaluation of training needs, considering the current test team's test automation skills
o Estimation of a cost-benefit ratio based on a concrete business case

Introducing the selected tool into an organization starts with a pilot project, which has the following objectives:
o Learn more detail about the tool
o Evaluate how the tool fits with existing processes and practices, and determine what would need to change
o Decide on standard ways of using, managing, storing and maintaining the tool and the test assets (e.g., deciding on naming conventions for files and tests, creating libraries and defining the modularity of test suites)
o Assess whether the benefits will be achieved at reasonable cost

Success factors for the deployment of the tool within an organization include:
o Rolling out the tool to the rest of the organization incrementally
o Adapting and improving processes to fit with the use of the tool
o Providing training and coaching/mentoring for new users
o Defining usage guidelines
o Implementing a way to gather usage information from the actual use
o Monitoring tool use and benefits
o Providing support for the test team for a given tool
o Gathering lessons learned from all teams
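The cost-benefit estimation mentioned above can be as simple as a break-even calculation. All figures below are invented for illustration; a real business case would use the organization's own cost data.

```python
# Illustrative break-even calculation for a tool business case.
# All figures are invented; a real case would use measured costs.
tool_cost = 20000                # license, training, initial scripting effort
manual_cost_per_cycle = 4000     # cost of one manual regression cycle
automated_cost_per_cycle = 500   # cost of one tool-driven cycle

saving_per_cycle = manual_cost_per_cycle - automated_cost_per_cycle
cycles_to_break_even = tool_cost / saving_per_cycle
print(f"Break-even after {cycles_to_break_even:.1f} regression cycles")
```

A concrete business case would extend this with maintenance costs for the test assets, which the risk list in Section 6.2.1 warns are easy to underestimate.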

References
6.2.2 Buwalda, 2001 and Fewster, 1999
6.3 Fewster, 1999


7. References

Standards
ISTQB Glossary of Terms Used in Software Testing, Version 2.1
[CMMI] Chrissis, M.B., Konrad, M. and Shrum, S. (2004) CMMI, Guidelines for Process Integration and Product Improvement, Addison Wesley: Reading, MA
See Section 2.1
[IEEE Std 829-1998] IEEE Std 829 (1998) IEEE Standard for Software Test Documentation
See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
[IEEE 1028] IEEE Std 1028 (2008) IEEE Standard for Software Reviews and Audits
See Section 3.2
[IEEE 12207] IEEE 12207/ISO/IEC 12207-2008, Software life cycle processes
See Section 2.1
[ISO 9126] ISO/IEC 9126-1:2001, Software Engineering - Software Product Quality
See Section 2.3

Books
[Beizer, 1990] Beizer, B. (1990) Software Testing Techniques (2nd edition), Van Nostrand Reinhold: Boston
See Sections 1.2, 1.3, 2.3, 4.2, 4.3, 4.4, 4.6
[Black, 2001] Black, R. (2001) Managing the Testing Process (3rd edition), John Wiley & Sons: New York
See Sections 1.1, 1.2, 1.4, 1.5, 2.3, 2.4, 5.1, 5.2, 5.3, 5.5, 5.6
[Buwalda, 2001] Buwalda, H. et al. (2001) Integrated Test Design and Automation, Addison Wesley: Reading, MA
See Section 6.2
[Copeland, 2004] Copeland, L. (2004) A Practitioner's Guide to Software Test Design, Artech House: Norwood, MA
See Sections 2.2, 2.3, 4.2, 4.3, 4.4, 4.6
[Craig, 2002] Craig, Rick D. and Jaskiel, Stefan P. (2002) Systematic Software Testing, Artech House: Norwood, MA
See Sections 1.4.5, 2.1.3, 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewster, 1999] Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley: Reading, MA
See Sections 6.2, 6.3
[Gilb, 1993] Gilb, Tom and Graham, Dorothy (1993) Software Inspection, Addison Wesley: Reading, MA
See Sections 3.2.2, 3.2.4
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide to Software Testing, QED: Wellesley, MA
See Sections 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 4.1, 5.1, 5.3
[Kaner, 2002] Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing, John Wiley & Sons: New York
See Sections 1.1, 4.5, 5.2

[Myers, 1979] Myers, Glenford J. (1979) The Art of Software Testing, John Wiley & Sons: New York
See Sections 1.2, 1.3, 2.2, 4.3
[van Veenendaal, 2004] van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 6, 8, 10), UTN Publishers: The Netherlands
See Sections 3.2, 3.3


8. Appendix A – Syllabus Background

History of this Document


This document was prepared between 2004 and 2011 by a Working Group comprised of members appointed by the International Software Testing Qualifications Board (ISTQB). It was initially reviewed by a selected review panel, and then by representatives drawn from the international software testing community. The rules used in the production of this document are shown in Appendix C.
This document is the syllabus for the International Foundation Certificate in Software Testing, the first-level international qualification approved by the ISTQB (www.istqb.org).

Objectives of the Foundation Certificate Qualification


o To gain recognition for testing as an essential and professional software engineering specialization
o To provide a standard framework for the development of testers' careers
o To enable professionally qualified testers to be recognized by employers, customers and peers, and to raise the profile of testers
o To promote consistent and good testing practices within all software engineering disciplines
o To identify testing topics that are relevant and of value to industry
o To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy
o To provide an opportunity for testers and those with an interest in testing to acquire an internationally recognized qualification in the subject

Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)

o To be able to compare testing skills across different countries
o To enable testers to move across country borders more easily
o To enable multinational/international projects to have a common understanding of testing issues
o To increase the number of qualified testers worldwide
o To have more impact/value as an internationally-based initiative than from any country-specific approach
o To develop a common international body of understanding and knowledge about testing through the syllabus and terminology, and to increase the level of knowledge about testing for all participants
o To promote testing as a profession in more countries
o To enable testers to gain a recognized qualification in their native language
o To enable sharing of knowledge and resources across countries
o To provide international recognition of testers and this qualification due to participation from many countries

Entry Requirements for this Qualification

The entry criterion for taking the ISTQB Foundation Certificate in Software Testing examination is that candidates have an interest in software testing. However, it is strongly recommended that candidates also:
o Have at least a minimal background in either software development or software testing, such as six months of experience as a system or user acceptance tester or as a software developer

o Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized National Boards)

Background and History of the Foundation Certificate in Software Testing

The independent certification of software testers began in the UK with the British Computer Society's Information Systems Examination Board (ISEB), when a Software Testing Board was set up in 1998 (www.bcs.org.uk/iseb). In 2002, ASQF in Germany began to support a German tester qualification scheme (www.asqf.de). This syllabus is based on the ISEB and ASQF syllabi; it includes reorganized, updated and additional content, and the emphasis is directed at topics that will provide the most practical help to testers.
An existing Foundation Certificate in Software Testing (e.g., from ISEB, ASQF or an ISTQB-recognized National Board) awarded before this International Certificate was released will be deemed to be equivalent to the International Certificate. The Foundation Certificate does not expire and does not need to be renewed. The date it was awarded is shown on the Certificate.
Within each participating country, local aspects are controlled by a national ISTQB-recognized Software Testing Board. Duties of National Boards are specified by the ISTQB, but are implemented within each country. The duties of the country boards are expected to include accreditation of training providers and the setting of exams.


9. Appendix B – Learning Objectives/Cognitive Level of Knowledge

The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus will be examined according to the learning objective for it.

Level 1: Remember (K1)

The candidate will recognize, remember and recall a term or concept.
Keywords: Remember, retrieve, recall, recognize, know
Example
Can recognize the definition of "failure" as:
o Non-delivery of service to an end user or any other stakeholder, or
o Actual deviation of the component or system from its expected delivery, service or result

Level 2: Understand (K2)

The candidate can select the reasons or explanations for statements related to the topic, and can summarize, compare, classify, categorize and give examples for the testing concept.
Keywords: Summarize, generalize, abstract, classify, compare, map, contrast, exemplify, interpret, translate, represent, infer, conclude, categorize, construct models
Examples
Can explain the reason why tests should be designed as early as possible:
o To find defects when they are cheaper to remove
o To find the most important defects first
Can explain the similarities and differences between integration and system testing:
o Similarities: testing more than one component, and can test non-functional aspects
o Differences: integration testing concentrates on interfaces and interactions, while system testing concentrates on whole-system aspects, such as end-to-end processing

Level 3: Apply (K3)

The candidate can select the correct application of a concept or technique and apply it to a given context.
Keywords: Implement, execute, use, follow a procedure, apply a procedure
Examples
o Can identify boundary values for valid and invalid partitions
o Can select test cases from a given state transition diagram in order to cover all transitions
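To make the first K3 example concrete, the sketch below derives two-value boundary values for a numeric range; the age requirement of 18 to 65 is invented for illustration.

```python
def boundary_values(low, high):
    """Two-value boundary analysis for the valid partition [low, high]:
    the values on the boundaries plus the closest invalid neighbors."""
    return sorted({low - 1, low, high, high + 1})

# Hypothetical requirement: an age field accepts values 18 to 65 inclusive.
print(boundary_values(18, 65))  # prints: [17, 18, 65, 66]
```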

Level 4: Analyze (K4)

The candidate can separate information related to a procedure or technique into its constituent parts for better understanding, and can distinguish between facts and inferences. Typical application is to analyze a document, software or project situation and propose appropriate actions to solve a problem or task.
Keywords: Analyze, organize, find coherence, integrate, outline, parse, structure, attribute, deconstruct, differentiate, discriminate, distinguish, focus, select


Examples
o Analyze product risks and propose preventive and corrective mitigation activities
o Describe which portions of an incident report are factual and which are inferred from results

Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon


10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus

The rules listed here were used in the development and review of this syllabus. (A tag is shown after each rule as a shorthand abbreviation of the rule.)

10.1.1 General Rules

SG1. The syllabus should be understandable and absorbable by people with zero to six months (or more) experience in testing. (6-MONTH)
SG2. The syllabus should be practical rather than theoretical. (PRACTICAL)
SG3. The syllabus should be clear and unambiguous to its intended readers. (CLEAR)
SG4. The syllabus should be understandable to people from different countries, and easily translatable into different languages. (TRANSLATABLE)
SG5. The syllabus should use American English. (AMERICAN-ENGLISH)

10.1.2 Current Content

SC1. The syllabus should include recent testing concepts and should reflect current best practices in software testing where this is generally agreed. The syllabus is subject to review every three to five years. (RECENT)
SC2. The syllabus should minimize time-related issues, such as current market conditions, to enable it to have a shelf life of three to five years. (SHELF-LIFE)

10.1.3 Learning Objectives

LO1. Learning objectives should distinguish between items to be recognized/remembered (cognitive level K1), items the candidate should understand conceptually (K2), items the candidate should be able to practice/use (K3), and items the candidate should be able to use to analyze a document, software or project situation in context (K4). (KNOWLEDGE-LEVEL)
LO2. The description of the content should be consistent with the learning objectives. (LO-CONSISTENT)
LO3. To illustrate the learning objectives, sample exam questions for each major section should be issued along with the syllabus. (LO-EXAM)

10.1.4 Overall Structure

ST1. The structure of the syllabus should be clear and allow cross-referencing to and from other parts, from exam questions and from other relevant documents. (CROSS-REF)
ST2. Overlap between sections of the syllabus should be minimized. (OVERLAP)
ST3. Each section of the syllabus should have the same structure. (STRUCTURE-CONSISTENT)
ST4. The syllabus should contain version, date of issue and page number on every page. (VERSION)
ST5. The syllabus should include a guideline for the amount of time to be spent in each section (to reflect the relative importance of each topic). (TIME-SPENT)

References

SR1. Sources and references will be given for concepts in the syllabus to help training providers find out more information about the topic. (REFS)
SR2. Where there are not readily identified and clear sources, more detail should be provided in the syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus. (NON-REF DETAIL)


Sources of Information

Terms used in this syllabus are defined in the ISTQB Glossary of Terms Used in Software Testing. A version of the Glossary is available from ISTQB.
A list of recommended books on software testing is also issued in parallel with this syllabus. The main book list is part of the References section.


11. Appendix D – Notice to Training Providers

Each major subject heading in the syllabus is assigned an allocated time in minutes. The purpose of this is both to give guidance on the relative proportion of time to be allocated to each section of an accredited course, and to give an approximate minimum time for the teaching of each section. Training providers may spend more time than is indicated, and candidates may spend more time again in reading and research. A course curriculum does not have to follow the same order as the syllabus.
The syllabus contains references to established standards, which must be used in the preparation of training material. Each standard used must be the version quoted in the current version of this syllabus. Other publications, templates or standards not referenced in this syllabus may also be used and referenced, but will not be examined.
All K3 and K4 Learning Objectives require a practical exercise to be included in the training materials.


12. Appendix E – Release Notes

Release 2010
1. Changes to Learning Objectives (LOs) include some clarification:
a. Wording changed for the following LOs (content and level of the LO remain unchanged): LO-1.2.2, LO-1.3.1, LO-1.4.1, LO-1.5.1, LO-2.1.1, LO-2.1.3, LO-2.4.2, LO-4.1.3, LO-4.2.1, LO-4.2.2, LO-4.3.1, LO-4.3.2, LO-4.3.3, LO-4.4.1, LO-4.4.2, LO-4.4.3, LO-4.6.1, LO-5.1.2, LO-5.2.2, LO-5.3.2, LO-5.3.3, LO-5.5.2, LO-5.6.1, LO-6.1.1, LO-6.2.2, LO-6.3.2
b. LO-1.1.5 has been reworded and upgraded to K2, because a comparison of defect-related terms can be expected.
c. LO-1.2.3 (K2) has been added. The content was already covered in the 2007 syllabus.
d. LO-3.1.3 (K2) now combines the content of LO-3.1.3 and LO-3.1.4.
e. LO-3.1.4 has been removed from the 2010 syllabus, as it is partially redundant with LO-3.1.3.
f. LO-3.2.1 has been reworded for consistency with the 2010 syllabus content.
g. LO-3.3.2 has been modified, and its level has been changed from K1 to K2, for consistency with LO-3.1.2.
h. LO-4.4.4 has been modified for clarity, and has been changed from a K3 to a K4. Reason: LO-4.4.4 had already been written in a K4 manner.
i. LO-6.1.2 (K1) was dropped from the 2010 syllabus and was replaced with LO-6.1.3 (K2). There is no LO-6.1.2 in the 2010 syllabus.
2. Consistent use of "test approach" according to the definition in the glossary. The term "test strategy" will not be required as a term to recall.
3. Chapter 1.4 now contains the concept of traceability between test basis and test cases.
4. Chapters 2.x now contain test objects and test basis.
5. "Re-testing" is now the main term in the glossary instead of "confirmation testing".
6. The aspect of data quality and testing has been added at several locations in the syllabus: data quality and risk in Chapters 2.2, 5.5 and 6.1.8.
7. Chapter 5.2.3 "Entry Criteria" has been added as a new subchapter. Reason: consistency with exit criteria (-> entry criteria added to LO-5.2.9).
8. Consistent use of the terms "test strategy" and "test approach" with their definitions in the glossary.
9. Chapter 6.1 shortened, because the tool descriptions were too large for a 45-minute lesson.
10. IEEE Std 829:2008 has been released. This version of the syllabus does not yet consider this new edition. Section 5.2 refers to the document "Master Test Plan". The content of the Master Test Plan is covered by the concept that the document "Test Plan" covers different levels of planning: test plans for the test levels can be created, as well as a test plan on the project level covering multiple test levels. The latter is named "Master Test Plan" in this syllabus and in the ISTQB Glossary.
11. The Code of Ethics has been moved from the CTAL to the CTFL.

Release 2011

Changes made with the maintenance release 2011:
1. General: "Working Party" replaced by "Working Group"
2. Replaced "post-conditions" by "postconditions" in order to be consistent with the ISTQB Glossary 2.1
3. First occurrence: ISTQB replaced by ISTQB®
4. Introduction to this Syllabus: Descriptions of Cognitive Levels of Knowledge removed, because this was redundant with Appendix B.

5. Section 1.6: Because the intent was not to define a Learning Objective for the Code of Ethics, the cognitive level for the section has been removed.
6. Sections 2.2.1, 2.2.2, 2.2.3, 2.2.4 and 3.2.3: Fixed formatting issues in lists.
7. Section 2.2.2: The word "failure" was not correct for "isolate failures to a specific component". Therefore replaced with "defect" in that sentence.
8. Section 2.3: Corrected formatting of the bullet list of test objectives related to test terms in section "Test Types (K2)".
9. Section 2.3.4: Updated description of debugging to be consistent with Version 2.1 of the ISTQB Glossary.
10. Section 2.4: Removed the word "extensive" from "includes extensive regression testing", because the extent of regression testing depends on the change (size, risks, value, etc.), as stated in the next sentence.
11. Section 3.2: The word "including" has been removed to clarify the sentence.
12. Section 3.2.1: Because the activities of a formal review had been incorrectly formatted, the review process had 12 main activities instead of six, as intended. It has been changed back to six, which makes this section compliant with the Syllabus 2007 and the ISTQB Advanced Level Syllabus 2007.
13. Section 4: The word "developed" replaced by "defined", because test cases get defined and not developed.
14. Section 4.2: Text change to clarify how black-box and white-box testing could be used in conjunction with experience-based techniques.
15. Section 4.3.5: Text changed from "..between actors, including users and the system.." to "between actors (users or systems), ...".
16. Section 4.3.5: "alternative path" replaced by "alternative scenario".
17. Section 4.4.2: In order to clarify the term "branch testing" in the text of Section 4.4, a sentence to clarify the focus of branch testing has been changed.
18. Section 4.5 and Section 5.2.6: The term "experienced-based testing" has been replaced by the correct term "experience-based".
19. Section 6.1: Heading "6.1.1 Understanding the Meaning and Purpose of Tool Support for Testing (K2)" replaced by "6.1.1 Tool Support for Testing (K2)".
20. Section 7 / Books: The 3rd edition of [Black, 2001] listed, replacing the 2nd edition.
21. Appendix D: Chapters requiring exercises have been replaced by the generic requirement that all Learning Objectives K3 and higher require exercises. This is a requirement specified in the ISTQB Accreditation Process (Version 1.26).
22. Appendix E: The changed learning objectives between Versions 2007 and 2010 are now correctly listed.

Version 2011
International Software Testing Qualifications Board
Page 75 of 78
31-Mar-2011

13. Index
action word ..... 63
alpha testing ..... 24, 27
architecture ..... 15, 21, 22, 25, 28, 29
archiving ..... 17, 30
automation ..... 29
benefits of independence ..... 47
benefits of using tool ..... 62
beta testing ..... 24, 27
black-box technique ..... 37, 39, 40
black-box test design technique ..... 39
black-box testing ..... 28
bottom-up ..... 25
boundary value analysis ..... 40
bug ..... 11
captured script ..... 62
checklists ..... 34, 35
choosing test technique ..... 44
code coverage ..... 28, 29, 37, 42, 58
commercial off the shelf (COTS) ..... 22
compiler ..... 36
complexity ..... 11, 36, 50, 59
component integration testing ..... 22, 25, 29, 59, 60
component testing ..... 22, 24, 25, 27, 29, 37, 41, 42
configuration management ..... 45, 48, 52
configuration management tool ..... 58
confirmation testing ..... 13, 15, 16, 21, 28, 29
contract acceptance testing ..... 27
control flow ..... 28, 36, 37, 42
coverage ..... 15, 24, 28, 29, 37, 38, 39, 40, 42, 50, 51, 58, 60, 62
coverage tool ..... 58
custom-developed software ..... 27
data flow ..... 36
data-driven approach ..... 63
data-driven testing ..... 62
debugging ..... 13, 24, 29, 58
debugging tool ..... 24, 58
decision coverage ..... 37, 42
decision table testing ..... 40, 41
decision testing ..... 42
defect ..... 10, 11, 13, 14, 16, 18, 21, 24, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 43, 44, 45, 47, 49, 50, 51, 53, 54, 55, 59, 60, 69
defect density ..... 50, 51
defect tracking tool ..... 59
development ..... 8, 11, 12, 13, 14, 18, 21, 22, 24, 29, 32, 33, 36, 38, 44, 47, 49, 50, 52, 53, 55, 59, 67
development model ..... 21, 22
drawbacks of independence ..... 47
driver ..... 24
dynamic analysis tool ..... 58, 60
dynamic testing ..... 13, 31, 32, 36
emergency change ..... 30
enhancement ..... 27, 30
entry criteria ..... 33
equivalence partitioning ..... 40
error ..... 10, 11, 18, 43, 50
error guessing ..... 18, 43, 50
exhaustive testing ..... 14
exit criteria ..... 13, 15, 16, 33, 35, 45, 48, 49, 50, 51
expected result ..... 16, 38, 48, 63
experience-based technique ..... 37, 39, 43
experience-based test design technique ..... 39
exploratory testing ..... 43, 50
factory acceptance testing ..... 27
failure ..... 10, 11, 13, 14, 18, 21, 24, 26, 32, 36, 43, 46, 50, 51, 53, 54, 69
failure rate ..... 50, 51
fault ..... 10, 11, 43
fault attack ..... 43
field testing ..... 24, 27
follow-up ..... 33, 34, 35
formal review ..... 31, 33
functional requirement ..... 24, 26
functional specification ..... 28
functional task ..... 25
functional test ..... 28
functional testing ..... 28
functionality ..... 24, 25, 28, 50, 53, 62
impact analysis ..... 21, 30, 38
incident ..... 15, 16, 17, 19, 24, 46, 48, 55, 58, 59, 62
incident logging ..... 55
incident management ..... 48, 55, 58
incident management tool ..... 58, 59
incident report ..... 46, 55
independence ..... 18, 47, 48
informal review ..... 31, 33, 34
inspection ..... 31, 33, 34, 35
inspection leader ..... 33
integration ..... 13, 22, 24, 25, 27, 29, 36, 40, 41, 42, 45, 48, 59, 60, 69
integration testing ..... 22, 24, 25, 29, 36, 40, 45, 59, 60, 69
interoperability testing ..... 28
introducing a tool into an organization ..... 57, 64
ISO 9126 ..... 11, 29, 30, 65
iterative-incremental development model ..... 22
keyword-driven approach ..... 63
keyword-driven testing ..... 62
kick-off ..... 33
learning objective ..... 8, 9, 10, 21, 31, 37, 45, 57, 69, 70, 71
load testing ..... 28, 58, 60
load testing tool ..... 58
maintainability testing ..... 28
maintenance testing ..... 21, 30
management tool ..... 48, 58, 59, 63
maturity ..... 17, 33, 38, 64
metric ..... 33, 35, 45
mistake ..... 10, 11, 16
modelling tool ..... 59
moderator ..... 33, 34, 35
monitoring tool ..... 48, 58
non-functional requirement ..... 21, 24, 26
non-functional testing ..... 11, 28
objectives for testing ..... 13
off-the-shelf ..... 22
operational acceptance testing ..... 27
operational test ..... 13, 23, 30
patch ..... 30
peer review ..... 33, 34, 35
performance testing ..... 28, 58
performance testing tool ..... 58, 60
pesticide paradox ..... 14
portability testing ..... 28
probe effect ..... 58
procedure ..... 16
product risk ..... 18, 45, 53, 54
project risk ..... 12, 45, 53
prototyping ..... 22
quality ..... 8, 10, 11, 13, 19, 28, 37, 38, 47, 48, 50, 53, 55, 59
rapid application development (RAD) ..... 22
Rational Unified Process (RUP) ..... 22
recorder ..... 34
regression testing ..... 15, 16, 21, 28, 29, 30
Regulation acceptance testing ..... 27
reliability ..... 11, 13, 28, 50, 53, 58
reliability testing ..... 28
requirement ..... 13, 22, 24, 32, 34
requirements management tool ..... 58
requirements specification ..... 26, 28
responsibilities ..... 24, 31, 33
re-testing ..... 29, See confirmation testing
review ..... 13, 19, 31, 32, 33, 34, 35, 36, 47, 48, 53, 55, 58, 67, 71
review tool ..... 58
reviewer ..... 33, 34
risk ..... 11, 12, 13, 14, 25, 26, 29, 30, 38, 44, 45, 49, 50, 51, 53, 54
risk-based approach ..... 54
risk-based testing ..... 50, 53, 54
risks ..... 11, 25, 49, 53
risks of using tool ..... 62
robustness testing ..... 24
roles ..... 8, 31, 33, 34, 35, 47, 48, 49
root cause ..... 10, 11
scribe ..... 33, 34
scripting language ..... 60, 62, 63
security ..... 27, 28, 36, 47, 50, 58
security testing ..... 28
security tool ..... 58, 60
simulators ..... 24
site acceptance testing ..... 27
software development ..... 8, 11, 21, 22
software development model ..... 22
special considerations for some types of tool ..... 62
specification-based technique ..... 29, 39, 40
specification-based testing ..... 37
stakeholders ..... 12, 13, 16, 18, 26, 39, 45, 54
state transition testing ..... 40, 41
statement coverage ..... 42
statement testing ..... 42
static analysis ..... 32, 36
static analysis tool ..... 31, 36, 58, 59, 63
static technique ..... 31, 32
static testing ..... 13, 32
stress testing ..... 28, 58, 60
stress testing tool ..... 58, 60
structural testing ..... 24, 28, 29, 42
structure-based technique ..... 39, 42
structure-based test design technique ..... 42
structure-based testing ..... 37, 42
stub ..... 24
success factors ..... 35
system integration testing ..... 22, 25
system testing ..... 13, 22, 24, 25, 26, 27, 49, 69
technical review ..... 31, 33, 34, 35
test analysis ..... 15, 38, 48, 49
test approach ..... 38, 48, 50, 51
test basis ..... 15
test case ..... 13, 14, 15, 16, 24, 28, 32, 37, 38, 39, 40, 41, 42, 45, 51, 55, 59, 69
test case specification ..... 37, 38, 55
test cases ..... 28
test closure ..... 10, 15, 16
test condition ..... 38
test conditions ..... 13, 15, 16, 28, 38, 39
test control ..... 15, 45, 51
test coverage ..... 15, 50
test data ..... 15, 16, 38, 48, 58, 60, 62, 63
test data preparation tool ..... 58, 60
test design ..... 13, 15, 22, 37, 38, 39, 43, 48, 58, 62
test design specification ..... 45
test design technique ..... 37, 38, 39
test design tool ..... 58, 59
Test Development Process ..... 38
test effort ..... 50
test environment ..... 15, 16, 17, 24, 26, 48, 51
test estimation ..... 50
test execution ..... 13, 15, 16, 32, 36, 38, 43, 45, 57, 58, 60
test execution schedule ..... 38
test execution tool ..... 16, 38, 57, 58, 60, 62
test harness ..... 16, 24, 52, 58, 60
test implementation ..... 16, 38, 49
test leader ..... 18, 45, 47, 55
test leader tasks ..... 47
test level ..... 21, 22, 24, 28, 29, 30, 37, 40, 42, 44, 45, 48, 49
test log ..... 15, 16, 43, 60
test management ..... 45, 58
test management tool ..... 58, 63
test manager ..... 8, 47, 53
test monitoring ..... 48, 51
test objective ..... 13, 22, 28, 43, 44, 48, 51
test oracle ..... 60
test organization ..... 47
test plan ..... 15, 16, 32, 45, 48, 49, 52, 53, 54
test planning ..... 15, 16, 45, 49, 52, 54
test planning activities ..... 49
test procedure ..... 15, 16, 37, 38, 45, 49
test procedure specification ..... 37, 38
test progress monitoring ..... 51
test report ..... 45, 51
test reporting ..... 45, 51
test script ..... 16, 32, 38
test strategy ..... 47
test suite ..... 29
test summary report ..... 15, 16, 45, 48, 51
test tool classification ..... 58
test type ..... 21, 28, 30, 48, 75
test-driven development ..... 24
tester ..... 10, 13, 18, 34, 41, 43, 45, 47, 48, 52, 62, 67
tester tasks ..... 48
test-first approach ..... 24
testing and quality ..... 11
testing principles ..... 10, 14
testware ..... 15, 16, 17, 48, 52
tool support ..... 24, 32, 42, 57, 62
tool support for management of testing and tests ..... 59
tool support for performance and monitoring ..... 60
tool support for static testing ..... 59
tool support for test execution and logging ..... 60
tool support for test specification ..... 59
tool support for testing ..... 57, 62
top-down ..... 25
traceability ..... 38, 48, 52
transaction processing sequences ..... 25
types of test tool ..... 57, 58
unit test framework ..... 24, 58, 60
unit test framework tool ..... 58, 60
upgrades ..... 30
usability ..... 11, 27, 28, 45, 47, 53
usability testing ..... 28, 45
use case test ..... 37, 40
use case testing ..... 37, 40, 41
use cases ..... 22, 26, 28, 41
user acceptance testing ..... 27
validation ..... 22
verification ..... 22
version control ..... 52
V-model ..... 22
walkthrough ..... 31, 33, 34
white-box test design technique ..... 39, 42
white-box testing ..... 28, 42
