Keynote Presentation
Wednesday 10/24/2007, 8:45 AM
Presented at: The International Conference on Software Testing Analysis and Review, October 22-26, 2007; Anaheim, CA, USA
Dorothy Graham
Dorothy Graham is the founder of Grove Consultants in the UK, which provides advice, training and inspiration in software testing, testing tools and Inspection. Originally from Grand Rapids, Michigan, she has lived and worked in the UK for over 30 years. Dorothy is co-author with Tom Gilb of "Software Inspection" (Addison-Wesley, 1993), co-author with Mark Fewster of "Software Test Automation" (Addison-Wesley, 1999), and co-author (with Rex Black, Erik van Veenendaal and Isabel Evans) of "Foundations of Software Testing: ISTQB Certification" (Thomson, 2007). Dorothy was Programme Chair for the first EuroSTAR Conference in 1993. She has served on the boards of conferences and journals in software testing, and has been an active member of the British Computer Society's Specialist Interest Group in Software Testing since 1989. She was a founder member of the Software Testing Board of the Information Systems Examination Board (ISEB). She was a member of the working party that developed the ISTQB Foundation Syllabus. Dot is a popular and entertaining speaker at conferences and seminars worldwide. She was awarded the European Excellence Award in Software Testing in 1999.
Mark Fewster
Mark has more than 20 years of industrial experience in software testing. Since joining Grove Consultants in 1993, he has provided consultancy and training in software testing, particularly in the application of testing techniques and test automation. He has published papers in respected journals and is a popular speaker at national and international conferences and seminars. Mark is co-author, with Dorothy Graham, of the book "Software Test Automation", published by Addison-Wesley. In 2006 he received the Mercury BTO Innovation in Quality Award.
Dorothy Graham
40 Ryles Park Road Macclesfield SK11 8AH, UK Tel +44 1625 616279 email: dorothy@grove.co.uk
www.grove.co.uk
Grove Consultants, 2007
Five doings
What do we do when we are testing?
- searching (for defects)
- checking (software or aspects)
- assessing (software or system)
- measuring (software quality)
- sampling (software)
Searching
in testing we search for
- defects and clues to possible defects
- evidence about what works
examples outside testing
- searching for something that has been lost
- searching for evidence (detective, archaeologist)
- fishing
- treasure hunt (competition)
Characteristics of searching
searching is easier
- if you know what to search for
- if you know where to search
- with a map
- with a bright light
can be difficult to know when to stop
motivation is significant
- affects effectiveness / efficiency
- want to find defects / want to be searching
Code
[Figure residue: a boundary-value table for a date-checking example, with conditions such as Day = 1 -> 31 (for months 7, 8, 10, 12), Day = 1 -> 30 (for months 6, 9, 11), No. days = -6 -> 207, and invalid boundary test cases XB1-XB9; the table itself did not survive extraction.]
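The Day/Month boundary conditions in the example above can be exercised with a short sketch. This is an illustrative reconstruction, not the original example: the `day_is_valid` helper and the full month table (including February fixed at 28 days, ignoring leap years) are assumptions.

```python
# Hypothetical sketch of boundary-value checks for a Day field whose
# valid upper bound depends on Month (31-day vs 30-day months).
# February is fixed at 28 here (leap years ignored) - an assumption.
DAYS_IN_MONTH = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
                 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

def day_is_valid(month: int, day: int) -> bool:
    """Return True if day falls within the valid range for month."""
    if month not in DAYS_IN_MONTH:
        return False
    return 1 <= day <= DAYS_IN_MONTH[month]

# Boundary values: just inside and just outside each edge.
boundary_cases = [(1, 0), (1, 1), (1, 31), (1, 32),   # a 31-day month
                  (6, 30), (6, 31)]                    # a 30-day month
results = [day_is_valid(m, d) for m, d in boundary_cases]
```

Testing exactly at and just beyond each boundary is what makes the invalid cases (the XB tags above) cheap to find.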
having tools to assist in searching
- makes searching easier, quicker, more reliable
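As a minimal sketch of tool-assisted searching, the following scans log lines for clues to possible defects. The clue patterns and the log lines are illustrative assumptions, not from any real system.

```python
import re

# Patterns that are clues to possible defects (illustrative only).
CLUE_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in (r"\berror\b", r"\bexception\b", r"\btimeout\b")]

def find_clues(log_lines):
    """Return (line_number, line) pairs that match any clue pattern."""
    return [(i, line) for i, line in enumerate(log_lines, start=1)
            if any(p.search(line) for p in CLUE_PATTERNS)]

log = ["startup complete",
       "ERROR: connection refused",
       "request served in 12 ms",
       "Timeout waiting for reply"]
clues = find_clues(log)   # matches lines 2 and 4
```

A bright light and a map in code form: the tool does not decide what a defect is, but it makes the search quicker and more repeatable.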
Checking
in testing we check
- software is working (a form of positive testing)
- software against spec. / standards / checklists
examples outside testing
- camping checklist
- report
- car (MOT)
- multiple choice exam
Characteristics of checking
multiple definitions
- examine, investigate, make inquiry for accuracy, quality of progress
- a (rapid) control to ensure accuracy, progress (check list)
outcome
- binary: yes / no, pass / fail
others (less appropriate)
- pause, restrain, control
- slow growth or progress
- impede an opponent
Objectivity of checking
checking against something
- checklist
- standard
- specification
not opinion
- outcome is pass/fail, ok/not ok
predetermined
- known, trusted, understood
- used before?
- others know what is and what is not checked
- can be improved
must be credible
- must exist - otherwise cannot check
- be respected / valued
Brevity of checking
cover checklist / standard / spec.
- no more
easy to do
- with appropriate skills / knowledge
quick to do
- less thinking
possibly can stop early
- stop on first checklist failure (reason not to ship)
- if software can be rejected
further investigation
- becomes assessing, not checking
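The characteristics above (predetermined items, binary outcome, stop on first failure) can be sketched as a checklist runner. The checklist items and the release record here are illustrative assumptions.

```python
# A sketch of checking: each item on a predetermined checklist is a
# named predicate, the outcome is binary, and we may stop on the first
# failure (a reason not to ship). Items are illustrative only.
def run_checklist(checklist, subject, stop_on_first_failure=True):
    """Return (passed, failures): overall pass/fail plus failed items."""
    failures = []
    for name, check in checklist:
        if not check(subject):
            failures.append(name)
            if stop_on_first_failure:
                break
    return (not failures, failures)

release = {"tests_green": True, "docs_updated": False, "signed_off": True}
checklist = [("tests green", lambda r: r["tests_green"]),
             ("docs updated", lambda r: r["docs_updated"]),
             ("signed off", lambda r: r["signed_off"])]
ok, failed = run_checklist(checklist, release)   # fails on "docs updated"
```

Note the brevity: no judgement is made about how bad the failure is; that further investigation would be assessing, not checking.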
Assessing
in testing we assess
- the overall quality of the software / system
- whether it is ready for release
examples outside testing
- buying a car
- choosing a house
- essay exam
- hiring people
- art work
Characteristics of assessing
definitions
- estimate value of
- determine amount of (tax, fine, damages, etc.)
- judge the worth or importance of; evaluate
outcome
- on a scale, not binary
- may not be conclusive
implies
- subjectivity: use personal or professional opinion
- open ended: no clear finish line
- no prior knowledge of state of software: might be good, could be bad
Subjectivity of assessing
assessment using
- personal opinion
- professional opinion
- consensus
outcome
- may need supporting evidence
- persuasive arguments
different people
- may give different assessment
variable criteria
- assessment criteria not all predetermined
- may change (unwittingly) over time
- not obvious to others what is and what is not assessed
- can be challenged
Open-endedness of assessing
how much to assess?
- broad scope
- all aspects or just the most important?
- which are the most important?
how thorough?
- equal depth
- greater depth for most important areas
proprietary knowledge
- training, experience
can take time / resource
- more thinking, balancing
- maybe more to consider than planned (e.g. impact of many defects)
further investigation
- focus more specific
- more thorough
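One way to see how assessing differs from checking is a weighted-score sketch: subjective per-aspect scores on a scale, with weights reflecting which areas matter most. The aspects, weights and scores are illustrative assumptions.

```python
# A sketch of assessment: the outcome is a number on a scale, not
# pass/fail, and it depends on subjective scores and chosen weights.
# Aspects, weights and scores are illustrative only.
def assess(scores, weights):
    """Weighted average of per-aspect scores (0-10 scale)."""
    total_weight = sum(weights[a] for a in scores)
    return sum(scores[a] * weights[a] for a in scores) / total_weight

weights = {"reliability": 3, "usability": 2, "performance": 1}
scores = {"reliability": 8, "usability": 6, "performance": 4}
overall = assess(scores, weights)   # (24 + 12 + 4) / 6, about 6.67
```

A different assessor, with different weights or scores, would legitimately get a different number; that variability is the subjectivity the slide describes.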
Measuring
in testing we measure
- quality of the software
examples outside testing
- table
- car
- driving skill
- cabinets or shelves
- dress-making
Characteristics of measuring
assign a value to something
- one or more numbers
- a qualitative measure (e.g. good, ok, poor)
quality of measure can vary
- simple guess to scientific accuracy
appropriateness of metrics can vary
- many aspects to measure, not all are relevant
easier to measure what has been measured before
Quality of measure
guess (professional estimate)
- approximate, quick, better if visible
- based on experience, comparison, information
more detailed assessment
- takes time, effort, resource
- more accurate
extensive assessment
- much more time, effort, resource
- more accurate, precise
Appropriateness of measure
depends on the objective
- what do you want to know?
- how many different measures are needed?
in testing
- we want to know the goodness of the software, but often report the badness (e.g. number of bugs found)
- historical: measure of old version of software
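Both kinds of measure on the slide, a number and a qualitative band, can be sketched together. The defect-density metric is a common choice, but the banding thresholds here are illustrative assumptions, not an industry standard.

```python
# A sketch of measuring: one number (defect density) plus a coarse
# qualitative measure derived from it. Thresholds are assumptions.
def defect_density(defects_found, kloc):
    """Defects per thousand lines of code."""
    return defects_found / kloc

def quality_band(density):
    """Map a density to a qualitative measure (good / ok / poor)."""
    if density < 1:
        return "good"
    if density < 5:
        return "ok"
    return "poor"

density = defect_density(12, 4.0)   # 3.0 defects/KLOC
band = quality_band(density)        # "ok"
```

Note that this reports badness (bugs found) as a proxy for the goodness we actually want to know, which is exactly the tension the slide points out.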
Sampling
in testing we are sampling
- the software, by sampling inputs (test cases)
examples outside testing
- election exit polls
- product manufacturer
- market surveys
Characteristics of sampling
choice of sample is critical
- sample size: generally, the larger the better
- demographics: must be representative
we infer conclusions for the larger set
- these can be wrong
we can improve our sampling
- use tools to increase sample size
- use techniques (e.g. pre-qualify a larger sample)
Choice of sample
sample size
- larger samples give more accurate results but cost more
demographics
- being representative is key
techniques help
- can change over time
in testing
- sample size often constrained by time / resource
- demographics biased to most important
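Biasing the sample demographics toward the most important areas can be sketched as stratified sampling of test inputs. The area names, input domains and per-area sizes are illustrative assumptions.

```python
import random

# A sketch of sampling test inputs: draw more from the most important
# area (biased demographics), fewer from the rest. Illustrative only.
def stratified_sample(domains, sizes, seed=0):
    """Draw sizes[area] inputs from each area's input domain."""
    rng = random.Random(seed)   # fixed seed so the sample is repeatable
    return {area: rng.sample(inputs, sizes[area])
            for area, inputs in domains.items()}

domains = {"payments": list(range(100)),    # most important: sample more
           "reporting": list(range(100))}
sample = stratified_sample(domains, {"payments": 10, "reporting": 3})
```

Whatever the sample shows, the conclusion about the whole input space is still an inference, and it can be wrong, which is why sample size and representativeness matter.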
Summary
testing involves activities that we do elsewhere
- searching, checking, assessing, measuring, sampling
understanding how we do these activities
- can help improve the way we do them
learn from similar tasks outside testing
- can help avoid misunderstandings
- can improve those activities within testing