
CRITICAL SKILLS QUESTIONS

I. GENERAL SKILLS (CATEGORY 1)

1. COMMUNICATION (DOMAIN 1)

1.1 Giving/Receiving Information
1. List five suggestions for relaying defect information and/or delivering criticism effectively.
2. What are three reasons why people don't listen well?
3. Give one example of an effective audience evaluation technique that can be used to level-set a testing presentation or design a course for test or development staff.
4. How do you know if the information you provided to an audience has been understood and communicated effectively?
5. Give three examples of pertinent information that should always be included with defect reports.

1.2 Personal Effectiveness
1. You are the quality assurance manager at the APEX company. One of your quality assurance analysts is engaged in a dispute with a developer over the cause of a defect reported by the analyst. What can/should you do?

2. PROFESSIONAL DEVELOPMENT (DOMAIN 2)

2.1 Continuing Education
1. List three ways a tester can improve his/her test ability.
2. In what ways, if any, can a tester induce change and/or modification to a development process?

2.2 Leadership and Recognition
1. What is the main purpose of a task force?
2. What is the role of a facilitator?
3. What role, in test process definition, should a tester play?

3. QUALITY PRINCIPLES AND CONCEPTS (DOMAIN 3)

3.1 Quality Principles/Quality Management
1. Define cost of quality.
2. Briefly define management by fact.
3. How many Deming principles are there?
4. How many levels are in the CMM?

3.2 Quality Assurance/Quality Control
1. Describe the fundamental difference between QA and QC.
2. Give five examples of quality attributes or critical success factors.

4. METHODS FOR SOFTWARE DEVELOPMENT AND MAINTENANCE (DOMAIN 4)

4.1 Process Knowledge
1. Briefly describe the difference between automated and manual testing tools.
2. Give two examples of each type.
3. Describe IV&V (independent verification and validation).

4.2 Roles/Responsibilities
1. List three types of interfaces.
2. Who should be accountable for testing?

II. TEST SKILLS/APPROACHES (CATEGORY 2)

5. TESTING PRINCIPLES AND CONCEPTS (DOMAIN 5)

5.1 Testing Specifications, Techniques, and Methods
1. What is COTS?
2. List five phases in a typical testing life cycle.


3. List at least four components of a good test strategy.
4. What are three general test strategies during requirements?
5. What are the four verification activities that should occur at the design phase?

6. VERIFICATION AND VALIDATION METHODS (DOMAIN 6)

6.1 Verification Concepts and Reviews/Inspection
1. List three types of reviews.
   _________________  _________________  _________________
2. What is an audit?
3. What three rules should be followed for all reviews?
4. What is the main goal of a review?
5. What are the three most common phase-end reviews?
6. Post-implementation reviews are held when:
   a. The product is certified for release
   b. The product is replaced
   c. The product has been released
   d. There is a major failure

7. TEST MANAGEMENT, STANDARDS, AND ENVIRONMENT (DOMAIN 7)

7.1 Test Management
1. Draw a test process workbench.
2. List five skills a competent tester should have.
3. Give an example of a system objective and its associated test objective.

7.2 Test Standards
1. List three industry test standard organizations.
2. Give three examples of internal test standards.

7.3 Test Environment
1. Differentiate between a test bed and a test script.
2. A test environment includes:

III. TEST PLANNING (CATEGORY 3)

8. RISK ANALYSIS (DOMAIN 8)

8.1 Risk Identification
1. List three of the five dimensions of risk.
2. What are the two risk levels?
3. Define risk as it applies to testing.

8.2 Managing Risks
1. Give three examples of testing risks.
2. When is risk assessment conducted?

9. TEST TACTICS (DOMAIN 9)

9.1 Structural Test Approaches
1. List four structural test approaches.

9.2 Functional Test Approaches
1. Give four functional test approaches.

9.3 Test Tools/Environment
1. Describe the use of a traceability tool.
2. Discuss why change control is/is not important to a test environment.
3. Change control is sometimes part of:
   a. Problem report documents
   b. Defect analysis
   c. Software library procedures
   d. None of the above
4. Configuration management can be applied to non-test changes. True or False

10. PLANNING PROCESS (DOMAIN 10)

10.1 Pre-Planning Activities
1. Give two examples of well-defined user acceptance criteria.
2. What is the difference between a test objective and a test case?
3. Should assumptions be included in a test plan? Why or why not?
4. Define entrance and exit criteria.

10.2 Test Plan
1. List eight parts of a typical system test plan.
2. A unit level test plan should be input to a system or integration level test plan. True or False

IV. EXECUTING THE TEST PLAN (CATEGORY 4)

11. TEST DESIGN (DOMAIN 11)

11.1 Design Preparation
1. Test coverage is defined as:
2. Describe the characteristics of a test bed.

11.2 Design Execution
1. Give three techniques for creation of test data.
2. A test script includes a procedure for executing the test and what other components? (Name at least three.)

12. PERFORMING TESTS (DOMAIN 12)

12.1 Execute Tests
1. Execution of a test in a test plan can only occur during the test phase of a project life cycle. True or False
   Why? ___________________________________________________

12.2 Compare Actual versus Expected Results
1. To compare actual to expected results, the tester first designs test cases, which include at least these three components:
   _____________________  ______________________  ______________________

12.3 Test Logs
1. What is the purpose of a test log?
2. What type of information should be included on a test log?


12.4 Record Discrepancies
1. The purpose of recording defects is to make a complete record of all discrepancies found during testing. The input to this task would include (name three items):
   _____________________  ______________________  ______________________
2. A defect can be identified in three ways. Name them.
   _____________________  ______________________  ______________________

13. DEFECT TRACKING AND MANAGEMENT

13.1 Testing Defect Corrections
1. Who should review defect reports?
2. Regression testing is the easiest way to uncover defects. True or False
   Why?

V. TEST ANALYSIS, REPORTING, AND IMPROVEMENT (CATEGORY 5)

14. QUANTITATIVE MEASUREMENT (DOMAIN 14)

14.1 Test Completion Criteria/Metrics
1. What is the purpose of code coverage tools?
2. Give four examples of test-specific metrics.
3. Give one commonly recognized size measurement tool.

15. TEST REPORTING (DOMAIN 15)

15.1 Reporting Tools and Standards
1. Give three components included in a system test report.
2. How is statistical analysis used in test reporting?


16. IMPROVING THE TEST PROCESS (DOMAIN 16)

16.1 Test Quality Control
1. What are two ways to demonstrate that the test process has been performed correctly?

16.2 Analysis of the Test Process
1. How could you analyze the current test process?

16.3 Test Continuous Improvement
1. How could continuous improvement be applied to a Level One CMM test organization?


CRITICAL SKILLS ANSWERS

I. GENERAL SKILLS (CATEGORY 1)

1. COMMUNICATION (DOMAIN 1)

1.1 Giving/Receiving Information
1. Do it privately. Have the facts. Be prepared to help the person improve. Follow a process when giving criticism. Be specific on expectations.
2. The speaker may not be talking about a topic of interest to them. They are impatient and have a lot of other stimuli going through their minds. They are too busy rehearsing what they will say next.
3. Do an internal self-assessment (for a training course), or ask questions of the audience before and after a presentation.
4. You ask questions and conduct an audience evaluation immediately.
5. What type of defect was found? Where was the defect found? What is the severity level of the defect?

1.2 Personal Effectiveness
1. You should discuss the defect report with the QA analyst to ensure that the information is correct and that a root cause analysis was conducted. If it is clearly a case of developer error, then have both people discuss it and come to an effective resolution. Record the defect and its status.

2. PROFESSIONAL DEVELOPMENT (DOMAIN 2)

2.1 Continuing Education
1. Prepare for the CSTE exam. Join a local professional association and actively participate. Read journals and magazines about the testing profession.
2. A tester can induce changes into the development process by providing both qualitative and quantitative data derived from the testing process. With this information and these metrics, changes can be identified and introduced into the development process that will ultimately benefit the organization. Information such as the types and frequency of defects found, where in the development process they were found, and on-time delivery capability can assist in improving the overall quality of the delivered products.

2.2 Leadership and Recognition
1. The main purpose of a task force is to develop options, not to solve a problem.
2. To provide an unbiased focal point for a discussion and keep the discussion on track.
3. A tester should play a major role in test process definition, since the test process is his/her focus. The tester should review the current test process with an objective of inserting improvements, and then should pilot the process and provide feedback, suggestions, and/or recommendations. If a new test process is being developed, then the tester is the best person to ensure that best testing practices are used.


3. QUALITY PRINCIPLES AND CONCEPTS (DOMAIN 3)

3.1 Quality Principles/Quality Management
1. COQ is everything other than the true cost of building a product. It includes failure and appraisal/prevention costs.
2. MBF means that the specified results should be qualitative, and the processes should provide adequate quantitative data as a by-product of the process to determine whether or not the specified results are being achieved.
3. 14
4. 5

3.2 Quality Assurance/Quality Control
1. QA activities are used to prevent the introduction of flaws; activities that find and correct flaws are QC activities.
2. Accuracy; Ease of use; Service levels; Portability; File integrity.

4. METHODS FOR SOFTWARE DEVELOPMENT AND MAINTENANCE (DOMAIN 4)

4.1 Process Knowledge
1. Manual tools do not execute program code and do not require executing software; automated tools do.
2. Manual: reviews and checklists. Automated: code coverage analyzers and file comparators.
3. Independent verification and validation means reviewing a process by an entity outside of the process itself. This entity will investigate whether or not a process exists and, if so, whether it is being followed and how effective it is.

4.2 Roles/Responsibilities
1. Human to human; System to system; Human to machine.
2. Everyone, but ultimately the test manager.
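As a small illustration of the automated-tool category above, the following sketch (Python; a hypothetical example, not part of the CSTE materials) does what a simple file comparator does: it diffs a baseline output file against the output of the current run.

```python
# Minimal file-comparator sketch: report lines that differ between a
# baseline (expected) output file and the current test run's output.
import difflib

def compare_files(baseline_path, actual_path):
    with open(baseline_path) as f:
        baseline = f.readlines()
    with open(actual_path) as f:
        actual = f.readlines()
    # unified_diff yields nothing when the two files match
    return list(difflib.unified_diff(baseline, actual,
                                     fromfile="baseline", tofile="actual"))

if __name__ == "__main__":
    diffs = compare_files("expected_report.txt", "actual_report.txt")
    print("PASS" if not diffs else "".join(diffs))
```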

II. TEST SKILLS/APPROACHES (CATEGORY 2)

5. TESTING PRINCIPLES AND CONCEPTS (DOMAIN 5)

5.1 Testing Specifications, Techniques, and Methods
1. Commercial off-the-shelf software (shrink-wrapped).
2. Requirements phase; Design phase; Program phase; Implementation phase; Maintenance phase.
3. Critical success factors; Risk analysis; Assumptions; Methodology to be followed.
4. Conduct a requirements review and identify every requirement. Ensure that every requirement is testable. Determine an overall test strategy.
5. Authorization rules are established. File integrity controls are designed. Design conforms to requirements. Contingency plan is developed.

6. VERIFICATION AND VALIDATION METHODS (DOMAIN 6)

6.1 Verification Concepts and Reviews/Inspection
1. Checkpoint, Inspection, and Walkthrough.


2. An inspection/assessment activity that verifies compliance with plans, policies, and procedures.
3. Only those who could have developed the product should participate. No personal attacks; the review is of the product. Set a time limit for the review.
4. To uncover defects.
5. Requirements, Design, and Test.
6. c. (The product has been released.)

7. TEST MANAGEMENT, STANDARDS, AND ENVIRONMENT (DOMAIN 7)

7.1 Test Management

1. The test process workbench:

   PRODUCT(S) IN --> [Procedures to DO work] --> [Procedures to CHECK work] --> PRODUCT(S) OUT
                            ^                             |
                            +---------- REWORK -----------+
                    (TOOLS support both the DO and CHECK procedures)

2. Test process knowledge; Excellent written and oral communication skills; Analytical ability; Knowledge of test tools; and Understanding of defect management.
3. System objective: the system must be portable to both Unix and NT operating systems. Test objective: parallel test with the same results on both platforms.

7.2 Test Standards
1. IEEE, ISO, DoD.
2. All levels of testing will prepare a test plan. Complete test documentation must be available before implementation. All Category 1 and Category 2 defects will be resolved before implementation.

7.3 Test Environment
1. Test beds are developed for a single type of test; test scripts are created with an automated capture/playback tool and executed over and over.
2. Tools, platforms, scripts, hardware, software, resources, and skills needed to support testing.
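To make the test bed / test script distinction concrete, here is a minimal sketch in Python (the account data and procedure are invented for illustration): the test bed is predefined data, while the script is the repeatable procedure executed against it.

```python
# Test bed: predefined data the test runs against (contents fixed in advance).
TEST_BED = [
    {"account": "A-100", "balance": 250.00},
    {"account": "A-101", "balance": 0.00},
]

# Test script: a repeatable procedure executed over and over (e.g., by a
# capture/playback tool or a test runner) against the test bed.
def run_withdrawal_script(accounts):
    for row in accounts:
        can_withdraw = row["balance"] >= 100.00
        expected = row["account"] == "A-100"   # predefined expected result
        status = "PASS" if can_withdraw == expected else "FAIL"
        print(f"{row['account']}: {status}")

run_withdrawal_script(TEST_BED)
```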

III. TEST PLANNING (CATEGORY 3)

8. RISK ANALYSIS (DOMAIN 8)

8.1 Risk Identification
1. Technology integration; Size and complexity; System environment and stability; Criticality/mission impact; and Reliability and integrity.
2. Level 1: criticality/mission impact. Level 2: size, system environment, reliability, and integration.
3. Risk analysis will identify high-risk applications in order to subject them to more extensive testing, and it can help focus testing on the critical components and/or quality dimensions that are most important to the project.


8.2 Managing Risks
1. Lack of trained testers; Inadequate change control; and Compressed timeframes.
2. At the earliest possible stage of a development project, and then at critical points along the development life cycle.

9. TEST TACTICS (DOMAIN 9)

9.1 Structural Test Approaches
1. Operations testing; Stress testing; Recovery testing; Security testing.

9.2 Functional Test Approaches
1. Requirements; Regression; Parallel; Error-handling.

9.3 Test Tools/Environment
1. A traceability tool follows the paths taken by a computer program as it processes data.
2. Change control is critical. Testing in an unstable or changing environment does not allow the tester to control the test.
3. c. (Software library procedures.)
4. True.

10. PLANNING PROCESS (DOMAIN 10)

10.1 Pre-Planning Activities
1. Response time must be less than 5 seconds. At least 20 concurrent users must be allowed into the system.
2. A test objective is the overall test criterion that must be met. Test cases include test specifics, procedures, inputs, and expected results.
3. Yes; otherwise many test issues fall through the cracks.
4. Entrance criteria: minimum standards to be met before the product will be tested. Exit criteria: minimum standards to be met before the product is released.

10.2 Test Plan
1. Software description, milestones, schedule, test materials, test procedures, test cases, test methods, and test assumptions.
2. True.

IV. EXECUTING THE TEST PLAN (CATEGORY 4)

11. TEST DESIGN (DOMAIN 11)

11.1 Design Preparation
1. Assuring that the application has been covered by the test process.
2. Test beds are files/tables needed to test a system/unit, whose contents have been predefined to meet the conditions defined in designing test cases.

11.2 Design Execution
1. Scripting; Test beds; Production data.
2. Test case; Functions to be tested; Test performed; Set-up procedures; Validation procedures.
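A minimal sketch of how those script components might be organized (Python; the structure and field names are illustrative assumptions, not a prescribed CSTE format):

```python
# Illustrative test-script skeleton carrying the components named above:
# the test case id, the function under test, set-up, the execution
# procedure, and the validation procedure.
test_script = {
    "test_case": "TC-017",
    "function_to_test": "customer login",
    "setup_procedure": ["load test bed", "start application server"],
    "execution_procedure": ["enter user id", "enter password", "submit"],
    "validation_procedure": "welcome page displayed within 5 seconds",
}

for step in test_script["setup_procedure"] + test_script["execution_procedure"]:
    print("execute:", step)
print("validate:", test_script["validation_procedure"])
```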

12. PERFORMING TESTS (DOMAIN 12)


12.1 Execute Tests
1. False. Execution of a test can occur throughout the project life cycle if a life cycle approach to testing is used.

12.2 Compare Actual versus Expected Results
1. The condition, the expected result, and the procedure to run the test.

12.3 Test Logs
1. As each part of the test plan is executed, control can be maintained by using a document created to record test activities. A simple worksheet or spreadsheet can be used.
2. The test identifier, the test activity, who executed the test, the start and stop time of the test, and remarks or comments.
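The comparison of actual to expected results, and the test-log fields listed above, can be sketched as follows (Python; identifiers such as run_test and tester01 are hypothetical):

```python
# Sketch: execute a test procedure, compare actual to expected, and emit one
# test-log row (identifier, activity, tester, timestamp, remarks).
import datetime

def run_test(test_id, condition, procedure, expected):
    actual = procedure()                      # execute the test procedure
    status = "PASS" if actual == expected else "FAIL"
    log_entry = (test_id, condition, "tester01",
                 datetime.datetime.now().isoformat(timespec="seconds"),
                 f"expected={expected!r} actual={actual!r} {status}")
    print(log_entry)
    return status

run_test("TC-001", "add two positive amounts",
         procedure=lambda: 100 + 250, expected=350)
```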

12.4 Record Discrepancies
1. The test plan, the test cases, and the test logs.
2. Missing, wrong, or extra.

13. DEFECT TRACKING AND MANAGEMENT

13.1 Testing Defect Corrections
1. QA and the developer, and/or the area in which the defect was found.
2. True. Regression testing revalidates that changes made to the system do not adversely influence other system functions.

V. TEST ANALYSIS, REPORTING, AND IMPROVEMENT (CATEGORY 5)

14. QUANTITATIVE MEASUREMENT (DOMAIN 14)

14.1 Test Completion Criteria/Metrics
1. They are used to show the extent to which the logic in the program was executed during testing.
2. User participation; Number of paths tested; Cost of all testing; Number of defects found; and Number of requirements tested.
3. LOC (lines of code) analysis.

15. TEST REPORTING (DOMAIN 15)

15.1 Reporting Tools and Standards
1. Description of test results and findings (defects); Summary (environment and references); and Recommendation.
2. It can be used to compare the actual number of defects found to the number of defects expected, or to indicate the number of functional requirements actually tested.
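A sketch of the statistical comparison described in answer 2 (Python; the counts are invented for illustration):

```python
# Compare defects found against the number expected, and report the share
# of functional requirements actually tested.
defects_expected = 40          # e.g., predicted from historical defect rates
defects_found = 31
requirements_total = 120
requirements_tested = 108

print(f"Defect detection vs. expectation: {defects_found / defects_expected:.0%}")
print(f"Requirements coverage: {requirements_tested / requirements_total:.0%}")
```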

16. IMPROVING THE TEST PROCESS (DOMAIN 16)

16.1 Test Quality Control
1. Perform quality control on the test process by comparing the number of defects in the test process to previous test processes, or use entrance and exit criteria.

16.2 Analysis of the Test Process
1. Compare it to best practices for testing, or compare the current test process to previous test processes for evidence of improvement.

16.3 Test Continuous Improvement
1. It really can't, in the true sense of the concept, since continuous improvement requires a defined and repeatable process before it can be attempted.


Sample CSTE Examination Questions and Answers

INSTRUCTIONS

Three sets of questions follow that are typical of the types of questions included on the CSTE examinations, except that these questions are based exclusively on the skill courses in this manual. The actual exam will be based on the complete Common Body of Knowledge for the Information Systems Software Testing Profession. The answers are also included. These question sets follow:

   Question Set    Number of Questions    For Answers Refer to Answer Set
   A               36                     A
   B               25                     B
   C               20                     C

NOTE: Many of the following questions are true/false to ensure that you understand a concept. However, the CSTE examination will primarily use multiple choice questions to evaluate theory.

SAMPLE QUESTION SET A

1. What type of change do you need before you can obtain a behavior change?
   a) Lifestyle
   b) Vocabulary
   c) Internal
   d) Management

2. Quality assurance is the process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.

   o True    o False

3. Pick the best tactic to use in constructive criticism to help the worker understand his or her solution to the criticism.

   a) Do it in public, while others are listening, so they too can learn from other people's mistakes.
   b) Be prepared to help your subordinate improve his or her performance.
   c) Criticize the individual rather than the product, because the individual creates problems with the product.
   d) Explain to the employee what will happen to his or her career if the employee's behavior doesn't change.

4. The objective of risk analysis is to help IT management strike an economic balance between the impact of risks and the cost of protective measures.

   o True    o False

5. In quantifying risk, the term RE represents:

   a) Risk expense
   b) Related expense
   c) Risk exposure

6. A standard is not an expected norm.

   o True    o False

7. Quality and quality assurance are synonymous.

   o True    o False

8. Productivity is increased if value is added to a product.

   o True    o False

9. National Quality Awards are intended to foster continuous improvement activities.

   o True    o False

10. Vision is a clear definition of the result you are trying to achieve.

   o True    o False


11. What is the present value of money of a $10,000 expenditure that you will spend one year from now if money is worth 14 percent?

   a) $2630
   b) $8770
   c) $1600
   d) $8600
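For reference, the discounting arithmetic behind this question, worked as a quick check (the standard present-value formula):

```latex
PV = \frac{F}{(1+r)^n} = \frac{\$10{,}000}{(1.14)^{1}} \approx \$8{,}772
```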

12. Quality control relates to a specific product or service.

   o True    o False

13. The objective of the present value of money calculation is to convert future dollars into current dollars.

   o True    o False

14. What is the primary objective of the system proposal from the producer's viewpoint?

   a) To present the costs/benefits of the proposal
   b) To obtain an agreement for more work
   c) To standardize presentations

15. The receivers of an information systems service are known as users.

   o True    o False

16. Almost one-half of a programmer's time is spent identifying and fixing errors in specifications.

   o True    o False

17. Which two elements are the major causes of documentation problems?

   a) Low priority
   b) Forgetfulness
   c) Not enough time
   d) Personal attitudes

18. When installing an information processing tool into your work environment, you can select an informal or formal procurement sequence. Which list is the formal procurement sequence?

   a) Technical requirements; User review; Generate RFP; Issue RFP; Proposal evaluation; Select source
   b) Selection criteria; Identify candidate; User review; Score candidates; Select tool

19. A process allows the same quality to be replicated from product to product, often by the use of standards and procedures.

   o True    o False

20. The term "defect" is related to the term "fault" because a fault is a defect, which has not yet been identified.

   o True    o False


21. A latent defect is the same as a fault.

   o True    o False

22. The average time between consecutive failures in a system or component during a specified period is known as the mean time between failures (MTBF).

   o True    o False
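The definition in question 22 can be checked with the usual formula (a worked sketch; the numbers are illustrative):

```latex
\text{MTBF} = \frac{\text{total operating time}}{\text{number of failures}},
\qquad \text{e.g. } \frac{1000\ \text{hours}}{4\ \text{failures}} = 250\ \text{hours}
```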

23. A program whose purpose is to reduce the number of defects produced is known as a quality improvement program.

   o True    o False

24. The inspection is an evaluation technique that relies on visual examination of an item.

   o True    o False

25. The functionality/structure of the system created to solve a problem, satisfy a user need (note that this is the creative part of the design process) is known as the fishbone diagram.

   o True    o False

26. The process used for documenting users' requirements is known as validation.

   o True    o False

27. A process/activity during which a high number of defects occur is known as a defect-prone process.

   o True    o False

28. Name the three categories of costs associated with cost of quality.

29. What category would inspections, walkthroughs, and testing qualify as?

30. What category would the creation of a help desk qualify as?

31. What category would training qualify as?

32. Rework activities are _______________ costs.


CIRCLE THE CORRECT RESPONSE:

33. The conduct of an inspection of source code is: Quality Control / Quality Assurance
34. A computer operator verifies that the jobs to be run that day have been run: Quality Control / Quality Assurance
35. A task force selects and installs a system development methodology: Quality Control / Quality Assurance
36. A computer programmer conducts unit tests to validate that the program works: Quality Control / Quality Assurance


SAMPLE QUESTION SET B

1. The ______________ is an application of process management and quality improvement concepts to software development and maintenance.

   a) Malcolm Baldrige
   b) ISO 9000
   c) SEI/CMM
   d) QS14000

2. Quality can be separated from the controls associated with it?

   o True    o False

3. What is one of the most powerful quality control tools?

4. The number-one skill needed for the test analyst is listening.

   o True    o False

5. The intent of the quality control checklist should be to improve the quality of the product being controlled by investigative means.

   o True    o False

6. A large portion of the cost of quality is usually the cost of rework.

   o True    o False

7. Quality assurance means the same as total quality control.

   o True    o False

8. In the IT workbench, quality and performance of the work are the responsibility of the QA department.

   o True    o False

9. Identify five participant roles during a formal inspection.
   _____________________________  ____________________________

10. Function points measure the lines of code.

   o True    o False

11. Function points quantify data processing work outputs.

   o True    o False


12. Measurement of function points cannot be used for comparing two different kinds of systems.

   o True    o False

13. Function points are hardware and software independent.

   o True    o False

14. There are three major activities in function point counting.

   o True    o False

15. The three parts of the function point count must be determined individually.

   o True    o False

16. Identify the 5 function types:
   External Input type             External Inquiry type
   External Interface File type    Logical Output File type
   Internal File type              External Output type
   Internal Inquiry type           Logical Internal File type

17. The process of identifying the kinds of software failures that can occur and then quantifying how likely it is that they will actually occur is:

   a) Configuration management
   b) Risk management
   c) Contingency planning
   d) Process improvement

18. The Baldrige award is a world-wide quality award.

   o True    o False

19. What is the relationship between testing and quality assurance?

   a) QA is part of a complete testing process
   b) Testing and QA are two terms for the same thing
   c) Testing is part of a complete QA process

20. Software testing accounts for what percent of software development costs?

   a) 10-20
   b) 40-50
   c) 70-80

21. Software errors are least costly to correct at what stage of the development cycle?

   a) Requirements
   b) Construction
   c) Acceptance test

22. The purpose of software testing is to:

   a) Demonstrate that the application works properly
   b) Detect the existence of defects
   c) Validate the logical design

23. For black-box testing:

   a) The tester is completely unconcerned about the internal behavior of the program
   b) The tester is concerned with finding circumstances in which the program does not behave according to specifications
   c) Test data is derived solely from specifications
   d) All of the above

24. Boundary value testing:

   a) Is the same as equivalence partitioning tests
   b) Tests boundary conditions on, above, and below the edges of input and output equivalence classes
   c) Tests combinations of input circumstances
   d) Is used in a white-box testing strategy
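To make option (b) concrete, here is a small illustrative sketch (Python; the 1-100 requirement is invented): boundary value analysis tests on and just beyond each edge of the valid equivalence class.

```python
# Boundary value analysis sketch for a field specified to accept 1..100
# (hypothetical requirement, for illustration only).
def accepts(value):
    return 1 <= value <= 100

# Test on, below, and above each boundary of the valid class.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in boundary_cases.items():
    result = "PASS" if accepts(value) == expected else "FAIL"
    print(f"input={value:>3}  expected={expected}  {result}")
```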

25. Decision/branch coverage strategy:

   a) Always satisfies statement coverage
   b) Is used in black-box testing
   c) Means that every branch direction is traversed at least once
   d) Is the same as condition coverage
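A minimal sketch of option (c) in code (Python; illustrative only): statement coverage could be reached with a single test, but branch coverage requires both directions of the `if` to be exercised.

```python
# Branch coverage sketch: one test takes the true direction of the branch,
# a second takes the false (fall-through) direction. Statement coverage
# alone would already be satisfied by the first test.
def apply_discount(total):
    if total > 100:          # branch under test
        total *= 0.9         # true direction
    return total             # false direction falls through

assert apply_discount(200) == 180.0   # exercises the true branch
assert apply_discount(50) == 50       # exercises the false branch
print("both branch directions traversed")
```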


SAMPLE QUESTION SET C

1. The National Quality Awards provide a basis for successful benchmarking against other companies.

   o True    o False

2. Information systems organizations should have standards and procedures on running meetings.

   o True    o False

3. The Pareto analysis is most effective for:

   a) Ranking items by importance
   b) Showing relationships between items
   c) Measuring the impact of identified items

4. One of the key concepts of a task force is that the leader be an expert in leading groups as opposed to an expert in a topical area.

   o True    o False

5. The more common benefits associated with a service-level agreement are:

   a) Establish two-way accountability
   b) Make complaining easy
   c) Provide the basis for an IS budget
   d) All of the above

6. The two major differences between internal auditing and quality assurance involve their respective statement of responsibilities and common body of knowledge.

   o True    o False

7. Within an organization, a service-level agreement is most effective when it is an ongoing negotiation and improvement process.

   o True    o False

8. A nonsampling error relates to the sample properly reflecting the true characteristics of the population.

   o True    o False

9. The cost of quality is usually the:

   a) Cost of prototyping
   b) Cost of rework
   c) Cost of risk assessment

10. Motorola's objective of reaching "Six Sigma" translates into _______ defective parts per million.

11. Name the seven basic quality tools:


12. Policies provide:

   a) Guidance to decision makers
   b) Broad direction to decision makers
   c) Detailed prescription to decision makers

13. A measurement system is established to measure the effectiveness of system and unit testing. Quality Control / Quality Assurance?

14. Which of the following test approaches is not a structural test approach?

   a) Load/stress
   b) Operations
   c) Regression
   d) None of the above

15. A quantitative measurement used to determine test completion is:

   a) Defect measurement
   b) Requirements coverage
   c) Statistical analysis

16. Acceptance testing means:

   a) Testing performed on a single stand-alone module or unit of code
   b) Testing after changes have been made to ensure that no unwanted changes were introduced
   c) Testing to ensure that the system meets the needs of the organization and the end user

17. Consensus means:

   a) Majority rules
   b) You don't have to like it, you just have to be able to accept it
   c) Whatever the boss says
   d) Compromise

18. Joe is performing a test to see that it complies with the user requirement that a certain field be populated by using a dropdown box containing a list of values. Joe is performing:

   a) White-box testing
   b) Black-box testing
   c) Load testing
   d) Regression testing

19. Sue is told to prepare a report on the most commonly occurring product defects. She reviews the software defect reports, which categorize the defects as coding errors, requirements errors, documentation errors, etc. The best tool to report this information is:

   a) A histogram
   b) A Pareto diagram
   c) A cause-and-effect diagram
   d) A scatter plot

20. Which of the following is NOT included in ISO guidelines?

   a) Purchaser and supplier
   b) Internal quality system audits
   c) Documentation
   d) Management principles


ANSWER KEY SAMPLE QUESTION SET A
1. B    2. False    3. B    4. True    5. C    6. False    7. False    8. True    9. True
10. True    11. B    12. True    13. True    14. B    15. True    16. True    17. A, D    18. A
19. True    20. True    21. True    22. True    23. True    24. True    25. False    26. False    27. True
28. Preventive, Failure, and Appraisal    29. Appraisal    30. Failure    31. Preventive    32. Failure
33. Quality control    34. Quality control    35. Quality assurance    36. Quality control


ANSWER KEY SAMPLE QUESTION SET B
1. C    2. No    3. Checklist    4. True    5. True    6. True    7. True    8. False
9. Moderator; Author/producer; Reader; Inspector; Recorder
10. False    11. True    12. False    13. True    14. True    15. True
16. External Input type; External Inquiry type; External Interface File type; External Output type; Logical Internal File type
17. B    18. False    19. C    20. B    21. A    22. B    23. D    24. B    25. C


ANSWER KEY SAMPLE QUESTION SET C
1. True    2. True    3. A    4. True    5. D    6. True    7. True    8. False    9. B    10. 3.4
11. Histogram; Control Chart/Run Chart; Stratification; Scatter Diagram; Fishbone (Cause-Effect) Diagram; Pareto Chart; Check Sheet
12. A, B    13. Quality assurance    14. C    15. B    16. C    17. B    18. B    19. B    20. D

Knowledge Domain 1: TEST PRINCIPLES AND CONCEPTS (contd...)
(Questions 1-23 are not included here.)

24. In designing a test strategy, _risks_ become the basis or objective of testing.

25. List the considerations in developing testing methodologies.
    a. Acquire and study the test strategy
    b. Determine the type of development project
    c. Determine the type of software system
    d. Determine the project scope
    e. Identify the tactical risks
    f. Determine when testing should occur
    g. Build the system test plan
    h. Build the unit test plan

26. Strategic risks are the low-level business risks faced by the software system. (True / False)
27. _Strategic_ risks are the high-level business risks faced by the software system.
28. Tactical risks are the high-level business risks faced by the software system. (True / False)
29. Tactical risks are subsets, at a lower level, of the strategic risks. (True / False)
30. What is the reason for decomposing the strategic risks into tactical risks?
    So that test scenarios can be created; it is hard to create test situations for strategic risks.
31. Name the categories into which tactical risks can be divided.
    Structural risks; Technical risks; Size risks.


32. A _test plan_ must be developed to describe when and how testing will occur.
33. What is incremental testing?
    Testing the interfaces between the unit-tested programs as well as between components.
34. It is difficult to create test scenarios for high-level risks. (True / False)
35. What are the different types of incremental testing?
    a. Top-Down
    b. Bottom-Up
36. _White Box_ testing assumes that the path of logic in a unit or program is known.
    a) White Box    b) Black Box    c) Static    d) Performance    e) None of the above

37. Modules are added in descending hierarchical order in the _Top-Down_ type of incremental testing.
38. During software acceptance (acceptance testing), the _b) Black Box_ testing technique is relied upon for more accurate results (than the other testing techniques listed below).
    a) White Box    b) Black Box    c) Incremental    d) Thread

39. When evaluating the paybacks received from various test techniques, _White (Glass) Box_ testing produces a higher defect yield than the other dynamic techniques when planned and executed correctly.

40. What qualities must an individual possess to test a software application effectively?
    Good communication skills; Good error guessing; Good analytical skills; Knowledge of programming concepts.
41. Due to a change in design, the requirements for an already coded (built) software component (i.e., 1 software component among 10 software components in an application) got modified in their entirety. The developer had to modify the code based on the new requirements. Your role as a test manager is to choose the appropriate type of regression test to minimize the impact to the project schedule. What are the types, and which type of regression test would you choose?
    Full Regression Test.

42. At a minimum, the developer should always execute Unit Regression Testing when a change is made. (True / False)

43. What are the rules that should be followed for all reviews?
    1. Only the work product should be reviewed, not the author. That means the review results should not be used in the performance evaluation of the author.
    2. Focus should be on finding the defects, not on the solution of defects.
    3. Reviewers should work as a team. They should all be responsible for finding the defects.
    4. A review meeting should not last for more than two hours.
    5. Review findings must be documented.
44. What is stage containment as it refers to reviews?
    ________________________________________


CSTE Study Guide Summary

Common Body of Knowledge:
   No.  Domain                               No.  Domain
   1.   Communications                       9.   Test Tactics (App, Tool and Env.)
   2.   Professional Development             10.  Planning Process
   3.   Quality Principles and Concepts      11.  Test Design
   4.   Methods for S/w Dev. & Mgmt          12.  Performing Tests
   5.   Testing Principles and Concepts      13.  Defect Tracking and Mgmt.
   6.   Verification and Validation Mtds.    14.  Quantitative Measurement
   7.   Test Mgmt Standards and Env.         15.  Test Reporting
   8.   Risk Analysis                        16.  Improving the Test Process

Re-categorize:
   No.  Skill                                          No.  Skill
   1.   Test Principles and Concepts                   7.   Test Design
   2.   The Tester's Role in S/w Dev and Acquisition   8.   Performing Tests
   3.   Test Management                                9.   Defect Tracking and Correction
   4.   Build the Test Environment                     10.  Acceptance Testing
   5.   Risk Analysis                                  11.  Status of Testing
   6.   Test Planning Process                          12.  Test Reporting

Skill Discussion: Important Points
- For most organizations, testing is a process designed to compensate for an ineffective software development process.
- Testing is an important component of the development process.
- The purpose of tests is to reduce the business risks.
- Testing is an integral part of each phase of the SDLC.
- Life cycle testing means spending more effort on testing right from the start of the project.
- Cost of Quality: 1) failure cost, 2) appraisal cost, and 3) prevention cost.
- The biggest component in the cost of software is the cost of finding and correcting defects.
- Software quality assurance must focus on defect prevention.
- Performing static testing, whereby defects are found earlier, reduces the cost of testing and the cost of software.
- Continuous testing is 80% more effective than testing after development is complete.
- Role of software testing: determine whether the system meets specs (producer's view); determine whether the system meets business and user needs (customer's view).

Dom. A - Skill Area: Understanding the Testing Challenge; Defining Test Practices and the Testing Process

Using PDCA:
   Plan (P): Devise a plan - objective, strategy & methods
   Do (D): Execute the plan - conditions, training, procedures & perform
   Check (C): Check the results - check, performance & compare
   Act (A): Take necessary action - corrective measures

Participants in testing: software customer, software user, software developer, software tester, information services management, senior organization management, auditor.

Defect (wrong, missing, extra): a variance from product specs, or a variance from customer/user expectation. A defect that causes an error in operation or negatively impacts a user/customer is called a failure.
- 90 percent of all defects are caused by process problems (Deming).
- 60 percent of software defects originate in the requirements phase.

Testing the Risk: Risk is the probability that undesirable events will occur.


Software testing is a control used by organizations to minimize risk. Testing should prioritize the business risks.

Testing Strategy:
- Demonstration of the validity of the software at each stage in the SDLC.
- Determination of the validity of the final system with respect to user needs and requirements.
- Examination of the behavior of a system by executing the system on sample test data.

Problems in the testing process occur from:
- Failure to define testing objectives
- Testing at the wrong phase in the life cycle
- Use of ineffective test techniques

Challenges to testing: integration, system chains, the domino effect, reliance on electronic evidence, multiple users.

(Dom. C) The People Challenges of Software Testing:
- Training in testing
- Relationship building with developers
- Using tools
- Getting managers to understand testing
- Communicating with users about testing
- Making the necessary time for testing
- Testing "over the wall" software
- Trying to hit a moving target
- Fighting a lose-lose situation
- Having to say no

Essential skills: test planning, using test tools, executing tests, managing defects, risk analysis, test measurement, designing a test environment, designing effective test cases, knowledge of testing vocabulary, understanding what to test, who performs what type of test, when to test, how to test, and when to stop testing. Also: convince management of the importance of testing; understand the importance of testing training.

Continuous Improvement to Testing Process
An assessment is carried out to:
- Understand the state of a process for improvement
- Determine the suitability of a process for a particular requirement
- Determine the suitability of another organization's process for a particular contract

A generic spiral model as a basis for process improvement:
1. Examine the organization's needs and business goals
2. Conduct assessment
3. Initiate process improvement
4. Analyze assessment output and derive an action plan
5. Implement improvements
6. Confirm improvements
7. Sustain improvement gains
8. Monitor performance

1.1 Constructive Criticism
Objective: to help someone improve his/her performance.
- Do it privately
- Have the facts
- Be prepared to help the worker improve his/her performance
- Be specific on expectations
- Follow a specific process in giving the criticism:
  1. State the positive first
  2. Never criticize the individual, only the work performed by the individual
  3. Get agreement that there is a problem
  4. Ask the subordinate for advice on how to improve
  5. Suggest a course of action
  6. Make a specific contract
- Avoid making threats

1.2 Effective System Proposal Presentation
Objective: to describe how to make an effective system proposal presentation. General guidelines: emphasize that you are presenting the best solution; your team is well


equipped; sell the corporate experience, your management capabilities, and the technical expertise of your team; sell your enthusiasm.
1. Preparing the system proposal
   a. Outline of the system: the problem to be solved; the system proposal solution; constraints; the proposed solution; impact on people; impact on other systems; impact on cost; how to get the project approved.
   b. Prepare visual aids (transparencies, PowerPoint presentation): use one sheet for each major point; use pictorial illustrations; limit the text to 25 words per sheet; leave details for oral discussion; make a good layout/flow/sequence for the presentation.
   c. Tell them what you are going to tell them, tell them, and then tell them what you have told them.
   d. Rehearse your system proposal presentation.
   e. Be aware of, and able to explain, some alternatives.
   f. Limit the session to one hour; if more is required, break and continue.
   g. It is better to say too little and have them ask questions.
2. Presenting the system proposal
   a. It is similar to public speaking.
   b. Some hints: start and finish the discussion on time; contents should meet customer needs, not yours; think in broad concepts, details should come automatically; be enthusiastic and act enthusiastically; use terms understood by the audience; include examples for support; issue handouts after the talk; take the support of a colleague for a break; keep some water or coffee to sip and think; accept it if you cannot answer a question, and answer later; end with a good briefing.
3. Closing the system proposal
   Closing is getting approval to proceed according to the proposal. Objections should be considered positive for the close.

1.3 Effective Listening
Three listening steps: hearing the speaker, attending the speaker, understanding the speaker.
Why people do not listen: impatience, too busy rehearsing what to say next, self-consciousness, external stimuli, lack of motivation, not an interesting topic.
Hearing the speaker: 1. Information channel 2. Verbal channel 3. Vocal channel 4. Body channel 5. Graphic channel
Attending the speaker: 1. Maintain eye contact 2. Provide continuous feedback 3. Periodically restate what you heard 4. Free your mind 5. Periodically be sure you have adequately understood
Understanding the speaker: 1. Discriminative listening 2. Comprehensive listening 3. Therapeutic listening 4. Critical listening 5. Appreciative or enjoyment listening

1.4 Conflict Resolution
Objective: to help resolve the customer complaint, while using the opportunity to improve the customer relationship. Complaints should be resolved in four minutes.


Four-step complaint resolution process:
1. Get on your customer's wavelength
   o Get on the same physical wavelength
   o Show interest in your customer's problem
   o Physically display interest
   o React positively to your customer's concern
2. Get the facts
   o Ask probing questions
   o Take detailed notes
   o Observe feelings and attitudes
   o Listen carefully to what is being said
3. Establish and initiate an action program
   o Admit the error
   o Negotiate a satisfactory resolution with your customer
   o State the solution and get agreement from your customer; confirm agreement
   o Take action immediately
4. Follow up with your customer
   o Determine the satisfaction of the customer
   o If not satisfied, start again from step 2

1.5 Image and Influence
Objective: to show how to correct an image problem. The difference between a good technician and a good manager is image. Six basic attributes of executive image: 1. Purposeful 2. Competent 3. Analytical 4. Decisive 5. Confident 6. Appearance

1.6 Task Force
A task force is a mechanism for focusing attention on high-priority problems (challenges), and making decisions related to those problems. (For problem solving, use work groups of people who work with the problem every day.)
Task force effectiveness principles:
1. Task force management principles
   - The leader should be an expert in leading groups
   - The group should include individuals from all areas that have a vested interest in the decision
   - Should encompass the best people available
   - Should be organized for a single purpose
   - Needs a clear description of the problem
   - The output of the task force should be a recommended course of action
2. Task force organizational principles
   - The group should be limited to 3-8 members
   - Should begin when the need is recognized
   - Finish the task force business quickly; meet as often and as frequently as necessary
   - Meet on neutral ground
   - Appoint someone to record all significant information
   - The topic is not to be discussed outside the group

2.1 Awareness Training Program
How to develop an awareness training program:
1. Prepare for awareness training
   Task 1. Select the awareness topic
   Task 2. Identify the topic's customers
   Task 3. Define awareness training objectives
   Task 4. Define customer benefits
   Task 5. Develop an administrative training plan
   - Identify awareness training attendees (max 25)
   - Invite attendees (sessions with homogeneous groups)
   - Arrange for a training room
   - Arrange for training equipment
   - Assign responsibilities
   - Solicit senior people to introduce the session
   - Limit training to 2 hours
   - Training awareness agenda, etc.


2. Conduct awareness training
   Task 1. Attendees' needs
   Task 2. Awareness topic/product
   Task 3. Identify objections to the product/problem
   Task 4. Overcome objections
   Task 5. Recommend a course of action

2.2 Group Think
How to tackle groupthink (i.e., decisions taken in solidarity or loyalty to group members) to make decision making efficient:
- Assign a person to be a devil's advocate
- Use outside experts as resources
- Break up into subgroups for discussions
- Let the group think over the project for a few more days
- Prevent group members from divulging personal opinions

2.3 Conducting a Suggestion Day
Objectives: 1. A management step to involve individuals in improving information system quality and productivity. 2. To develop a list of suggestions in a short time.
Participants: QA Manager, Data Processing (IT) Manager, Group Moderator, Participants.
Roles:
   QA Manager - 1. Develop a one-day Suggestion Day proposal
   IT Manager - 2. Approve the Suggestion Day proposal
   IT Manager - 3. Issue/announce a memorandum regarding Suggestion Day
   QA Manager - 4. Identify areas/challenges to discuss
   QA Manager - 5. Consolidate individual problem-identification worksheets into problem categories
   QA Manager - 6. Create Suggestion Day work groups of approximately eight people each
   QA Manager - 7. Appoint a moderator for each group
   QA Manager - 8. Complete the Suggestion Day Group Assignment Worksheet
   IT Manager - 9. All of the group participants should assemble in a common area the morning of Suggestion Day
   Group Moderator - 10. The group moderator will have planned the day prior to commencing the Suggestion Day operation
   Group Moderator - 11. Address each problem individually as a group, with the purpose of developing and documenting a solution for that problem
   Group Moderator - 12. The top-ranked suggestions for each problem should be documented
   IT Manager - 13. Reconvene the groups to report their recommendations
   Group Moderator - 14. Report group recommendations
   QA Manager - 15. Review and implement appropriate suggestions/recommendations
   IT Manager - 16. Report to the data processing staff on the utilization of their recommendations

2.4 CSTE Code of Conduct
Members of a professional organization have a responsibility to accept the standards of conduct that the organization represents.

3.1 What is Quality
The five perspectives of quality:
1. Transcendent (I know it when I see it)
2. Product-based (possesses desired features)
3. User-based (fitness for use)
4. Development- and manufacturing-based (conforms to requirements)
5. Value-based (at an acceptable cost)

Quality by FACT:
- Doing the right thing
- Doing it the right way
- Doing it right the first time
- Doing it on time

Quality in Perception:
- Delivering the right product
- Satisfying the customer's needs
- Meeting the customer's expectations


- Treating every customer with integrity, courtesy, and respect

Quality has two working definitions. From the producer's viewpoint, quality is defined as meeting requirements. From the customer's viewpoint, it is fitness for use, or meeting customer needs.
- Excellence is a measure or degree of quality.
- The CUSTOMER is the most important person in any process; customers may be either internal or external.
- Quality improves productivity and improves competitive position.
- The cost of quality is almost 20-30% of gross sales.
- Quality saves, it does not cost.
- Quality is a solution, not the problem.
- Increases in quality can in fact lead directly to increases in productivity.

Deming's Deadly Management Diseases:
1. Lack of constancy of purpose
2. Emphasis on short-term profits
3. Evaluation of performance, merit rating, or annual review of performance
4. Mobility of management
5. Running an organization on visible figures alone
6. Excessive medical costs
7. Excessive costs of warranty, fueled by lawyers that work on contingency fees

Deming's 14 Points for Management:
1. Create constancy of purpose for improvement of product and service.
2. Adopt a new philosophy.
3. Cease dependence on inspection to achieve quality.
4. End the practice of awarding business on the basis of price alone.
5. Improve constantly and forever the system of production and service.
6. Institute training.
7. Adopt and institute leadership.
8. Drive out fear.
9. Break down barriers between staff areas.
10. Eliminate slogans, exhortations, and targets for the work force.
11. Eliminate numerical quotas for the work force, and eliminate numerical goals for people in management.
12. Remove barriers that rob people of the pride of workmanship.
13. Encourage education and self-improvement for everyone.
14. Take actions to accomplish the above transformations.

Other quality principles: Juran, Crosby, Feigenbaum. The scientific method combines a proven, logical approach to problem solving and process improvement with sound statistical techniques. Employee involvement is another cornerstone of any quality program.

3.3

Quality Assurance v/s Quality Control


applicable standards, and the action taken when non-conformance is detected. Quality Control is a line function.
Internal Auditing is an independent appraisal activity within an organization for the review of operations, and is a service to management.
Quality Assurance is preventive in nature, while Quality Control is detective in nature.
- QA helps establish processes; QC relates to a specific product or service.
- QA sets up measurement programs to evaluate processes; QC verifies whether specific attributes are present in a product or service.
- QA identifies weaknesses in processes and improves them; QC identifies defects for the purpose of correcting them.
- QA is a management responsibility, frequently performed by a staff function; QC is the responsibility of the team/worker.
- QA is concerned with all products produced by the process; QC is concerned with a specific product.
- QA is quality control over quality control activities.

3.4 Pareto Analysis
Pareto analysis is designed to rank items by frequency; it follows a 20-80 rule, i.e., 20 percent of the items account for 80 percent of the frequency.
Steps for Pareto analysis:
1. Name the events that will be analyzed.
2. Count the named incidents.
3. Rank the counts by frequency, using a bar chart.
4. Validate the reasonableness of the Pareto analysis.
It is a simple but powerful statistical tool because it differentiates between the two extremes.

3.5 Control Check List
Two types: factual and investigative.
Six-step process for a checklist:
1. Determine the objective of the QC checklist.
2. Understand the content and intent of the questions.
3. Rehearse the use of the checklist.
4. Conduct an investigation to answer the questions.
5. Answer and document the questions.
6. Use the quality control results.

4.1 Validate the Requirements
50-70% of defects originate in the requirements stage. Some suggestions during this stage:
- Functionality: what the program is to do.
- What the input is like: form, format, data types, units, etc.
- The form, formats, data types, and units for the output.
- How exceptions, errors, and deviations are to be handled.
- Numerical methods or the accuracy required for scientific computations.
- Requirements such as security, maintainability, performance, etc.
- Architecture and platform, like client/server, web, OS, etc.
Start developing the test set at the requirements stage. Are you solving the correct problem?

5.1 Testing Techniques
Types of testing:
White Box Testing
- Statement coverage
- Decision coverage
- Condition coverage
- Decision/condition coverage
- Multiple condition coverage
Black Box Testing
- Equivalence partitioning
- Boundary analysis
- Error guessing
Incremental Testing
- Top-down (with stubs)



- Bottom-up (with drivers)
Thread Testing (threading through integrated components)
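The incremental approaches above can be sketched briefly (Python; the module names are hypothetical): top-down testing substitutes a stub for an unbuilt lower module, while bottom-up testing exercises a finished lower module through a driver.

```python
# Top-down incremental testing: the high-level module is real; the
# lower-level dependency is a stub returning a canned value.
def tax_stub(amount):
    return 0.0                      # stub standing in for the unbuilt module

def invoice_total(amount, tax_fn):
    return amount + tax_fn(amount)  # high-level module under test

assert invoice_total(100.0, tax_stub) == 100.0   # top-down test via stub

# Bottom-up incremental testing: the low-level module is real and a small
# driver exercises it directly before integration.
def tax(amount):
    return amount * 7 / 100         # real low-level module (7% rate)

def tax_driver():                   # driver that feeds the unit its inputs
    assert tax(100.0) == 7.0

tax_driver()
print("incremental tests passed")
```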

5.3.

Independent Testing

5.4.

Reviews and Inspections

6.1.

Verification and Validation


2.

3.

4.

5.

6.

7.

8.

Conduct Risk Analysis Analysis Activities Traceability Analysis Software Requirements Evaluation Interface Analysis Criticality Analysis Acceptance Test Procedure Generation and Ver. System Test Plan Generation and Ver. Acceptance Test Plan Generation and Ver. Configuration Management Assessment Hazard Analysis Risk Analysis Design activities Traceability Analysis Software Design Evaluation Interface Analysis Criticality Analysis Component Test Plan Generation and Ver. Integration Test Plan Generation and Ver. Test Design Generation and Ver. Hazard Analysis Risk Analysis Implementation Activities Traceability Analysis Source Code and Docs. Evaluation Interface Analysis Criticality Analysis Test Case Generation and Ver. Test Procedure Generation and Ver. Component test execution and Ver. Hazard Analysis Risk Analysis Testing Activities Traceability Analysis Integration test execution and Ver. System test Execution and Ver. Acceptance Test Execution and Ver. Hazard execution Risk Analysis Installation and Checkout Activities Installation Configuration Installation Checkout Hazard Analysis Risk analysis V & V Final Report Evaluation Operation Activities Evaluation of new constraints Proposed Change Assessment Operating procedures Evaluation Hazard Analysis Risk Analysis Maintenance Activities Software V & V Plan Revisions Proposed Change Assessment Anomaly Evaluation Criticality Analysis Migration Assessment Retirement Assessment Hazard Analysis Risk Analysis Task Iteration Acquisition Verification and Validation Scoping the V & V Effort Planning the Interface between V & V effort and Supplier System requirements review Supply Verification and validation Planning the interface between the V & V effort and supplier Contract verification Reporting Task reports

197

7.1. Test Objectives

V & V Activity Summary reports Anomaly report V & V Final Report Optional Reports
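Traceability analysis recurs in every V&V activity group above. As a concrete illustration, here is a minimal sketch in Python (the IDs and data are hypothetical) of the core check: every requirement should trace forward to at least one test, and every test should trace back to a known requirement.

    # Minimal traceability check (hypothetical requirement and test IDs).
    requirements = {"REQ-1", "REQ-2", "REQ-3"}
    test_traces = {
        "TC-01": "REQ-1",   # each test case points back to one requirement
        "TC-02": "REQ-1",
        "TC-03": "REQ-3",
    }

    covered = set(test_traces.values())
    untested = requirements - covered          # forward gap: REQ-2 has no test
    orphans = {t for t, r in test_traces.items() if r not in requirements}

    print("Requirements with no test:", untested)
    print("Tests tracing to unknown requirements:", orphans)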

7.1. Test Objectives
Test objectives state the test requirements that must be met during the test process and define when testing is complete. Test objectives:
- are the goals of testing;
- guide the development of test cases, test scripts, and test data;
- help to gauge testing progress and success;
- should contain a statement of the objective and a high-level description of the expected results, stated in measurable terms.

7.2. Test Tool Development and Acquisition
Tool acquisition covers identification, selection, and acquisition. Difficulties in introducing tools include organizational obstacles, problems arising from the tools themselves, and obstacles in the computer environment.

Activities prior to acquisition:
- Identify the goals to be met by the tool
- Approve a detailed tool acquisition plan
- Approve the procurement of tools and training
- Determine whether the goals have been met

Software management is responsible for identifying tool objectives, approving the acquisition plan, defining selection criteria, and making the final selection of the tool or the source. The software engineer is responsible for identifying candidate tools, applying the selection criteria, preparing a ranked list of tools or sources, and conducting any detailed evaluations.

Recommended event sequence:
1. Goals
2. Tool objectives
   - Informal procurement: acquisition plan; selection criteria; identify candidate tools; user review of the candidates; score the candidates; select the tool.
   - Formal procurement: acquisition plan; technical requirements document; user review of requirements; request for proposal (RFP) generation; solicitation of proposals; technical evaluation; source selection.
3. Procure the tool
4. Evaluation plan
5. Toolsmithing plan
6. Training plan
7. Tool received
8. Acceptance test
9. Orientation
10. Modification
11. Training
12. Use in the operating environment
13. Evaluation report
14. Determine if the goals are met: attainment of technical objectives; adherence to budget and other resource constraints; timeliness of the effort; cooperation from other departments; recommendations for future tool acquisitions

7.3. Entrance/Exit Criteria
Entrance criteria are the required conditions and standards for work product quality that must be present or met for entry into the next stage of the development process. Exit criteria are standards for work product quality which block the promotion of defective work products to subsequent stages of the development process.

Example:
Requirement definition exit criteria:
- Clear and understandable requirements
- Both functional and non-functional requirements defined
- Requirements are testable
- Test objectives are measurable
- Use cases and scenarios validated in walkthroughs

Analysis exit criteria:
- Test cases clearly defined and reviewed
- All requirements traced to use cases and test cases
- Non-functional requirements documented in supplementary specification documents
- Use cases, scenarios, and class diagrams validated and approved
- System test cases validated and approved

System test entrance criteria:
- Successful execution of the integration test plan
- No open severity 1 or 2 defects
- 75-80% of total system functionality and 90% of major functionality delivered
- System stability for 48-72 hours prior to the start of the test

System test exit criteria:
- Successful execution of the system test plan, with documentation showing coverage of requirements and high-risk system components
- System meets the pre-defined quality goals
- 100% of total system functionality delivered

Likewise, entrance/exit criteria should be defined for each stage of the development and test cycle.
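Entrance criteria of this kind can be made mechanical. A hedged sketch follows; the thresholds mirror the system test entrance criteria above, but the function and parameter names are invented for illustration.

    # Evaluate the system test entrance criteria listed above (illustrative only).
    def ready_for_system_test(integration_plan_passed: bool,
                              open_sev1_or_sev2: int,
                              pct_functionality_delivered: float,
                              pct_major_functionality: float,
                              stable_hours: int) -> bool:
        return (integration_plan_passed
                and open_sev1_or_sev2 == 0
                and pct_functionality_delivered >= 75.0   # 75-80% of total system
                and pct_major_functionality >= 90.0       # 90% of major functionality
                and stable_hours >= 48)                   # 48-72 hours of stability

    print(ready_for_system_test(True, 0, 78.0, 92.0, 60))  # True
    print(ready_for_system_test(True, 2, 78.0, 92.0, 60))  # False: open sev 1/2 defects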

7.4. Test Standards
Standards cover the test process, activities, and work products. Standards relevant to testers:
- IEEE 730-1998: Standard for Software Quality Assurance Plans
- IEEE 828-1998: Standard for Software Configuration Management Plans
- IEEE 610.12-1990: Standard Glossary of Software Engineering Terminology
- IEEE 829-1998: Standard for Software Test Documentation
- IEEE 830-1998: Recommended Practice for Software Requirements Specifications
- IEEE 1008-1987 (R1993) (ANSI): Standard for Software Unit Testing
- IEEE 1012-1998 and 1012a-1998: Standard for Software Verification and Validation
- IEEE 1016-1998: Recommended Practice for Software Design Descriptions
- IEEE 1028-1997: Standard for Software Reviews
- IEEE 1044-1993 (ANSI): Standard Classification for Software Anomalies
- IEEE 1045-1992 (ANSI): Standard for Software Productivity Metrics
- IEEE 1058-1998 (ANSI): Standard for Software Project Management Plans
- IEEE 1061-1998: Standard for Software Quality Metrics Methodology
- IEEE 1063-1987 (R1993) (ANSI): Standard for Software User Documentation

8.1. Risk Identification
A risk is a condition that can result in a loss. One of the most effective methods of reducing computer system strategic risk is testing: an effective approach to testing is to identify and evaluate the risks in a computer system. Testing is inherently a risk-based activity, and risk is a major driver of test design. Risk has two components:
- The probability that a negative event will occur
- The potential loss or impact associated with the event

8.2. Risk Methods
Four methods for conducting risk analysis:
- Judgment and instinct
- Dollar estimate (cost of failure)
- Identification and weighting of risk attributes
- Software risk assessment packages

Testing risks include: budget; the number of qualified test resources; the test environment; tools and procedures; the sequence and increments of code delivery; low reusability (every tweak requires new testing prior to implementation); new technology.
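Of the four methods, "identification and weighting of risk attributes" is the easiest to show concretely. A minimal sketch, with made-up attributes, weights, and ratings:

    # Weighted risk scoring (illustrative weights and ratings; 1 = low, 5 = high).
    weights = {"complexity": 0.4, "change_volume": 0.3, "new_technology": 0.3}

    def risk_score(ratings: dict) -> float:
        return sum(weights[attr] * ratings[attr] for attr in weights)

    modules = {
        "billing":   {"complexity": 5, "change_volume": 4, "new_technology": 2},
        "reporting": {"complexity": 2, "change_volume": 1, "new_technology": 1},
    }

    # Test the riskiest modules first.
    for name, ratings in sorted(modules.items(), key=lambda m: -risk_score(m[1])):
        print(f"{name}: {risk_score(ratings):.1f}")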

9.1. Special Test Types
The types of tests will vary depending on the objectives of the test and the type of application being tested.
- Usability testing: determines how well the user will be able to understand and interact with the system.
- Vendor validation testing: verifies that the functionality of contract or third-party software meets the organization's requirements before accepting it and installing it into a production environment.
- Conversion testing: designed to validate the effectiveness of the conversion process.
- Stress/load testing: validates that the application, the database, and the network they run on can handle projected volumes of users and data effectively.
- Performance testing: conducted in parallel with stress and load testing in order to measure system performance against specified service-level objectives under various conditions.
- Recovery testing: evaluates the contingency features built into the application for handling interruptions.
- Configuration testing: validates that the system will run on the various supported combinations of hardware and software.
- Benefits realization testing: a test or analysis conducted after an application is moved into production, to determine whether the application is likely to deliver the originally projected benefits.

10.1. Test Plan
Testing is the process of executing a program with the intent of finding errors. Why plan tests? To make testing repeatable and controllable, and to demonstrate coverage.

IEEE definitions:
- A test plan is an overall document providing direction for all testing activity.
- A test design specification refines the test approach and identifies the features to be covered by the design and its associated tests.
- A test case specification documents the actual values used for input, along with the anticipated outputs.
- A test procedure specification identifies all steps required to exercise the specified test cases.

Table of contents for a test plan:
- Test scope: what will be covered in the test, and what will not.
- Test objectives: the testing goals; statements of what the tester is expected to accomplish or validate during a specific testing activity.
- Assumptions: test prerequisites which, if not met, could have a negative impact on the test.
- Risk analysis: documents test risks and their possible impact on the test effort.
- Test design: details what types of tests must be conducted and what stages of testing are required, then outlines the order and timing of tests.
- Roles and responsibilities: defines who is responsible for each stage and type of testing.
- Test schedule and resources: major test activities, sequence, dependencies, estimates, etc.
- Test data management: defines the data required for testing and the infrastructure needed to manage the test data.
- Test environment: defines the environment requirements for each stage and type of testing.
- Communications approach: means of information exchange, contact lists, meeting audiences, frequency, etc.
- Test tools: details of the tools needed to support the testing process.

10.2. Requirements Tracing
- The ability to trace requirements through project artifacts and into the resulting application is critical to ensuring that stakeholder needs and user requirements have been delivered.
- The primary goal of software testing is to prove that the user or stakeholder requirements are actually delivered in the final product developed.

10.3. Versioning and Configuration Management
The ability to maintain control over the changes made to all project artifacts is critical to the success of a project. Project artifacts to be managed and controlled:
- Source code
- Requirements
- Analysis models
- Design models
- Test cases and procedures
- Automated test scripts
- User documentation, manuals, and online help
- Hardware and software configuration settings
- Other artifacts as needed

11.1. Test Coverage
The objective of test coverage is to ensure that the test process has adequately covered the application. Some of the coverage types are:
- Statement coverage
- Branch coverage
- Basis path coverage
- Integration subtree coverage
- Modified decision coverage
- Global data coverage
- User-specified data coverage
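The difference between statement and branch coverage is easiest to see on a tiny example; the function below is hypothetical.

    def grant_credit(income, existing_debt):
        limit = 0
        if income > 50000:        # branch A
            limit = 10000
        if existing_debt > 0:     # branch B
            limit -= existing_debt
        return limit

    # One test, grant_credit(60000, 500), executes every statement (100%
    # statement coverage) but exercises only the True side of each decision.
    # Branch coverage also requires the False sides, e.g.:
    tests = [(60000, 500),   # A True,  B True
             (40000, 0)]     # A False, B False
    for income, debt in tests:
        print(grant_credit(income, debt))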

12.1. Execute Tests
Test execution relies on an effective test plan defining roles and responsibilities, test readiness reviews, procedures, entry/exit criteria, environments, tools, logs, etc.

Items to focus on during integration testing:
- Validation of links between client and server
- Security controls
- Performance and load tests on individual components
- Sequences of adds, updates, views, and deletes
- Simple transaction completion
- Tests for empty database or file conditions
- Output interface file accuracy
- Back-out situations

Items to focus on during system testing:
- Mirror a production environment
- Establish a test data bed
- Identify the test cases
- Identify the test cycles needed to replicate production
- Assign the test cases to test cycles
- Assign test scripts to the testers
- Review test results to identify the real defects
- Record defects in a tracking system, and ensure the responsible developer is notified
- Re-test and validate the corrected defects

When is testing complete? When the quality goals have been met: the target percentage of coverage has been achieved, the identified risks have been addressed, and so on.


13.1. Defect Reporting
Defects are recorded for four major purposes:
- To correct the defect
- To report the status of the application
- To gather statistics used to develop defect expectations for future applications
- To improve the software development process

A defect log could include: defect ID number; a descriptive defect name and type; the source of the defect (test case or other source); defect severity; priority; defect status (e.g. open, fixed, closed, user error, design); date and time; the steps required to reproduce the defect; the origin of the defect (component or program); screen prints, logs, etc.; the stage of origination; and the person assigned to correct the defect.

Severity vs. priority: the scales are specified during the test planning stage, and both are identified for each defect when it is reported.

13.2. Regression Testing
Regression testing is conducted during all stages of testing after a change is made, to ensure that no new defects are introduced, either into the application itself or into downstream applications. Regression types:
- Unit regression testing
- Regional regression testing
- Full regression testing

14.1. Testing Metrics
A metric is a mathematical number that shows a relationship between two variables. A metric (IEEE) is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.

Testing data: the total number of tests; the number of tests executed to date; the number of tests executed successfully to date.
Defect tracking data: the total number of defects corrected in each activity; the total number of defects detected in each activity; the average duration between defect detection and correction; the average effort to correct a defect; the total number of defects remaining at delivery.
Software performance data: average CPU utilization; average memory utilization; the measured I/O transaction rate.

Some of the metrics:
- Number of tests: number of tests vs. size of the system tested
- Paths tested: number of paths tested vs. total number of paths
- Test cost: test cost vs. total system cost
- Detected production errors: number of errors detected in production vs. application system size
- Effectiveness of test to business: loss due to problems vs. total resources processed by the system
- Test efficiency: number of tests required vs. number of system errors
- Test automation: cost of manual test effort vs. total test cost
- Defects uncovered in test: defects uncovered vs. size of the system
- Loss value of test: loss due to problems vs. total resources processed by the system
- Code coverage: which program branches or statements are executed by a set of test cases
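Since each metric above is a ratio of two observed quantities, computing them is mechanical. A minimal sketch with hypothetical project counts:

    # Two of the metrics above, computed from hypothetical counts.
    defects_detected_in_test = 45
    defects_detected_in_production = 5
    tests_required = 200
    system_errors_found = 50

    # Test efficiency: number of tests required vs. number of system errors.
    test_efficiency = tests_required / system_errors_found

    # Defects caught before release relative to everything eventually found,
    # often reported as defect removal efficiency.
    dre = defects_detected_in_test / (defects_detected_in_test
                                      + defects_detected_in_production)

    print(f"test efficiency: {test_efficiency:.1f} tests per error")
    print(f"defect removal efficiency: {dre:.0%}")  # 90%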

15.1. Test Reporting
Test reporting is the means of communicating test status and findings both within the project team and to upper management.
- Defects: data on each defect uncovered, including the name of the defect, the location where it was found, severity, type, and how it occurred.
- Efficiency: software system efficiency and test efficiency.

Interim reports include the function/test matrix and the defect reporting worksheet, plus:
1. Function testing status report
2. Function working timeline
3. Expected versus actual defects uncovered timeline
4. Defects uncovered versus corrected gap timeline
5. Average age of uncorrected defects by type
6. Defect distribution report
7. Relative defect distribution report
8. Testing action report

Final test reporting: unit test report, integration test report, system test report, and acceptance test report.

CSTE Exam Questions, June 15th 2002

Descriptive:
1. What is the difference between Verification and Validation?
2. Write 3 test STOP conditions/criteria.
3. Top management felt that whenever there are changes in the technology being used, development schedules, etc., it is a waste of time to update the test plan as well. Instead, they emphasized that you should put your time into testing rather than working on the test plan. Your project manager asked for your opinion. You have argued that the test plan is very important and needs to be updated from time to time, that it is not a waste of time, and that testing activities are more effective when the plan is clear. Explain, with some metrics, how you would support your argument to have the test plan in place at all times. (I remember it this much; if anybody remembers more, please add to it. But the summary of the question is something of this sort.)
4. You have been asked to design a defect tracking system. Explain what fields you would put in the defect tracking system.
5. Draw a pictorial diagram of a report you would give to a developer to assess the status of his/her module of work.
6. Write a sales transaction for 6.2% of tax deduction for $62,000 of income.
7. A credit limit has been fixed for $5,000, $10,000 and $15,000. Give the 3 lower boundary values and 3 upper boundary values for the credits. (A worked sketch follows this list.)
8. Describe automated capture/playback tools and list the benefits of using them.
9. Suggest any three testing tools for your test environment, and explain why you suggest them. (Repeated two times)
10. Draw a pictorial diagram of a report you would create for users and management to show project status.
11. Put the following testing types in order and give a brief description of each: system testing, acceptance testing, unit testing, integration testing.
12. The QAI is starting a project to put the CSTE certification online. They will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results, and sending out certificates. Write a brief test plan for this new project.
13. What activity is done in acceptance testing, which is not done in system testing, to ensure the customer requirements are met? (Given like a case study)
14. You are going to take on load/stress testing for a project which will be an exhaustive one. What are the questions your project manager will ask when you take that assignment? (Exact phrase could not be remembered)
15. Online shopping for www.buyonline.com is planned. What risks will you analyze, and how will you address them?
16. What are the benefits of developers testing the project versus an independent testing team testing the project?
17. How will you ensure that the requirements document is correct and complete? (Like a case study, but the question is like this.)
18. Draw a pictorial report of project status for the developers and users.
19. Out of the 16 domains of the CBOK, which domains do you consider important, and why?
20. Explain 5 risks in an e-commerce project. Identify the personnel who must be involved in the risk analysis of a project and describe what they need to do. How will you prioritize the risks?
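Question 7 above is a straight boundary value analysis exercise. A hedged sketch of one defensible answer, assuming whole-dollar granularity so the boundary values sit one unit either side of each limit (with cents, use +/- 0.01 instead). The same style of reasoning gives question 6 an expected output to assert against: 6.2% of $62,000 is $3,844.

    # Boundary value analysis for credit limits of $5,000, $10,000 and $15,000.
    limits = [5_000, 10_000, 15_000]

    lower_bounds = [limit - 1 for limit in limits]   # 4999, 9999, 14999
    upper_bounds = [limit + 1 for limit in limits]   # 5001, 10001, 15001
    on_the_limit = limits                            # 5000, 10000, 15000

    print("test below each limit:", lower_bounds)
    print("test at each limit:   ", on_the_limit)
    print("test above each limit:", upper_bounds)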

Multiple choice:
1. Defects are least costly to correct at what stage of the development cycle? (Requirements) (Repeated 4 times in different ways)
2. Software testing accounts for what percent of software development costs? (40-50)
3. Constructive criticism: which of the following is not part of the process? (Doing it in public) (Repeated 2 times)
4. Match the following: black-box testing, white-box testing, regression testing, and capture/playback. (2 times)
5. Training comes under which cost? (Preventive)
6. Which is user-based quality?
7. Which is not one of Deming's 14 points of management?
8. Which of the following skills should a test engineer have?
9. Which of the following is a structural approach?
10. Dr. Juran is known for his contribution to? (Management, Quality Control, and two other options)
11. The test plan will be changed if there is a change in the project. True/False

Describe automated capture/playback tools and list the benefits of using them.

The purpose of these tools is to run automated tests. Some of these tools work by recording keyboard and mouse actions while the tests are run for the first time, and then playing them back when the test needs to be run again; an example is the macro-recording program Macro Magic used in Windows. These tools are primarily used for system-level testing. A capture/playback tool allows the system tester to define test cases in terms of operator inputs at an input device (keyboard, keypad, track marble) and expected results at an output device (CRT). Tools of this nature are extremely valuable to system testers because they allow a huge quantity of potentially repetitive operator-interface tests to be run without an operator present. This type of tool is especially useful for organizations that must do system-level regression testing before releasing a new version of the software. Capture/playback tools are not suited to unit-level testing because, by nature, they work only in environments where a keyboard/CRT interface to the software exists.

Capture/playback tools capture or record a test session to a file in a format that can be played back on the system later. These tools can also compare test results to a previous test. This is one major reason capture/playback tools are required for true regression testing: they can perform comparisons with other tests without human error. Most capture/playback tools permit the tester to modify the test scripts or procedures to add logic such as loops and IF statements, and even I/O calls to process external test data.

Benefits: capture/playback tools can execute and compare tests faster and more accurately than manual testing; they can record exactly which actions were performed during a test to assist in debugging; and they can perform tests in an unattended mode and document the results, allowing testers to sleep at night and have a weekend off every once in a while, at least. Capture/playback tools have their greatest benefit in regression and performance testing.

What fields would you include in creating a new defect tracking program (used by QA, developers, etc.)?

The following fields would be included in the new defect tracking program (a sketch of such a record follows below):
- Defect ID: a unique ID to identify the defect
- Defect creation date and time
- Defect title: a short description of the defect
- Defect phase: the phase of the life cycle the defect is associated with, e.g. requirements, design, testing
- Defect severity: e.g. critical, major, minor
- Defect type: missing, wrong, or extra
- Developer: the person assigned to fix the defect
- QC: the person assigned to verify the fix
- Subsystems: which subsystems are affected
- Dev estimate and QC estimate: how much time the developer needs to fix it, and how much time it takes to test
- Status: open/fixed/retest/closed
- Notes: notes from the developer and QC

Draw a pictorial diagram of a report you would create for developers to determine project status.

The following fields would be included for developers to determine project status: product name; release no.; developer name; subsystem name; dev estimated time; % completed; time remaining; no. of outstanding critical defects; no. of outstanding major defects; stage (requirements/design/coding etc.); status (not started, in development, dev completed, in QC, QC completed).

Draw a pictorial diagram of a report you would create for users and management to show project status.

Overall status for the whole project:
- Product name | Release no. | Start date | Estimated end date (original) | % completed | Time remaining | Estimated end date (latest)

Detailed subsystem status:
- Subsystem name | Developers | QC | Start date | Estimated end date (original) | % completed | Time remaining | Estimated end date (latest)

Detailed subsystem status by stage:
- Stage name | Start date | Estimated end date (original) | % completed | Time remaining | Estimated end date (latest)
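One way to make the defect tracking field list above concrete is a simple record type. A minimal sketch; the field names and example values are invented, and the status and type strings simply mirror the answer text.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class DefectRecord:
        """Mirrors the defect tracking fields listed above (illustrative)."""
        defect_id: str
        title: str
        phase: str          # requirements, design, testing, ...
        severity: str       # critical, major, minor
        defect_type: str    # missing, wrong, extra
        developer: str
        qc: str
        subsystem: str
        dev_estimate_hours: float
        qc_estimate_hours: float
        status: str = "open"            # open/fixed/retest/closed
        created: datetime = field(default_factory=datetime.now)
        notes: list = field(default_factory=list)

    bug = DefectRecord("D-101", "Total misrounded on invoice", "testing",
                       "major", "wrong", "dev_a", "qc_b", "billing", 4.0, 2.0)
    bug.notes.append("Reproduced on build 1.2.3")
    print(bug.status, "->", bug.defect_id)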

Hi all, here are some of the subjective/objective questions of the CSTE conducted on 07th December, 2002.

1. You are testing software for aeroplane takeoff and landing. Design a test strategy to test it. (15 marks)
2. In your company, unit testing was not carried out. A new IT manager has ordered that all projects be fully unit tested and then delivered to the independent tester. What should be the testing objective? What should be the test selection techniques? How will you ensure that the policy is being achieved? (25 marks)
3. You are working on a very complex project with a large number of controls and interdependencies between them. What changes will you make to the test plan contents? What will you consider at the test planning stage? (30 marks)
4. Differentiate between verification and validation. (5 marks)
5. Testing is expensive. How would you ensure that your applications are not over-tested? (15 marks)
6. There are 2 systems: one works on only 1 workstation, while the other works on multiple workstations. Which test techniques will you use in testing the second system that you will NOT use for testing the first system? (15 marks)
7. Explain 3 test stop conditions. (15 marks)
8. Your customer doesn't know how to prepare an acceptance test plan. Write the contents of an acceptance test plan for his guidance. (15 marks)
9. Write standards for a test plan, a test script, and a test report. (15 marks)
10. Define and explain with examples: boundary value analysis, equivalence partitioning, error guessing. (15 marks)
11. You notice that senior testers in your company are making more errors in test plans than junior testers. Tomorrow you want to tell those employees to make proper test plans, using constructive criticism. Describe how you would do it. (10 marks)
12. Testing of an e-commerce website. (10 marks)

True or False. Effectiveness is doing things right and efficiency is doing the right things.

Which of the following is not one of Deming's 14 points for management?
a. Adopt a new philosophy
b. Eliminate slogans, exhortations, and targets for the work force
c. Mobility of management
d. Create constancy of purpose

True or False. The largest cost of quality is from production failure.

Defects are least costly to correct at what stage of the development cycle?
a. Requirements
b. Analysis & Design
c. Construction
d. Implementation

Software testing accounts for what percent of software development costs?
a. 10-20
b. 40-50
c. 70-80
d. 5-10
