
Best Practices Report

Questions

1. Do you have a clearly defined test policy aligned to your business needs and objectives?
2. Is traceability established between the test goals and the business needs and objectives?
3. Are the test goals based on the organization's business needs and objectives?
4. Are you overstaffed or understaffed? What is your current mix of onsite to offshore resources?
5. How do you know that your testing team is performing at its best? Do you receive any reports showing that the team is performing at the expected level?
6. Is adequate funding available for testing activities and training in the organization?
7. Are testers adequately compensated (in comparison with development teams)?
8. Is there a dedicated process group, person, or initiative to define and recommend testing practices in the organization?
9. Is there a group or person available to audit the testing process for every project and report to senior management?
10. Are measurable quality goals set for each software product/project?
11. Are the test team's activities documented and reported to senior management?
12. Does senior management support interactions between project (test) managers, developers (testers), architects, and business analysts to support testing activities (planning, issue tracking, defect prevention, etc.)?
13. Is the test organization engaged early in the software development lifecycle?
14. Does the test organization, supported by project management, develop contingency plans for test risks?
15. Has a cost/benefit analysis been completed for the use of test automation tools (where costs and benefits need not be expressed merely in terms of money)?
16. Have formal lines of communication and escalation been agreed at an organizational level for all projects, i.e. a communication plan to retain a consistent approach and minimize confusion?
17. Is a scorecard available to measure the test organization's effectiveness?
18. Are there any challenges or constraints in the areas of testing and quality?
19. Do you see opportunities to optimize testing costs, improve quality, and reduce time-to-market?
20. Are requirements managed using a tool?
21. Are requirements approved before development and test activities begin?

22. Are requirements reviewed for ambiguity, and are ambiguity defects logged for further analysis?
23. Are requirements specified for performance, usability, and security testing?
24. Is requirements traceability established? Are the test cases mapped to requirements?
25. Are processes, templates, checklists, and guidelines available for requirements reviews, test strategy, estimation, test plans, defect triage, and test summary reports?
26. Does the repository/QMS contain test process templates for requirements traceability?
27. Does the testing team participate in the review of requirements?
28. Is there a formal change request process used to modify requirements or test cases once development/testing has started?
29. Is there a procedure in place for soliciting user/client input while planning acceptance tests?
30. Do you carry out product risk analysis, e.g. product risk categorization, risk thresholds, risk probability and impact, identification of risk stakeholders, risk consequences, risk mitigation, etc.?
31. Do you prioritize the product risks for mitigation?
32. Do you establish horizontal traceability between product risks and requirements to ensure that the source of product risks is documented?


33. Do you define an approach for re-testing and regression testing?
34. Do you define a set of entry and exit criteria related to product quality?
35. Have suspension and resumption criteria been defined and used to suspend or resume all or a portion of the test tasks on the test items and/or features?
36. Have you identified major milestones for each test lifecycle phase?
37. Have you established corrective action criteria for determining what constitutes a significant deviation from the test plan that may require rescheduling?
38. Did you determine staffing requirements based on the work breakdown structure, test estimate, and test schedule?
39. Did you identify the knowledge and skills needed to perform the test tasks?
40. Do you record the test estimation data, including the associated information needed to reconstruct the estimates?
41. Are the estimates monitored and adjusted if required?
42. Is there a mechanism to predict defects and defect leakage?
43. Do you manage a test schedule for your project?
44. Do you periodically measure the actual completion of test tasks, test work products, and milestones against the project plan, comparing the actual dates with the planned dates documented in the test plan?
45. Do you conduct test progress reviews at meaningful completion points of selected stages in the test schedule?
46. Do you identify and document significant issues and their impacts, and take corrective and preventive actions?
47. Do you perform a check against the entry and exit criteria during the test execution phase, as identified in the test plan?
48. Do you monitor changes and additions to the requirements to identify new or changed product risks?
49. Are the suspension and resumption criteria monitored against those identified in the test plan?
50. Do you regularly communicate status on product quality and progress to stakeholders? Examples of stakeholders typically include project management, business management, and test team members.
51. Do you employ tools to support the test monitoring and control process? Examples include project management and progress tracking tools, risk management tools, incident management tools, and test management tools.
52. Are the following monitored and controlled as part of the monitoring and control process? a) costs, effort, and schedule of testing; b) test project risks; c) product risks and product quality; d) reports on test progress and product quality.

53. Do you have standards and templates for documenting test cases?
54. Do you prioritize tests based on identified product risks?
55. Are the following fields available for a test case? 1) test case ID; 2) requirement ID; 3) test case description; 4) test steps; 5) input data; 6) expected result; 7) actual result; 8) pass/fail status; 9) defect number.


56. Are the test cases reusable? What percentage of test cases is reused?
57. Are the test cases reviewed with the stakeholders?
58. Do you generate a requirements/test conditions traceability matrix?
59. Is it easy to follow the directions given in the test cases (if you are not the author)?
60. Are the actual results compared with the expected results?
61. Do you perform re-testing (confirmation testing) when an incident is found, to confirm the fix of the incident?
62. Do you analyze test incidents to obtain further information on the problem?


63. Do you revisit the priority and severity of incidents?
64. Do you maintain a central test incident repository for logging incidents?
65. Do you get adequate time to perform the test design and execution activities?
66. Do you create the specific test data required to perform the tests as specified in the test procedures?
67. Do you archive the set of specific test data to allow the initial situation to be restored in the future?
68. Do you translate the test environment needs, including generic test data, expectations, and constraints, into documented test environment requirements?
69. Do you identify key test environment requirements that have a strong influence on cost, schedule, or test performance?
70. Do you identify test environment requirements that can be implemented using existing or modified resources?
71. Do you analyze test environment requirements to ensure that they are complete, feasible, and realizable?
72. Do you revise the test environment components as necessary?
73. Explain the test environment downtimes, their causes, and their resolutions.
74. Do you shut down the test environment correctly after use, e.g. by making sure it is in a known state and test files are removed?
75. Are there designated owners for all test environments?
76. Are there sufficient test environments to meet testing demand and requirements?
77. Is a test data repository available?
78. Are test data security practices followed to ensure effective test data management?

79. In which phase of the test lifecycle is the decision to automate made: during the planning stage or the execution stage?
80. Who participates in making the decision to automate?
81. Do test management and the party paying for the investment in the tools acknowledge that the tools being used provide advantages?
82. What types of automation tools are used (e.g. record-and-playback tools, test coverage tools)?
83. Have you made a well-considered decision on what needs to be automated and what does not?
84. Before procuring a new automation tool, do you perform a proof of concept (POC) to verify that the tool suits the requirements and works in the infrastructure?
85. Do you consider the maintainability of test scripts during implementation?
86. Does the use of test tools result in inefficiency or undesired limitations of the test process?
87. Do you finalize the framework to be used for automation?
88. Do you use a version control tool to maintain the scripts?
89. Please elaborate on the test automation framework used in the project.
90. Is there a procedure for the adaptation and integration of new tools and technologies into the testing process?
91. Is there a periodic review of the benefits gained from test automation?
92. Do you maintain a basic defect classification scheme/repository?
93. Which defect tracking tool is in use? Please specify whether it is a spreadsheet or a client-server tool.
94. What details do you capture for a defect to ensure that defects are fixed and retested?
95. Do you hold defect triage meetings in your project?
96. Do all teams follow the same priority and severity levels for defects in the defect tracking tool?
97. Are detected defects traced to the corresponding test cases/scripts?
98. Are defects tracked and reported by project phase?
99. Once identified, are the root causes of defects analyzed and eliminated systematically?
100. Do you upload attachments such as screenshots and error logs related to the defects while logging them?
101. Is a common defect repository established and used by test teams across all projects?

102. Are any statistical tools or analysis techniques applied for defect prevention?
103. Are client concerns reflected in the defect prevention activities?
104. What is the DRE (defect removal efficiency)/post-delivery defect rate?
105. Among the post-delivery defects, what is the typical spread of P1, P2, and P3 defects?
106. Who raises a bug, and how is it tracked to closure?
107. Do you have an incident management tool to manage incidents (not a defect management tool)?
108. Are incidents that breach agreed service level targets identified, and are incident resolution teams notified of the breach?
109. Are incident records analyzed to identify the reason for each incident?



110. Did you identify current and potential future training needs by analyzing the organization's strategic business objectives, test policy, strategy, and test process improvement plans?
111. Do you periodically assess the skills of testers?
112. Do you analyze and identify the training needs of various projects to identify common testing trainings that can be provided organization-wide?
113. Do you negotiate with projects to agree on how the trainings will be provided?
114. Do you have a test training plan?
115. Do you review the test training plan (and amend it as necessary) with the affected groups and individuals (HR, test resources, etc.)?
116. Do you ensure the availability of test training material, instructors, etc.?
117. Do you revise the training material and supporting artifacts as appropriate?
118. Do you gather feedback from the trainings?
119. Do you evaluate the training feedback and suggest improvements?
120. Do you keep a record of the employees who a) have successfully completed training courses, or b) have been waived from trainings, including the rationale for the waiver and management approval?
121. Do you manage a skill matrix with a summary of education, trainings, and experience?
122. Do you assess in-progress or completed projects to determine whether employee knowledge is adequate to perform the role?
123. Do you assess the effectiveness of each training course with respect to established organizational, project, or individual learning objectives?
124. Do you run training programs to ensure the availability of domain expertise in the test team?
125. Please provide details about the inspection/review process currently followed. Which work products are reviewed (e.g. test strategy, test plan, test cases/scripts, defect reports)?
126. Do you have an internal review plan for the testing activities?
127. Are the results of test reviews used as inputs for further test process improvements?
128. Does the reviewer have sufficient test knowledge and experience to perform the reviews?
129. Are managers and technical staff trained in test organization reviews?
130. Are the project's work products and test work products that need to be reviewed identified?
131. Are the identified work products for review prioritized according to the associated risks?
132. Is a review checklist available?
133. Is a review guideline available?
134. Is the review checklist reviewed with the stakeholders?
135. Are peer reviews conducted as per the review guidelines?
136. Are the data and results of peer reviews analyzed?
137. Are stakeholders informed of the identified actions and issues arising out of reviews?
138. Are the peer review comments recorded?
139. Are the peer review comments, data, and results analyzed?
140. Are the test approach, test cases, etc. revised as per the peer review comments?
141. Is an organizational process defined to analyze testing metrics (periodicity, stakeholders, etc.)?
142. Is the frequency of data collection identified?
143. Is a data analysis procedure in place to analyze the test measurements?
144. Are the results of the data analysis communicated to all relevant stakeholders?
145. Are test teams trained in test metric identification, collection, and analysis?
146. Is historical project data (time, budget, tools) from past projects used in test planning for current projects?
147. Do you currently use, or plan to use, any metrics to monitor and improve aspects related to quality and testing?
148. Are measurements used to track and monitor testing effectiveness?
149. Are metrics defined to gauge the skill gaps of the testing team?
150. Is a scorecard available to measure test team expertise?
151. Do you see opportunities to optimize testing costs, improve quality, and reduce time-to-market?
152. Do you evaluate new testing processes, technologies, and opportunities to determine their effect on the organization's standard test process?
153. Are appropriate test process improvements and new technologies deployed across the organization?
154. Do you measure the benefits of a newly deployed process or tool, and is that information disseminated across the organization?
155. Do you re-use test process components and testware?
156. Do successful improvement actions lead to changes in the organizational test process, associated documents, and standards?
157. Do you collate and report formally and regularly on best practices and lessons learnt?
158. Does the organizational software test group coordinate with a centralized quality assurance group/process definition group to enhance test effectiveness and improve software quality with respect to the user's requirements?
159. Are there any documented processes around test organization improvements?