

© 2010 Colt Telecom Group Limited. All rights reserved.

Metrics & Measurement: TSID’s Current State – Gaps


Assessment Evaluation by Cognizant
• As per the Test Maturity Assessment conducted by Cognizant in August 2012, the Measurement & Analysis area is at the ‘Inception’ level.
• That is, it is completely people-dependent, with very minimal or no management and without consistent processes.
Maturity scale: Inception → Functional → Performing → Best in Class (currently at Inception).

Cognizant’s Findings

Areas assessed:
• Review & Sign-off
• Templates, Checklists, etc.
• Risk Management
• Knowledge Management
• Requirement Change Management
• Build/Release Management
• Test Artifacts Configuration…
• Test Measurement Strategy

The effectiveness and efficiency of testing are not measured adequately to initiate corrective and preventive actions.

Current State - Gaps
1. No metrics process for the Testing Team.
2. Non-availability of tools to capture and gather data for metrics analysis.
3. Only basic metrics captured at project level (such as Voice of Customer; Defects per Severity, Category, Priority; Defect Status).
4. No metrics at the team level (TSID level).
5. No benchmarks (industry or organisation level) to compare and measure against.
6. Data analysis mechanism is missing, due to which improvements are not fed back into the system & processes.
7. No MIS/dashboard can be published to the stakeholders / senior management.

Business Benefits of Metrics in TSID
1. 360° view of process performance against the goals.
2. Data-driven process evaluation and decision making.
3. Proactive management rather than reactive management.
4. Helps in achieving operational/process excellence through identification of areas for improvement.
5. Embeds predictability (into the estimation process & calibration).
6. Dashboard to senior management/stakeholders providing the “performance” view of the team and projects.

Current State – Domain wise Metrics

Domains: Siebel, Oracle, SpeedBoat, Mediation, Workflow, Sales & Marketing, Service Order Management, Order Provisioning, OSS Tools. Legend: Not Implemented / Partially Implemented / Implemented.
1. Voice of Customer
2. Test Execution Productivity
3. Cost of Quality
4. Automation ROI
5. Defect Seepage to UAT
6. Defect by Severity
7. Defect by Category
8. Schedule Variance
9. Test Design Productivity
10. Review Effectiveness
11. Requirement Traceability Index
12. Test Case Effectiveness
13. Defect Seepage to SIT
14. Defect Rejection Ratio
15. Defect Ageing by Severity
16. Test cases planned vs. executed

NOTE: Metrics captured in Billing and ERP are still awaited.

Metrics & Measurement: TSID’s Desired State

Desired State
1. Develop an effective Metrics Process.
2. Identification of metrics – both at Project & TSID level.
3. Identification of benchmarks – industry and/or organisation level.
4. Tool changes to capture and gather data for metrics analysis.
5. Initiation of data capture and metrics analysis.
6. Establish an efficient and robust Metrics Model.
7. Publish the MIS reports/dashboard for the stakeholders/senior management.
8. Continue the improvement cycle through metrics analysis.

Maturity (Cognizant’s proprietary Test Maturity Model Framework):
1. Inception – Completely people dependent, with very minimal or no test management and without consistent processes.
2. Functional – Defined test processes, but less effective, with basic test management practices.
3. Performing – Enterprise-wide testing framework with measurable processes, predictive capabilities & sharing of best practices.
4. Best-in-Class – Advanced test management and industry-leading practices, with continuous process improvement and innovation.

Desired State – Domain wise Metrics

Domains: Siebel, Oracle, SpeedBoat, Mediation, Workflow, Sales & Marketing, Service Order Management, Order Provisioning, OSS Tools. Legend: Not Implemented / Partially Implemented / Implemented.
1. Voice of Customer
2. Test Execution Productivity
3. Cost of Quality
4. Automation ROI (as applicable)
5. Defect Seepage to UAT
6. Defect by Severity
7. Defect by Category
8. Schedule Variance
9. Test Design Productivity
10. Review Effectiveness
11. Requirement Traceability Index
12. Test Case Effectiveness
13. Defect Seepage to SIT
14. Defect Rejection Ratio
15. Defect Ageing by Severity
16. Test cases planned vs. executed

NOTE: We can add more metrics as applicable.

TSID’s Metrics Model – Approach

We will have a phased approach (we are currently at Inception).

PHASE 1 (Inception → Functional, 1–3 months):
1. Define Metrics Strategy/Process (time-box strategy; collection & analysis mechanism).
2. Identify metrics.
3. Identify initial benchmarks.
4. Identify data collection mechanism (changes in tools).
5. Identify roles & responsibilities.

PHASE 2 (Functional → Performing, 4–6 months):
1. Collection of metrics.
2. Analysis of metrics.
3. Refine & improve the strategy, if required.
4. Trend analysis.
5. Causal analysis and improvements fed into the system.

PHASE 3 (Performing → Best in Class, 6 months+ and ongoing):
• Define organisation benchmarks.
• Statistical analysis of metrics.
• Introduction of / change of process to include metrics related to post-production efficiency.
• Causal analysis & continuous improvement cycle.

Major Changes Needed – 1 of 2
1. Change in Tool – PPM (ATLAS): for accurate time recording. Needed in: Phase 1.
2. Change in Tool – Quality Centre: for defect model standardization. Needed in: Phase 1.
3. Institutionalization of Process – Metrics Process: to define the Metrics Model and strategy. Needed in: Phase 1.
4. Change in Process – Other relevant processes which impact metrics (Estimation process, Test Execution process, Defect Management process, etc.). Needed in: Phase 1.
5. Introduction of initial Benchmarks – Industry benchmarks to be taken to measure our performance against. Needed in: Phase 1.

Major Changes Needed – 2 of 2
6. Introduction of New Role – Introduction of the Metrics Manager role, to perform metrics analysis at TSID level. Needed in: Phase 2.
7. Introduction of Org. Level Benchmarks – To develop the performance benchmarks relevant to our organization (this will be done after we have 6+ months of data). Needed in: Phase 3.
8. Change in Metrics Analysis Method – Statistical analysis of metrics. Needed in: Phase 3.
9. Change in Process – Introduction of metrics related to post-production efficiency. Needed in: Phase 3.

Appendix

TSID’s Proposed Metrics Model

I4 Metrics Model – 1 of 2: the model is a cycle of four stages – Identify, Implement, Investigate, Improve.

I4 Metrics Model – 2 of 2
• Identify – Define the metrics strategy, ‘what’ to measure & ‘how’ to measure, and roles & responsibilities.
• Implement – Capture metrics & measurement, using tools.
• Investigate – Analysis of the metrics: trend analysis and statistical analysis.
• Improve – Causal analysis & improvement.

Types of Metrics
• TSID Level Metrics – Process Metrics and Product Metrics.
• Project Specific Metrics – for small projects, large projects and ongoing enhancements.

Proposed Metrics – 1 of 6: TSID Level Metrics – PROCESS METRICS
• Voice of Customer – KPI: average of individual ratings from the feedback form. Benefit: customer feedback on the testing services provided. System changes needed: NO.
• Test Execution Productivity – KPI: # of test cases executed per day per tester. Benefit: monitor the execution performance and thereby improve it. System changes needed: NO.
• Cost of Quality – KPI: [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project Effort] * 100. Benefit: evaluate the cost expended in NOT creating a quality product or service. System changes needed: YES (effort tracking in PPM needs change).
• Automation ROI – KPI: total benefit derived from automation / total cost of automation. Benefit: return on investment from the automation. System changes needed: YES (effort tracking in PPM needs change).

NOTE: These are a few initial proposed metrics; the list will be refined.
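Since Cost of Quality and Automation ROI are simple ratios over effort figures, they can be computed directly once the effort buckets are available from time recording. The following is a minimal illustrative sketch in Python; the figures and function names are hypothetical and not part of any existing TSID tool.

    # Illustrative only: effort figures are invented; real values would come
    # from effort tracking in PPM/ATLAS once the tool changes are in place.

    def cost_of_quality(appraisal, prevention, failure, total_effort):
        """[(Appraisal + Prevention + Failure effort) / Total project effort] * 100."""
        return (appraisal + prevention + failure) / total_effort * 100

    def automation_roi(total_benefit, total_cost):
        """Total benefit derived from automation / Total cost of automation."""
        return total_benefit / total_cost

    # Hypothetical effort figures (person-hours) for one release.
    coq = cost_of_quality(appraisal=120, prevention=40, failure=80, total_effort=1000)
    roi = automation_roi(total_benefit=300, total_cost=120)
    print(f"Cost of Quality: {coq:.1f}% of total project effort")   # 24.0%
    print(f"Automation ROI: {roi:.2f}")                             # 2.50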

Proposed Metrics – 2 of 6: TSID Level Metrics – PRODUCT METRICS
• Defect Seepage to UAT – KPI: # of defects seeped to UAT. Benefit: evaluate the number of defects that could not be contained in SIT. System changes needed: NO.
• Defect by Severity – KPI: # of defects per severity. Benefit: measure the benefit of detecting ‘high’ severity defects early, before they reach the users. System changes needed: YES (Quality Centre to be streamlined for severity values).
• Defect by Category – KPI: # of defects per category. Benefit: provides a snapshot of how many defects are detected per category. System changes needed: YES (Quality Centre to be streamlined for category values).

NOTE: These are a few initial proposed metrics; the list will be refined.

Proposed Metrics – 3 of 6: Project Level Metrics – PROCESS METRICS
• Schedule Variance – KPI: [(Actual End Date - Estimated End Date) / (Estimated End Date - Estimated Start Date)] x 100. Benefit: monitor project performance vis-à-vis the plan. System changes needed: NO.
• Test Design Productivity – KPI: # of test cases created per day per tester. Benefit: monitor the test design performance and thereby improve it. System changes needed: NO.
• Test Execution Productivity – KPI: # of test cases executed per day per tester. Benefit: monitor the execution performance and thereby improve it. System changes needed: NO.
• Review Effectiveness – KPI: internal vs. external review feedback of test cases. Benefit: evaluate the efficiency of finding defects during reviews. System changes needed: NO.

NOTE: These are a few initial proposed metrics; the list will be refined.
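As a worked illustration of the Schedule Variance and productivity KPIs above, the sketch below applies the formulas to invented dates and counts; none of the values come from an actual TSID project.

    from datetime import date

    def schedule_variance(actual_end, est_end, est_start):
        """[(Actual End - Estimated End) / (Estimated End - Estimated Start)] x 100."""
        return (actual_end - est_end).days / (est_end - est_start).days * 100

    def productivity(test_cases, tester_days):
        """Test cases designed or executed per day per tester."""
        return test_cases / tester_days

    # Hypothetical plan: a 40-day test phase that overran by 6 days.
    sv = schedule_variance(actual_end=date(2012, 11, 26),
                           est_end=date(2012, 11, 20),
                           est_start=date(2012, 10, 11))
    print(f"Schedule Variance: {sv:.1f}%")   # 15.0% (positive = overrun)

    # Hypothetical execution: 450 test cases over 30 tester-days.
    print(f"Test Execution Productivity: {productivity(450, 30):.1f} cases/day/tester")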

Proposed Metrics – 4 of 6: Project Level Metrics – PROCESS METRICS
• Requirement Traceability Index – KPI: requirements mapped to test scenarios / cases. Benefit: ensure that all the testable requirements are mapped to test scenarios and test cases. System changes needed: NO.
• Test Case Effectiveness – KPI: ((Total Defects - Defects not mapped to test cases) / Total # of test cases) x 100. Benefit: measures the % of defects mapped to test cases, to evaluate the test design process. System changes needed: NO.

NOTE: These are a few initial proposed metrics; the list will be refined.
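One possible way to compute these two KPIs from a traceability matrix and a defect list is sketched below; the data structures and numbers are hypothetical, and the Test Case Effectiveness formula is applied exactly as defined above.

    # Hypothetical traceability data (e.g. exported from the QC/TCM matrix)
    # and a defect list with optional links to test cases.
    requirement_to_tests = {
        "REQ-001": ["TC-01", "TC-02"],
        "REQ-002": ["TC-03"],
        "REQ-003": [],                 # testable requirement with no coverage yet
    }
    defects = [
        {"id": "D-1", "test_case": "TC-02"},
        {"id": "D-2", "test_case": None},   # defect not mapped to any test case
        {"id": "D-3", "test_case": "TC-03"},
    ]
    total_test_cases = 3

    # Requirement Traceability Index: requirements mapped to test scenarios/cases.
    mapped = sum(1 for tcs in requirement_to_tests.values() if tcs)
    rti = mapped / len(requirement_to_tests) * 100

    # Test Case Effectiveness:
    # ((Total defects - defects not mapped to test cases) / total # of test cases) x 100
    unmapped = sum(1 for d in defects if d["test_case"] is None)
    tce = (len(defects) - unmapped) / total_test_cases * 100

    print(f"Requirement Traceability Index: {rti:.0f}%")   # 67%
    print(f"Test Case Effectiveness: {tce:.0f}%")           # 67%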

Proposed Metrics – 5 of 6: Project Level Metrics – PRODUCT METRICS
• Defect Seepage to SIT – KPI: # of defects seeped to SIT from earlier phases. Benefit: evaluate the number of defects that could not be contained in the phases prior to SIT. System changes needed: YES (changes in Quality Centre to introduce a defect injection phase).
• Defect Seepage from SIT – KPI: # of defects seeped to UAT. Benefit: evaluate the number of defects that could not be contained in SIT. System changes needed: NO.
• Defect by Severity – KPI: # of defects per severity. Benefit: measure the benefit of detecting ‘high’ severity defects early, before they reach the users. System changes needed: YES (Quality Centre to be streamlined for severity values).
• Defect by Category – KPI: # of defects per category. Benefit: provides a snapshot of how many defects are detected per category. System changes needed: YES (Quality Centre to be streamlined for category values).

NOTE: These are a few initial proposed metrics; the list will be refined.
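Counting defects per severity and category, and seepage by detection phase, is a simple aggregation once the Quality Centre fields are standardized. A minimal sketch, assuming a hypothetical defect export with 'severity', 'category' and 'detected_in' fields (the field names and records are invented, not Quality Center's actual schema):

    from collections import Counter

    # Hypothetical defect records, e.g. rows exported from Quality Center.
    defects = [
        {"id": "D-1", "severity": "High",   "category": "Functional", "detected_in": "SIT"},
        {"id": "D-2", "severity": "Medium", "category": "Data",       "detected_in": "SIT"},
        {"id": "D-3", "severity": "High",   "category": "Functional", "detected_in": "UAT"},
        {"id": "D-4", "severity": "Low",    "category": "Cosmetic",   "detected_in": "UAT"},
    ]

    by_severity = Counter(d["severity"] for d in defects)
    by_category = Counter(d["category"] for d in defects)
    seepage_to_uat = sum(1 for d in defects if d["detected_in"] == "UAT")

    print("Defects by severity:", dict(by_severity))
    print("Defects by category:", dict(by_category))
    print("Defect seepage to UAT:", seepage_to_uat)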

Proposed Metrics – 6 of 6: Project Level Metrics – PRODUCT METRICS
• Defect Rejection Ratio – KPI: # of rejected defects in SIT / total defects. Benefit: the number of defects rejected per total defects provides inputs for improvement of the test process. System changes needed: NO.
• Defect Ageing by Severity – KPI: time taken to fix the defect. Benefit: measure the period it took to fix the defect and hence the delay caused to the test schedule. System changes needed: NO.
• Test cases planned vs. executed – KPI: no. of test cases planned vs. executed. Benefit: gives a snapshot of test execution status at any given point of test execution. System changes needed: NO.

NOTE: These are a few initial proposed metrics; the list will be refined.
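A short sketch of the remaining product metrics on this slide; the defect statuses and test-case counts are invented, and the values are assumptions rather than Quality Centre's actual export format.

    # Hypothetical SIT defect statuses and a test-execution snapshot.
    sit_defect_statuses = ["Closed", "Rejected", "Open", "Closed", "Rejected", "Fixed"]
    planned_test_cases = 200
    executed_test_cases = 140

    # Defect Rejection Ratio: rejected defects in SIT / total defects.
    rejection_ratio = sit_defect_statuses.count("Rejected") / len(sit_defect_statuses)

    # Test cases planned vs. executed, reportable as part of the daily status.
    progress = executed_test_cases / planned_test_cases * 100

    print(f"Defect Rejection Ratio: {rejection_ratio:.2f}")        # 0.33
    print(f"Test execution progress: {executed_test_cases}/{planned_test_cases} "
          f"({progress:.0f}% of planned test cases executed)")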

VOICE OF CUSTOMER
Metrics: Voice of Customer. KPI: average of individual ratings from the feedback form.
Tools used to capture the metrics: 1. Feedback form 2. SharePoint site (need to explore this possibility).
Measured for:
• Programs – YES. Frequency: at the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the UAT phase of the project.
• Enhancements – NO (to be confirmed). Frequency: none.

VOICE OF CUSTOMER – Questionnaire
1. How would you rate the level of professionalism of the testing team (Test Manager and testing team members)?
2. How would you rate the knowledge levels of the testing team members?
3. Was the testing team efficient in responding to the project’s and business’s needs?
4. Has the testing team aligned to the testing schedule and project budget as agreed?
5. How do you rate the quality of the test plan & control deliverables – Test Plan, Test Estimates, Daily Test Status and Test Closure Report?
6. How do you rate the testers’ efficiency w.r.t. test cases, test execution and the quality of defects logged?
7. How do you rate the usage and effectiveness of the tools used (QC / QTP / LR)?
8. How do you rate the testing team’s efficiency w.r.t. defect seepage to UAT?

VOICE OF CUSTOMER – Questionnaire (continued)
9. Did you find the communication with the testing team open and timely?
10. Did you find the testing team proactive in overcoming the project’s issues?
11. How would you rate the overall experience of the testing services offered?

5-point measurement scale: Excellent = 5, Good = 4, Average = 3, Needs Improvement = 2, Poor = 1.
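Because the Voice of Customer KPI is just the average of the individual ratings on this 5-point scale, the computation is straightforward. A minimal sketch with invented responses (the threshold handling follows the work instructions on the next slide):

    # Hypothetical questionnaire responses, rated on the 5-point scale above.
    scale = {"Excellent": 5, "Good": 4, "Average": 3, "Needs Improvement": 2, "Poor": 1}
    responses = ["Good", "Excellent", "Average", "Good", "Needs Improvement"]

    voc = sum(scale[r] for r in responses) / len(responses)
    print(f"Voice of Customer rating: {voc:.1f} / 5")

    # Per the work instructions, an average of 3 or below triggers causal analysis.
    if voc <= 3:
        print("Feedback <= 3: conduct causal analysis and identify improvement areas")
    else:
        print("Feedback > 3: prepare lessons learnt and document good practices")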

VOICE OF CUSTOMER – Work Instructions
• Test Manager: on completion of the project, sends the testing feedback form to the Project Manager.
• Project Manager: receives the testing feedback form and fills in the feedback.
• The filled-in feedback form is received back and the average rating across the feedback parameters is computed.
• If the average feedback is <= 3: conduct causal analysis and identify improvement areas.
• Otherwise: prepare the lessons learnt and document good practices.

DEFECT SEEPAGE TO UAT
Metrics: Defect Seepage to UAT. KPI: # of defects seeped to UAT.
Tools used to capture the metrics: 1. Quality Center 2. Excel or any other tool used by the UAT members.
Measured for:
• Programs – YES. Frequency: at the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the UAT phase of the project.
• Enhancements – YES. Frequency: monthly, consolidated for all the enhancements that went live in the month.

Defect Seepage to UAT – Work Instructions
• Test Manager: on completion of UAT, gathers the data for UAT defects.
• Test Manager: analyses the UAT defects and compares them with the established project- and TSID-level goals.
• If the UAT defects are more than the defined goals: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.
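The same decision pattern recurs in most of the work-instruction flows that follow: compare the computed metric with the project/TSID goal, trigger causal analysis when it is out of bounds, and in any case pass the value to the Metrics Manager for trend and TSID-level analysis. A generic sketch of that pattern is shown below; the function, threshold and example values are purely illustrative, not an existing TSID utility.

    def review_metric(name, value, goal, higher_is_worse=True):
        """Mirror the work-instruction flow: flag causal analysis when the metric
        is out of bounds, then submit it for trend / TSID-level analysis."""
        out_of_bounds = value > goal if higher_is_worse else value < goal
        if out_of_bounds:
            print(f"{name} = {value} (goal {goal}): conduct causal analysis, "
                  f"identify improvement areas")
        else:
            print(f"{name} = {value} (goal {goal}): within goal")
        print(f"Submit {name} to the Metrics Manager for the trend graph "
              f"and TSID-level analysis")

    # Hypothetical goal: no more than 5 defects seeping to UAT per release.
    review_metric("Defect Seepage to UAT", value=8, goal=5, higher_is_worse=True)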

TEST EXECUTION PRODUCTIVITY
Metrics: Test Execution Productivity. KPI: # of test cases executed per day per tester.
Tools used to capture the metrics: 1. Quality Center 2. Execution efforts logged in ATLAS.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – YES. Frequency: monthly, consolidated for all the enhancements that went live in the month.

TEST EXECUTION PRODUCTIVITY – Work Instructions
• Tester: during the test execution phase, logs the test cases executed per day; execution time is logged in QC.
• Test Manager: at the completion of test execution, collates this data and computes Test Execution Productivity.
• If Test Execution Productivity is not above the defined goals: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

COST OF QUALITY
Metrics: Cost of Quality. KPI: [(Appraisal Effort + Prevention Effort + Failure Effort) / Total Project Effort] * 100.
Tools used to capture the metrics: execution efforts logged in ATLAS.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – NO. Frequency: N/A.

COST OF QUALITY – Work Instructions
• Testing team: during the testing life cycle, log the efforts in ATLAS.
• Test Manager: at the completion of the project/release, collates this data and computes Cost of Quality.
• If the CoQ is more than the defined goals: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

AUTOMATION ROI
Metrics: Automation ROI. KPI: total benefit derived from automation / total cost of automation.
Tools used to capture the metrics: execution efforts logged in ATLAS.
Measured for:
• Programs (if automation is involved) – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects (if automation is involved) – YES. Frequency: at the completion of the project.
• Enhancements – NO. Frequency: N/A.

AUTOMATION ROI – Work Instructions
• Testing team: during the testing life cycle, log the efforts in ATLAS.
• Test Manager: at the completion of the project/release, collates this data and computes Automation ROI.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

DEFECT BY SEVERITY
Metrics: Defect by Severity. KPI: # of defects per severity.
Tools used to capture the metrics: defects logged in Quality Center.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – YES. Frequency: monthly, consolidated for all the enhancements that went live in the month.
This metric can be reported as a daily status to all relevant stakeholders.

DEFECT BY SEVERITY – Work Instructions
• Tester: during test execution, defects are logged in Quality Center.
• Test Manager: at the completion of the project/release, collates this data and computes Defect by Severity.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

DEFECT BY CATEGORY
Metrics: Defect by Category. KPI: # of defects per category.
Tools used to capture the metrics: defects logged in Quality Center.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – YES. Frequency: monthly, consolidated for all the enhancements that went live in the month.
This metric can be reported as a daily status to all relevant stakeholders.

DEFECT BY CATEGORY – Work Instructions
• Tester: during test execution, defects are logged in Quality Center.
• Test Manager: at the completion of the project/release, collates this data and computes Defect by Category.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

SCHEDULE VARIANCE
Metrics: Schedule Variance. KPI: [(Actual End Date - Estimated End Date) / (Estimated End Date - Estimated Start Date)] x 100.
Tools used to capture the metrics: 1. MPP 2. Excel spreadsheet used to maintain the plan.
Measured for:
• Programs – YES. Frequency: at the completion of TSID activities for each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the TSID activities of the project.
• Enhancements – YES. Frequency: monthly, consolidated for all the enhancements that went live in the month.

SCHEDULE VARIANCE – Work Instructions
• Test Manager: on completion of the TSID activities, gathers the data and computes the schedule variance.
• Test Manager: analyses the schedule variance and compares it with the established project- and TSID-level goals.
• If the schedule variance is more than the defined goals: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

TEST DESIGN PRODUCTIVITY
Metrics: Test Design Productivity. KPI: # of test cases created per day per tester.
Tools used to capture the metrics: 1. Quality Center 2. Test design efforts logged in ATLAS.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – NO. Frequency: N/A.

TEST DESIGN PRODUCTIVITY – Work Instructions
• Tester: during the test design phase, logs the test cases designed per day; test design time is logged in QC.
• Test Manager: at the completion of test execution, collates this data and computes Test Design Productivity.
• If Test Design Productivity is not above the defined goals: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

2. Or • Any other agreed milestone • • At the completion of the project N/A Projects Enhancements YES No 45 .REVIEW EFFECTIVENESS Metrics Review Effectiveness KPI Internal vs external Review feedback of Test cases 1. Or • Each iteration of the program. Tools used to capture the Metrics Quality Center for review defects Review defects logged in excel sheet by external reviewers Measured For Programs YES / NO YES Frequency At the completion of – • Each release in the program.

REVIEW EFFECTIVENESS – Work Instructions
• Tester: during the test design phase, conducts peer review of the test cases designed by peers.
• During the test design phase, the test cases are also reviewed by external stakeholders.
• Test Manager: collates this data and computes Review Effectiveness.
• If Review Effectiveness is not above the defined goals: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

REQUIREMENT TRACEABILITY INDEX
Metrics: Requirement Traceability Index. KPI: requirements mapped to test scenarios / cases.
Tools used to capture the metrics: 1. Quality Center 2. TCM matrix.
Measured for:
• Programs – YES. Frequency: during the course of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: during the course of the project.
• Enhancements – NO. Frequency: N/A.

REQUIREMENT TRACEABILITY INDEX – Work Instructions
• Tester: during the test design phase, maps the test cases to the requirements; this mapping is managed and updated throughout the test lifecycle.
• Test Manager: during the course of the testing lifecycle, computes this metric to ensure that all testable requirements are tested.
• Close.

TEST CASE EFFECTIVENESS
Metrics: Test Case Effectiveness. KPI: ((Total Defects - Defects not mapped to test cases) / Total # of test cases) x 100.
Tools used to capture the metrics: Quality Center.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – NO. Frequency: N/A.

TEST CASE EFFECTIVENESS – Work Instructions
• Tester: during the test execution phase, logs each defect and maps it to the test case defined.
• Test Manager: at the completion of test execution, collates this data and computes Test Case Effectiveness.
• If Test Case Effectiveness is too low: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

DEFECT SEEPAGE TO SIT
Metrics: Defect Seepage to SIT. KPI: # of defects seeped to SIT from earlier phases.
Tools used to capture the metrics: Quality Center.
Measured for:
• Programs – YES. Frequency: at the completion of UAT for each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the UAT phase of the project.
• Enhancements – YES. Frequency: monthly, consolidated for all the enhancements that went live in the month.
This metric can be reported as a daily status to all relevant stakeholders.

DEFECT SEEPAGE TO SIT – Work Instructions
• Test Manager: on completion of SIT, gathers the data for SIT defects.
• Test Manager: analyses which phase contributed the maximum defects to SIT.
• Test Manager: submits the data to the Project Manager and may participate in the causal analysis with stakeholders, if required.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

DEFECT REJECTION RATIO
Metrics: Defect Rejection Ratio. KPI: # of rejected defects in SIT / total defects.
Tools used to capture the metrics: Quality Center.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – YES. Frequency: monthly, consolidated for all the enhancements that went live in the month.
This metric can be reported as a daily status to all relevant stakeholders.

DEFECT REJECTION RATIO – Work Instructions
• Tester: during the test execution phase, logs each defect and its status.
• Test Manager: at the completion of test execution, collates this data and computes the Defect Rejection Ratio.
• If the Defect Rejection Ratio is very high compared with the TSID- or project-level goals: conduct causal analysis and identify improvement areas.
• Submit the data to the Metrics Manager for the trend graph and for TSID-level analysis, then close.

DEFECT AGEING BY SEVERITY
Metrics: Defect Ageing by Severity. KPI: time taken to fix the defect, across the various severity levels.
Tools used to capture the metrics: Quality Center.
Measured for:
• Programs – YES. Frequency: at the completion of each release in the program, each iteration of the program, or any other agreed milestone.
• Projects – YES. Frequency: at the completion of the project.
• Enhancements – NO. Frequency: N/A.
This metric can be reported as a daily status to all relevant stakeholders.

DEFECT AGEING BY SEVERITY – Work Instructions
• Tester: during the test execution phase, logs each defect’s open and close dates.
• Test Manager: at the completion of test execution, collates this data and computes Defect Ageing by severity of defects.
• Test Manager: submits the data to the Project Manager and may participate in the causal analysis with stakeholders, if required.
• Close.
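Given the open and close dates logged by the tester, Defect Ageing by Severity is the elapsed fix time grouped by severity level. A minimal sketch with invented dates and an assumed record layout:

    from collections import defaultdict
    from datetime import date

    # Hypothetical defects with open/close dates as logged in Quality Center.
    defects = [
        {"id": "D-1", "severity": "High",   "opened": date(2012, 9, 3), "closed": date(2012, 9, 6)},
        {"id": "D-2", "severity": "High",   "opened": date(2012, 9, 4), "closed": date(2012, 9, 11)},
        {"id": "D-3", "severity": "Medium", "opened": date(2012, 9, 5), "closed": date(2012, 9, 7)},
    ]

    # Average ageing (days to fix) per severity level.
    ageing = defaultdict(list)
    for d in defects:
        ageing[d["severity"]].append((d["closed"] - d["opened"]).days)

    for severity, days in ageing.items():
        print(f"{severity}: average ageing {sum(days) / len(days):.1f} days "
              f"over {len(days)} defect(s)")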

TEST CASES PLANNED vs. EXECUTED
Metrics: Test cases planned vs. executed. KPI: no. of test cases planned vs. executed.
Tools used to capture the metrics: Quality Center.
Measured for:
• Programs – YES. Frequency: ongoing, as part of test execution.
• Projects – YES. Frequency: ongoing, as part of test execution.
• Enhancements – NO. Frequency: N/A.
This metric can be reported as a daily status to all relevant stakeholders.

TEST CASES PLANNED vs. tester will updated test cases executed against planned Test cases As an ongoing activity Test manager would compute test cases planned vs. executed Test manager reports this metrics on a daily basis as part of daily status Yes No Close 58 . EXECUTED– Work Instructions TESTER TEST MANAGER During the test execution phase.