
Module C-1: Defect Reporting Management,

Test Metrics and Test Tools

Sep 2014
NTT DATA Corporation

Copyright © 2012 NTT DATA, Inc.


Agenda

• Defect Reporting Management


• Test Metrics
• Test Tools
• Class Quiz



Defect Reporting Management



Error, Defect and Failure

• Error: A human action that produces an incorrect result. If someone
makes an error or mistake in using the software, this may lead
directly to a problem - the software is used incorrectly and so does
not behave as we expected.

• Defect: A flaw in a component or system that can cause the
component or system to fail to perform its required function.
Defects are sometimes also called Bugs or Faults.

• Failure: A defect, if encountered during execution, may cause a
failure of the component or system. Once the software code has
been built and executed, any defects it contains may cause the system
to fail to do what it should do (or to do something it shouldn't),
causing a Failure. Not all defects result in failures; some stay dormant
in the code and we may never notice them.



Defect?

• Defect is:
– An incorrect or incomplete implementation of the functionality mentioned
in the requirement specification document (FS or Use case)
– A missing functionality mentioned in the requirement specification
document (FS or Use case)
– Any deviation made in the screen layout from that mentioned in the GUI
Standards document
– Nonconformance to the requirements mentioned in the requirement
specification document (FS or Use case)

• A defect is therefore anything that negatively alters the program's
ability to completely and effectively meet the user's / client's
requirements.



What is Not a Defect?

• Any suggestion / observation logged by the tester.


• Any functionality that is expected by the tester but not mentioned in
the requirements specification document (FS or Use case)
– This could be an enhancement



When is a defect found?
• A defect is found in the following phases of the Project Life Cycle:
– Requirements Definition - Defining the requirements that meets the needs of
the organization / client
– Design - Design to accomplish the defined requirements
– Coding - Programs to accomplish the defined requirements and design
– Unit Testing - Testing done by the developer for each piece of code
individually
– Integration Testing - Testing done by the testing team while integrating the
modules into the system
– System Testing - Testing done by the testing team taking the entire system
into consideration
– Acceptance Testing - Testing done by the Client before accepting the product
delivered

• Defects found during unit, integration, system and acceptance
testing are called Application defects

• Defects found during the requirements definition, design and coding
phases are called Review comments
How to Write and Not Write a Defect?

• While writing a defect, the following details must be taken care of:

– The defect must be written concurrently with or immediately following test
execution
– Describe the entire scenario that led to the defect
– A screenshot must be attached (if applicable)
– Language clarity must be taken care of
– Ambiguous statements should be avoided
– Data used for testing must be mentioned (if applicable)
– Trace the defect to the requirement mentioned in the requirement
specification document (if applicable)

• A defect must not be written as follows:


– Describing the defect without the entire scenario
– Describing the defect without clarity in the language used
– Describing the defect without the evidence



Defect Classification

Defects are classified as follows:

• Defect Severity: Critical, Major, Minor, Cosmetic
• Defect Priority: Urgent, High, Medium, Low



Defect Severity & Priority
Severity - The severity of a defect is the impact of the defect on the software.
The severity classification is as follows:
 Critical: The defect affects critical functionality or critical data. It does not have a
workaround. Example: Unsuccessful installation, complete failure of a feature.
 Major: The defect affects major functionality or major data. It has a workaround, but the
workaround is not obvious and is difficult. Example: A feature is not functional in one module,
but the task is doable if 10 complicated, indirect steps are followed in another module.
 Minor: The defect affects minor functionality or non-critical data. It has an easy workaround.
Example: A minor feature that is not functional in one module, but the same task is easily
doable from another module.
 Cosmetic: The defect does not affect functionality or data. It does not even need a
workaround. It does not impact productivity or efficiency. It is merely an inconvenience.
Example: Petty layout discrepancies, spelling/grammatical errors

Priority - Priority of the defect indicates the importance and order in which the defect
should be fixed.
The priority classification is as follows:
 Urgent: Must be fixed in the next build.
 High: Must be fixed in any of the upcoming builds but should be included in the release.
 Medium: May be fixed after the release / in the next release.
 Low: May or may not be fixed at all.



Severity and Priority Examples

• High Priority & Major (High) Severity defect:

• If 'Login' is required for an application and users are unable to log in
with valid credentials, such a defect needs to be fixed with high
importance, since it stops the customer from progressing further.

• Low Priority and Major (High) Severity defect:

• If an application crashes only after repeated use of a functionality, e.g.
the 'Save' button is used 200 times and then the application crashes,
such a defect has High Severity because the application crashes, but
Low Priority because it does not need to be fixed right away and can
be fixed in a later build.

• High Priority & Minor (Low) Severity defect:

• If, in a web application, the company name is misspelled, or the text
“User Nam:” is displayed instead of “User Name:” on the login page,
the defect severity is low as it is only a spelling mistake, but the
priority is high because of its high visibility.



Defect Life Cycle

• A recorded defect goes through the following states (summarized in the
flow below, with a small state sketch after it):

– Open - initially, when the defect is logged
– Assigned - when the defect is assigned to a developer for fixing
– Resolved - when the developer has fixed the defect
– Re-Open - when the defect is re-tested and found to still persist
– Closed - when the defect is re-tested and found to be fixed
– Rejected - when the defect is invalid

Typical flow: Review / Test → Open → Assigned → Resolved → Re-Test → Closed;
a re-tested defect that still persists goes to Re-Open (and back to Assigned),
and an invalid defect is Rejected.
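
A minimal sketch of how these states and allowed transitions could be encoded; this is
illustrative Python (not part of the original material), and the transition set simply
mirrors the flow above:

    from enum import Enum

    class DefectState(Enum):
        OPEN = "Open"
        ASSIGNED = "Assigned"
        RESOLVED = "Resolved"
        RE_OPEN = "Re-Open"
        CLOSED = "Closed"
        REJECTED = "Rejected"

    # Allowed transitions, mirroring the life cycle described above.
    TRANSITIONS = {
        DefectState.OPEN:     {DefectState.ASSIGNED, DefectState.REJECTED},
        DefectState.ASSIGNED: {DefectState.RESOLVED},
        DefectState.RESOLVED: {DefectState.CLOSED, DefectState.RE_OPEN},  # outcome of re-test
        DefectState.RE_OPEN:  {DefectState.ASSIGNED},
        DefectState.CLOSED:   set(),
        DefectState.REJECTED: set(),
    }

    def move(current: DefectState, target: DefectState) -> DefectState:
        # Return the new state, or raise if the life cycle does not allow the transition.
        if target not in TRANSITIONS[current]:
            raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
        return target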



Defect Report

• What is a Defect Report ? Why do we write them?


– A defect report is a technical document
– Describes a failure mode in the system under test (SUT)
– The only tangible “product” of testing
– Written to increase product quality
– Documents a specific problem with the quality of the SUT
– Communicates the problem to developers

• Structured testing is foundational to good defect reports

– Use a deliberate, careful approach to testing
– Follow written test cases or run automated ones per written or
standardized process
– Defect reporting begins when expected and observed results differ
– Sloppy testing results in sloppy defect reports



Defect Report (Contd..)

• Tips for a Good Defect Report


– Structure: test carefully
– Reproduce: test it again
– Isolate: test it differently
– Generalize: test it elsewhere
– Compare: review results of similar tests
– Summarize: relate test to customers
– Condense: trim unnecessary information
– Disambiguate: use clear words
– Neutralize: express problem impartially
– Review: be sure

• Once you follow the above, you should be able to avoid statements like
the ones below:

– "Show me."
– "I think the tachyon modulation must be wrongly polarized."
– "That's funny, it did it a moment ago."



Bug Report

A Bug Report should contain: Summary, Steps to reproduce, Isolation
A Bug Report can also contain: Generalize, Compare, Condense, Neutralize, Disambiguate

• Summary
– One or two sentence description of the bug
• Put a short “tag line” on each report that captures the failure and its
impact on the customer
• Advantages of good summaries
• Get management attention
• Name bug report for developers
• Help set priorities



Bug Report

• Steps to Reproduce
– Always check reproducibility of the failure as part of writing the defect report
– Document a crisp sequence of actions that will reproduce the failure
– Note the failure incidence rate (e.g., 1 in 3 tries)
• Isolate
– Change variables that may alter the symptom; make changes one by one
– Isolation can be extensive; match the amount of effort to the severity of the problem
– Avoid getting into debugging activities
• Generalize
– Look for related failures in the SUT: does the same failure occur in other
modules or locations?
– Are there more severe occurrences of the same fault?
– Avoid confusing unrelated problems
– The same symptom can arise from different bugs
• Compare
– Examine results for similar tests
– Compare with the same test run against earlier versions
Bug Report

• Condense
– Eliminate extraneous words or steps
• Neutralize
– Deliver bad news gently
– Avoid attacking developers
– Avoid criticizing the underlying error
– Confine defect reports to statements of fact
• Disambiguate
– Remove, rephrase, or expand vague, misleading, or subjective words
and statements
– Make sure the report is not subject to misinterpretation



Poor Bug Report example

• Summary:
– Speedy Writer has trouble with Arial
• Steps to reproduce:
– Open Speedy Writer
– Type in some text
– Select Arial
– Text gets screwed up
• Isolation



Good Bug Report example

• Summary:
– Speedy Writer for Windows 98 scribbles on the open file if Arial Font is
selected
• Steps to reproduce:
– Open Speedy Writer and create a new file
– Type in two or more lines of random text. (It doesn't matter what)
– Highlight the text, pull down the font menu, and select Arial
– Text is transformed into meaningless garbage
– Able to reproduce this problem three out of three times
• Isolation:
– On the suspicion that this was just a formatting problem, I saved the file,
closed Speedy Writer and reopened the file. The garbage remained
– If you save the file before applying Arial to the contents, the bug does not occur
– The bug does not occur with existing files
– This happens only under Windows 98
– This bug doesn't occur with other fonts



Defect Reporting

• The following details should be recorded while logging a defect (a simple
record sketch follows the list):


– Phase of the project life cycle in which the defect was found
– Module in which the defect was found
– Test Case ID against which the defect was logged
– Clear description of the defect (along with attachments if any)
– Severity of the defect
– Priority of the defect
– Date on which the defect was found
– Status of the defect (Initially Open)
– Tester who raised the defect
– Developer to whom the defect is assigned
– Date on which the defect was closed
– Comments
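
As an illustration only (the field names below are hypothetical, not a prescribed schema),
the details listed above map naturally onto a simple record structure:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class DefectRecord:
        # One logged defect; the fields mirror the details listed above.
        phase_found: str                  # e.g. "System Testing"
        module: str
        test_case_id: str
        description: str                  # clear description; attachments referenced separately
        severity: str                     # Critical / Major / Minor / Cosmetic
        priority: str                     # Urgent / High / Medium / Low
        date_found: date
        status: str = "Open"              # initially Open
        raised_by: str = ""               # tester who raised the defect
        assigned_to: str = ""             # developer to whom the defect is assigned
        date_closed: Optional[date] = None
        comments: list = field(default_factory=list)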



Defect Tracking System

• Defects are everywhere! How do you keep track of them and ensure
that the bugs get fixed?

• Defect Tracking
– Tracking all the defects found in each phase of the project life cycle to
ensure that each one is tracked to closure, i.e. either fixed or rejected
with a reason

• Defect Tracking/Management System

– Software into which all the defects found in different phases of the
software life cycle are logged and tracked to closure
– Ensures that the final product is in compliance with the defined
requirements
– Ensures that defects get fixed in order of their severity, priority and
criticality



Features of a Defect Management System

• Record defect - a new defect found in the system is recorded
• Assign defect for fixing - the defect is assigned to a team member
• Fix defect - the defect is fixed by the person to whom it is assigned
• Review and close defect - the final stage of the defect
• Obtain the status of defects based on state/severity/priority/assignee
• Obtain reports based on several custom criteria

Workflow: Record Defect (Recorder) → Check Defect (Controller) → if valid, Assign/Reassign
Defect (Controller) → Fix/Reject Defect (Assignee) → Verify Defect (Recorder) → Accept Defect
(Controller) → Defect Closed (end of process). An invalid defect is rejected at the check
step; if the fix or rejection is found invalid at verification, the defect is re-opened
(Recorder) and goes back for reassignment.



Roles in a project for Defect Management

Role Description
Administrator • Set up and manage the project
• Acts as a super user
Recorder / Submitter • Record or submit (raise) a Defect
• View Reports
Assignee • Person to whom the defect is assigned for fixing
• View Reports
Reviewer / Validator • Verify the Defect
• View Reports



Tools Used for Defect Management

• Quality Center from HP


– One of the most popular defect tracking tools
• Mantis, Bugzilla
– Popular open-source tools
• ClearQuest from Rational
– A highly flexible defect and change tracking system that captures and
tracks all types of change
• Buggit from Pierce Business Systems
– Manages bugs and features (freeware)
• TeamTrack from TeamShare, Inc.
– Web-architected change management, bug and defect tracking software
for development, technical support, quality assurance and help desk
teams
• NetResults Tracker
– Manages bugs and features (freeware)
• TestTrack from Seapine Software
– Defect tracking and issue management software solution
Test Metrics



Testing Metrics

• Test metrics help in analyzing the current level of maturity in testing
and give a projection of how to go about testing activities by allowing
us to set goals and predict future trends. They can also be described
as measurements based on various criteria.

• Purpose:
– To Quantitatively Monitor, Control and Improve the following:
– Project Planning / Performance
– Defect Containment
– Increase Review / Test Efficiency
– Increase Defect Detection Efficiency
– Improve Product Quality
– Monitor Process Improvement
– Optimize Effort Utilization



Metrics Definition

Each metric below is listed with its calculation formula and unit of measure (% or ratio):

1. Effort Variance = 100 * [Actual Effort - Planned Effort] / [Planned Effort]  (%)
2. Schedule Variance = 100 * [Actual Duration - Planned Duration] / [Planned Duration]  (%)
3. Testing Defect Rejection Rate = 100 * [No. of Testing Defects Rejected by the Development
   Team] / [No. of Testing Defects Raised by the Testing Team]  (%)
4. Review Efficiency = 100 * [No. of Peer Review Defects Reported] / [[No. of Review Defects
   Reported by SQA] + [No. of Review Defects Reported by Peer]]  (%)
5. COQ (Cost of Quality) = 100 * [[Prevention Effort] + [Appraisal Effort] + [Correction Effort]] /
   [Total Testing Effort]  (%)
6. Test Development Productivity = [Number of Test Cases/Scripts Created] / [Actual Effort of
   Creation]  (Ratio)
7. Test Execution Productivity = [Number of Test Cases Executed] / [Actual Effort of Testing]  (Ratio)
8. Testing Defect Density = [No. of Testing Defects Reported by the Testing Team] / [No. of Test
   Cases Executed]  (Ratio)
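
A small worked example (all numbers invented for illustration) showing how three of the
formulas above are applied:

    def effort_variance(actual_effort, planned_effort):
        # Effort Variance (%) = 100 * (Actual Effort - Planned Effort) / Planned Effort
        return 100.0 * (actual_effort - planned_effort) / planned_effort

    def defect_rejection_rate(defects_rejected, defects_raised):
        # Testing Defect Rejection Rate (%) = 100 * rejected / raised
        return 100.0 * defects_rejected / defects_raised

    def testing_defect_density(defects_reported, test_cases_executed):
        # Testing Defect Density (ratio) = defects reported / test cases executed
        return defects_reported / test_cases_executed

    # Illustrative figures only:
    print(effort_variance(actual_effort=105, planned_effort=100))                 # 5.0 (%)
    print(defect_rejection_rate(defects_rejected=3, defects_raised=60))           # 5.0 (%)
    print(testing_defect_density(defects_reported=45, test_cases_executed=300))   # 0.15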



Testing Metrics (sample) Data Sheet - BL 33

1. Effort Variance - Goal: +/- 5%; BL 28: 0.12%; BL 29: 0.03%; BL 30: 0.25%; BL 31: -0.44%;
   BL 33: -0.22%. Remarks: Within the goal and shows an improvement trend
2. Schedule Variance - Goal: +/- 2%; BL 28: 0.44%; BL 29: 0.00%; BL 30: -0.02%; BL 31: 0.01%;
   BL 33: 0.01%. Remarks: Within the goal and shows an improvement trend
3. Testing Defect Rejection Rate - Goal: 10%; BL 28: 1.67%; BL 29: 0.15%; BL 30: 1.19%;
   BL 31: 1.02%; BL 33: 0.83%. Remarks: Within the goal and shows an improvement trend
4. Review Efficiency - Goal: 95%; BL 28: 94.93%; BL 29: 92.14%; BL 30: 92.67%; BL 31: 92.96%;
   BL 33: 96.57%. Remarks: Less than the goal, shows an improvement trend
5. Cost of Quality - Goal: 10-15%; BL 28: 15.38%; BL 29: 12.69%; BL 30: 6.19%; BL 31: 7.75%;
   BL 33: 7.17%. Remarks: Less than the goal, shows a decreasing trend

Productivity and defect density baselines (LCL / Mean / UCL), grouped by technology:

Manual Testing
1. Test Development Productivity - Legacy Tech: 0.00 / 1.44 / 5.92; Non ERP: 0.00 / 3.74 / 12.48;
   ERP: 0.00 / 1.49 / 5.42
2. Test Execution Productivity - Legacy Tech: 0.00 / 0.37 / 1.00; Non ERP: 0.00 / 4.79 / 15.52;
   ERP: 0.00 / 1.15 / 4.04
3. Testing Defect Density - Legacy Tech: 0.00 / 0.69 / 2.11; Non ERP: 0.00 / 0.082 / 0.34;
   ERP: 0.00 / 0.27 / 0.89
Automation Testing - NA for all three metrics

Note: Grouping was done based on the technology
1. Legacy Tech - AS 400, Mainframes, Cobol
2. Non ERP - .NET, Java, VB, UNIX, C, C#
3. ERP - Oracle eBusiness Suite
Software reliability

• Software reliability figures are very useful for deciding when to deliver or
release the product, especially when the software is highly complex.

• The important reliability figures are:
– MTBF - Mean Time Between Failures
– MTTR - Mean Time To Repair
– MTTF - Mean Time To Fail

• While carrying out the testing process, the values of MTBF, MTTR and MTTF
need to be recorded; only if these values are within the defined limits can the
product be considered 'ready for release'.

• Software reliability models have to be developed using defect analysis
during the testing and maintenance phases (a small computation sketch follows).
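
A minimal sketch (with invented figures, assuming uptimes and repair times are logged in
hours) of how these values can be derived from test observations; note that
MTBF = MTTF + MTTR is one common convention:

    # Observed hours of operation before each failure, and repair time for each failure.
    uptime_hours = [120.0, 80.0, 100.0]   # time to each failure
    repair_hours = [2.0, 4.0, 3.0]        # time to repair each failure

    mttf = sum(uptime_hours) / len(uptime_hours)   # Mean Time To Fail          -> 100.0
    mttr = sum(repair_hours) / len(repair_hours)   # Mean Time To Repair        -> 3.0
    mtbf = mttf + mttr                             # Mean Time Between Failures -> 103.0

    print(f"MTTF={mttf:.1f} h, MTTR={mttr:.1f} h, MTBF={mtbf:.1f} h")
    # A release threshold (e.g. "MTBF must exceed 100 h") could then be checked:
    print("Ready for release" if mtbf > 100 else "Not ready yet")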



Test Tools



Test Management

• The management of testing applies over the whole of the software


development life cycle, so a test management tool could be among the
first to be used in a project.
• Test management tools help to gather, organize and communicate
information about the testing on a project.
• Some features or characteristics of test management tools are listed
below (a small progress-count sketch follows the list):
– Management of tests (keeping track of the associated data for a given set of
tests, number of tests planned, written, run, passed or failed)
– Scheduling of tests to be executed (manually or by a test execution tool)
– Management of testing activities (time spent in test design, test execution)
– Interfaces to other tools
– Traceability of tests, test results and defects to requirements or other sources
– Logging test results
– Preparing progress reports based on metrics (quantitative analysis)
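
As a rough illustration of the progress-reporting idea (the test IDs and statuses are
invented, not tied to any particular tool):

    # Status of each planned test case, as a test management tool might record it.
    tests = {
        "TC-001": "Passed",
        "TC-002": "Failed",
        "TC-003": "Not Run",
        "TC-004": "Passed",
    }

    planned = len(tests)
    run = sum(1 for s in tests.values() if s in ("Passed", "Failed"))
    passed = sum(1 for s in tests.values() if s == "Passed")

    print(f"Planned: {planned}, Run: {run}, Passed: {passed}, "
          f"Pass rate: {100 * passed / run:.0f}% of executed")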



Requirement management tools

• Tests are based on requirements; the better the quality of the
requirements, the easier it will be to write tests from them. It is also
important to be able to trace tests to requirements and requirements
to tests.

• Some features or characteristics of requirement management tools
(a small traceability sketch follows the list):


– Storing requirement statements
– Storing information about requirement attributes
– Checking consistency of requirements
– Identifying undefined, missing or 'to be defined later' requirements
– Prioritizing requirements for testing purposes
– Traceability of requirements to tests and tests to requirements, functions
or features
– Traceability through levels of requirements
– Interfacing to test management tools
– Coverage of requirements by a set of tests (sometimes)
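
A tiny sketch of the traceability and coverage checks such a tool performs (requirement
and test IDs are hypothetical):

    # Requirement -> tests that cover it (a simple traceability matrix).
    trace = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
        "REQ-003": [],            # undefined / 'to be defined later' / not yet covered
    }

    uncovered = [req for req, tests in trace.items() if not tests]
    coverage = 100 * (len(trace) - len(uncovered)) / len(trace)

    print("Requirements without tests:", uncovered)    # ['REQ-003']
    print(f"Requirement coverage: {coverage:.0f}%")    # 67%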



Configuration management

• Configuration management tools are not strictly testing tools.


Configuration management is critical for controlled testing. We need
to know exactly what it is that we are supposed to test, such as the
exact version of all of the things that belong in a system.
• Some features or characteristics of Configuration management tools:
– Storing information about versions and builds of the software and
testware
– Traceability between software and testware and different versions or
variants
– Keeping track of which versions belong with which configurations (e.g.
operating systems, libraries, browsers)
– Build and release management
– Baselining (e.g. all the configuration items that make up a specific
release)
– Access control (checking in and out)



Test Execution

• These tools enable tests to be executed automatically, or semi-


automatically, using stored inputs and expected outcomes
• They use a scripting language and provide a test log for each test run.
• They can be used to record tests, and usually support scripting
languages or GUI-based configuration for parameterization of data and
other customization in the tests. They are best used for regression
testing.
• Features or characteristics of test execution tools include support for
(a small data-driven sketch follows the list):
– Capturing (recording) test inputs while tests are executed manually
– Storing an expected result to compare against on the next test run
– Executing tests from stored scripts and optionally data files accessed by the
script (if data-driven or keyword-driven scripting is used)
– Dynamic comparison (while the test is running) of screens, elements, links,
controls, objects and values
– Initiating post-execution comparison
– Logging results of tests run (pass/fail, expected versus actual results)
– Measuring timings for tests
– Sending summary results to a test management tool
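
As a rough illustration of the data-driven scripting idea (the login function below is a
hypothetical stand-in for whatever interface the tool would actually drive):

    # Test data kept separate from the script, as in data-driven execution.
    login_data = [
        # (username, password, expected_outcome)
        ("alice", "correct-pw", True),
        ("alice", "wrong-pw",   False),
        ("",      "any",        False),
    ]

    def login(username, password):
        # Hypothetical stand-in for driving the system under test.
        return username == "alice" and password == "correct-pw"

    test_log = []
    for username, password, expected in login_data:
        actual = login(username, password)
        test_log.append((username or "<blank>", "PASS" if actual == expected else "FAIL"))

    print(test_log)   # a simple per-run test log, as the tools above would produce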
Unit Test Execution and Logging

• These are used by developers to test individual components or units


of software.
• A unit test harness or framework simulates the environment in which
the test object will run, through the provision of mock objects as
stubs or drivers (small programs that interact with the software
under test).
• e.g. JUnit for Java, NUnit for .NET applications, etc.
• Features of test harnesses and unit test framework tools include
support for (a minimal example follows the list):
– Supplying inputs to the software being tested
– Receiving outputs generated by the software being tested
– Executing a set of tests within the framework or using the test harness
– Recording the pass/fail results of each test (framework tools)
– Storing tests (framework tools)
– Support for debugging (framework tools)
– Coverage measurement at code level (framework tools)
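
The same idea expressed with Python's built-in unittest module (chosen only to keep the
examples in one language; JUnit and NUnit follow the same pattern):

    import unittest

    def add(a, b):
        # Trivial unit under test.
        return a + b

    class AddTests(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)       # supply input, compare received output

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()   # the framework runs the tests and records pass/fail for each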



Test Comparators

• Test comparators determine differences between files, databases or test


results.
• Dynamic comparison is where the comparison is done dynamically, i.e.
while the test is executing. Dynamic comparison is useful because, when an
actual result does not match the expected result in the middle of a test, the
tool can be programmed to take some recovery action at that point or go to a
different set of tests.
• Post-Execution comparison is where the comparison is performed after
the test has finished executing and the software under test is no longer
running. Post-execution comparison is best for comparing a large volume
of data, for example comparing the contents of an entire file with the
expected contents of that file, or comparing a large set of records from a
database with the expected content of those records.
• Features of test comparators include support for (a small post-execution
comparison sketch follows the list):
– Dynamic comparison of transient events that occur during test execution.
– Post-execution comparison of stored data, e.g. in files or databases.
– Masking or filtering of subsets of actual and expected results.
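
A minimal post-execution comparison sketch using Python's standard difflib, with a simple
mask for a volatile timestamp field (the file contents and masking rule are invented for
illustration):

    import difflib
    import re

    def mask(line):
        # Replace volatile timestamps so they do not cause spurious differences.
        return re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TIMESTAMP>", line)

    expected = ["2024-01-01 10:00:00 order accepted\n", "total=100\n"]
    actual   = ["2024-02-02 09:30:00 order accepted\n", "total=90\n"]

    diff = list(difflib.unified_diff(
        [mask(line) for line in expected],
        [mask(line) for line in actual],
        fromfile="expected", tofile="actual"))

    print("".join(diff) or "No differences")   # only the total=... line is reported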
Coverage Measurement
• Coverage tools can help to answer 'How thoroughly have you tested?'
• A coverage tool first identifies the elements or coverage items that can be
counted, and then records when a test has exercised a coverage item.
• At component testing level, the coverage items could be lines of code,
code statements or decision outcomes (e.g. True or False of an IF block).
• At component integration level, the coverage item may be a call to a
function or module.
• The process of identifying the coverage items at component test level is
called 'instrumenting the code'. These tools measure the percentage of specific
types of code structures that have been exercised (e.g., statements,
branches or decisions, and module or function calls) by a set of tests.
• Features of coverage measurement tools include support for (the core
percentage calculation is sketched after the list):
– Identifying coverage items
– Calculating the percentage of coverage items that were exercised by a suite
of tests
– Reporting coverage items that have not been exercised as yet
– Identifying test inputs to exercise as yet uncovered items
– Generating stubs and drivers (if part of a unit test framework)
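
Stripped of the instrumentation itself, the percentage calculation and the "not yet
exercised" report reduce to simple set arithmetic (the coverage item names below are
invented):

    # Coverage items identified by instrumenting the code (e.g. decision outcomes).
    coverage_items = {"if_1_true", "if_1_false", "loop_entered", "loop_skipped"}
    exercised = {"if_1_true", "loop_entered"}          # items hit by the test suite

    not_exercised = coverage_items - exercised
    percent = 100 * len(exercised & coverage_items) / len(coverage_items)

    print(f"Decision coverage: {percent:.0f}%")          # 50%
    print("Not yet exercised:", sorted(not_exercised))   # ['if_1_false', 'loop_skipped']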



Security Testing
• Security Testing tools are used to evaluate the security characteristics of
software. This includes evaluating the ability of the software to protect
data confidentiality, integrity, authentication, authorization & availability.
• Security tools typically focus on a particular technology, platform or purpose.
• There are a number of tools that protect systems from external attack, for
example firewalls.
• Security testing tools can be used to test security by trying to break into a
system, whether or not it is protected by a security tool. The attacks may
focus on the network, the support software, the application code or the
underlying database.
• Features or characteristics of security testing tools include support for
(a small port-probing sketch follows the list):
– Identifying viruses
– Detecting intrusions such as denial of service attacks
– Simulating various types of external attacks
– Probing for open ports or other externally visible points of attack
– Identifying weaknesses in password files and passwords;
– Security checks during operation, e.g. for checking integrity of files, and
intrusion detection.
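
A small sketch of the "probing for open ports" idea using Python's standard socket module;
the host and port list are placeholders, and such probes should only be run against
systems you are authorized to test:

    import socket

    def is_port_open(host, port, timeout=1.0):
        # Return True if a TCP connection to host:port succeeds within the timeout.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0   # 0 means the connection succeeded

    host = "127.0.0.1"                 # placeholder target under test
    for port in (22, 80, 443, 8080):   # a few commonly checked ports
        if is_port_open(host, port):
            print(f"Port {port} is open on {host}")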
Benefits and Risks of using tools
Potential Benefits:
• Repetitive work is reduced (e.g., running regression tests, re-entering the same test
data, and checking against coding standards)
• Greater consistency and repeatability (e.g., tests executed by a tool in the same order
with the same frequency, and tests derived from requirements)
• Objective assessment (e.g., static measures, coverage)
• Ease of access to information about tests or testing (e.g., statistics and graphs about
test progress, incident rates and performance)

Potential Risks:
• Unrealistic expectations for the tool (including functionality and ease of use)
• Underestimating the time, cost and effort for the initial training and gaining expertise
• Underestimating the time and effort needed to achieve benefits from the tool (including
changes in the testing process and the way in which the tool is used)
• Underestimating the effort required to maintain the test assets generated by the tool
• Over-reliance on the tool (use of automated testing where manual testing would be better)
• Neglecting version control of test assets within the tool
• Interoperability issues between tools, such as requirements management and defect
tracking tools, or tools from multiple vendors
• Risk of the tool vendor going out of business, retiring the tool, or selling the tool
to a different vendor
• Poor response from the vendor for support or upgrades
• Risk of suspension of an open-source / free tool project
• Inability to support a new platform


Some testing tools
Test Management
• Proprietary: IBM Rational Quality Manager, HP Quality Center, Microsoft Team
Foundation Server, Qmetry, SilkCentral Test Manager
• Open Source: TestLink, Squash TM, XStudio

Configuration Management
• Proprietary: Microsoft Visual SourceSafe, ClearCase, AccuRev, IBM Configuration
Management Version Control (CMVC), SourceAnywhere
• Open Source: Concurrent Versions System (CVS), Subversion (svn), Vesta

Requirement Management
• Proprietary: IBM Rational DOORS, IBM Rational RequisitePro, Teamcenter
Requirements (Tc RM), Cognition Cockpit
• Open Source: aNimble Platform

Defect Tracking
• Proprietary: ClearQuest, TestTrack, Track Record, Remedy Quality Management
• Open Source: Bugzilla, Mantis, Trac, Redmine, Request Tracker


Class Quiz



Class Quiz

•The Severity classification is :


A. Critical, Major, Minor, Cosmetic
B. Urgent, High, Medium, Low



Class Quiz Cont.

•The Priority classification is :


A. Critical, Major, Minor, Cosmetic
B. Urgent, High, Medium, Low



Class Quiz Cont.

• Formula for calculating Testing Defect Density


A. [No. of Testing Defects Reported by dev team] / [No. of Test Cases
Executed]
B. [No. of Testing Defects Reported by testing team] / [No. of Test Cases
Written]
C. [No. of Testing Defects Reported by testing team] / [No. of Test Cases
Executed]
D. [No. of Testing Defects Reported by testing team] / [No. of Test Cases
Identified]



Class Quiz Cont.

• Bugzilla tool is used for

A. Test Management
B. Configuration Management
C. Requirement Management
D. Defect Tracking



Copyright © 2012 NTT DATA, Inc. This document contains confidential Company information. Do not disclose it to third parties without permission from the Company.
