Black Box Testing
Black Box Testing Definition, Example, Application, Techniques, Advantages and Disadvantages

DEFINITION Black Box Testing, also known as Behavioral Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though they are usually functional.

This method is named so because the software program, in the eyes of the tester, is like a black box, inside which one cannot see. Black Box Testing is contrasted with White Box Testing. View Differences between Black Box Testing and White Box Testing. This method attempts to find errors in the following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors
EXAMPLE A tester, without knowledge of the internal structures of a website, tests the web pages by using a browser, providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.

LEVELS APPLICABLE TO The Black Box Testing method is applicable to the following levels of the software testing process:
• Integration Testing
• System Testing
• Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.

BLACK BOX TESTING TECHNIQUES
Following are some techniques that can be used for designing black box tests:
• Equivalence Partitioning: A software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
• Boundary Value Analysis: A software test design technique that involves determining the boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data.
• Cause-Effect Graphing: A software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly.

BLACK BOX TESTING ADVANTAGES
• Tests are done from a user's point of view and will help in exposing discrepancies in the specifications.
• The tester need not know programming languages or how the software has been implemented.
• Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer bias.
• Test cases can be designed as soon as the specifications are complete.
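The first two techniques above, Equivalence Partitioning and Boundary Value Analysis, lend themselves to a compact sketch. A minimal illustration in Python (the input field and its 1-100 valid range are hypothetical, not taken from any real application):

```python
# Hypothetical system under test: a field that accepts integers from 1 to 100.
LOWER, UPPER = 1, 100

def is_valid_quantity(value):
    """Assumed behavior under test: accept only integers in [LOWER, UPPER]."""
    return isinstance(value, int) and LOWER <= value <= UPPER

# Equivalence Partitioning: one representative value per partition is enough,
# because every value in a partition is expected to behave the same way.
equivalence_test_data = {
    "valid partition (1 to 100)": 50,
    "invalid partition (below 1)": -5,
    "invalid partition (above 100)": 170,
}

# Boundary Value Analysis: values at and just inside/outside each boundary,
# where off-by-one defects tend to hide.
boundary_test_data = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

for label, value in equivalence_test_data.items():
    print(f"{label}: {value} -> {is_valid_quantity(value)}")
for value in boundary_test_data:
    print(f"boundary value {value} -> {is_valid_quantity(value)}")
```

Note how the two techniques complement each other: partitioning keeps the number of test cases small, while boundary analysis concentrates them where defects cluster.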
BLACK BOX TESTING DISADVANTAGES
• Only a small number of possible inputs can be tested and many program paths will be left untested.
• Without clear specifications, which is the situation in many projects, test cases will be difficult to design.
• Tests can be redundant if the software designer/developer has already run a test case.
• Ever wondered why a soothsayer closes the eyes when foretelling events? So is almost the case in Black Box Testing.

Definition by ISTQB
• black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.
• black box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

White Box Testing
White Box Testing Definition, Example, Application, Advantages and Disadvantages

DEFINITION White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. Programming know-how and implementation knowledge are essential. White box testing is testing beyond the user interface and into the nitty-gritty of a system.

This method is named so because the software program, in the eyes of the tester, is like a white/transparent box, inside which one clearly sees. White Box Testing is contrasted with Black Box Testing. View Differences between Black Box Testing and White Box Testing.

EXAMPLE A tester, usually a developer as well, studies the implementation code of a certain field on a webpage, determines all legal (valid and invalid) and illegal inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.
White Box Testing is like the work of a mechanic who examines the engine to see why the car is not moving.

LEVELS APPLICABLE TO
The White Box Testing method is applicable to the following levels of software testing:
• Unit Testing: For testing paths within a unit
• Integration Testing: For testing paths between units
• System Testing: For testing paths between subsystems
However, it is mainly applied to Unit Testing.

WHITE BOX TESTING ADVANTAGES
• Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
• Testing is more thorough, with the possibility of covering most paths.

WHITE BOX TESTING DISADVANTAGES
• Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too frequently.
• Since this method of testing is closely tied with the application being tested, tools to cater to every kind of implementation/platform may not be readily available.

Definition by ISTQB
• white-box testing: Testing based on an analysis of the internal structure of the component or system.
• white-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

Gray Box Testing
Gray Box Testing Definition, Example

DEFINITION Gray Box Testing is a software testing method which is a combination of the Black Box Testing method and the White Box Testing method. In Black Box Testing, the internal structure of the item being tested is unknown to the tester; in White Box Testing, the internal structure is known. In Gray Box Testing, the internal structure is partially known.
This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box, level.

Gray Box Testing is named so because the software program, in the eyes of the tester, is like a gray/semi-transparent box, inside which one can partially see.

EXAMPLE An example of Gray Box Testing would be when the code for two units/modules is studied (White Box Testing method) for designing test cases, and the actual tests are conducted using the exposed interfaces (Black Box Testing method).

LEVELS APPLICABLE TO Though the Gray Box Testing method may be used in other levels of testing, it is primarily useful in Integration Testing.

SPELLING Note that Gray is also spelt as Grey. Hence Grey Box Testing and Gray Box Testing mean the same.

Smoke Testing
Smoke Testing Definition, Elaboration, Advantages, Details

DEFINITION Smoke Testing, also known as "Build Verification Testing", is a type of software testing that comprises a non-exhaustive set of tests aiming to ensure that the most important functions work. The results of this testing are used to decide if a build is stable enough to proceed with further testing.

The term 'smoke testing' came to software testing from a similar type of hardware testing, in which the device passed the test if it did not catch fire (or smoke) the first time it was turned on.
ELABORATION Smoke testing covers most of the major functions of the software but none of them in depth. The result of this test is used to decide whether to proceed with further testing. If the smoke test passes, go ahead with further testing. If it fails, halt further tests and ask for a new build with the required fixes. If an application is badly broken, detailed testing might be a waste of time and effort.

Smoke testing helps in exposing integration and major problems early in the cycle. It can be conducted on both newly created software and enhanced software. A smoke test is performed manually or with the help of automation tools/scripts. If builds are prepared frequently, it is best to automate smoke testing.

As and when an application becomes mature, with the addition of more functionalities, the smoke test needs to be made more expansive. Sometimes it takes just one incorrect character in the code to render an entire application useless.

ADVANTAGES
• It exposes integration issues.
• It uncovers problems early.
• It provides some level of confidence that changes to the software have not adversely affected major areas (the areas covered by smoke testing, of course).

LEVELS APPLICABLE TO Smoke testing is normally used at the Integration Testing, System Testing and Acceptance Testing levels.

NOTE Do not consider smoke testing to be a substitute for functional/regression testing.
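When builds are frequent, the smoke suite is a natural candidate for automation. A minimal sketch using Python's unittest (the three application functions are hypothetical stand-ins; in a real project they would be imported from the build under verification):

```python
import unittest

# Hypothetical stand-ins for the application's most important functions.
def login(user, password):
    return user == "demo" and password == "demo"

def search(term):
    return [f"result for {term}"]

def checkout(cart):
    return {"status": "ok", "items": len(cart)}

class SmokeTests(unittest.TestCase):
    """Shallow checks of the major functions -- none of them in depth."""

    def test_login_works(self):
        self.assertTrue(login("demo", "demo"))

    def test_search_returns_results(self):
        self.assertTrue(search("coins"))

    def test_checkout_completes(self):
        self.assertEqual(checkout(["coin"])["status"], "ok")

if __name__ == "__main__":
    # A failing run means: halt further testing, ask for a new build.
    unittest.main(exit=False, verbosity=2)
```

Such a suite is deliberately broad and shallow; depth belongs to the functional and regression suites that run only after the smoke test passes.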
Regression Testing

DEFINITION Regression testing is a type of software testing that intends to ensure that changes (enhancements or defect fixes) to the software have not adversely affected it.

LITERAL MEANING OF REGRESSION Regression [noun]: the act of going back to a previous place or state; return or reversion.

ANALOGY You can consider regression testing as similar to the moonwalk or backslide, a dance technique (popularized by Michael Jackson) that gives the illusion of the dancer being pulled backwards while attempting to walk forward. Well, many will not agree with this analogy but what the heck! Let's have some fun!

ELABORATION The likelihood of any code change impacting functionalities that are not directly associated with the code is always there, and it is essential that regression testing is conducted to make sure that fixing one thing has not broken another. During regression testing, new test cases are not created but previously created test cases are re-executed.

EXTENT In an ideal case, a full regression test is desirable, but oftentimes there are time/resource constraints. In such cases, it is essential to do an impact analysis of the changes to identify the areas of the software that have the highest probability of being affected by the change and that have the highest impact on users in case of malfunction, and to focus testing around those areas. Due to the scale and importance of regression testing, more and more companies and projects are adopting regression test automation tools.

LEVELS APPLICABLE TO Regression testing can be performed during any level of testing (Unit, Integration, System, or Acceptance) but it is mostly relevant during System Testing.
Test Plan
Test Plan Definition, Types, Template, and Guidelines

TEST PLAN DEFINITION
A Software Test Plan is a document describing the testing scope and activities. It is the basis for formally testing any software/product in a project.

ISTQB Definition
• test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
• master test plan: A test plan that typically addresses multiple test levels.
• phase test plan: A test plan that typically addresses one test phase.
TEST PLAN TYPES
One can have the following types of test plans:
• Master Test Plan: A single high-level test plan for a project/product that unifies all other test plans.
• Testing Level Specific Test Plans: Plans for each level of testing.
  o Unit Test Plan
  o Integration Test Plan
  o System Test Plan
  o Acceptance Test Plan
• Testing Type Specific Test Plans: Plans for major types of testing like Performance Test Plan and Security Test Plan.

TEST PLAN TEMPLATE
The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented. Nevertheless, the following format, which is based on the IEEE standard for software test documentation, provides a summary of what a test plan can/should contain.

Test Plan Identifier:
• Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)

Introduction:
• Provide an overview of the test plan.
• Specify the goals/objectives.
• Specify any constraints.

References:
• List the related documents, with links to them if available, including the following:
  o Project Plan
  o Configuration Management Plan

Test Items:
• List the test items (software/products) and their versions.

Features to be Tested:
• List the features of the software/product to be tested.
• Provide references to the Requirements and/or Design specifications of the features to be tested.

Features Not to Be Tested:
• List the features of the software/product which will not be tested.
• Specify the reasons these features won't be tested.
Approach:
• Mention the overall approach to testing.
• Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].

Item Pass/Fail Criteria:
• Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.

Suspension Criteria and Resumption Requirements:
• Specify the criteria to be used to suspend the testing activity.
• Specify the testing activities which must be redone when testing is resumed.

Test Deliverables:
• List the test deliverables, and links to them if available, including the following:
  o Test Plan (this document itself)
  o Test Cases
  o Test Scripts
  o Defect/Enhancement Logs
  o Test Reports

Test Environment:
• Specify the properties of the test environment: hardware, software, network, etc.
• List any testing or related tools.

Estimate:
• Provide a summary of the test estimates (cost or effort) and/or provide a link to the detailed estimation.

Schedule:
• Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.

Staffing and Training Needs:
• Specify staffing needs by role and required skills.
• Identify training that is necessary to provide those skills, if not already acquired.
Responsibilities:
• List the responsibilities of each team/role/individual.

Assumptions and Dependencies:
• List the assumptions that have been made during the preparation of this plan.
• List the dependencies.

Risks:
• List the risks that have been identified.
• Specify the mitigation plan and the contingency plan for each risk.

Approvals:
• Specify the names and roles of all persons who must approve the plan.
• Provide space for signatures and dates. (If the document is to be printed.)

TEST PLAN GUIDELINES
• Make the plan concise. Avoid redundancy and superfluousness. If you think you do not need a section that has been mentioned in the template above, go ahead and delete that section in your test plan.
• Be specific. For example, when you specify an operating system as a property of a test environment, mention the OS Edition/Version as well, not just the OS Name.
• Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
• Have the test plan reviewed a number of times prior to baselining it or sending it for approval. The quality of your test plan speaks volumes about the quality of the testing you or your team is going to perform.
• Update the plan as and when necessary. An out-dated and unused document stinks and is worse than not having the document in the first place.

Test Case
Test Case Definition, Example, Template, Tips

DEFINITION A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly.
The process of developing test cases can also help find problems in the requirements or design of an application.

TEST CASE TEMPLATE
A test case can have the following elements. Note, however, that a test management tool is normally used by companies, and the format is determined by the tool used.

• Test Suite ID: The ID of the test suite to which this test case belongs.
• Test Case ID: The ID of the test case.
• Test Case Summary: The summary / objective of the test case.
• Related Requirement: The ID of the requirement this test case relates/traces to.
• Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.
• Test Procedure: Step-by-step procedure to execute the test.
• Test Data: The test data, or links to the test data, that are to be used while conducting the test.
• Expected Result: The expected result of the test.
• Actual Result: The actual result of the test; to be filled in after executing the test.
• Status: Pass or Fail. Other statuses can be 'Not Executed' if testing is not performed and 'Blocked' if testing is blocked.
• Remarks: Any comments on the test case or test execution.
• Created By: The name of the author of the test case.
• Date of Creation: The date of creation of the test case.
• Executed By: The name of the person who executed the test.
• Date of Execution: The date of execution of the test.
• Test Environment: The environment (Hardware/Software/Network) in which the test was executed.

TEST CASE EXAMPLE / TEST CASE SAMPLE
• Test Suite ID: TS001
• Test Case ID: TC001
• Test Case Summary: To verify that clicking the Generate Coin button generates coins.
• Related Requirement: RS001
• Prerequisites:
  1. User is authorized.
  2. Coin balance is available.
• Test Procedure:
  1. Select the coin denomination in the Denomination field.
  2. Enter the number of coins in the Quantity field.
  3. Click Generate Coin.
• Test Data:
  1. Denominations: 0.05, 0.10, 0.25, 0.50, 1, 2, 5
  2. Quantities: 0, 1, 5, 10, 20
• Expected Result:
  1. Coins of the specified denomination should be produced if the specified quantity is valid (between 1 and 10).
  2. A message 'Please enter a valid quantity between 1 and 10' should be displayed if the specified quantity is invalid.
• Actual Result:
  1. If the specified quantity is valid, the result is as expected.
  2. If the specified quantity is invalid, nothing happens; the expected message is not displayed.
• Status: Fail
• Remarks: This is a sample test case.
• Created By: John Doe
• Date of Creation: 01/14/2020
• Executed By: Jane Roe
• Date of Execution: 02/16/2020
• Test Environment:
  o OS: Windows Y
  o Browser: Chrome N

WRITING GOOD TEST CASES
• As far as possible, write test cases in such a way that you test only one thing at a time. Do not overlap or complicate test cases. Attempt to make your test cases 'atomic'.
• Ensure that all positive scenarios and negative scenarios are covered.
• Language:
  o Write in simple and easy-to-understand language.
  o Use active voice: Do this, do that.
  o Use exact and consistent names (of forms, fields, etc.).
• Characteristics of a good test case:
  o Accurate: Exacts the purpose.
  o Economical: No unnecessary steps or words.
  o Traceable: Capable of being traced to requirements.
  o Repeatable: Can be used to perform the test over and over.
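A manual test case like TC001 often ends up scripted once automation is in place. A rough sketch (the validate_quantity function is a hypothetical stand-in for the application's validation logic, mirroring TC001's expected results):

```python
# Hypothetical stand-in for the quantity validation exercised by TC001.
ERROR_MESSAGE = "Please enter a valid quantity between 1 and 10"

def validate_quantity(quantity):
    """Accept quantities 1-10; anything else triggers the error message."""
    if isinstance(quantity, int) and 1 <= quantity <= 10:
        return "OK"
    return ERROR_MESSAGE

# Test data from the sample test case: valid and invalid quantities.
for quantity in (1, 5, 10):
    assert validate_quantity(quantity) == "OK"
for quantity in (0, 20):
    assert validate_quantity(quantity) == ERROR_MESSAGE
print("TC001 scripted checks passed")
```

Note how the script stays 'atomic' in the sense described above: it checks one behavior (quantity validation) against both positive and negative data.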
Defect

DEFINITION A software defect (bug) is a condition in a software product which does not meet a software requirement or end-user expectations (which may not be specified but are reasonable). In other words, a defect is an error in coding or logic that causes a program to malfunction or to produce incorrect/unexpected results.

• A program that contains a large number of bugs is said to be buggy.
• Reports detailing bugs in software are known as bug reports. (See Defect Report)
• Applications for tracking bugs are known as bug tracking tools.
• The process of finding the cause of bugs is known as debugging.
• The process of intentionally injecting bugs in a software program, to estimate test coverage by monitoring the detection of those bugs, is known as bebugging.

Software Testing proves that defects exist but NOT that defects do not exist.

CLASSIFICATION
Software Defects/Bugs are normally classified as per:
• Severity / Impact (See Defect Severity)
• Probability / Visibility (See Defect Probability)
• Priority / Urgency (See Defect Priority)
• Related Dimension of Quality (See Dimensions of Quality)
• Related Module / Component
• Phase Detected
• Phase Injected

Related Module / Component
Related Module / Component indicates the module or component of the software where the defect was detected. This provides information on which module/component is buggy or risky.
• Module/Component A
• Module/Component B
• Module/Component C
• …

Phase Detected
Phase Detected indicates the phase in the software development lifecycle where the defect was identified.
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

Phase Injected
Phase Injected indicates the phase in the software development lifecycle where the bug was introduced. Phase Injected is always earlier in the software development lifecycle than the Phase Detected. Phase Injected can be known only after a proper root-cause analysis of the bug.
• Requirements Development
• High Level Design
• Detailed Design
• Coding
• Build/Deployment

Note that the categorizations above are just guidelines, and it is up to the project/organization to decide what kind of categorization to use. In most cases, the categorization depends on the defect tracking tool being used. It is essential that project members agree beforehand on the categorization (and the meaning of each category) so as to avoid arguments, conflicts, and unhealthy bickering later.

NOTE: We prefer the term 'Defect' over the term 'Bug' because 'Defect' is more comprehensive.

Defect Severity
Defect/Bug Severity Definition, Classification, Caution

Defect Severity or Impact is a classification of a software defect (bug) to indicate the degree of negative impact on the quality of software.

ISTQB Definition
• severity: The degree of impact that a defect has on the development or operation of a component or system.
DEFECT SEVERITY CLASSIFICATION
The actual terminologies, and their meaning, can vary depending on people, projects, organizations, or defect tracking tools, but the following is a normally accepted classification:
• Critical: The defect affects critical functionality or critical data. It does not have a workaround. Example: Unsuccessful installation, complete failure of a feature.
• Major: The defect affects major functionality or major data. It has a workaround, but it is not obvious and is difficult. Example: A feature is not functional from one module but the task is doable if 10 complicated indirect steps are followed in another module/s.
• Minor: The defect affects minor functionality or non-critical data. It has an easy workaround. Example: A minor feature that is not functional in one module but the same task is easily doable from another module.
• Trivial: The defect does not affect functionality or data. It does not even need a workaround. It does not impact productivity or efficiency. It is merely an inconvenience. Example: Petty layout discrepancies, spelling/grammatical errors.

Severity is also denoted as:
• S1 = Critical
• S2 = Major
• S3 = Minor
• S4 = Trivial

CAUTION: Defect Severity is one of the most common causes of feuds between Testers and Developers. A typical situation is where a Tester classifies the Severity of a defect as Critical or Major but the Developer refuses to accept that: he/she believes that the defect is of Minor or Trivial severity. Though we have provided some guidelines in this article on how to interpret each level of severity, this is still a very subjective matter and chances are high that one will not agree with the definition of the other.

You can, however, lessen the chances of differing opinions in your project by discussing (and documenting, if necessary) what each level of severity means and by agreeing to at least some standards (substantiating with examples, if necessary).

ADVICE: Go easy on this touchy defect dimension and good luck!

For more details on defects, see Defect.

Defect Probability
Defect Probability / Visibility: Definition, Details

DEFINITION Defect Probability (also Defect Visibility, Bug Probability, or Bug Visibility) indicates the likelihood of a user encountering the defect/bug.
• High: Encountered by all or almost all the users of the feature
• Medium: Encountered by about 50% of the users of the feature
• Low: Encountered by very few or no users of the feature

Defect Probability can also be denoted as a percentage (%).

The measure of Probability/Visibility is with respect to the usage of a feature and not the overall software. Hence, a bug in a rarely used feature can have a high probability if the bug is easily encountered by users of the feature. Similarly, a bug in a widely used feature can have a low probability if the users rarely detect it.

For more details on defects, see Defect.

Defect Life Cycle
Life cycle of a Software Defect/Bug

Defect Life Cycle (Bug Life Cycle) is the journey of a defect from its identification to its closure. The Life Cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the defect tracking tool being used. Nevertheless, the life cycle in general resembles the following:

Status (Alternative Status)
• NEW (OPEN)
• ASSIGNED (OPEN)
• DEFERRED
• DROPPED (REJECTED)
• COMPLETED (FIXED, RESOLVED, TEST)
• REASSIGNED (REOPENED)
• CLOSED (VERIFIED)

Defect Status Explanation
• NEW: The Tester finds a defect and posts it with the status NEW. This defect is yet to be studied/approved. The fate of a NEW defect is one of ASSIGNED, DROPPED and DEFERRED.
• ASSIGNED / OPEN: The Test / Development / Project lead studies the NEW defect and, if it is found to be valid, it is assigned to a member of the Development Team. The assigned Developer's responsibility is now to fix the defect and have it COMPLETED. Sometimes, a defect can be open yet unassigned; in that case, ASSIGNED and OPEN can be different statuses.
• DEFERRED: If a valid NEW or ASSIGNED defect is decided to be fixed in upcoming releases instead of the current release, it is DEFERRED. This defect is ASSIGNED when the time comes.
• DROPPED / REJECTED: The Test / Development / Project lead studies the NEW defect and, if it is found to be invalid, it is DROPPED / REJECTED. Note that the specific reason for this action needs to be given.
• COMPLETED / FIXED / RESOLVED / TEST: The Developer 'fixes' the defect that is ASSIGNED to him or her. Now, the 'fixed' defect needs to be verified by the Test Team, and the Development Team 'assigns' the defect back to the Test Team. A COMPLETED defect is either CLOSED, if fine, or REASSIGNED, if still not fine. If a Developer cannot fix a defect, some organizations may offer the following statuses:
  o Won't Fix / Can't Fix: The Developer will not or cannot fix the defect due to some reason.
  o Can't Reproduce: The Developer is unable to reproduce the defect.
  o Need More Information: The Developer needs more information on the defect from the Tester.
• REASSIGNED / REOPENED: If the Tester finds that the 'fixed' defect is in fact not fixed or only partially fixed, it is reassigned to the Developer who 'fixed' it. A REASSIGNED defect needs to be COMPLETED again.
• CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is indeed fixed and is no more of any concern, it is CLOSED / VERIFIED. This is the happy ending.

Defect Life Cycle Implementation Guidelines
• Make sure the entire team understands what each defect status exactly means. Also, make sure the defect life cycle is documented.
• Ensure that each individual clearly understands his/her responsibility as regards each defect.
• Ensure that enough detail is entered in each status change. For example, do not simply DROP a defect but provide a reason for doing so.
• If a defect tracking tool is being used, avoid entertaining any 'defect related requests' without an appropriate change in the status of the defect in the tool. Do not let anybody take shortcuts. Otherwise, you will never be able to get up-to-date defect metrics for analysis.
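The life cycle above is essentially a state machine, and defect tracking tools typically encode it as a set of allowed status transitions. A simplified sketch (the status names follow this article; the exact transition set is an assumption for illustration):

```python
# Allowed status transitions, following the defect life cycle described above.
TRANSITIONS = {
    "NEW": {"ASSIGNED", "DROPPED", "DEFERRED"},
    "ASSIGNED": {"COMPLETED", "DEFERRED"},
    "DEFERRED": {"ASSIGNED"},
    "COMPLETED": {"CLOSED", "REASSIGNED"},
    "REASSIGNED": {"COMPLETED"},
    "DROPPED": set(),  # terminal
    "CLOSED": set(),   # terminal
}

def move(current, target):
    """Validate a status change; shortcuts like NEW -> CLOSED are rejected."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

# The happy path: NEW -> ASSIGNED -> COMPLETED -> CLOSED.
status = "NEW"
for target in ("ASSIGNED", "COMPLETED", "CLOSED"):
    status = move(status, target)
print(status)
```

Encoding the workflow this way is what lets a tool enforce the guideline above: nobody can take a shortcut, because an illegal status change is simply rejected.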
Defect Report
Defect Report Description, Template, Guidelines

After uncovering a defect (bug), testers generate a formal defect report. The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.

DEFECT REPORT TEMPLATE
In most companies, a defect reporting tool is used and the elements of a report can vary. However, in general, a defect report can consist of the following elements:

• ID: Unique identifier given to the defect. (Usually automated)
• Project: Project name.
• Product: Product name.
• Release Version: Release version of the product. (e.g. 1.2.3)
• Module: Specific module of the product where the defect was detected.
• Detected Build Version: Build version of the product where the defect was detected. (e.g. 1.2.3.5)
• Summary: Summary of the defect. Keep this clear and concise.
• Description: Detailed description of the defect. Describe as much as possible, but without repeating anything or using complex words. Keep it simple but comprehensive.
• Steps to Replicate: Step-by-step description of the way to reproduce the defect. Number the steps.
• Actual Result: The actual result you received when you followed the steps.
• Expected Results: The expected results.
• Attachments: Attach any additional information like screenshots and logs.
• Remarks: Any additional comments on the defect.
• Defect Severity: Severity of the defect. (See Defect Severity)
• Defect Priority: Priority of the defect. (See Defect Priority)
• Reported By: The name of the person who reported the defect.
• Assigned To: The name of the person that is assigned to analyze/fix the defect.
• Status: The status of the defect. (See Defect Life Cycle)
• Fixed Build Version: Build version of the product where the defect was fixed. (e.g. 1.2.3.9)

REPORTING DEFECTS EFFECTIVELY
It is essential that you report defects effectively so that time and effort are not unnecessarily wasted in trying to understand and reproduce the defect. Here are some guidelines:

• Be specific:
  o Specify the exact action: Do not say something like 'Select ButtonB'. Do you mean 'Click ButtonB' or 'Press ALT+B' or 'Focus on ButtonB and click ENTER'? Of course, it's okay to use a generic term like 'Select', but bear in mind that you might just get the fix for the 'Click ButtonB' scenario.
  o In case of multiple paths, mention the exact path you followed: Do not say something like "If you do 'A and X' or 'B and Y' or 'C and Z', you get D." Understanding all the paths at once will be difficult. Instead, say "Do 'A and X' and you get D." You can, of course, mention elsewhere in the report that "D can also be got if you do 'B and Y' or 'C and Z'." [Note: This might be a highly unlikely example but it is hoped that the message is clear.]
  o Do not use vague pronouns: Do not say something like "In ApplicationA, open X, Y, and Z, and then close it." What does the 'it' stand for: 'Z', 'Y', 'X', or 'ApplicationA'?
• Be detailed:
  o Provide more information (not less). Developers may or may not use all the information you provide, but they sure do not want to beg you for any information you have missed.
• Be objective:
  o Do not make subjective statements like "This is a lousy application" or "You fixed it real bad."
  o Stick to the facts and avoid the emotions.
• Reproduce the defect:
  o Do not be impatient and file a defect report as soon as you uncover a defect. Replicate it at least once more to be sure. (If you cannot replicate it again, try recalling the exact test condition and keep trying. However, if you cannot replicate it again after many trials, finally submit the report for further investigation, stating that you are unable to reproduce the defect anymore and providing any evidence of the defect if you had gathered any.)
• Review the report:
  o Do not hit 'Submit' as soon as you write the report. Review it at least once. Remove any typos.

Defect Age
Defect Age Definition, Formula, Elaboration, and Uses

Defect Age can be measured in terms of any of the following:
• Time
• Phases

DEFECT AGE (IN TIME)
Definition
Defect Age (in Time) is the difference in time between the date a defect is detected and the current date (if the defect is still open) or the date the defect was fixed (if the defect is already fixed).
Elaboration
• The 'defects' are confirmed and assigned (not just reported).
• Dropped defects are not counted.
• 'Fixed' means that the defect is verified and closed, not just 'completed' by the developer.
• The difference in time can be calculated in hours or in days.
Defect Age Formula
Defect Age in Time = Defect Fix Date (OR Current Date) – Defect Detection Date
Normally, the average age of all defects is calculated.
Example
If a defect was detected on 01/01/2009 10:00:00 AM and closed on 01/04/2009 12:00:00 PM, the Defect Age is 74 hours.
Uses
• For determining the responsiveness of the development/testing team. The lesser the age, the better the responsiveness.
DEFECT AGE (IN PHASES)
Definition
Defect Age (in Phases) is the difference in phases between the defect injection phase and the defect detection phase.
Elaboration
• 'Defect injection phase' is the phase in the software life cycle where the defect was introduced.
• 'Defect detection phase' is the phase in the software life cycle where the defect was identified.
Defect Age Formula
Defect Age in Phases = Defect Detection Phase – Defect Injection Phase
Normally, the average age of all defects is calculated.
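Both Defect Age formulas can be sketched in a few lines of Python. This is a minimal illustration, not part of any defect-tracking tool; the function names and the ordered phase list are assumptions for the example:

```python
from datetime import datetime

# Ordered life-cycle phases (assumed for illustration); the position of a
# phase in this list gives its phase number.
PHASES = [
    "Requirements Development", "High-Level Design", "Detail Design",
    "Coding", "Unit Testing", "Integration Testing",
    "System Testing", "Acceptance Testing",
]

def defect_age_in_hours(detected, fixed=None):
    """Defect Age (in Time) = Fix Date (or current date) - Detection Date."""
    end = fixed if fixed is not None else datetime.now()
    return (end - detected).total_seconds() / 3600

def defect_age_in_phases(injection_phase, detection_phase):
    """Defect Age (in Phases) = Detection Phase - Injection Phase."""
    return PHASES.index(detection_phase) - PHASES.index(injection_phase)

# The examples from the text:
print(defect_age_in_hours(datetime(2009, 1, 1, 10, 0),
                          datetime(2009, 1, 4, 12, 0)))   # 74.0 hours
print(defect_age_in_phases("Requirements Development",
                           "System Testing"))             # 6
```

Normally such a function would be run over all confirmed, non-dropped defects and the results averaged, as the text describes.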
Example
Let's say the software life cycle has the following phases:
1. Requirements Development
2. High-Level Design
3. Detail Design
4. Coding
5. Unit Testing
6. Integration Testing
7. System Testing
8. Acceptance Testing
If a defect is identified in System Testing and the defect was introduced in Requirements Development, the Defect Age is 6.
Uses
• For assessing the effectiveness of each phase and of any review/testing activities. The lesser the age, the better the effectiveness.
Defect Density
Defect Density Definition, Elaboration, Formula, and Uses:
DEFINITION
Defect Density is the number of confirmed defects detected in a software/component during a defined period of development/operation, divided by the size of the software/component.
The period might be one of the following:
• a duration (say, the first month, the quarter, or the year)
• each phase of the software life cycle
• the whole of the software life cycle
ELABORATION
• The 'defects' are confirmed and agreed upon (not just reported).
• Dropped defects are not counted.
The size is measured in one of the following:
• Function Points (FP)
• Source Lines of Code (SLOC)
DEFECT DENSITY FORMULA
Defect Density = Number of Confirmed Defects / Size of the Software/Component
USES
• For comparing the relative number of defects in various software components so that high-risk components can be identified and resources focused on them.
• For comparing software/products so that the quality of each software/product can be quantified and resources focused on those with low quality.
Defect Detection Efficiency
Defect Detection Efficiency Definition, Formula, Elaboration, Target Value, Uses, and Example:
DEFINITION
Defect Detection Efficiency (DDE) is the number of defects detected during a phase/stage that were injected during that same phase, divided by the total number of defects injected during that phase.
ELABORATION
• Defects:
o Are confirmed and agreed upon (not just reported).
o Dropped defects are not counted.
• Phase:
o Can be any phase in the software development life cycle where defects can be injected AND detected (for example: Requirement, Design, Unit, and Coding).
• Injected:
o The phase a defect is 'injected' in is identified by analyzing the defect. [For instance, a defect can be detected in the System Testing phase but the cause of the defect can be a wrong design. Hence, the injected phase for that defect is the Design phase.]
FORMULA
• DDE = (Number of Defects Injected AND Detected in a Phase / Total Number of Defects Injected in that Phase) x 100%
UNIT
• Percentage (%)
TARGET VALUE
• The ultimate target value for Defect Detection Efficiency is 100%, which means that all defects injected during a phase are detected during that same phase and none are transmitted to subsequent phases.
USES
• For measuring the quality of the processes (process efficiency) within the software development life cycle, by evaluating the degree to which defects introduced during a phase/stage are eliminated before they are transmitted into subsequent phases/stages.
• For identifying the phases in the software development life cycle that are the weakest in terms of quality control, and for focusing on them. [Note: the cost of fixing a defect at a later phase is higher.]
EXAMPLE

Phase                    | Defects Injected | Phase-Specific Detection Activity | Defects Detected | Detected Defects that were Injected in the Same Phase | Defect Detection Efficiency
Requirements Development | 10               | Requirements Review               | 4                | 4                                                     | 40.00% [= 4 / 10]
Design                   | 24               | Design Review                     | 16               | 15                                                    | 62.50% [= 15 / 24]
Coding                   | 155              | Code Review                       | 23               | 22                                                    | 14.19% [= 22 / 155]
Unit Testing             | 0                | Unit Testing                      | 25               | —                                                     | —
Integration Testing      | 0                | Integration Testing               | 30               | —                                                     | —
System Testing           | 0                | System Testing                    | 83               | —                                                     | —
Acceptance Testing       | 0                | Acceptance Testing                | 5                | —                                                     | —
Operation                | 0                | Operation                         | 3                | —                                                     | —

• The DDE of the Requirements phase is 40.00%, which can definitely be bettered; Requirements Review can be strengthened.
• The DDE of the Design phase is 62.50%, which is relatively good but can be bettered.
• The DDE of the Coding phase is only 14.19%, which can be bettered. The DDE for this phase is usually low because most defects get injected during this phase, but one should definitely aim higher by strengthening Code Review. [Note: sometimes, the Coding and Unit Testing phases are combined.]
• The other phases, like Integration Testing, do not have a DDE because defects do not get injected during these phases.
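The DDE percentages in the example can be reproduced with a short Python sketch. The dictionary below is a hypothetical data structure holding (defects injected, detected defects that were injected in the same phase) pairs from the example table:

```python
# (defects injected in phase, defects detected in that same phase),
# taken from the example table above.
phase_data = {
    "Requirements Development": (10, 4),
    "Design": (24, 15),
    "Coding": (155, 22),
    "Unit Testing": (0, 0),
}

def dde(injected, detected_same_phase):
    """DDE = (defects injected AND detected in a phase /
              total defects injected in that phase) x 100%."""
    if injected == 0:
        return None  # no defects injected in this phase, so DDE is undefined
    return detected_same_phase / injected * 100

for phase, (inj, det) in phase_data.items():
    result = dde(inj, det)
    print(f"{phase}: " + (f"{result:.2f}%" if result is not None else "—"))
# Requirements Development: 40.00%
# Design: 62.50%
# Coding: 14.19%
# Unit Testing: —
```

Returning `None` for phases with zero injected defects mirrors the "—" entries in the table: DDE simply does not apply there.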