4. Introduction to Testing
What is testing? Testing is the execution of a system in a real or simulated environment with the intent of finding faults.

Objectives of Testing
* Finding errors - the primary goal
* Trying to prove that the software does not work, thus indirectly verifying that the software meets its requirements

What is Software Testing? Software testing is the process of testing the functionality and correctness of software by running it. Software testing is usually performed for one of two reasons: (1) defect detection, or (2) reliability estimation. It is the process of executing a computer program and comparing the actual behavior with the expected behavior.

What is the goal of Software Testing?
* Demonstrate that faults are not present
* Find errors
* Ensure that all the functionality is implemented
* Ensure the customer will be able to get his work done

Modes of Testing
* Static: Static analysis does not involve actual program execution; the code is examined and tested without being executed. Example: reviews.
* Dynamic: In dynamic testing the code is executed. Example: unit testing.

Testing methods
* White box testing: Uses the control structure of the procedural design to derive test cases.
* Black box testing: Derives sets of input conditions that will fully exercise the functional requirements for a program.
* Integration: Assembling parts of a system.
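To make the white box / black box distinction concrete, here is a small illustrative sketch (the function is_leap_year is a hypothetical example chosen for illustration, not something taken from this material): the black box cases are derived from the stated requirement alone, while the white box cases are chosen to exercise each branch of the implementation.

    # Hypothetical function under test: reports whether a year is a leap year.
    def is_leap_year(year: int) -> bool:
        if year % 400 == 0:
            return True
        if year % 100 == 0:
            return False
        return year % 4 == 0

    # Black box: derived from the requirement only, with no knowledge of the code.
    assert is_leap_year(2024) is True     # ordinary leap year
    assert is_leap_year(2023) is False    # ordinary non-leap year

    # White box: derived from the control structure, covering every branch.
    assert is_leap_year(2000) is True     # divisible by 400
    assert is_leap_year(1900) is False    # divisible by 100 but not 400
    assert is_leap_year(2012) is True     # divisible by 4 only
    assert is_leap_year(2011) is False    # not divisible by 4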

Verification and Validation
* Verification: Are we doing the job right? The set of activities that ensure that software correctly implements a specific function (i.e. the process of determining whether or not the products of a given phase of the software development cycle fulfill the requirements established during the previous phase). Examples: technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility studies, documentation review, database review, algorithm analysis, etc.
* Validation: Are we doing the right job? The set of activities that ensure that the software that has been built is traceable to customer requirements (an attempt to find errors by executing the program in a real environment). Examples: unit testing, system testing, installation testing, etc.

What is a Test Case? A test case is a documented set of steps or activities that are carried out or executed on the software in order to confirm its functionality/behavior for a certain set of inputs.

What is a software error? A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct.

Why do we use testing techniques? Testing techniques can be used in order to obtain structured and efficient testing which covers the testing objectives during the different phases of the software life cycle.

5. Testing Types
There are different types of testing in software testing; the definitions below also cover related testing terminology.

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).

Ad Hoc Testing: Similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Alpha Testing: Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

Automated Testing:
· Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
· The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.

Beta Testing: Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually out of range as defined by the specification. It means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001 (see the short sketch after these definitions).

Branch Testing: Testing in which all branches in the program source code are tested at least once.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Cause Effect Graph: A graphical representation of inputs and the associated output effects which can be used to design test cases.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
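To illustrate Boundary Value Analysis for the range mentioned above (negative 100 to positive 1000), here is a minimal sketch; the function in_valid_range is a hypothetical stand-in for whatever component enforces that specification.

    # Hypothetical component that should accept only values in the range -100..1000.
    def in_valid_range(value: int) -> bool:
        return -100 <= value <= 1000

    # Boundary value analysis: test at, just inside, and just outside each boundary.
    boundary_cases = {
        -101: False,  # just below the lower boundary (out of range)
        -100: True,   # on the lower boundary
        -99: True,    # just inside the lower boundary
        999: True,    # just inside the upper boundary
        1000: True,   # on the upper boundary
        1001: False,  # just above the upper boundary (out of range)
    }

    for value, expected in boundary_cases.items():
        assert in_valid_range(value) == expected, f"boundary case failed for {value}"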

Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Component: A minimal software item for which a separate specification is available.

Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing (a short sketch follows these definitions).

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

Depth Testing: A test that exercises a feature of a product in full detail.

Dynamic Testing: Testing software through executing it.

Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
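As a sketch of Data Driven Testing, the snippet below parameterizes a single test action with values read from an external CSV file; the file name cases.csv and the function apply_discount are hypothetical stand-ins used only for illustration.

    import csv

    # Hypothetical function under test.
    def apply_discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    # The test data lives outside the test code, e.g. in cases.csv:
    #   price,percent,expected
    #   100,10,90.00
    #   80,25,60.00
    def run_data_driven_tests(path: str = "cases.csv") -> None:
        with open(path, newline="") as data_file:
            for row in csv.DictReader(data_file):
                actual = apply_discount(float(row["price"]), float(row["percent"]))
                expected = float(row["expected"])
                assert actual == expected, f"{row}: got {actual}"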

Functional Testing: See also Black Box Testing.
· Testing the features and operational behavior of a product to ensure they correspond to its specifications.
· Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module or piece of functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

Incremental Integration Testing: Continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

Installation Testing: Testing of full, partial, or upgrade install/uninstall processes to confirm that the application installs and operates correctly in its target environment.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Load Testing: Load Tests are end-to-end performance tests under anticipated production load. The primary objective of this test is to determine the response times for various time-critical transactions and business processes and to verify that they are within documented expectations (or Service Level Agreements, SLAs). The test also measures the capability of the application to function correctly under load, by measuring transaction pass/fail/error rates. This is a major test, requiring substantial input from the business, so that anticipated activity can be accurately simulated in a test situation. If the project has a pilot in production, then logs from the pilot can be used to generate 'usage profiles' that can be used as part of the testing process, and can even be used to 'drive' large portions of the Load Test. (A small illustrative sketch follows these definitions; the discussion of load and performance testing continues below.)
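A very small sketch of the load testing idea follows; the endpoint URL, the number of virtual users, and the request count are all invented for illustration, and a real load test would normally be driven by a dedicated tool and a usage profile as described above. The point here is simply measuring response times and pass/fail rates under concurrent load.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example.test/order-status"   # hypothetical endpoint
    USERS = 20                                 # assumed number of concurrent virtual users
    REQUESTS_PER_USER = 10

    def one_user():
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            try:
                with urllib.request.urlopen(URL, timeout=10):
                    pass
                timings.append(time.perf_counter() - start)
            except Exception:
                timings.append(None)  # count as a failed transaction
        return timings

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        all_timings = [t for user_run in pool.map(lambda _: one_user(), range(USERS))
                       for t in user_run]

    failures = sum(1 for t in all_timings if t is None)
    passed = [t for t in all_timings if t is not None]
    print(f"transactions: {len(all_timings)}, failed: {failures}")
    if passed:
        print(f"average response: {sum(passed) / len(passed):.3f}s, worst: {max(passed):.3f}s")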

Load Testing (continued): Load testing must be executed on "today's" production-size database, and optionally with a "projected" database. If some database tables will be much larger in some months' time, then load testing should also be conducted against a projected database. It is important that such tests are repeatable, as they may need to be executed several times in the first year of wide scale deployment, to ensure that new releases and changes in database size do not push response times beyond prescribed SLAs. See also Performance Testing.

Loop Testing: A white box testing technique that exercises program loops.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something, such as the number of bugs per lines of code.

Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or the application does not crash out.

Mutation Testing: A method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources. (A toy illustration follows these definitions.)

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing". Performance Tests determine end-to-end timing (benchmarking) of various time-critical business processes and transactions, while the system is under low load but with a production-sized database. This sets the 'best possible' performance expectation under a given configuration of infrastructure. It also highlights very early in the testing process whether changes need to be made before load testing should be undertaken. For example, a customer search may take 15 seconds in a full-sized database if indexes had not been applied correctly, or if an SQL 'hint' was incorporated in a statement that had been optimized with a much smaller database. Such performance testing would highlight such a slow customer search transaction, which could be remediated prior to a full end-to-end load test.
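To illustrate the mutation testing idea in miniature, the sketch below hand-codes a single 'mutant' of a hypothetical function and checks whether the existing test data detects (kills) it; real mutation testing tools generate many such mutants automatically, which is why proper implementation needs large computational resources.

    # Hypothetical function under test and its existing test data.
    def is_adult(age: int) -> bool:
        return age >= 18

    test_data = [(25, True), (10, False)]

    # A deliberately introduced code change ('bug'): the boundary 18 is excluded.
    def is_adult_mutant(age: int) -> bool:
        return age > 18

    def mutant_is_detected(tests) -> bool:
        # The test data is useful against this mutant only if some case fails on it.
        return any(is_adult_mutant(age) != expected for age, expected in tests)

    print(mutant_is_detected(test_data))   # False: no test exercises the boundary
    test_data.append((18, True))           # strengthen the test data
    print(mutant_is_detected(test_data))   # True: the mutant is now detected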

Positive Testing: Testing aimed at showing software works. Also known as "test to pass".

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements, and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.

Quality Management: That aspect of the overall management function that determines and implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

Smoke Testing: Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Sociability (Sensitivity) Tests: Sensitivity analysis testing can determine the impact of activities in one system on another related system. Such testing involves a mathematical approach to determine the impact that one system will have on another system. For example, web enabling a customer 'order status' facility may impact the performance of telemarketing screens that interrogate the same tables in the same database. The issue with web enabling can be that it is more successful than anticipated and results in many more enquiries than originally envisioned, which loads the IT systems with more work than had been planned.

Static Analysis: Analysis of a program carried out without executing the program.

Static Testing: Analysis of a program carried out without executing the program.

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Stress Tests determine the load under which a system fails, and how it fails. This is in contrast to Load Testing, which attempts to simulate anticipated load. It is important to know in advance if a 'stress' situation will result in a catastrophic system failure, or if everything just "goes really slow". There are various varieties of stress tests, including spike, stepped and gradual ramp-up tests. Catastrophic failures require restarting various infrastructure and contribute to downtime, a stressful environment for support staff and managers, as well as possible financial losses. This test is one of the most fundamental load and performance tests.

Structural Testing: Testing based on an analysis of the internal workings and structure of a piece of software. See also White Box Testing.

System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components. It covers all combined parts of a system and is a black-box type of testing based on the overall requirements specifications.

Testing:
· The process of exercising software to verify that it satisfies specified requirements and to detect errors.
· The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item.
· The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing.

Test Case:
· A Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirements to be tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
· A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Test Harness: A program or test tool used to execute a test. Also known as a Test Driver. (A short sketch follows these definitions.)

Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
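As a concrete, hypothetical illustration of the test case, test suite, and test harness terms, the sketch below uses Python's unittest module: each test method is one test case, the loader groups them into a suite, and the text runner plays the role of a simple harness/driver.

    import unittest

    # Hypothetical unit under test.
    def add(a: int, b: int) -> int:
        return a + b

    class AddTests(unittest.TestCase):
        # Each test method is one test case: inputs, execution, expected outcome.
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        # The loader builds a test suite; the runner acts as a minimal test harness.
        suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
        unittest.TextTestRunner(verbosity=2).run(suite)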

Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. (A small sketch with a stub follows these definitions.)

Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

Unit Testing: The most 'micro' scale of testing; used to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.

White Box Testing: Testing based on an analysis of the internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
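A small sketch of Top Down Testing with a stub follows; the components order_total and the tax calculation are hypothetical and only illustrate testing a higher level component while a lower level one is simulated.

    # High-level component under test: depends on a lower-level tax service.
    def order_total(amount: float, tax_service) -> float:
        return round(amount + tax_service(amount), 2)

    # Stub simulating the not-yet-integrated lower-level component.
    def tax_service_stub(amount: float) -> float:
        return amount * 0.10  # fixed, predictable behaviour for the test

    # Top-down step: test the high-level component against the stub first.
    assert order_total(100.0, tax_service_stub) == 110.0
    # Later the real tax service replaces the stub and the lower levels are tested in turn.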

6. Testing Process

The testing process includes test planning, test design and test execution. The structured approach to testing is:

Test Planning
* Define what to test
* Identify functions to be tested
* Identify test conditions
* Decide manual or automated
* Prioritize to identify the most important tests
* Record document references

Test Design
* Define how to test
* Identify test specifications
* Build detailed test scripts
* Quick script generation
* Documents

Test Execution
* Define when to test
* Build a test execution schedule
* Record test results

Testing Roles: As in any organization or organized endeavor, there are roles that must be fulfilled within any testing organization. These are roles, so it is quite possible that one person could fulfill many roles within the testing organization. The requirement for any given role depends on the size, complexity, goals, and maturity of the testing organization.

Test Lead or Test Manager: The role of the Test Lead / Manager is to effectively lead the testing team. To fulfill this role the Lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles of a manager. What does this mean? The manager must manage and implement or maintain an effective testing process.

Test Architect: The role of the Test Architect is to formulate an integrated test architecture that supports the testing process and leverages the available testing infrastructure. To fulfill this role the Test Architect must have a clear understanding of the short-term and long-term goals of the organization, the resources (both hard and soft) available to the organization, and a clear vision of how to most effectively deploy these assets to form an integrated test architecture.

Test Designer or Tester: The role of the Test Designer / Tester is to design and document test cases, execute tests, record test results, document defects, and perform test coverage analysis. To fulfill this role the designer must be able to apply the most appropriate testing techniques to test the application as efficiently as possible while meeting the test organization's testing mandate.

Test Automation Engineer: The role of the Test Automation Engineer is to create automated test case scripts that perform the tests as designed by the Test Designer. To fulfill this role the Test Automation Engineer must develop and maintain an effective test automation infrastructure using the tools and techniques available to the testing organization. The Test Automation Engineer must work in concert with the Test Designer to ensure the appropriate automation solution is being deployed.

Test Methodologist or Methodology Specialist: The role of the Test Methodologist is to provide the test organization with resources on testing methodologies. To fulfill this role the Methodologist works with Quality Assurance to facilitate continuous quality improvement within the testing methodology and the testing organization as a whole. To this end the Methodologist evaluates the test strategy, provides testing frameworks and templates, and ensures effective implementation of the appropriate testing techniques.

7. Test Development Life Cycle

The software test development life cycle involves the steps below:
1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle.
2. Design Analysis: During the design phase, testers work with developers in determining what aspects of a design are testable and under what parameters those tests work.
3. Test Planning: Test strategy, test plan(s) creation.
4. Test Development: Test procedures, test scenarios, test cases, and test scripts to use in testing the software.
5. Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
6. Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and on whether or not the software tested is ready for release.
7. Retesting the Defects.

A. Test Plan Document

What is a test plan? A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. A test plan should ideally be organization wide, being applicable to all of the organization's software developments.

A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, how the test strategy will be applied to the testing of each item, and describes the test environment. The objective of each test plan is to provide a plan for verification, by testing the software, that the software produced fulfils the functional or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this generally means the Functional Specification.

Elements of test planning
* Establish objectives for each test phase
* Establish schedules for each test activity
* Determine the availability of tools and resources
* Establish the standards and procedures to be used for planning and conducting the tests and reporting test results
* Set the criteria for test completion as well as for the success of each test

What should a test plan contain?

The following are some of the items that might be included in a test plan, depending on the particular project. What to consider for the test plan:
1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

B. Test Case Format

The test case format contains the fields below:

Test Case Id | Execution Type | Test Case Prerequisite | Description | Testing Steps | Expected Result | Actual Value | Status

Structure of a test case: Formal, written test cases consist of three main parts with subsections:

• Introduction/overview contains general information about the test case.
  o Identifier is a unique identifier of the test case for further references.
  o Test case owner/creator is the name of the tester or test designer who created the test or is responsible for its development.
  o Version of the current test case definition.
  o Name of the test case should be a human-oriented title which allows one to quickly understand the test case purpose and scope.
  o Identifier of the requirement which is covered by the test case. This could also be the identifier of a use case or functional specification item.
  o Purpose contains a short description of the test purpose, i.e. what functionality it checks.
  o Dependencies

• Test case activity
  o Testing environment/configuration contains information about the configuration of hardware or software which must be met while executing the test case.
  o Initialization describes actions which must be performed before test case execution is started. For example, we should open some file.
  o Finalization describes actions to be done after the test case is performed. For example, if the test case crashes the database, the tester should restore it before other test cases are performed.
  o Actions: the steps to be done, step by step, to complete the test.
  o Input data description

• Results
  o Expected results contains a description of what the tester should see after all test steps have been completed.
  o Actual results contains a brief description of what the tester saw after the test steps were completed. This is often replaced with a Pass/Fail. Quite often, if a test case fails, a reference to the defect involved should be listed in this column, for example while describing the found defect.
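A filled-in example following this format is shown below; the values are invented purely for illustration and are expressed as a simple Python dictionary so the fields line up with the structure described above.

    # Hypothetical example of a documented test case in the format described above.
    example_test_case = {
        "test_case_id": "TC-001",
        "execution_type": "Manual",
        "prerequisite": "A registered user account exists and the application is installed",
        "description": "Verify that a registered user can log in with valid credentials",
        "testing_steps": [
            "Open the login screen",
            "Enter a valid user name and password",
            "Click the Login button",
        ],
        "expected_result": "The user is taken to the home page",
        "actual_value": "The user is taken to the home page",
        "status": "Pass",
    }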

C. Test Case Execution

After test case execution, results are recorded. In addition to a Pass/Fail status, each defect found is assigned a severity. The five severity levels are:

1. Critical: The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.
2. Major: The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives which will yield the desired result.
3. Average: The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.
4. Minor: The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
5. Exception: The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement. Defects at this level may be deferred or even ignored.

In addition to the defect severity level defined above, a defect priority level can be used with the severity categories to determine the immediacy of repair.

Bug Life Cycle: In the software development process, the bug has a life cycle. The bug attains different states in the life cycle, and it should go through the life cycle to be closed. A specific life cycle ensures that the process is standardized. The life cycle of the bug can be represented diagrammatically; the different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected
10. Closed

Description of the various stages:

1. New: When the bug is posted for the first time, its state will be "NEW". This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the lead of the tester approves that the bug is genuine and changes the state to "OPEN".
3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN".
4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to "TEST". It specifies that the bug has been fixed and is released to the testing team.
5. Verified: Once the bug is fixed and the status is changed to "TEST", the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to "VERIFIED".
6. Deferred: A bug changed to the deferred state is expected to be fixed in the next releases. The reasons for changing the bug to this state have many factors: the priority of the bug may be low, there may be a lack of time for the release, or the bug may not have a major effect on the software.
7. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to "REOPENED". The bug traverses the life cycle once again.
8. Duplicate: If the bug is repeated twice or two bugs mention the same concept of the bug, then one bug status is changed to "DUPLICATE".
9. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to "REJECTED".
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to "CLOSED". This state means that the bug is fixed, tested and approved.

Guidelines on deciding the severity of a bug: Indicate the impact each defect has on testing efforts or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects. A sample guideline for assignment of priority levels during the product test phase includes:

1. Critical / Show Stopper: An item that prevents further testing of the product or function under test can be classified as a Critical Bug. No workaround is possible for such bugs. Examples of this include a missing menu option or a security permission required to access a function under test.
2. Major / High: A defect that does not function as expected/designed, or causes other functionality to fail to meet requirements, can be classified as a Major Bug. A workaround can be provided for such bugs. Examples of this include inaccurate calculations, the wrong field being updated, etc.
3. Average / Medium: Defects which do not conform to standards and conventions can be classified as Medium Bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links which lead to different end points.
4. Minor / Low: Cosmetic defects which do not affect the functionality of the system can be classified as Minor Bugs.

Defect Severity and Defect Priority: This document defines the defect severity scale for determining defect criticality and the associated defect priority levels to be assigned to errors found in software. It is a scale which can be easily adapted to other automated test management tools.
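As an illustrative sketch only, the bug states and severity levels described above could be modeled as follows; the transition table is a simplified assumption for demonstration, not a prescribed workflow.

    from enum import Enum

    class BugState(Enum):
        NEW = "New"
        OPEN = "Open"
        ASSIGN = "Assign"
        TEST = "Test"
        VERIFIED = "Verified"
        DEFERRED = "Deferred"
        REOPENED = "Reopened"
        DUPLICATE = "Duplicate"
        REJECTED = "Rejected"
        CLOSED = "Closed"

    class Severity(Enum):
        CRITICAL = 1  # show stopper, no workaround possible
        MAJOR = 2     # a workaround can be provided
        AVERAGE = 3   # non-conformance to standards and conventions
        MINOR = 4     # cosmetic

    # Simplified, assumed transitions for the life cycle described above.
    ALLOWED_TRANSITIONS = {
        BugState.NEW: {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE, BugState.DEFERRED},
        BugState.OPEN: {BugState.ASSIGN},
        BugState.ASSIGN: {BugState.TEST, BugState.DEFERRED},
        BugState.TEST: {BugState.VERIFIED, BugState.REOPENED},
        BugState.VERIFIED: {BugState.CLOSED},
        BugState.REOPENED: {BugState.ASSIGN},
    }

    def can_move(current: BugState, target: BugState) -> bool:
        return target in ALLOWED_TRANSITIONS.get(current, set())

    assert can_move(BugState.TEST, BugState.VERIFIED)
    assert not can_move(BugState.CLOSED, BugState.OPEN)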


A few simple definitions of testing terminology:

Error - The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. It is manifested as a fault.

Fault - An incorrect step, process, or data definition in a computer program.

Failure - The inability of a system or component to perform its required functions within specified performance requirements.

Debug - To detect, locate, and correct faults in a computer program.

Correctness - The degree to which a system or component is free from faults in its specification, design, and implementation. The degree to which software, documentation, or other items meet specified requirements. The degree to which software, documentation, or other items meet user needs and expectations, whether specified or not.

Static analysis - The process of evaluating a system or component based on its form, structure, content, or documentation.

Dynamic analysis - The process of evaluating a system or component based on its behavior during execution.

Testing - The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs) and to evaluate the features of the software item.

Validation - The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

Verification - The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Formal proof of program correctness.

Why does software have bugs?
• miscommunication or no communication - as to specifics of what an application should or shouldn't do (the application's requirements).
• software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered applications, client-server and distributed applications, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
• programming errors - programmers, like anyone else, can make mistakes.
• changing requirements (whether documented or undocumented) - the end-user may not understand the effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc. If there are many minor changes or any major changes, known and unknown dependencies among parts of the project are likely to interact and cause problems, and the complexity of coordinating changes may result in errors.

What kinds of testing should be considered?
• Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
• unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
• incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
• integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

partial. . or interacting with other hardware. etc. individual applications. Programmers and testers are usually not appropriate as usability testers. functional testing . video recording of user sessions. this type of testing should be done by testers. User interviews. using network communications.testing an application under heavy loads. bogging down systems to a crawl. large complex queries to a database system. regression testing .similar to system testing. if the new software is crashing systems every 5 minutes. Also used to describe such tests as system functional testing while under unusually heavy loads. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans. surveys. Automated testing tools can be especially useful for this type of testing. Clearly this is subjective. or upgrade install/uninstall processes. such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.black-box type testing geared to functional requirements of an application. and will depend on the targeted end-user or customer. such as interacting with a database.final testing based on specifications of the end-user or customer.) system testing . stress testing . For example.testing for 'user-friendliness'.typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. acceptance testing . especially near the end of the development cycle. heavy repetition of certain actions or inputs. involves testing of a complete application environment in a situation that mimics real-world use. usability testing . This type of testing is especially relevant to client/server and distributed systems. applications. It can be difficult to determine how much re-testing is needed.term often used interchangeably with 'load' and 'performance' testing. client and server applications on a network. performance testing .• • • • • • • • • • • modules. the 'macro' end of the test scale. sanity testing or smoke testing . or based on use by end-users/customers over some limited period of time. or systems if appropriate. or corrupting databases. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.re-testing after fixes or modifications of the software or its environment. etc. and other techniques can be used. covers all combined parts of a system. load testing . end-to-end testing .term often used interchangeably with 'stress' and 'load' testing.black-box type testing that is based on overall requirements specifications. install/uninstall testing . the software may not be in a 'sane' enough condition to warrant further testing in its current state.testing of full. input of large numerical values.

• recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
• failover testing - typically used interchangeably with 'recovery testing'.
• security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
• compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
• exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
• ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
• context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
• user acceptance testing - determining if software is satisfactory to an end-user or customer.
• comparison testing - comparing software weaknesses and strengths to competing products.
• alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
• beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
• mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

What should be done after a bug is found? The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:

• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
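As a hedged sketch of what a minimal record covering a subset of these tracking items might look like in code (all field names and values here are invented for illustration):

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class BugReport:
        # A small subset of the tracking items listed above.
        bug_id: str
        application: str
        version: str
        summary: str                      # one-line bug description
        steps_to_reproduce: list
        severity: int                     # e.g. 1 (critical) to 5 (low)
        status: str = "New"
        assigned_to: str = ""
        reported_on: date = field(default_factory=date.today)

    report = BugReport(
        bug_id="BUG-1042",
        application="OrderPortal",        # hypothetical application name
        version="2.3.1",
        summary="Customer search returns a server error for names containing an apostrophe",
        steps_to_reproduce=["Open customer search", "Enter O'Brien", "Press Search"],
        severity=2,
    )
    print(report.status, report.bug_id)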

How can it be known when to stop testing? This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?
• SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense Department to help improve software development processes.
• CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'), developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations such as large U.S. Defense Department contractors. However, many of the QA processes involved are appropriate to any organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings by undergoing assessments by qualified auditors.

Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable,

and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

• ISO = 'International Organization for Standardization'. The ISO 9001:2000 standard (which replaces the previous standard of 1994) concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software. It covers documentation, design, development, production, testing, installation, servicing, and other processes. The full set of standards consists of: (a) Q9001-2000 - Quality Management Systems: Requirements; (b) Q9000-2000 - Quality Management Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO certification does not necessarily indicate quality products - it indicates only that documented processes are followed. Also see http://www.iso.org/ for the latest information. In the U.S., the standards can be purchased via the ASQ web site at http://e-standards.asq.org/. ISO 9126 defines six high level quality characteristics that can be used in software evaluation: functionality, reliability, usability, efficiency, maintainability, and portability.
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard of Software Unit Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).
• Other software development/IT management process assessment methods besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.
