
Software Testing Tutorial

General Testing Terms

Software Testing: 1. The process of operating a product under certain conditions, observing and recording the results, and evaluating some part of the product. 2. The process of executing a program or system with the intent of finding defects/bugs/problems. 3. The process of establishing confidence that a product/application does what it is supposed to do.

Verification: The process of evaluating a product/application/component to determine whether the output of a development phase satisfies the conditions imposed at the start of that phase. Verification is a quality control process used to check whether a product complies with regulations, specifications, or conditions imposed at the start of a development phase. It is often an internal process.

Validation: The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified user requirements. Validation is a quality assurance process of establishing evidence that provides a high degree of assurance that a product accomplishes its intended requirements.

Quality Assurance (QA): A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives; a planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and that products or services conform to specified requirements.

Quality Control (QC): The process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected.

White Box: In this type of testing an internal perspective of the system/product is used to design test cases based on the internal structure. It requires programming skills to identify all flows and paths through the software. The test designer chooses test case inputs to exercise flows/paths through the code and determines the appropriate outputs.

Gray Box: Gray box testing is a software testing method that combines black box and white box testing. It is not purely black box testing, because the tester has knowledge of some of the internal workings of the software under test. In gray box testing, the tester applies a limited number of test cases to the internal workings of the software under test. For the remainder, a black box approach is taken: applying inputs to the software under test and observing the outputs.

Black Box: Black box testing takes an external perspective of the test object to design and execute test cases. With this technique one need not know the test object's internal structure. These tests can be functional or non-functional, though they are usually functional. The test designer selects valid and invalid inputs and determines the correct output.

Test Life Cycle / Software Testing models

This page contains a brief description of the life cycle and the different testing models.

SDLC: The software development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application/product.

Phases of SDLC: System Study -> Feasibility Study -> Requirements -> Design -> Coding -> Testing -> Implementation -> Maintenance

V-Model: The V-Model shows and translates the relationships between each phase of the development life cycle and its associated phase of testing. The V-Model is a software development model considered to be an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape.

Requirements analysis: In this phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform; it does not determine how the software will be designed or built. Usually the users are interviewed and a document called the user requirements document is generated. The user requirements document will typically describe the system's functional, physical, interface, performance, data, security and other requirements as expected by the user. The user acceptance tests are designed in this phase.

System Design: System engineers analyze and understand the business of the proposed system by studying the user requirements document. They work out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue, a resolution is found, and the user requirements document is edited accordingly. The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows and reports for better understanding. Other technical documentation, such as entity diagrams and a data dictionary, is also produced in this phase. The documents for system testing are prepared in this phase.

High-level design: In this phase the architecture is selected. The high-level design document typically consists of the list of modules, the brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.

Low-level design: The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document, or program specifications, will contain the detailed functional logic of the module in pseudo-code; database tables with all elements, including their type and size; all interface details with complete API references; all dependency issues; error message listings; and complete inputs and outputs for the module. The unit test design is developed in this stage.

SDLC - WaterFall, Spiral, Agile and Test Life Cycle Waterfall Model:

The waterfall model is a popular version of the systems development life cycle model for software engineering. Often considered the classic approach to the systems development life cycle, the waterfall model describes a development method that is linear and sequential. Waterfall development has distinct goals for each phase of development. Imagine a waterfall on the cliff of a steep mountain: once the water has flowed over the edge of the cliff and has begun its journey down the side of the mountain, it cannot turn back. It is the same with waterfall development. Once a phase of development is completed, the development proceeds to the next phase and there is no turning back.

The advantage of waterfall development is that it allows for departmentalization and managerial control. A schedule can be set with deadlines for each stage of development, and a product can proceed through the development process like a car in a carwash and, theoretically, be delivered on time. The disadvantage of waterfall development is that it does not allow for much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage.

The model is also called the Classic or Linear Sequential model: a linear and orderly sequence of steps from system study to implementation, where one phase ends before the next begins. It is appropriate if system requirements are stable and well understood. Examples: building well-defined maintenance releases of a product; porting an existing application to a new platform. Biggest risks: the customer gets to see the product only at the end, and the correctness and completeness of earlier phases is a prerequisite for each phase.

Stages: Project Planning -> Requirements Definition -> Design -> Development -> Integration and Testing -> Installation/Acceptance -> Maintenance

Spiral Model: There are four phases in the Spiral Model: Planning, Evaluation, Risk Analysis and Engineering. These four phases are followed iteratively, one after the other, in order to eliminate the problems faced in the waterfall model. Iterating the phases helps in understanding the problems associated with a phase and dealing with those problems when the same phase is repeated next time, planning and developing strategies to be followed while iterating through the phases. The spiral model splits the project into multiple mini-projects: 1. Based on risks such as poor requirements understanding, architecture definition, performance, technologies, etc. 2. The riskier ones come first.

3. Each mini-project could follow a non-risk-based model such as waterfall: identify objectives, alternatives and constraints; identify and resolve risks; choose the right alternative; develop for that iteration; verify it; plan for the next iteration. 4. Appropriate when the challenges are huge and many.

Agile Process: Agile aims to reduce risk by breaking projects into small, time-limited modules or timeboxes ("iterations"), with each iteration being approached like a small, self-contained mini-project lasting only a few weeks. Each iteration has its own self-contained stages of analysis, design, production, testing and documentation. In theory, a new software release could be done at the end of each iteration, but in practice the progress made in one iteration may not be worth a release, in which case it is carried over and incorporated into the next iteration. The project's priorities, direction and progress are re-evaluated at the end of each iteration.

Test life cycle:
1. Test Requirements - Requirement specification documents, functional specification documents, design specification documents (use cases, etc.), use case documents, test traceability matrix for identifying test coverage.
2. Test Plan - Test scope, test environment, the different test phases and test methodologies, manual and automation testing, defect management, configuration management, risk management, evaluation and identification of test and defect tracking tools, test schedule, resource allocation.
3. Test Design - Traceability matrix and test coverage, test scenario identification and test case preparation, test data and test script preparation, test case reviews and approval, baselining under configuration management.
4. Test Environment Setup - Test bed installation and configuration, network connectivity, installation and configuration of all software/tools, coordination with vendors and others.
5. Test Automation - Automation requirement identification, tool evaluation and identification, designing or identifying the framework and scripting, script integration, review and approval, baselining under configuration management.
6. Test Execution and Defect Tracking - Executing test cases and test scripts; capturing, reviewing and analyzing test results; raising defects and tracking them to closure.
7. Test Reports and Acceptance - Test summary reports, test metrics and process improvements made, build release, receiving acceptance.

Other SDLC Models

Waterfall with Prototyping: Adds prototyping to the analysis and design phases - preview and validation of the user interface by the customer, preview and validation of complex workflows by the customer, mitigation of technical risks, feasibility checks. Biggest risks: the correctness and completeness of earlier phases is a prerequisite for each phase; risks related to complex functionality and large project size remain; prototyping might not handle all scenarios.

Evolutionary Prototyping Methodology: 1. Here the system concept evolves during the project: develop a prototype, show it to the customer, get their feedback, refine the prototype, and repeat until the prototype is good; then complete any pending work and release the product. 2. Appropriate when requirements are rapidly changing, the customer cannot commit to a set of requirements, or there is poor understanding of the application area (by customer or developer) or of the technology, architecture or algorithms. 3. Problems: effort, size and cost cannot be predicted, and it demands good engineering to keep every iteration sustainable.

Rapid Application Development: 1. A modified waterfall model that emphasizes an extremely short development life cycle, driven by tools which achieve rapid development: Joint Requirements Planning, Joint Application Development, Construction, Cutover. The cost of rework is very low, so the design and testing phases are shortened. 2. Appropriate if the requirements can be realized using the tools, e.g. CASE tools (such as VB, Borland Delphi, PowerBuilder, etc.).

Product Lifecycle Models

1. A modification of the iterative model: multiple cycles of development, adding newer features in every cycle. Requirements are driven by the market, not by specific customers, and are influenced by competing products, the present customer base as well as targeted customers. 2. Each cycle builds on the work products of earlier cycles: stable architecture and standard development practices; requirements gathering and analysis is more tedious; the HLD and architecture are inherited from earlier cycles; detailed design and CUT are combined into one stage; system testing is elaborate; customer testing is more tedious (alpha and beta). 3. Biggest risks and challenges: conflicting requirements from a diverse customer base, and the need to change rapidly to adapt to changing market pressures.

Software Testing Types

Below are some of the important types of testing.

Unit Testing: Testing of individual software components or groups of related components; testing conducted to evaluate whether components pass data and control correctly to one another. A work unit may be a code (program) module, a function, a sub-program or a stored procedure (SP).
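A unit test exercises one such work unit in isolation. As a minimal sketch (the `apply_discount` function below is a hypothetical work unit invented for illustration, not part of this tutorial):

```python
import unittest

# Hypothetical work unit under test: a small pricing function.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountUnitTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200, 10), 180)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.5, 0), 99.5)

    def test_rejects_invalid_percent(self):
        # A failure at this level points directly at the work unit.
        with self.assertRaises(ValueError):
            apply_discount(100, 150)
```

Such a test file is typically run with `python -m unittest`; because only one unit is involved, any failure pinpoints the fault immediately.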

Unit testing is done to ensure that the work units which make up the entire work product are working properly; it is easier to pinpoint and correct faults at the unit level. Unit test cases are part of the unit test plan.

Integration Testing:

Integration testing is the activity of software testing in which individual software modules are combined and tested as a group: testing in which software components, hardware components or both are combined and tested to evaluate the interaction between them.

Functional Testing: Functional testing is performed to verify that the product/application meets the intended specifications and the functional requirements mentioned in the documentation. Functional tests are written from a user's perspective; these tests confirm that the system does what the users are expecting it to do. Both positive and negative test cases are performed to verify that the product/application responds correctly. Functional testing is critically important for the product's success, since it is the customer's first opportunity to be disappointed.

System Testing: System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing and, as such, should require no knowledge of the inner design of the code or logic. System testing is performed on the entire system with reference to a Functional Requirement Specification (FRS) and/or a System Requirement Specification (SRS).

Structural Testing: In structural testing, a white box testing approach is taken, with focus on the internal mechanism of a system or component. Types of structural testing: branch testing, path testing, statement testing.

Regression Testing: When a defect found in verification is fixed, we need to verify that 1) the fix was done correctly and 2) the fix doesn't break anything else. This is called regression testing. Regression testing needs to be performed to ensure that the reported errors are indeed fixed, and also that the fixes made to the application do not cause new errors to occur.

Selective testing of a system or component to verify that modifications have not caused unintended effects: a system that fails after the modification of a component is said to regress. In regression testing, the integration and system tests are rerun to capture such failures. Regression testing is usually planned for components that are dependent, risky or widely used.

Retesting: Retesting means executing the same test case after fixing the bug, to confirm the bug fix.

Negative Testing: Negative testing is testing the application beyond and below its limits. For example, if the requirement is to check for a name (characters): 1) we can try to check with numbers; 2) we can enter some ASCII characters; 3) we can first enter some numbers and then some characters; 4) if the name should have some minimum length, we can check beyond that length.

Performance Testing: A performance test exercises the product/application with respect to its various time-critical functions; it is related to benchmarking those functions with respect to time, and is performed under a considerable production-sized setup. Performance tests determine the end-to-end timing (benchmarking) of various time-critical business processes and transactions while the system is under low load, but with a production-sized database. This sets the 'best possible' performance expectation under a given configuration of infrastructure. It can also serve to validate and verify other quality attributes of the system, such as scalability (measurable or quantifiable), reliability and resource usage. Under performance testing we define the essential system-level performance requirements that will ensure the robustness of the system; these are defined in terms of key behaviors of the system and the stress conditions under which the system must continue to exhibit those key behaviors.
Some examples of performance parameters (in a patient monitoring system, a healthcare product) are: 1. Real-time parameter numeric values match the physiological inputs. 2. Physiological input changes cause parameter numeric and/or waveform modifications on the display within xx seconds.

3. The system shall transmit the numeric values frequently enough to attain an update rate of x seconds or shorter at a viewing device.

Stress Testing: 1. Stress tests determine the load under which a system fails, and how it fails. 2. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. 3. A graceful degradation under load leading to non-catastrophic failure is the desired result. Often stress testing is performed using the same process as performance testing, but employing a very high level of simulated load. Some examples of stress parameters (in a patient monitoring system, a healthcare product) are: 1. a patient admitted for 72 hours, with all 72 hours of data available for all parameters (trends); 2. repeated admit/discharge (patient connection and disconnection); 3. continuous printing; 4. a continuous alarming condition.

Load Testing: Load testing is the activity in which the anticipated load is applied to the system, increasing the load gradually and checking when the performance starts to degrade. Load tests are end-to-end performance tests under anticipated production load. The primary objective of this test is to determine the response times for various time-critical transactions and business processes. Some of the key measurements for a web based application include: 1. How many simultaneous users can the web site support? 2. How many simultaneous transactions can the web site support? 3. Page load timing under various traffic conditions. 4. Finding the bottlenecks.

Ad hoc Testing: Ad hoc testing is a commonly used term for software testing performed without planning and documentation. The tests are intended to be run only once, unless a defect is discovered.

Exploratory Testing: Exploratory testing is a method of manual testing that is described as simultaneous learning, design and execution.
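The load testing idea above (apply the anticipated load, increase it gradually, and watch response times) can be sketched with Python threads; the transaction here is a hypothetical stand-in for a real time-critical operation:

```python
import threading
import time

def transaction():
    """Hypothetical stand-in for a time-critical business transaction."""
    time.sleep(0.01)  # pretend the server does 10 ms of work

def run_load(users):
    """Apply `users` simultaneous transactions and return the slowest
    observed response time in seconds."""
    timings = []
    lock = threading.Lock()

    def worker():
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        with lock:  # timings list is shared between threads
            timings.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(timings)

# Increase the load step by step and watch for response-time degradation.
for users in (1, 10, 50):
    print(f"{users:3d} users -> worst response {run_load(users):.3f}s")
```

In a real load test the simulated users would be generated by a dedicated tool against a production-sized environment; the principle of ramping load while recording response times is the same.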

Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

Sanity Testing: A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep: testing a few functions/parameters and checking all their main features. Alternatively, one can initially test the overall application (all features) to check whether the application is acceptable in terms of availability and usability. Sanity testing is done by a test engineer.

Smoke Testing: In the software industry, smoke testing is a wide and shallow approach whereby all areas of the application are tested without going too deep. Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing is done by developers or white box engineers.

Soak Testing: Soak testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. For example, a system may behave exactly as expected when tested for 1 hour; however, when it is tested for 3 hours, problems such as memory leaks cause the system to fail or behave unpredictably.

Compatibility Testing: Compatibility testing is done to check that the system/application is compatible with its working environment. For example, if it is a web based application then browser compatibility is tested; if it is an installable application/product then operating system compatibility is tested. Compatibility testing verifies that your product functions correctly on a wide variety of hardware, software and network configurations. Tests are run on a matrix of platform hardware configurations, including high end, core market and low end.
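The smoke testing approach described above (wide and shallow, one quick check per area) can be sketched as follows; the `AppBuild` class is a hypothetical stand-in for a real build under test:

```python
# Hypothetical build under test: a module exposing a few major entry points.
class AppBuild:
    def start(self):
        return True

    def login(self, user):
        return user == "demo"

    def main_screen(self):
        return "dashboard"

def smoke_test(app):
    """Wide and shallow: touch every major area once, stop at the first failure."""
    checks = [
        ("application starts", lambda: app.start()),
        ("demo user can log in", lambda: app.login("demo")),
        ("main screen loads", lambda: app.main_screen() == "dashboard"),
    ]
    for name, check in checks:
        if not check():
            return f"SMOKE FAIL: {name}"
    return "build is stable - promote to further testing"

print(smoke_test(AppBuild()))
```

If any check fails, the build is rejected without running the deeper (sanity, functional, regression) suites, which is exactly the gatekeeping role a smoke test plays.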

Alpha Testing: Testing performed by actual customers at the developer's site.

Beta Testing: Testing performed by actual customers at their own site (the customer's site).

Acceptance Testing or User Acceptance Testing (UAT): Formal testing conducted to enable a user, customer or other authorized entity to determine whether to accept a system or component.

Static Testing: The intention to find defects/bugs without executing the software or the code is called static testing. Examples: review, walkthrough, CIP (Code Inspection Procedure). Static testing is a form of software testing where the software isn't actually executed. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm or document. It is primarily syntax checking of the code and/or manually reviewing the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.

Dynamic Testing: Dynamic testing tests software by executing it; functional testing is the common example.

i18n / Internationalization: Here 18 refers to the number of letters between the first i and the last n in "internationalization". Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes: designing and coding a product so it can perform properly when it is modified for use in different languages and locales.

L10n / Localization: Here 10 refers to the number of letters between the first L and the last n in "localization". Localization is the process of adapting internationalized software for a specific region or language by adding locale-specific components and translating text.

Localization is the process of adapting a globalized application to a particular culture/locale. Localizing an application requires a basic understanding of the character sets typically used in modern software development and an understanding of the issues associated with them. Localization includes the translation of the application user interface and adapting graphics for a specific culture/locale. The localization process can also include translating any help content associated with the application.
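As an illustrative sketch of the i18n/L10n split: internationalized code never hard-codes user-visible text, it looks strings up by key, while localization supplies a catalog per locale. The catalogs below are invented examples, not real translation data:

```python
# Localization supplies one catalog per locale (illustrative data only).
CATALOGS = {
    "en_US": {"greeting": "Hello", "farewell": "Goodbye"},
    "fr_FR": {"greeting": "Bonjour", "farewell": "Au revoir"},
}

def translate(key, locale="en_US"):
    """Internationalized lookup: resolve a string key for the given locale,
    falling back to English if the locale or key is not yet localized."""
    catalog = CATALOGS.get(locale, CATALOGS["en_US"])
    return catalog.get(key, CATALOGS["en_US"][key])

print(translate("greeting", "fr_FR"))   # Bonjour
print(translate("farewell", "de_DE"))   # falls back to Goodbye
```

The key point is that adding a German locale requires only a new catalog (a localization task), with no change to the lookup code (the internationalization work was already done).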

Test Design Techniques

Some general software documentation terms:

USR: User Requirements Specification - contains the user (customer) requirements received from the user/client.
PS: Product Specification - derived from the user requirements such that they can be implemented in the product; a high-level product requirement.
DRS: Design Requirement Specification - design requirements, for hardware components.
SRS: Software Requirement Specification - low-level requirements derived from the PS, for software-related components.
SVP: Software Verification Procedure - written from the SRS; these are the actual test cases.
DVP: Design Verification Procedure - written from the DRS; more design-oriented test cases.

Some of the test design techniques are described below.

Test Design Technique 1 - Fault Tree Analysis

Fault tree analysis is useful both in designing new products/services (test cases for new components) and in dealing with identified problems in existing products/services. Fault tree analysis (FTA) is a failure analysis in which the system is analyzed using boolean logic.

Test Design Technique 2 - Boundary Value Analysis: Boundary value analysis is a software testing design technique in which tests are designed to include representatives of boundary values; the test cases are developed around the boundary conditions. A common example: if a text box (named username) supports 10 characters, then we can write test cases which contain 0, 1, 5, 10 and more than 10 characters.

Test Design Technique 3 - Equivalence Partitioning: Equivalence partitioning is a software test design technique that divides the input data of a software unit into partitions of data from which test cases can be derived.

Test Design Technique 4 - Orthogonal Array Testing: This technique can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. It is an old and proven technique: orthogonal array testing was introduced for the first time by Plackett and Burman in 1946 and was implemented by G. Taguchi in 1987. It is a mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]

Test Design Steps - test case writing steps: 1. Read and analyze the requirement. 2. Write the related prerequisites and information steps if required (e.g. if some setting should already have been done, or some browser should have been selected). 3. Write the procedure (steps to perform some connection or configuration); this will contain the majority of the steps to reproduce if this test case fails. 4. Write a step to capture the tester's input/record; this is used for objective evidence. 5. Write the verify step (usually the expected result).

What is a Test Case? It is a document which specifies the test inputs, events and expected results developed for a particular objective, so as to evaluate a particular program path or to verify compliance with a specific requirement based on the test specification.

Areas for Test Design - below are some of the areas in test design.
o Deriving Test Cases from Use Cases
o Deriving Test Cases from Supplementary Specifications
  - Deriving Test Cases for Performance Tests
  - Deriving Test Cases for Security / Access Tests
  - Deriving Test Cases for Configuration Tests
  - Deriving Test Cases for Installation Tests
  - Deriving Test Cases for other Non-Functional Tests
o Deriving Test Cases for Unit Tests
  - White-box tests
  - Black-box tests
o Deriving Test Cases for Product Acceptance Tests
o Building Test Cases for Regression Tests
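The boundary value analysis example given earlier (a username text box that supports 10 characters) can be sketched as executable checks; the validator itself is a hypothetical stand-in:

```python
# Hypothetical validator under test: a text box accepting 1-10 characters.
def accepts_username(name):
    return 1 <= len(name) <= 10

# Test inputs chosen around the boundaries: 0, 1, 5, 10 and >10 characters.
cases = {
    "": False,             # 0 chars - below the lower boundary
    "a": True,             # 1 char - on the lower boundary
    "abcde": True,         # 5 chars - middle of the valid partition
    "abcdefghij": True,    # 10 chars - on the upper boundary
    "abcdefghijk": False,  # 11 chars - just above the upper boundary
}
for value, expected in cases.items():
    assert accepts_username(value) == expected, repr(value)
print("all boundary cases pass")
```

Note how equivalence partitioning justifies picking only one value (5 characters) from the middle of the valid partition: every input in that partition is expected to behave the same way.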

The contents of a test case are:
* Prerequisites
* Procedures
* Information if required
* Tester input/record
* Verify step

Please refer to the Software Test Templates area for a Test Case Template.

Types of Test Cases: Test cases are often categorized or classified by the type of test or requirement for test they are associated with, and will vary accordingly. Best practice is to develop at least two test cases for each requirement for test: 1. a test case to demonstrate that the requirement has been achieved, often referred to as a positive test case; 2. another test case, reflecting an unacceptable, abnormal or unexpected condition or data, to demonstrate that the requirement is only achieved under the desired condition, referred to as a negative test case.
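The positive/negative pairing above can be sketched for a simple name-field requirement; the function under test is hypothetical:

```python
# Hypothetical function under test: accepts alphabetic names only.
def save_patient_name(name):
    if not name.isalpha():
        raise ValueError("name must contain letters only")
    return name.title()

# Positive test case: the requirement is achieved with valid input.
assert save_patient_name("smith") == "Smith"

# Negative test case: invalid input must be rejected, demonstrating the
# requirement holds only under the desired condition.
try:
    save_patient_name("sm1th")
    raise AssertionError("invalid name was accepted")
except ValueError:
    pass

print("positive and negative cases pass")
```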

Software Testing Process

One of the main processes involved in software testing is the preparation of the test plan. The contents of a test plan would include the following:

* Purpose
* Scope
* References
* Resources required
* Roles and responsibilities
* Test environment and tools
* Test schedule
* Types of testing involved
* Entry/exit criteria
* Defect tracking
* Issues/risks/assumptions/mitigations
* Deviations

Please refer to the Software Test Templates area for a Test Plan Template.

Process involved in test case design: 1. Review the requirements. 2. Provide comments on the requirements. 3. Fix the review comments. 4. Baseline the requirements document. 5. Prepare test cases with respect to the baselined requirements documents. 6. Send the test cases for review. 7. Fix the comments on the test cases. 8. Baseline the test case document. 9. If there are any updates to the requirements, update the requirements document. 10. Send the updated requirements document for review. 11. Fix any comments, if received. 12. Baseline the requirements document. 13. Update the test case document with respect to the latest baselined requirements document. Please refer to the Software Test Templates area for a Test Case Template.

Traceability Matrix: A traceability matrix is a matrix which associates the requirements with their work products, the test cases. It can also be used to associate use cases with requirements. The advantage of traceability is to ensure the completeness of requirements: every test case should be associated with a requirement, and every requirement should have one or more associated test cases.

Testing Review - Dos and Don'ts

Author:

Be open to feedback. Observe the review process. Don't get discouraged by review feedback; it is constructive. Do not defend yourself or the product that you produced; remember that no product is defect-free.


Reviewer:

Review the product, not the producer. Do not accuse. Do not resolve the issues during the review. Avoid discussions of style; stick to technical correctness.
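The traceability matrix described earlier in this section can be sketched as a simple mapping from requirements to test cases, with a completeness check; all IDs are illustrative:

```python
# A traceability matrix as a mapping from requirement IDs to the test
# cases that cover them (the IDs below are invented examples).
matrix = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # a gap: this requirement has no covering test case
}

def coverage_gaps(matrix):
    """Return the requirements that have no associated test case."""
    return [req for req, tests in matrix.items() if not tests]

print("uncovered requirements:", coverage_gaps(matrix))  # ['REQ-003']
```

In practice the matrix lives in a test management tool or spreadsheet, but the completeness rule is the same: an empty row means a requirement that testing would silently miss.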

Tester's Role

The various roles and responsibilities of a Tester or Senior Software Tester:

* Requirement analysis - in case of doubt, check with global counterparts or a clinical specialist.
* Test design - always clarify; never assume and write.
* Review of other test design documents.
* Approving test design documents.
* Test environment setup.
* Test results gathering.
* Evaluation of any tools if required - e.g. QTP, the Valgrind memory profiling tool.
* Mentoring of new joinees and helping them ramp up.

Finding Defects: Testers need to identify two types of defects: variance from specifications (a defect from the perspective of the builder of the product) and variance from what is desired (a defect from a user or customer perspective).

Testing Constraints: Anything that inhibits the testers' ability to fulfill their responsibilities is a constraint. Constraints include: limited schedule and budget; lacking or poorly written requirements; changes in technology; limited tester skills.

The various roles and responsibilities of a Software Test Lead: The role of the Test Lead is to effectively lead the testing team. To fulfill this role, the Lead must understand the discipline of testing and how to effectively implement a testing process while fulfilling the traditional leadership roles. This means that the Lead must manage and implement or maintain an effective testing process, which involves creating a test infrastructure that supports robust communication and a cost-effective testing framework. The Test Lead is responsible for:

* Defining and implementing the role testing plays within the organizational structure.
* Defining the scope of testing within the context of each release / delivery.
* Deploying and managing the appropriate testing framework to meet the testing mandate.
* Implementing and evolving appropriate measurements and metrics:
  o To be applied against the product under test.
  o To be applied against the testing team.
* Planning, deploying, and managing the testing effort for any given engagement / release.
* Managing and growing the testing assets required to meet the testing mandate:
  o Team members
  o Testing tools
  o Testing process
* Retaining skilled testing personnel.

The various Roles and Responsibilities of a Software Test Manager

* Manage and deliver testing projects with multi-disciplinary teams while respecting deadlines.
* Optimize and increase testing team productivity by devising innovative solutions or improving existing processes.
* Identify, recruit, and retain strong technical team members.
* Bring a client-based approach to all aspects of software testing and client interactions when required.
* Develop and manage the organizational structure to provide efficient and productive operations.
* Provide issue resolution to all direct reports, escalating issues when necessary with appropriate substantiation and suggestions for resolution.
* Provide required expertise to all prioritization, methodology, and development initiatives.
* Assist direct reports in setting organizational goals and objectives, and provide annual reviews with reporting of results to HR and the executive team.
* Work with Product Management and other teams to meet organizational initiatives.
* Promote customer orientation by organizing and motivating development personnel.

The Test Manager must understand how testing fits into the organizational structure - in other words, clearly define its role within the organization.
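The "measurements and metrics" mentioned above - applied against both the product under test and the testing team - can be illustrated with a small sketch. This is not from the tutorial; the counts, formulas, and thresholds below are invented for the example.

```python
# Illustrative test-progress metrics of the kind a Test Lead might track.
# All figures here are invented sample data.

executed, passed, total_planned = 80, 72, 100
defects_found, kloc = 18, 12.5  # defects found, and product size in thousands of lines of code

execution_progress = executed / total_planned  # how much of the planned testing has run
pass_rate = passed / executed                  # quality of what has been run
defect_density = defects_found / kloc          # a common product-quality metric

print(f"Executed {execution_progress:.0%}, pass rate {pass_rate:.0%}, "
      f"defect density {defect_density:.2f} defects/KLOC")
```

In practice these numbers would be pulled from the test management and defect tracking tools rather than hard-coded.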
Challenges in People Relationships in Testing

The top ten people challenges have been identified as:
* Training in testing
* Relationship building with developers
* Using tools
* Getting managers to understand testing
* Communicating with users about testing
* Making the necessary time for testing
* Testing "over-the-wall" software
* Trying to hit a moving target
* Fighting a lose-lose situation
* Having to say no

*According to the book "Surviving the Top Ten Challenges of Software Testing: A People-Oriented Approach" by William Perry and Randall Rice.

Software Testing Inputs, Process, Outputs

Testing IPO (Inputs, Process and Outputs)

Inputs:
* Plan - scope, schedule.
* Data - clean, baselined, initial, to be transacted, sufficient, protected.
* Environment - hardware, OS as close to the real world as possible, web servers, browsers, display settings, drivers, option packs/service packs/fixes.
* Test Cases - unique ID, description, expected results, exit criteria.
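A test case carrying the fields listed above (unique ID, description, expected results, exit criteria) can be sketched as a simple record. This is a minimal illustration; the field names and sample values are invented.

```python
# A test case record with the input fields listed above.
# Field names and sample values are illustrative, not from any real project.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str          # unique ID
    description: str      # what the test does
    expected_result: str  # what should happen
    exit_criteria: str    # when this test can be considered done

tc = TestCase("TC-01", "Login with valid credentials",
              "User lands on the dashboard", "No open severity-1 defects")
print(tc.case_id, "-", tc.description)
```

Keeping test cases in a structured form like this makes them easy to review, baseline, and trace back to requirements.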

Process:
1. Verify that the work product is taken from the appropriate work area for testing.
2. Verify that the test cases are reviewed and approved.
3. Verify that you are working in the test environment (not the development environment).
4. Execute the work product.
5. Apply test scenarios.
6. Observe actual results and behavior.
7. Record the actual results and observations.
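The execution steps above - apply each scenario, observe the actual result, and record it - can be sketched as a small loop. This is an assumed illustration: `run_scenario` is a hypothetical stub standing in for whatever actually drives the system under test.

```python
# Sketch of the execution process: run each scenario, compare actual vs.
# expected, and record the observation. All IDs and results are invented.

def run_scenario(scenario):
    # Hypothetical stub: a real implementation would exercise the application.
    # Here TC-02 is hard-coded to misbehave so the comparison step is visible.
    return scenario["expected"] if scenario["id"] != "TC-02" else "error page"

scenarios = [
    {"id": "TC-01", "expected": "dashboard shown"},
    {"id": "TC-02", "expected": "validation message shown"},
]

results = []
for s in scenarios:
    actual = run_scenario(s)
    results.append({"id": s["id"], "expected": s["expected"],
                    "actual": actual, "passed": actual == s["expected"]})

print([r["id"] for r in results if not r["passed"]])  # → ['TC-02']
```

Recording expected alongside actual for every scenario is what makes the later test report and defect log possible.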

Outputs:
* Test Results - actual results vs. expected results.
* Defect Log - priority, severity, symptoms, description, date detected.
* Test Report - summary of testing done, Go/No-Go recommendation, phase/iteration, date.
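The outputs above can be sketched as a defect log feeding a one-line summary with a Go/No-Go recommendation. This is a minimal illustration; the field names, sample defects, and the "any open high-severity defect blocks release" rule are all invented for the example.

```python
# Sketch of the testing outputs: a defect log and a summary test report.
# Entries and the Go/No-Go rule are illustrative only.
from datetime import date

defect_log = [
    {"id": "DEF-1", "priority": "P1", "severity": "High",
     "symptom": "Crash on save", "date_detected": date(2024, 1, 15)},
    {"id": "DEF-2", "priority": "P3", "severity": "Low",
     "symptom": "Tooltip typo", "date_detected": date(2024, 1, 16)},
]

# Assumed rule for this sketch: any open high-severity defect means No-Go.
open_high = sum(1 for d in defect_log if d["severity"] == "High")
recommendation = "No-Go" if open_high > 0 else "Go"

print(f"Defects logged: {len(defect_log)}, high severity: {open_high}, "
      f"recommendation: {recommendation}")
```

Real projects would derive the recommendation from agreed exit criteria in the test plan rather than a single hard-coded rule.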