
Testware:

Hardware development engineers produce hardware, and software development engineers produce software. In the same way, software test engineers produce testware. Testware is produced by both verification and validation testing methods, and it includes test cases, test plans, test reports, and similar artifacts. Like software, testware should be placed under the control of a configuration management system, saved, and faithfully maintained. Also like software, testware has significant value because it can be reused. The tester's job is to create testware that has a specified lifetime and is a valuable asset to the company.

How do you create a Test Strategy?


The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy, and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria, and a risk assessment.

Inputs for this process:
- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- The testing methodology. This is based on known standards.
- The functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.

Contents of the Test Strategy document: the sections in the test strategy document cover these same inputs, from the required hardware, software, and test tools through roles, responsibilities, and schedule constraints, the testing methodology, the functional and technical requirements, and the system limitations.

Outputs of this process:
- An approved and signed-off test strategy document and test plan, including test cases.
- Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.
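As a rough, purely illustrative sketch (none of these field names come from the text above; they are invented to mirror the section list), the skeleton of such a document could be captured in a simple template:

```python
# A minimal, hypothetical skeleton of a test strategy document.
# Field names are illustrative only, not from any standard.
test_strategy = {
    "environment": {
        "hardware": [],        # required hardware components
        "software": [],        # required software components
        "test_tools": [],      # e.g. defect tracker, automation tools
    },
    "roles_and_responsibilities": {},  # resource -> responsibility
    "schedule_constraints": [],        # from man-hours and schedules
    "methodology": "",                 # based on known standards
    "requirements": {
        "functional": [],
        "technical": [],
        "limitations": [],     # what the system cannot provide
    },
    "outputs": {
        "test_plan": None,     # approved, signed-off plan with test cases
        "open_issues": [],     # issues needing project-management resolution
    },
}
```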

Responsibilities of a Test Lead


Responsibilities of a test lead include:
- Prepare the software test plan.
- Check/review the test case documents (system, integration, and user acceptance) prepared by test engineers.
- Analyze requirements during the requirements analysis phase of the project.
- Keep track of new requirements in the project.
- Forecast/estimate the project's future requirements.
- Arrange the hardware and software required for the test setup.
- Develop and implement test plans.
- Escalate issues about project requirements (software, hardware, resources) to the project manager/test manager.
- Escalate issues in the application to the client.
- Assign tasks to all testing team members and ensure that all of them have sufficient work in the project.
- Organize meetings and prepare the agenda, for example for the weekly team meeting.
- Attend the regular client call and discuss the weekly status with the client.
- Send status reports (daily, weekly, etc.) to the client.
- Hold frequent status-check meetings with the team.
- Communicate with the client by chat, email, etc. (if required).
- Act as the single point of contact between development and testers for iterations, testing, and deployment activities.
- Track and report on testing activities, including test results, test case coverage, required resources, defects discovered and their status, performance baselines, etc.
- Assist with any applicable maintenance of the tools used in testing and resolve issues, if any.
- Ensure that the content and structure of all testing documents/artifacts are documented and maintained.
- Document, implement, monitor, and enforce all testing processes and procedures as per the standards defined by the organization.
- Review the various reports prepared by test engineers.
- Log project-related issues in the defect tracking tool identified for the project.
- Check for timely delivery of the different milestones.
- Identify training requirements (technical and soft skills) and forward them to the project manager.
- Attend the weekly team leader meeting.
- Motivate team members.
- Organize/conduct internal training on various products.

What is software testing methodology?


A software testing methodology is a three-step process of: 1. creating a test strategy; 2. creating a test plan/design; and 3. executing tests. This methodology can be used and molded to your organization's needs. Using this methodology is important in the development and ongoing maintenance of customers' applications.

Test Case Maintenance:


Changes to product features can happen many times during a product development lifecycle. These may be driven by changed customer requirements or design changes, and sometimes arrive as late as customer acceptance tests. Often, making the changes reported by customers is crucial. In such scenarios, the test cases drawn up by test engineers can become obsolete, rendering the whole test planning effort a fruitless exercise. Planning for test case maintenance is critical because:
- Test cases may become redundant due to behavior changes.
- The expected outcome may change for some test cases.
- Additional test cases may need to be added because of altered conditions.

For effective test case maintenance:
1. It is important to keep the test case list updated to reflect changes in product features; otherwise, for the next phase of testing there will be a tendency to discard the current test case list and start afresh.
2. Making the test case list a 'work sheet' is very important, and it should be updated daily. To enable each member of the team to update test cases, the Excel worksheet was kept on a shared drive and the workbook was shared to allow concurrent editing.
3. Keeping track of test case metrics complements defect report metrics and gives a better view of product quality. When you read that there are 30 defects pending and only 50 test cases pending, with 1200 test cases successful, you have better knowledge of product quality.
4. An updated test case list works as proof of testing. In the future, when a defect is reported, it will be possible to validate whether that case was covered during testing or not.
5. The updated test case list can be used by anyone who tests the next version of the product; the same team need not work on the next release, because the test case list matches the 'current' behavior of the application.

Need for a Test Plan and Test Cases:
The test plan identifies the test activities and also defines the objective, approach, and schedule of the intended testing activities, while test cases comprise the test procedure, test conditions, and expected results. Writing a test plan early in the project lifecycle, and having it peer reviewed by Development, generally helps reduce the workload later in the project lifecycle. This allows testers to quickly and unambiguously complete the majority of the testing required, which provides more time for "ad hoc", "real world", and "user scenario" testing of the product. A test case comprises a test condition, an expected result, and a procedure for performing the test. Test cases can be performed either in combination with other test cases or in isolation. A test case is the difference between saying that something seems to be working okay and proving that a set of specific tasks is known to be working correctly.

The test plan describes "what has to be tested" while the test cases describe "how to test"; because of this, both documents are of equal importance in testing.
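As a rough sketch (the class, field names, and statuses below are my own, not from any cited standard), a test case record with the three parts just described, together with the kind of status tally mentioned in point 3 of the maintenance list above, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case: a condition, an expected result, and a procedure."""
    condition: str                 # what situation is being tested
    expected_result: str           # what should happen
    procedure: list[str] = field(default_factory=list)  # steps to perform
    status: str = "pending"        # e.g. "passed", "failed", "pending"

def summarize(cases: list[TestCase]) -> dict[str, int]:
    """Tally test case statuses; complements defect report metrics."""
    summary: dict[str, int] = {}
    for case in cases:
        summary[case.status] = summary.get(case.status, 0) + 1
    return summary

cases = [
    TestCase("login with valid credentials", "user reaches home page",
             ["open login page", "enter valid user/password", "submit"],
             status="passed"),
    TestCase("login with empty password", "error message is shown",
             ["open login page", "leave password blank", "submit"]),
]
print(summarize(cases))  # {'passed': 1, 'pending': 1}
```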

Difference between System Testing and Integration Testing


System testing is high-level testing, while integration testing is lower-level testing. Integration testing is completed first, not system testing; in other words, system testing starts upon completion of integration testing, and not vice versa. For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real-life scenarios in a simulated real-life test environment. The purpose of integration testing is to ensure that the distinct components of the application still work in accordance with customer requirements. The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.
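To make the distinction concrete, here is a minimal, hypothetical sketch; the Cart and PriceService components are invented for illustration. The first pytest-style test exercises the interface between two components, while the second simulates a complete user scenario:

```python
# Hypothetical components, used only to illustrate the two test levels.
class PriceService:
    def price_of(self, item: str) -> float:
        return {"apple": 1.50, "bread": 2.00}[item]

class Cart:
    def __init__(self, prices: PriceService):
        self.prices = prices
        self.items: list[str] = []

    def add(self, item: str) -> None:
        self.items.append(item)

    def total(self) -> float:
        return sum(self.prices.price_of(i) for i in self.items)

def test_integration_cart_uses_price_service():
    """Integration test: exercises the Cart <-> PriceService interface."""
    cart = Cart(PriceService())
    cart.add("apple")
    assert cart.total() == 1.50

def test_system_checkout_scenario():
    """System-style test: simulates a complete real-life scenario."""
    cart = Cart(PriceService())
    for item in ("apple", "bread", "apple"):
        cart.add(item)
    assert cart.total() == 5.00
```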

How objective is your QC?


QC people have a special role in the development process. Part of this role is being involved in different activities throughout the entire development process. From the time when requirements are gathered, through the initial design, and until the end of each iteration, the QC people must observe and gather information about the system under development. This is crucial to the completeness and correctness of the QC activity. A QC engineer can't be expected to jump into the project just before the actual testing starts. She must be aware of the evolution of the requirements and the system itself (including the way it is designed and implemented) in order to create an effective test plan and build correct tests. At the same time, however, it is crucial to maintain the objectivity of the QC people. Since the QC engineer is so involved in the development process, it is easy to forget that if she gets too close to the developers, the result of the QC process might not reflect the real quality of the product. If, for example, something in the requirements is vague, the QC engineer might be tempted to ask the developer for the correct interpretation of the issue. But it is the QC engineer's role to verify that the developer's interpretation of the requirements is correct; how can she use the developer's interpretation to build an objective test? This is a simple example of what can happen when QC and developers are too close. Extensive testing based on what the developers gathered from the requirements is bound to catch only coding errors; logical errors in the functionality of the system are most likely to be left in the code under these circumstances. The way to resolve this conflict is great self-discipline. QC people should be involved in the development, but they should think twice about whom they talk to when they have questions about the requirements of the product. The best person to ask such questions is the person defining the requirements. It is important to note that the QC people should not be completely isolated from the development team. The QC engineer can benefit a great deal from discussing the structure of the code and the design details with the developers; this kind of discussion might give her ideas for more test cases, for example. The difference between discussing the structure of the code and discussing the interpretation of the requirements is that verifying the latter is, in fact, the purpose of the QC process.

Testing standards:
General testing standards for testing any type of application, based on various factors, are listed below:

1. Look & Feel
-> Uniformity in terms of content, title, and position of message boxes (they should be displayed at the center of the user interface (UI)).
-> Enabling and disabling of menu items/icons according to the user's security settings.
-> Ensure menu items are invoked as set in user security. If a service is not available for testing, the corresponding menu item should be disabled.
-> Navigation using the Tab key should be proper across fields and should move from left to right, then top to bottom.
-> The scrolling effect of vertical and horizontal scrollbars should be proper.
-> Alignment of controls should be proper.
-> Spacing between controls should be proper.
-> Ensure uniformity of font (type, size, color).
-> In a multi-line text box, pressing the Enter key should move to the next line.
-> Ensure 'Backspace' and 'Space bar' work properly, wherever applicable.
-> In list/combo boxes, pressing ALT+DOWN should display the list values, and the Down Arrow key should move the selection.
-> The Esc key should activate the Cancel button and the Enter key should activate the OK button.
-> Check the spelling in message boxes, titles, help files, and tooltips.
-> For reports, check the proper display of column headers at different zoom levels.
-> Check that tooltip text is provided for all icons in the UI.

2. Functionality Testing
-> Check the functionality of the application. The entire flow of the application has to be checked.
-> In functionality testing, both positive and negative testing are done.

2.1. Positive Testing
-> Check the positive functionality of the application.
-> Check field validation using positive values within the permissible limits.

2.2. Negative Testing
-> Enter numbers in character fields and vice versa.
-> Enter only numbers in alphanumeric fields.
-> Know the permissible range for each field and check using values exceeding the limits. Use the Equivalence Partitioning and Boundary Value Analysis techniques for deciding on the test data (a small sketch follows this checklist).
-> Save data without giving values for the mandatory fields.
-> Click randomly and tab out continuously (especially in grids) to check for application errors.
-> Check for spaces and for updating with blank fields, wherever applicable.
-> Maximize and minimize the screens and check the toolbar display.

3. Menu Organization
-> Simultaneous opening of screens.

4. Help Files
-> F1 should invoke context-sensitive help files.

5. User Interface Traversal
-> For every mouse-driven operation there should be a keyboard equivalent, with shortcut keys and alternate keys in menus wherever applicable.

6. Date Format
-> The application should support the various date formats in the regional settings.
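As a rough illustration of the Equivalence Partitioning and Boundary Value Analysis techniques referenced in the negative testing items above (the quantity field and its 1-100 range are hypothetical, chosen only for the example):

```python
# Hypothetical field: a quantity that must be an integer from 1 to 100.
MIN_QTY, MAX_QTY = 1, 100

# Equivalence Partitioning: pick one representative value per partition.
equivalence_values = {
    "below range (invalid)": 0,
    "within range (valid)": 50,
    "above range (invalid)": 101,
}

# Boundary Value Analysis: probe each boundary and its neighbours.
boundary_values = [MIN_QTY - 1, MIN_QTY, MIN_QTY + 1,
                   MAX_QTY - 1, MAX_QTY, MAX_QTY + 1]

def is_valid_quantity(value: int) -> bool:
    """Stand-in for the application's real field validation."""
    return MIN_QTY <= value <= MAX_QTY

# The resulting test data, with the outcome each value should produce.
for value in boundary_values + list(equivalence_values.values()):
    print(f"quantity={value:4d} expected_valid={MIN_QTY <= value <= MAX_QTY}")
```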

Software Development Life Cycles: Outline for Developing a Traceability Matrix


The concept of a traceability matrix is to be able to trace from top-level requirements to implementation, and from top-level requirements to tests. A traceability matrix is a table that traces a requirement to the tests that are needed to verify that the requirement is fulfilled. A good traceability matrix provides backward and forward traceability, i.e. a requirement can be traced to a test and a test to a requirement. The matrix links higher-level requirements, design specifications, test requirements, and code files. It acts as a map, providing the links necessary for determining where information is located. It is also known as a Requirements Traceability Matrix, or RTM. It is mostly used by QA to ensure that the customer gets what they requested. The traceability matrix also helps developers find out why some code was implemented the way it was, by making it possible to go from the code back to the requirements. If a test fails, the traceability matrix can be used to see what requirements and code the test case relates to (a minimal code sketch of such a matrix follows the outline below). The goals of a matrix of this type are:
1. To make sure that the approved requirements are addressed/covered in all phases of development: from the SRS to development to testing to delivery.
2. To make each document traceable: written test cases should be traceable to their requirement specifications, and if there is a new version, the updated test cases should be traceable to it.

1. Software Life Cycle
   1. The FDA does not prescribe a specific software development life cycle, but requires manufacturers to identify and follow what makes sense for them.
   2. Manufacturers choose a software life cycle model and development methodology appropriate for their device and organization.
      1. Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, May 1998.
   3. The software life cycle must include:
      1. Risk management
      2. Requirements analysis and specification
      3. Design (both top-level and detailed)
      4. Implementation (coding)
      5. Integration
      6. Validation
      7. Maintenance
   4. A software life cycle model should be understandable, thoroughly documented, results oriented, auditable, and traceable.
      1. Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, May 1998.
2. What is required to demonstrate traceability?
   1. Provide a traceability analysis or matrix which links requirements, design specifications, hazards, and validation. Traceability among these activities and documents is essential. This document acts as a map, providing the links necessary for determining where information is located.
      1. Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, May 1998.

3. How does traceability ensure the life cycle is followed?
   1. It demonstrates the relationship between design inputs and design outputs.
   2. It ensures that design is based on predecessor, established requirements.
   3. It helps ensure that design specifications are appropriately verified, and that functional requirements are appropriately validated.
   4. Important: traceability is a two-way street. Maintain it "backwards" and "forwards"; tunnel vision is not acceptable in the software life cycle!
4. Traceability Across the Life Cycle
   1. Risk analysis (initial and ongoing activities)
      1. Trace potential hazards to their specific causes.
      2. Trace identified mitigations to the potential hazards.
      3. Trace specific causes of software-related hazards to their location in the software.
   2. Requirements analysis and specification
      1. Trace software requirements to system requirements.
      2. Trace software requirements to hardware, user, operator, and software interface requirements.
      3. Trace software requirements to risk analysis mitigations.
   3. Design analysis and specification (high level)
      1. Trace high-level design specifications to software requirements.
      2. Trace design interfaces to hardware, user, operator, and software interface requirements.
      3. Evaluate the design for the introduction of hazards; trace to the hazard analysis as appropriate.
   4. Design analysis and specification (detailed)
      1. Trace detailed design specifications to the high-level design.
      2. IMPORTANT: be able to demonstrate traceability of safety-critical software functions and safety-critical software controls to the detailed design specifications.
   5. Source code analysis (implementation)
      1. Trace source code to the detailed design specifications.
      2. Trace unit tests to the source code and to the design specifications.
         1. Verify an appropriate relationship between the source code and the design specifications being challenged.
   6. Integration
      1. Trace integration tests to the high-level design specifications.
      2. IMPORTANT: use the high-level design specifications to establish a rational approach to integration and to determine regression testing when changes are made.
   7. Validation
      1. Trace system tests to the software requirement specifications.
      2. Use a variety of test types.
         1. Design test cases to address concerns such as robustness, stress, security, recovery, usability, etc.
      3. Use traceability to ensure that the necessary level of coverage is achieved.

5. Plan Ahead for Traceability
   1. Options
      1. Manual methods
         1. Word processors
         2. Spreadsheets
      2. "Home-built" automated systems
         1. Relational databases
      3. Commercial automated systems
         1. DOORS
         2. Requisite Pro
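As the outline above suggests, a home-built traceability matrix can start very simply. Here is a minimal sketch (the requirement and test IDs are invented for illustration) that keeps forward and backward lookups cheap by deriving one mapping from the other:

```python
from collections import defaultdict

# Forward traceability: requirement -> tests that verify it.
# The IDs below are hypothetical, for illustration only.
requirement_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # uncovered requirement: a coverage gap
}

# Backward traceability: test -> requirements it verifies,
# derived from the forward map so the two never drift apart.
test_to_requirements = defaultdict(list)
for req, tests in requirement_to_tests.items():
    for test in tests:
        test_to_requirements[test].append(req)

# Forward lookup: which tests cover REQ-001?
print(requirement_to_tests["REQ-001"])       # ['TC-101', 'TC-102']

# Backward lookup: if TC-103 fails, which requirements are affected?
print(test_to_requirements["TC-103"])        # ['REQ-002']

# Coverage check: flag requirements with no verifying test.
uncovered = [r for r, t in requirement_to_tests.items() if not t]
print("Uncovered requirements:", uncovered)  # ['REQ-003']
```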

The Testing Estimation Process


One of the most difficult and critical activities in IT is the estimation process. I believe that this is because when we say that a project will be accomplished in a certain time at a certain cost, it must happen. If it does not happen, several things may follow: from peers' comments and senior management's warnings to being fired, depending on the reasons for and seriousness of the failure. Here are a few rules for effective testing estimation:

Rule 1: Estimation shall always be based on the software requirements
All estimation should be based on what will be tested, i.e., the software requirements. In many cases, the software requirements are established by the development team alone, with little or no participation from the testing team. After the specifications have been established and the project costs and duration have been estimated, the development team asks how long it would take to test the solution. Instead, the software requirements should be read and understood by the testing team, too. Without the testing team's participation, no serious estimation can be made.

Rule 2: Estimation shall be based on expert judgment
Before estimating, the testing team classifies the requirements into the following categories:
- Critical: the development team has little knowledge of how to implement it;
- High: the development team has good knowledge of how to implement it, but it is not an easy task;
- Normal: the development team has good knowledge of how to implement it.
The experts in each requirement should say how long it would take to test it. The categories help the experts estimate the testing effort for the requirements.

Rule 3: Estimation shall be based on previous projects
All estimation should be based on previous projects. If a new project has requirements similar to those of a previous one, the estimation is based on that project.

Rule 4: Estimation shall be recorded
All decisions should be recorded. This is very important because if the requirements change for any reason, the records will help the testing team estimate again; the team will not need to revisit all the steps and make the same decisions again. Sometimes this is also an opportunity to adjust the estimation made earlier.

Rule 5: Estimation shall be supported by tools
Tools (e.g., a spreadsheet containing metrics) that help to reach an estimation quickly should be used. In this case, the spreadsheet automatically calculates the costs and duration for each testing phase (a minimal sketch of such a calculation appears after these rules). A document containing sections such as a cost table, risks, and free notes should also be created and sent to the customer. It also shows the different options for testing, which can help the customer decide which kind of test he needs.

Rule 6: Estimation shall always be verified
Finally, all estimation should be verified. Another spreadsheet can be created for recording the estimations. Each estimation is compared to the previous ones recorded in the spreadsheet to see whether they follow a similar trend. If an estimation deviates from the recorded ones, a re-estimation should be made.
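Here is a minimal sketch of the spreadsheet-style calculation mentioned in Rule 5, using the Critical/High/Normal categories from Rule 2; the hours-per-category and rate figures are invented for illustration only, not recommended values:

```python
# Hypothetical effort figures (hours of testing per requirement), keyed by
# the Critical / High / Normal categories from Rule 2. Values are examples.
HOURS_PER_REQUIREMENT = {"critical": 16, "high": 8, "normal": 4}
HOURLY_RATE = 50.0  # example cost per testing hour

# Requirements classified by the testing team's experts (Rule 2).
requirements = {
    "REQ-001 login": "normal",
    "REQ-002 payment processing": "critical",
    "REQ-003 report export": "high",
}

total_hours = sum(HOURS_PER_REQUIREMENT[cat] for cat in requirements.values())
total_cost = total_hours * HOURLY_RATE

print(f"Estimated effort: {total_hours} hours")
print(f"Estimated cost:   ${total_cost:,.2f}")

# Rules 4 and 6: record each estimation so later ones can be compared
# against the recorded trend, and re-estimated if they deviate.
estimation_log = []
estimation_log.append({"hours": total_hours, "cost": total_cost})
```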
