
What is Manual Testing?

Manual testing consists of testing activities performed by people without the help of software testing tools.

What is Software Quality?

A quality product is reasonably bug free, delivered on time, within budget, meets all requirements and is maintainable.

1. The Software Life Cycle

The software life cycle is a description of the events that occur between the birth and death of a software project, inclusively. It covers all the stages from start to finish that take place when developing new software:

- Feasibility Study: What exactly is this system supposed to do?
- Analysis: Determine and list out the details of the problem.
- Design: How will the system solve the problem?
- Coding: Translate the design into the actual system.
- Testing: Does the system solve the problem? Have the requirements been satisfied? Does the system work properly in all situations?
- Maintenance: Bug fixes.

The SDLC is separated into phases (steps, stages), and it also determines the order of the phases and the criteria for transitioning from phase to phase.

Change Requests on Requirement Specifications

Why do customers ask for change requests?
- Changes to the business environment.
- Technology changes.
- Misunderstanding of the stated requirements due to lack of domain knowledge.
- Different users/customers have different requirements.
- Requirements get clarified or become known at a later date.


How to communicate the change requests to the team:
- Formal indication of requirements
- Joint review meetings
- Regular daily communication
- Queries
- Defects reported by clients during testing
- Client reviews of SRS, SDD, test plans etc.
- Across the corridor/desk (internal projects)
- Presentations/demonstrations

Analyzing the changes:
- Classification
  - Specific
  - Generic
- Categorization
  - Bug
  - Enhancement
  - Clarification etc.
- Impact analysis
  - Identify the items that will be affected
  - Time estimations
  - Any other clashes / open issues raised due to this?

Benefits of accepting change requests:
1. Direct benefits
   - Facilitates proper control and monitoring
   - Metrics speak for themselves
   - You can buy more time
   - You may be able to bill more
2. Indirect benefits
   - Builds customer confidence

What can be done if the requirements are changing continuously?
- Work with project stakeholders early on to understand how the requirements might change, so that alternative test plans and strategies can be worked out in advance.
- It is helpful if the application's initial design has some adaptability, so that later changes do not require redoing the application from scratch.
- If the code is well commented and well documented, it is easy for developers to make changes.
- Use rapid prototyping whenever possible to help customers feel sure of their requirements and to minimize changes.
- Negotiate to allow only easily implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.

The feasibility report:
- BRS (Business Requirement Document)
- Application areas to be considered (stock control, banking, accounts etc.)
- System investigations and system requirements for each application
- Cost estimates
- Timescale for implementation
- Expected benefits


1.1 Feasibility Study:

The analyst conducts an initial study of the problem and asks: is the solution
- Technologically possible?
- Economically possible?
- Legally possible?
- Operationally possible?
- Possible within the schedule and timescale?

The phases of the life cycle are: Feasibility Study, Analysis (Requirements), Design, Coding, Testing, Installation & Maintenance.

1.2 Systems Analysis:

- The process of identifying problems, opportunities, resources, requirements and constraints.
- Systems analysis is the process of investigating a business with a view to determining how best to manage the various procedures and information processing tasks that it involves.

1.2.1 The Systems Analyst
- Performs the investigation and might recommend the use of a computer to improve the efficiency of the information system.

1.2.2 Systems Analysis
- The intention is to determine how well a business copes with its current information processing needs, and whether it is possible to improve the procedures in order to make it more efficient or profitable.

The BRS, FRS and SRS documents bridge the communication gap between the client, user, developer and tester.

The Systems Analysis Report consists of:
- SRS (Software Requirement Specification)
- Use Cases (user action and system response)
- FRS (Functional Requirement Document), also called functional specifications

[These 3 are the base documents for writing test cases.]

Note:
- The FRS contains input, output and process, but no fixed format.
- Use cases contain user action and system response in a fixed format.

Documenting the results:
- Systems flow charts
- Data flow diagrams
- Organization charts
- Report

1.3 Systems Design:

The business of finding a way to meet the functional requirements within the specified constraints using the available technology. Systems analysis determines what the system should do; design determines how it should be done. Design covers planning the structure of the information system to be implemented:
- User interface design
- Design of output reports
- Input screens
- Data storage (i.e. files, database tables)
- System security (passwords, backups)
- Validation
- Test plan

The System Design Report consists of:
- Architectural design
- Database design
- Interface design

Design phases:
- High Level Design (HLD)
- Low Level Design (LLD)

High Level Design
1. List of modules and a brief description of each module
2. Brief functionality of each module
3. Interface relationships among modules
4. Dependencies between modules
5. Database tables identified, with key elements
6. Overall architecture diagrams along with technology details

Low Level Design
1. Detailed functional logic of the module, in pseudo code
2. Database tables, with all elements, including their type and size
3. All interface details
4. All dependency issues
5. Error message listing
6. Complete input and output format of a module

Note: The HLD and LLD phases put together are called the Design phase.

1.4 Coding:

Translating the design into the actual system.
- Program development
- Unit testing by the Development Team (Dev Team)

The Coding Report covers all the programs, functions, database tables and reports related to the system.

1.5 Testing:

1.5.1 What Is Software Testing?

IEEE terminology: An examination of the behavior of a program by executing it on sample data sets. Testing is executing a program with the intent of finding errors, faults and failures.

IEEE definitions:
- Error: Refers to the difference between the actual output and the expected output.
- Fault: A condition that causes the software to fail to perform its required function.
- Failure: The inability of a system or component to perform a required function according to its specification.

Note:
- Error: a human mistake that caused the fault.
- Fault: a discrepancy in code that causes a failure.
- Failure: the external behavior is incorrect.
- "Error" is the terminology of the developer; "bug" is the terminology of the tester.

Why is software testing needed?
1. To discover defects.
2. To avoid users detecting problems.
3. To prove that the software has no faults and failures.
4. To learn about the reliability of the software.
5. To ensure that the product works as the user expected.
6. To avoid being sued by customers.
7. To stay in business.
8. To detect defects early, which helps in reducing the cost of defect fixing.

Cost of defect repair:

Phase           % Cost
Requirements    0
Design          10
Coding          20
Testing         50
Customer Site   100
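The table above can be read as a relative cost lookup. A minimal sketch, using only the figures from the table, shows why "detect defects early" pays off:

```python
# Relative cost of fixing a defect by SDLC phase, per the table above.
REPAIR_COST = {
    "Requirements": 0,
    "Design": 10,
    "Coding": 20,
    "Testing": 50,
    "Customer Site": 100,
}

# A defect that escapes the testing phase costs twice as much to fix
# once it reaches the customer site:
print(REPAIR_COST["Customer Site"] / REPAIR_COST["Testing"])  # prints 2.0
```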

How exactly is testing different from QA/QC?

Testing is often confused with the processes of quality control and quality assurance.

Testing
- The process of creating, implementing and evaluating tests.
- Testing measures software quality.
- Testing can find faults; when they are removed, software quality is improved.

Quality Control (QC)
- The process of inspections, walkthroughs and reviews.
- Measures the quality of the product.
- It is a detection process.

Quality Assurance (QA)
- Monitoring and improving the entire SDLC process.
- Making sure that all agreed-upon standards and procedures are followed.
- Ensuring that problems are found and addressed.
- Measures the quality of the process used to create a good quality product.
- It is a prevention process.

Why should we need an approach for testing?

Yes, we definitely need a formal approach for testing, to overcome the following problems:

- Incomplete functional coverage: completeness of testing is a difficult task for a testing team without a formal approach.
- No risk management: there is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time, starting from a known base of evaluation.
- Too little emphasis on user tasks: testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks.
- Inefficiency over the long term: effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests.
- A test-plan-less approach: with no time to prepare, testers will be using a very restricted set of input data rather than real data (from user activity logs, from logical scenarios, or from careful consideration of the concept domain). Great testing requires good test setup and preparation.
- The team will not be in a position to announce the percentage of testing completed.

A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run, and may reinforce bad project and test methodologies.

Areas of Testing:
- Black Box Testing
- White Box Testing
- Gray Box Testing

Black Box Testing

Tests the correctness of the functionality with the help of inputs and outputs, generating test cases against the specification.
- The user doesn't require knowledge of the software code.
- Black box testing is also called functionality testing.
- It attempts to find errors in the following categories:
  - Incorrect or missing functions
  - Interface errors
  - Errors in data structures or external database access
  - Behavior or performance based errors
  - Initialization or termination errors

Approach: Equivalence Class, Boundary Value Analysis, Error Guessing.

Equivalence Class:
- For each piece of the specification, generate one or more equivalence classes.
- Label the classes as "Valid" or "Invalid".
- Generate one test case for each invalid equivalence class.
- Generate a test case that covers as many valid equivalence classes as possible.

An input condition for an equivalence class may be: a specific numeric value, a range of values, a set of related values, or a Boolean condition.

Equivalence classes can be defined using the following guidelines:
- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
- If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
- If an input condition is Boolean, one valid and one invalid class are defined.

Boundary Value Analysis:

Generate test cases for the boundary values:
- Minimum Value, Minimum Value + 1, Minimum Value - 1
- Maximum Value, Maximum Value + 1, Maximum Value - 1
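As a sketch of how the two techniques combine in practice, the snippet below derives black-box test data for a field that must accept integers from 1 to 100. The field, its range and the validator are hypothetical examples, not taken from the text above:

```python
# Hypothetical example: deriving black-box test data for an input field
# that must accept integers in the range 1..100 (an assumed requirement).

def boundary_values(lo, hi):
    """Boundary Value Analysis: min, min+1, min-1, max, max+1, max-1."""
    return [lo, lo + 1, lo - 1, hi, hi + 1, hi - 1]

def equivalence_classes(lo, hi):
    """For a range condition: one valid and two invalid classes."""
    return {
        "valid": (lo + hi) // 2,   # any representative inside the range
        "invalid_low": lo - 10,    # representative below the range
        "invalid_high": hi + 10,   # representative above the range
    }

def accepts(value, lo=1, hi=100):
    """The (assumed) validator under test."""
    return lo <= value <= hi

# Exercise the validator with the derived test data.
for v in boundary_values(1, 100):
    print(v, "accepted" if accepts(v) else "rejected")

classes = equivalence_classes(1, 100)
assert accepts(classes["valid"])
assert not accepts(classes["invalid_low"])
assert not accepts(classes["invalid_high"])
```

Note how the six boundary inputs (1, 2, 0, 100, 101, 99) exercise both sides of each edge, which is exactly where off-by-one defects tend to live.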

White Box Testing
- Testing the internal program logic.
- White box testing is also called structural testing.
- The user does require knowledge of the software code.
- Tests logic errors and incorrect assumptions. Logic errors and incorrect assumptions are likely to be made while coding for "special cases", and we need to ensure these execution paths are tested.

Basis Path Testing (Cyclomatic Complexity, McCabe method)
- Measures the logical complexity of a procedural design.
- Provides flow-graph notation to identify independent paths of processing.
- Once paths are identified, tests can be developed for loops and conditions.
- The process guarantees that every statement will get executed at least once.

Structure Testing:
- Condition Testing: all logical conditions contained in the program module should be tested.
- Data Flow Testing: selects test paths according to the location of definitions and uses of variables.
- Loop Testing: tests simple loops, nested loops, concatenated loops and unstructured loops.

Gray Box Testing
- A combination of both black box and white box testing: functionality and behavioral parts are tested along with structure.
- The tester should have knowledge of both the internals and externals of the function. If you know something about how the product works on the inside, you can test it better from the outside.
- Used to test embedded systems.
- It is platform independent and mostly language independent.
- Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep.
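The McCabe measure described above can be computed directly from a control-flow graph as V(G) = E - N + 2. The graph below is a hypothetical example for a function containing one if/else and one loop; it is a sketch, not code from the original material:

```python
# Sketch: McCabe cyclomatic complexity from a control-flow graph.
# Edges are (from_node, to_node) pairs of a connected flow graph.

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2, the number of linearly independent paths."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph: one if/else branch plus one loop.
flow_graph = [
    (1, 2), (1, 3),   # the if/else decision
    (2, 4), (3, 4),   # the branches rejoin
    (4, 5), (5, 4),   # the loop's back-edge
    (5, 6),           # exit
]

v_g = cyclomatic_complexity(flow_graph)
print(v_g)  # prints 3: two decisions + 1, so three independent paths to test
```

V(G) gives the minimum number of test cases needed to cover every independent path, which is why basis path testing starts from this number.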

1.6 Installation & Maintenance

Installation
- File conversion
- New system becomes operational
- Staff training

Maintenance
- Corrective maintenance: maintenance performed to correct a defect.
- Perfective maintenance: includes enhancement and reengineering.
- Adaptive maintenance: changing the software so that it will work in an altered environment, such as when an operating system, hardware platform, compiler, software library or database structure changes.

Table format of all the phases in the SDLC:

PHASE      INPUT                 OUTPUT
Analysis   BRS                   FRS and SRS
Design     FRS and SRS           Design Doc
Coding     Design Doc            .exe File/Application/Website
Testing    All the above Doc's   Defect Report

2. Software Development Life Cycles

Life cycle: the entire duration of a project, from inception to termination. There are different life cycle models.

2.1 Code-and-fix model:
- Earliest software development approach (1950s)
- Iterative; a programmers' approach
- Two phases: 1. coding, 2. fixing the code
- No provision for: project planning, analysis, design, testing, or maintenance

Problems with the code-and-fix model:
1. After several iterations, code became very poorly structured; subsequent fixes became very expensive.
2. Even well-designed software often very poorly matched users' requirements: systems were rejected or needed to be redeveloped (expensively!).
3. Changes to code were expensive, because of poor testing and maintenance practices.

Solutions:
1. Design before coding
2. Requirements analysis before design
3. Separate testing and maintenance phases after coding

2.2 Waterfall model:
- Introduced in 1956 to overcome the limitations of the code-and-fix model
- Very structured, organized approach, suitable for planning
- A linear approach, quite inflexible; at each phase, feedback to previous phases is possible (but is discouraged in practice)
- Still the most widespread model today
- Also called the classic life cycle

Main phases:
1. Requirements
2. Analysis
3. Design (overall design & detailed design)
4. Coding
5. Testing (unit test, integration test, acceptance test)
6. Maintenance

Classic Waterfall: Requirements, Analysis, Design, Coding, Testing, Maintenance.

Approach

The standard waterfall model for systems development goes through the following steps:
1. Document the system concept
2. Identify system requirements and analyze them
3. Break the system into pieces (architectural design)
4. Design each piece (detailed design)
5. Code the system components and test them individually (coding, debugging, and unit testing)
6. Integrate the pieces and test the system (system testing)
7. Deploy the system and operate it

Waterfall Model Assumptions
- The requirements are knowable in advance of implementation.

- The requirements have no unresolved high-risk implications, e.g. risks due to cost, schedule, performance, safety, security, user interface, organizational impacts, or COTS choices.
- The nature of the requirements is compatible with all the key system stakeholders' expectations, e.g. users, customer, developers, maintainers, investors.
- The right architecture for implementing the requirements is well understood, and the requirements are fully known.
- There is enough calendar time to proceed sequentially.

Advantages of the Waterfall Model:
- The waterfall model is effective when there is no change in the requirements.
- The stages are clear cut.
- All R&D is done before coding starts.
- If there is no rework, this model builds a high quality product.
- Suitable for projects of short duration and for the conversion of existing projects into new projects.
- For proven platforms and technologies, it works fine.
- Implies better quality program design.

Disadvantages of the Waterfall Model:
- Testing is postponed to a later stage, till coding completes, so rework is greater.
- It is difficult to define all requirements at the beginning of a project.
- The model has problems adapting to change.
- A working version of the system is not seen until late in the project's life.
- Errors are discovered late (repairing a problem further along the lifecycle becomes progressively more expensive).
- Correction at the end of a phase needs correction to the previous phase.
- Not suitable for large projects.
- It assumes a uniform and orderly sequence of steps; real projects rarely flow in a sequential process.
- Risky in projects where the technology itself is a risk.
- Maintenance cost can be as much as 70% of system costs.
- Delivery only at the end (long wait).

2.3 Prototyping model:
- Introduced to overcome shortcomings of the waterfall model.
- Suitable to overcome the problem of requirements definition.
- Prototyping builds an operational model of the planned system, which the customer can evaluate.

Main phases:
1. Requirements gathering
2. Quick design
3. Build prototype
4. Customer evaluation of prototype
5. Refine prototype
6. Iterate steps 4 and 5 to "tune" the prototype
7. Engineer product

Note: Mostly, the prototype is discarded after step 5, and the actual system is built from scratch in step 6 (throw-away prototyping). After the prototype is developed, the end user and the client are permitted to use the application, and further modifications are made based on their feedback. This helps counter the limitations of the waterfall model.

Possible problems:
- The customer may object to the prototype being thrown away and may demand "a few changes" to make it working (results in poor software quality and maintainability).
- Inferior, temporary design solutions may become permanent after a while, when the developer has forgotten that they were only intended to be temporary (results in poor software quality).

Advantages:
- User oriented: what the user sees, not enigmatic diagrams
- Quicker error feedback
- Earlier training

- Possibility of developing a system that closely addresses users' needs and expectations

Disadvantages:
- Development costs are high
- User expectations may be inflated
- Analysis may be bypassed and documentation neglected
- Refinement can be never ending
- Managing the prototyping process is difficult because of its rapid, iterative nature
- Requires feedback on the prototype
- Incomplete prototypes may be regarded as complete systems

2.4 Incremental model:

Main phases:
1. Define outline requirements
2. Assign requirements to increments
3. Design system architecture
4. Develop system increment
5. Validate increment
6. Integrate increment
7. Validate the system (repeat until the final system is complete)

Case study: During the first one-month phase, the development team worked from static visual designs to code a prototype. In focus group meetings, the team discussed users' needs and the potential features of the product and then showed a demonstration of its prototype. The excellent feedback from these focus groups had a large impact on the quality of the product.

After the second round of focus groups, the feature set was frozen and the product definition complete. Implementation consisted of four-to-six-week cycles, with software delivered for beta use at the end of each cycle. Implementation lasted 4.5 months; the entire release took 10 months from definition to manufacturing release. The result was a world-class product that has won many awards and has been easy to support.

2.5 V-Model:

Verification (static testing: are we building the product right?)
- Tests the system's correctness as to whether the system is being built as per specifications.
- Typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications. This can be done with checklists, issue lists, walkthroughs and inspection meetings.

Validation (dynamic testing: are we building the right product?)
- Tests the system in a real environment, i.e. whether the software is catering to the customer's requirements.
- Typically involves actual testing, and takes place after verifications are completed.

Advantages:
- Reduces the cost of defect repair (every document is verified by the tester).
- No idle time for testers.
- Efficiency of the V-model is greater when compared to the waterfall model.
- Change management can be effected in the V-model.

Disadvantages:


- Risk management is not possible.
- Applicable only to medium-sized projects.

2.6 Spiral model:
- Objective: overcome the problems of other models, while combining their advantages.
- Key component: risk management (because traditional models often fail when risk is neglected).
- Development is done incrementally, in several cycles; cycle as often as necessary to finish.

Main phases: 1. Determine objectives, alternatives for development, and constraints for the portion of the whole system to be developed in the current cycle 2. Evaluate alternatives, considering objectives and constraints; identify and resolve risks 3. Develop the current cycle's part of the system, using evolutionary or conventional development methods (depending on remaining risks); perform validation at the end 4. Prepare plans for subsequent phases


Spiral Model

This model is very appropriate for large software projects. The model consists of four main parts, or blocks, and the process is shown by a continuous loop going from the outside towards the inside, which shows the progress of the project.

- Planning: This phase is where the objectives, alternatives, and constraints are determined.
- Risk Analysis: Alternative solutions and constraints are defined, and risks are identified and analyzed. If risk analysis indicates uncertainty in the requirements, the prototyping model might be used to assist the situation.
- Engineering: Here the customer decides when the next phase of planning and risk analysis occurs. If it is determined that the risks are too high, the project can be terminated.
- Customer Evaluation: In this phase, the customer assesses the engineering results and makes changes if necessary.


Spiral model flexibility:
- Well-understood systems (low technical risk): waterfall model; the risk analysis phase is relatively cheap.
- Stable requirements and formal specification, or safety criticality: formal transformation model.
- High UI risk, incomplete specification: prototyping model.
- Hybrid models can be accommodated for different parts of the project.

Advantages of the spiral model:
- Good for large and complex projects.
- Customer evaluation allows for any changes deemed necessary, and allows new technological advances to be used.
- Allows customer and developer to determine and to react to risks at each evolutionary level.
- Direct consideration of risks at all levels greatly reduces problems.

Problems with the spiral model:
- Difficult to convince the customer that this approach is controllable.
- Requires significant risk assessment expertise to succeed.
- Not yet widely used; its efficacy is not yet proven.
- If a risk is not discovered, problems will surely occur.

2.7 RAD Model

RAD (Rapid Application Development) refers to a development life cycle designed to give much faster development and higher quality systems than the traditional life cycle. It is designed to take advantage of powerful development software like CASE tools, prototyping tools and code generators. The key objectives of RAD are: high speed, high quality and low cost. RAD is a people-centered and incremental development approach. Active user involvement, as well as collaboration and co-operation between all stakeholders, are imperative. Testing is integrated throughout the development life cycle, so that the system is tested and reviewed by both developers and users incrementally.
With conventional methods. there is nothing until 100% of the process is finished. then 100% of the software is delivered To prevent runaway schedules (RAD needs a team already disciplined in time management)  Good Reasons for using RAD  To converge early toward a design acceptable to the customer and feasible for the developers To limit a project's exposure to the forces of change To save development time.Problem Addressed By RAD  With conventional methods. With conventional methods. possibly at the expense of economy or product quality Mapping between System Development Life Cycle (SDLC) of ITSD and RAD stages is depicted as follows. development can take so long that the customer's business has fundamentally changed by the time the system is ready for use. there is a long delay before the customer gets to see any results. RAD     Bad Reasons For Using RAD To prevent cost overruns (RAD needs a team already disciplined in cost management)  SDLC Project Request RAD in SDLC  Requirements Planning System Analysis & Design SA&D) User Design RAD Construction Implementation Post Implementation Review Transition 21 .

tools and user involvement.  Advantages of RAD    Buying may save money compared to building Deliverables sometimes easier to port Early visibility 22 . In the long run. Productivity of developers will be increased. we will also achieve that:   Time required to get system developed will be reduced. Standards and consistency can be enforced through the use of CASE tools.Essential Ingredients of RAD  RAD has ingredients: ♦ ♦ ♦ ♦  essential  four Business benefits can be realized earlier.  The following benefits can be realized in using RAD:  High quality system will be delivered because of methodology. Capacity will be utilized to meet a specific and urgent business need. Tools Methodology People Management.

code generators. code reuse) Increased user involvement (because they are represented on the team at all times) Possibly reduced cost (because time is money.              Greater flexibility (because developers can redesign almost at will) Greatly reduced manual coding (because of wizards. lackluster appearance) Successful efforts difficult to repeat Unwanted features Disadvantages of RAD Testing Process: Test cases T est data Test results Test reports Design test cases Prepare test data Run program with test data Compare results to test cases 4. also because of reuse) Buying may not save money compared to building Cost of integrated toolset and hardware to run it Harder to gauge progress (because there are no classic milestones) Less efficient (because code isn't hand crafted) More defects Reduced features Requirements may not converge Standardized look and feel (undistinguished. Testing Life Cycle: A systemic approach for Testing 23 .

4.Ten Lines Of Code (LOC) = 1 Functional Point. Numbers of Modules 9. Functional Points: . Domain Knowledge :. Number of Days: . Number of Resources : -Like Programmers. Software : − Front End(GUI) VB / JAVA/ FORMS / Browser − Process − Back End Language witch we want to write programmes Database like Oracle.For actual completion of the Project. 3. and Managers. 8. Designers. 5.High/ Medium/ Low importance for Modules 3. Number of Pages: . 6. Priority:.Internet/ Intranet/ Servers which you want to install. SQL Server etc. Hardware: .Used to know about the client business Banking / Finance / Insurance / Real-estates / ERP / CRM / Others 2.The document which you want to prepare.2 Scope/ Approach/ Estimation: Scope 24 . 7.1 System Study 1.System Study Scope/ Approach/ Estimation’s Test Plan Design Test Case Design Test Case Review Test Case Execution Defect Handling Gap Analysis Deliverables 3.

Scope
- What to test
- What not to test

Approach (testing strategy)
- The methods, tools and techniques used to accomplish the test objectives.
- The 3 test design techniques are Equivalence Class, Boundary Value Analysis and Error Guessing.

Estimation

Estimation should be done based on LOC / FP / resources:
- 1000 LOC = 100 FP (by considering 10 LOC = 1 FP)
- 100 FP x 3 techniques = 300 test cases
- At 30 test cases designed per day: 300 / 30 = 10 days to design test cases
- Test case review = 1/2 of test case design (5 days)
- Test case execution = 1 1/2 of test case design (15 days)
- Defect handling = 5 days
- Test plan = 5 days (1 week)
- Buffer time = 25% of the estimation

3.3 Test Plan Design:

A test plan prescribes the scope, approach, resources, and schedule of testing activities.

Why a test plan?
1. Repeatability
2. Control
3. Adequate coverage

Importance of the test plan: test planning is a critical step in the testing process. Without a documented test plan, the test itself cannot be verified, coverage cannot be analyzed, and the test is not repeatable.

The Test Plan Design document helps in test execution. It contains:
1. About the client and company
2. Reference documents (BRS, FRS and UI etc.)
3. Scope (what is to be tested and what is not)
4. Overview of the application
5. Testing approach (testing strategy)
6. For each testing type:
   - Definition
   - Technique
   - Start criteria
   - Stop criteria

7. Resources and their roles and responsibilities
8. Defect definition
9. Risk / contingency / mitigation plan
10. Training required
11. Schedules
12. Deliverables

3.4 Test Case Design:

What is a test case? A test case is a description of what is to be tested, what data is to be given, and what actions are to be done to check the actual result against the expected result. To support testing, a plan should be there which specifies:
- What to do?
- How to do it?
- When to do it?

What are the items of a test case?
1. Test case number
2. Pre-condition
3. Description
4. Expected result
5. Actual result
6. Status (Pass/Fail)
7. Remarks

Test Case Template:

TC ID | Pre-Condition | Description | Expected Result | Actual Result | Status | Remarks
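The template above maps naturally onto a record type. The sketch below is a hypothetical illustration (field and case names such as `Login001` are invented for the example, not taken from the text):

```python
# Sketch: the test case template above as a data structure.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    tc_id: str
    pre_condition: str
    description: list = field(default_factory=list)  # ordered user actions
    expected_result: str = ""
    actual_result: str = ""     # filled in during execution
    status: str = "Not Run"     # Pass / Fail / Not Run
    remarks: str = ""

    def execute(self, actual):
        """Record the actual result and derive the Pass/Fail status."""
        self.actual_result = actual
        self.status = "Pass" if actual == self.expected_result else "Fail"
        return self.status

# Hypothetical example record:
tc = TestCase(
    tc_id="Login001",
    pre_condition="Login page is displayed",
    description=["Enter User ID/PW", "Click on Submit"],
    expected_result="Home page is displayed",
)
print(tc.execute("Home page is displayed"))  # prints Pass
```

Keeping expected and actual results as separate fields, with status derived from their comparison, mirrors the template's intent: a test case is not "passed" by opinion but by an explicit comparison.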

Example:

TC ID: Yahoo001
Pre-condition: Yahoo web page should be displayed
Description (unique condition to be satisfied):
1. Enter User ID/Password
2. Click on Submit
Expected result: System should display the mail box (check that the inbox is displayed), as per the FRS
Actual result: System response
Status: Pass or Fail
Remarks: If any

Test Case Development Process
• Identify all potential test cases needed to fully test the business and technical requirements
• Document test procedures and test data requirements
• Prioritize test cases
• Identify test automation candidates
• Automate designated test cases

Types of Test Cases

Type — Source
1. Requirement based — Specifications
2. Design based — Logical system
3. Code based — Code
4. Extracted — Existing files or test cases
5. Extreme — Limits and boundary conditions

Can these test cases be reused?
Test cases developed for functionality testing can be reused, with few modifications, for integration, system and regression testing.

What are the characteristics of a good test case?
A good test case should have the following:
• It should start with what you are testing.
• It should be uniform: the same conventions should be followed across the project, e.g. for <Action Buttons>, Links, etc.
• All test cases should be traceable.
• A test case should not contain "if" statements.
• Test cases should be independent.
• There should be no duplicate test cases.
• All test cases should be executable.
• Outdated test cases should be cleared off.

The following issues should be considered while writing test cases:
• Test cases are developed to verify that specific requirements or design are satisfied.
• Each component must be tested with at least two test cases: one positive and one negative.
• Real data should be used to reality-test the modules after tests with prepared test data succeed.

3.5 Test Case Review:
1. Peer-to-peer reviews
2. Team Lead review
3. Team Manager review

Review Process
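The "at least one positive and one negative test case per component" guideline can be sketched as below. The component under test (a user-ID validator) and its 4-16 alphanumeric rule are hypothetical, chosen only to make the pair of cases concrete.

```python
# Hypothetical component: a login-field validator, tested with one
# positive and one negative test case as the guideline requires.

def is_valid_user_id(user_id):
    # assumed rule for illustration: 4-16 alphanumeric characters
    return user_id.isalnum() and 4 <= len(user_id) <= 16

def test_user_id_positive():       # valid input must be accepted
    assert is_valid_user_id("tester01")

def test_user_id_negative():       # invalid input must be rejected
    assert not is_valid_user_id("ab!")

test_user_id_positive()
test_user_id_negative()
print("both test cases passed")
```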

• Take the checklist
• Take a demo of the functionality
• Go through the use cases and the functional specification
• Try to find the gaps between the test cases and the use cases
• Submit the review report

3.6 Test Case Execution:

• Test execution is the completion of the testing activities: it involves executing the planned test cases and conducting the tests.
• Execution and execution results play a vital role in testing.
• The test execution phase broadly involves execution and reporting.

Test execution consists of the following activities:
1. Creation of the test setup or test bed
2. Execution of test cases on the setup
3. Defect tracking and reporting
4. Collection of metrics
5. Regression testing

The following should be captured during execution:
1. Number of test cases executed
2. Time taken to execute
3. Number of defects found
4. Test methodology used
5. Time wasted due to unavailability of the system

Screenshots of failed executions should be captured in a Word document.

Test Case Execution Process:
• Take the test case document
• Check the availability of the application
• Implement the test cases
• Raise the defects

Test case + test data (INPUT) -> Test case execution -> Raise the defect + screenshot (OUTPUT)

3.7 Defect Handling

What is a defect?
• A defect is a coding error in a computer program.
• A software error is present when the program does not do what its end user expects it to do.

Who can report a defect?
Anyone who is involved in the software development life cycle, or who is using the software, can report a defect. In most cases defects are reported by the testing team. A short list of people expected to report bugs:
1. Testers / QA Engineers
2. Developers
3. Technical Support
4. End Users
5. Sales and Marketing Engineers

Defect Reporting
• A defect (bug) report is the medium of communication between the tester and the programmer.
• It provides clarity to the management, particularly at the summary level.
• A defect report should be an accurate, concise, thoroughly edited, well-conceived, high-quality technical document.
• The problem should be described in a way that maximizes the probability that it will be fixed.

• A defect report should be non-judgmental and should not point a finger at the programmer.
• A crisp defect reporting process improves the test team's communication with senior and peer management.

Defect Life Cycle
The defect life cycle (DLC) helps in handling defects efficiently and lets users know the status of a defect at any time:

Defect Raised -> Internal Review -> (valid? no: Defect Rejected) -> Defect Submitted to Dev Team -> (valid? no: Defect Rejected) -> Defect Accepted or Defect Postponed -> Defect Fixed -> Close the Defect
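The life cycle above can be sketched as a state-transition table. The exact set of allowed transitions is an assumption read off the diagram, not a prescription.

```python
# Sketch of the defect life cycle as a transition table.
# State names follow the diagram above.

TRANSITIONS = {
    "Raised":          ["Internal Review"],
    "Internal Review": ["Submitted", "Rejected"],      # valid? yes/no
    "Submitted":       ["Accepted", "Rejected", "Postponed"],
    "Accepted":        ["Fixed"],
    "Postponed":       ["Submitted"],                  # picked up again later
    "Fixed":           ["Closed"],
    "Rejected":        [],
    "Closed":          [],
}

def can_move(current, target):
    return target in TRANSITIONS.get(current, [])

print(can_move("Submitted", "Accepted"))  # True
print(can_move("Rejected", "Fixed"))      # False
```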

Types of Defects
1. Cosmetic flaw
2. Data corruption
3. Data loss
4. Documentation issue
5. Incorrect operation
6. Installation problem
7. Missing feature
8. Slow performance
9. System crash
10. Unexpected behavior
11. Unfriendly behavior

How do you decide the severity of a defect?

Severity Level: High
Description: A defect caused by the inability of a key function to perform. The problem causes the system to hang or crash, or the user is dropped out of the system. An immediate fix or workaround is needed from development so that testing can continue.
Response / turn-around time: The defect should be responded to within 24 hours, and the situation should be resolved before test exit.

Severity Level: Medium
Description: A defect which severely restricts the system, such as the inability to use a major function of the system. There is no acceptable workaround, but the problem does not inhibit the testing of other functions.
Response / turn-around time: A response or action plan should be provided within 3 working days, and the situation should be resolved before test exit.

Severity Level: Low
Description: A defect which places a minor restriction on a function that is not critical. There is an acceptable workaround for the defect.
Response / turn-around time: A response or action plan should be provided within 5 working days, and the situation should be resolved before test exit.

Severity Level: Others
Description: An incident which places no restrictions on any function of the system; there is no immediate impact on testing. Typically a design issue, or requirements not definitively detailed in the project.
Response / turn-around time: An action plan should be provided for the next release or a future enhancement. The fix dates are subject to negotiation.

Defect Severity vs. Defect Priority
• The general rule for fixing defects depends on severity: all high-severity defects should be fixed first.
• This may not hold in all cases: sometimes a high-severity bug is not taken up as high priority, while a low-severity bug may be treated as high priority.

Defect Tracking Sheet

Defect No (unique number) | Description | Origin (birthplace of the bug) | Severity (Critical / Major / Medium / Minor / Cosmetic) | Priority (High / Medium / Low) | Status (Submitted / Accepted / Fixed / Rejected / Postponed / Closed)

Defect Tracking Tools
• Bug Tracker (BSL proprietary tool)
• Rational ClearQuest
• Test Director

3.8 Gap Analysis:

1. BRS vs. SRS:
BRS01 – SRS01, SRS02, SRS03

2. SRS vs. TC:
SRS01 – TC01, TC02, TC03

3. TC vs. Defects:
TC01 – Defects01, Defects02

3.9 Deliverables:
All the documents which are prepared in each and every stage, for example:
• FRS
• SRS
• Use cases
• Test plan
• Defect report
• Review report, etc.

4 Testing Phases
• Requirement Testing
• Design Testing
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing

4.1 Requirement Analysis Testing

Objective
• The objective of requirement analysis testing is to ensure software quality by eradicating errors as early as possible in the development process: errors noticed at the end of the software life cycle are far more costly than those found early.
• The objective is achieved by checking three basic issues: 1. Correctness 2. Completeness 3. Consistency

Types of requirements
• Functional requirements
• Data requirements
• Look and feel requirements
• Usability requirements
• Performance requirements
• Operational requirements
• Maintainability requirements
• Security requirements
• Scalability requirements

Difficulties in conducting requirements analysis:
• Analyst not prepared
• Customer has no time/interest
• Incorrect customer personnel involved
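The gap-analysis chains above (BRS -> SRS -> TC -> Defects) can be sketched as a small traceability structure. IDs beyond those in the text (SRS02's empty coverage, TC04) are illustrative additions to show what a gap looks like.

```python
# Sketch of gap analysis via traceability: BRS -> SRS -> TC -> Defects.

brs_to_srs = {"BRS01": ["SRS01", "SRS02", "SRS03"]}
srs_to_tc = {"SRS01": ["TC01", "TC02", "TC03"],
             "SRS02": [],                      # illustrative: an uncovered item
             "SRS03": ["TC04"]}                # illustrative extra test case
tc_to_defects = {"TC01": ["Defects01", "Defects02"]}

def coverage_gaps():
    """SRS items that no test case traces back to."""
    return [srs for srs, tcs in srs_to_tc.items() if not tcs]

def defects_for_brs(brs_id):
    found = []
    for srs in brs_to_srs.get(brs_id, []):
        for tc in srs_to_tc.get(srs, []):
            found += tc_to_defects.get(tc, [])
    return found

print(coverage_gaps())          # ['SRS02'] -- a requirement with no tests
print(defects_for_brs("BRS01")) # ['Defects01', 'Defects02']
```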

• Insufficient time allotted in the project schedule

What constitutes "good" requirements?
• Clear -> unambiguous terminology
• Concise -> no unnecessary narrative or non-relevant facts
• Consistent -> requirements that are similar are stated in similar terms; requirements do not conflict with each other
• Complete -> all functionality needed to satisfy the goals of the system is specified to a level of detail sufficient for design to take place

Testing-related activities during the requirement phase
1. Develop test cases to ensure that the product is on par with the requirement specification document
2. Capture the acceptance criteria and prepare the acceptance test plan
3. Prepare the traceability matrix from the system requirements
4. Capture the performance criteria of the software requirements

4.2 Design Testing

Objective
• The objective of design phase testing is to generate a complete specification for implementing the system using a set of tools and languages. The design objective is fulfilled by five issues:
1. Consistency
2. Completeness
3. Correctness
4. Feasibility
5. Traceability

Testing activities in the design phase
1. Creation of the overall test plan and test strategy
2. Creation and finalization of testing templates
3. Verification of test cases and test scripts by peer reviews

4.3 Unit Testing

Objective
• In unit testing the tester is supposed to check each and every micro function; this is early testing that supports debugging.
• All field-level validations are expected to be tested at this stage.
• In most cases the developer does this.

4.4 Integration Testing:

Objective
• The primary objective of integration testing is to discover errors in the interfaces between modules/sub-systems (host and client interfaces).
• It minimizes the errors, which include internal and external interface errors.

Approach: Top-Down

The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with the real module.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

Advantages
• We can verify the major controls early in the testing process.

Disadvantages
• Stubs are required, and stubs are very difficult to develop.

Approach: Bottom-Up

A bottom-up integration strategy may be implemented with the following steps:
1. Low-level modules are combined into clusters (sometimes called builds) that perform a specific software sub-function.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined upward in the program structure.

Advantages
• It is easier to develop drivers than stubs.

Disadvantages
• Test drivers are needed.
• Interface problems are detected late.

Testing activities in the integration testing phase
1. Integration testing is conducted in parallel with the integration of the various applications (or components).
2. The product is tested with its external and internal interfaces, without using drivers and stubs.
3. The interfaces are integrated using an incremental approach.
4. While integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics:
   − Addresses several software requirements
   − Has a high level of control (resides relatively high in the program structure)
   − Is complex and error-prone
   − Has definite performance requirements
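The stub (top-down) vs. driver (bottom-up) distinction above can be sketched in a few lines. The module names and tax rates are hypothetical, invented only to contrast the two scaffolding styles.

```python
# Top-down: the real main module calls a STUB standing in for a
# subordinate module that is not integrated yet.
def tax_service_stub(amount):
    return 0.25 * amount          # canned answer instead of real logic

def main_module(amount, tax_service):
    return amount + tax_service(amount)

# Bottom-up: a DRIVER feeds test input to a real low-level module,
# because the real caller does not exist yet.
def real_tax_service(amount):
    return 0.25 * amount if amount < 1000 else 0.5 * amount

def driver():
    cases = [(100, 25.0), (2000, 1000.0)]
    return all(real_tax_service(a) == e for a, e in cases)

print(main_module(100, tax_service_stub))  # 125.0
print(driver())                            # True
```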

4.5 System Testing:

• System testing is also called end-to-end testing. The primary objective of system testing is to discover errors when the system is tested as a whole.
• The user is expected to test from login to logout, covering the various business functionalities.

Testing activities in the system testing phase
1. System testing is done to validate the product with respect to client requirements.
2. Testing can run in multiple rounds.
3. Defects found during system test should be logged into a defect tracking system for the purpose of tracking.
4. Test logs and defects are captured and maintained.
5. All the test documents are reviewed.

Approach: IDO Model
• Identify the end-to-end/business life cycles.
• Design the tests and data.
• Optimize the end-to-end/business life cycles.

The following tests are conducted in system testing:
• Recovery testing
• Security testing
• Load and stress testing
• Functional testing

4.6 Acceptance Testing:

• The primary objective of acceptance testing is to get the acceptance from the client.
• The system's behavior is tested against the customer's requirements: customers undertake typical tasks to check their requirements, at the customer's premises in the user environment.

Acceptance Testing Types

Alpha Testing
• Testing the application on the developer's premises, in a controlled environment, with or without the developer around. Generally, the Quality Assurance cell is the body responsible for conducting the test.
• This phase enables the developer to modify the code so as to alleviate any remaining bugs before the final 'official' release; as bugs are uncovered, the developer is notified.

Beta Testing
• On successful completion of alpha testing, the software is ready to migrate outside the developer's premises.
• Beta testing is carried out at one or more users' premises, using their infrastructure, in an uncontrolled manner.
It is the customer or his representative that conducts the test.

Approach: BE
• Confirm that all the requirements are covered.
• Execute the business test cases.
• Build a team with real-time users, functional users and developers.

When should we start writing test cases / testing?
The V model is the most suitable way to decide when to start writing test cases and conducting testing: test cases for each level are written as soon as the corresponding development artifact is frozen.

SDLC phase -> Test cases prepared
• Requirements freeze (business requirements docs) -> Acceptance test cases
• Software requirements docs -> System test cases
• Design docs -> Integration test cases
• Code -> Unit test cases

Build -> test execution order: Unit Testing -> Integration Testing -> System Testing -> Acceptance Testing

5 Testing Methods

5.1 Functionality Testing:

Objective:
• Test the functionality of the application with the help of inputs and outputs; test against the requirements.

Approach:
1. Equivalence Class
2. Boundary Value Analysis
3. Error Guessing

5.2 Usability Testing:

Objective
• To test the easiness and user-friendliness of the system.

Approach: Qualitative and Quantitative

Qualitative approach:
• A heuristic checklist should be prepared with all the general test cases that fall under the classification of checking, for example:
  − Each and every function should be available from all the pages of the site.
  − The user should be able to submit each and every request within 4-5 actions.
  − A confirmation message should be displayed for each and every submit.

Quantitative approach:
• The generic test cases should be given to 10 different people who are asked to execute the system and mark the pass/fail status. The average of the 10 results should be considered the final result.
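The three functional test design techniques listed above can be sketched for a hypothetical input field that accepts integers from 1 to 100. The chosen range and the error-guessing values are assumptions for illustration.

```python
# Sketch of the three techniques for a field accepting integers 1..100.

LOW, HIGH = 1, 100

def equivalence_classes():
    # one representative per class: below range, in range, above range
    return [LOW - 10, (LOW + HIGH) // 2, HIGH + 10]

def boundary_values():
    # values at and just around each boundary
    return [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def error_guesses():
    # experience-based "likely to break" inputs (an assumption, not a rule)
    return [0, -1, None, ""]

print(equivalence_classes())  # [-9, 50, 110]
print(boundary_values())      # [0, 1, 2, 99, 100, 101]
```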

Example: usability judgments differ from user to user. If the Submit button is on the left side of the screen, some people may feel the system is more user-friendly, while others may feel it is better to place the Submit button on the right side.

Classification of checking (for the usability heuristic checklist):
• Accessibility
• Consistency
• Navigation
• Clarity of communication
• Design and maintenance
• Visual representation

5.3 Reliability Testing:

Objective
• Reliability is considered as the probability of failure-free operation for a specified time, in a specified environment, for a given purpose.
• To find the mean time between failures and the mean time for recovery, under a specific load pattern.

Approach
• By performing continuous hours of operation. More than 85% stability is a must.

Reliability testing helps you to confirm that:
• Hyperlinks are reliable
• Active buttons are really active
• The correct menu options are available
• The business logic performs as expected

5.4 Regression Testing:

Objective
• To check that new functionality has been incorporated correctly without breaking the existing functionality.
• To test that reported code problems have been fixed correctly.

Approach
• Manual testing (by using impact analysis)
• Automation tools

Note: in the case of Rapid Application Development (RAD), regression testing plays a vital role, as the total development happens in bits and pieces.

5.5 Performance Testing:

The primary objective of performance testing is to demonstrate that the system works functionally as per specifications, within a given response time, on a production-sized database.

Objectives
• Assessing the system's capacity for growth
• Identifying weak points in the architecture
• Detecting obscure bugs in software
• Tuning the system
• Verifying resilience and reliability

Performance parameters
• Request-response time
• Transactions per second
• Turn-around time
• Page download time
• Throughput

Note: this should be done by using performance testing tools.
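Two of the performance parameters above, request-response time and throughput, can be measured with a sketch like the following; the timed operation is a stand-in for a real request.

```python
import time

# Sketch: measure average response time and throughput for an operation.

def operation():
    time.sleep(0.01)   # stand-in for a real request to the system under test

def measure(n_requests):
    start = time.perf_counter()
    for _ in range(n_requests):
        operation()
    elapsed = time.perf_counter() - start
    response_time = elapsed / n_requests       # avg seconds per request
    throughput = n_requests / elapsed          # requests per second
    return response_time, throughput

rt, tp = measure(10)
print(f"avg response time: {rt:.4f}s, throughput: {tp:.1f} req/s")
```

A real load test would issue the requests concurrently from many virtual users; this sequential loop only shows how the two parameters are derived from the same timings.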

5.6 Scalability Testing:

Objective
• To find the maximum number of users the system can handle.

Classification:
• Network scalability
• Server scalability
• Application scalability

Approach
• Usage of automation (performance) tools.

Classification of Performance Testing:

Load Testing
• Estimating the design capacity of the system within the resource limits. The approach is a load profile.

Volume Testing
• The process of feeding a program with a heavy volume of data. The approach is a data profile.

Stress Testing
• Estimating the breakdown point of the system beyond the resource limits.

Load vs. Stress:
• A complex scenario (e.g. critical query execution with join queries) with even a small number of users can stress the server, while N users working on the same simple functional scenario will not put stress on the server.
• To emulate peak load, work repeatedly on the same functionality.

5.7 Compatibility Testing:

Compatibility testing provides a basic understanding of how a product will perform over a wide range of hardware, software and network configurations, and helps to isolate the specific problems.

Approach
• Environment selection
• Understanding the end users
• Selecting both old and new browsers (this is important)
• Selection of the operating system
• Test bed creation: partition of the hard disk, creation of a base image
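The load vs. stress distinction above can be sketched with hypothetical per-request work units: many users running cheap requests generate load, while a few users running an expensive join query can stress the server.

```python
# Illustrative sketch only: costs are made-up work units, not measurements.

SIMPLE_COST = 1      # simple functional query
COMPLEX_COST = 50    # heavy join query

def server_work(users, cost_per_request):
    return users * cost_per_request

load_scenario = server_work(100, SIMPLE_COST)    # many users, simple work
stress_scenario = server_work(5, COMPLEX_COST)   # few users, complex work

print(load_scenario, stress_scenario)  # 100 250
```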

42  .5.8 Installation Testing:  Installation testing is performed to ensure that all Install features and options function properly and to verify that all necessary components of the application are installed.8 Security Testing:  Testing how well the system protects against unauthorized internal or external access. 6 Performance Life Cycle 6.1 What is Performance Testing?  Primary objective of the performance testing is “to demonstrate the system works functionally as per specifications with in given response time on a production sized database 6. unauthorized entry into the software. network security are all taken into consideration. Add/Remove programs.9 Adhoc Testing  Testing carried out using no recognized test case design technique. Feasible only for small. simple programs.  The uninstallation of the application is tested using DOS command line. and . and manual deletion of files 5. 5.DLLs are removed. executables.2 Why Performance Testing:  To assess the system capacity for growth The load and response data gained from the tests can be used to validate the capacity planning model and assist decision making.  To identify weak points in the architecture The controlled load can be increased to extreme levels to stress the architecture and break it bottlenecks and weak components can be fixed or replaced  To detect obscure bugs in software Tests executed for extended periods can cause failures caused by memory leaks and reveal obscure contention problems or conflicts  To tune the system Repeat runs of tests can be performed to verify that tuning activities are having the desired effect – improving performance.  Verify how easily a system is subject to security violations under different conditions and environments  During Security testing. password cracking.  To verify resilience & reliability  The uninstallation of the product also needs to be tested to ensure that all data. 
5.10 Exhaustive Testing  Testing the application with all possible combinations of values for program variables.

6.3 Performance Tests:
• Used to test each part of the web application, to find out which parts of the website are slow and how we can make them faster.

6.4 Load Tests:
• This type of test is done to test the website using the load that the customer expects to have on his site; something like a "real world test" of the website. This is done from the business and usability point of view, not from a technical point of view.
• First we have to define the maximum request times we want the customers to experience. At this point we need to calculate the impact of a slow website on the company's sales and support costs.
• Then we have to calculate the anticipated load and load pattern for the website (refer to Annexure I for details on load calculation), which we then simulate using the tool.
• At the end we compare the test results with the request times we wanted to achieve.

6.5 Stress Tests:
• Stress tests simulate brute-force attacks with excessive load on the web server. In the real world, situations like this can be created by a massive spike of users, far above the normal usage, e.g. caused by a large referrer (imagine the website being mentioned on national TV…).
• The goals of stress tests are to learn under what load the server generates errors, whether it will come back online after such a massive spike at all or crash, and when it will come back online.

6.6 When should we start Performance Testing?
• It is even a good idea to start performance testing before a line of code is written at all! Early testing of the base technology (network, load balancer, database, application and web servers) at the expected load levels can save a lot of money, if you can already discover at this moment that your hardware is too slow.
• As soon as several web pages are working, the first load tests should be conducted, and from there on they should be part of the regular testing routine each day or week, or for each build of the software. The first stress tests can also be a good idea at this point.
• The costs of correcting a performance problem rise steeply from the start of development until the website goes into production, and can be unbelievably high for a website that is already online.
• Executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability and to ensure that the required service levels are likely to be met.

6.7 Popular tools used to conduct Performance Testing:
• LoadRunner from Mercury Interactive
• AstraLoad from Mercury Interactive
• Silk Performer from Segue
• Rational Suite TestStudio from Rational
• Rational SiteLoad from Rational
• WebLoad from Radview
• RSW eSuite from Empirix
• MS Stress tool from Microsoft

6.8 Performance Test Process:

This is a general process for performance testing. If the client is using one of the tools above, one can follow the respective process demonstrated by the tool. The process can be customized according to the project needs: a few more steps can be added to it, but deleting any of the existing steps may result in an incomplete process.

General process steps:
1. Setting up of the test environment
2. Record and playback in stand-by mode
3. Enhancement of the script to support multiple users
4. Configuration of the scenarios
5. Execution for a fixed number of users and reporting the status to the developers
6. Re-execution of the scenarios after the developers fine-tune the code
       AstraLoad Interactive from Mercury  6. In this case one can blindly follow the respective process demonstrated by the tool.8 Performance Test Process: This is a general process for performance Testing. If Client is using any of the tools. This process can be customized according to the project needs. deleting any of the steps from the existing process may result in Incomplete process. Silk Performer from Segue Rational Suite Test Studio from Rational Rational Rational Site Load from Webload from Radview RSW eSuite from Empirix MS Stress tool from Microsoft General Process Steps: Setting up of the Environment Record & Playback in the standby mode Enhancement of the script to support multiple users Configure the scripts Execution for fixed users and reporting the status to the developers 44 Re-execution of the scenarios after the developers fine-tune the code . Few more process steps can be added to the existing process.

schedule the scenarios Distribute the users onto different scripts. At this stage we can simulate the virtual users similar to the live environment by deciding 1. Record & playback in the stand by mode  The scripts are generated using the script generator and played back to ensure that there are no errors in the script. Configuration of the scenarios  Scenarios should be configured to run the scripts on different agents. • Scenarios: A scenario might either comprise of a single script or multiple scripts. should be parameterised to simulate the live environment. • Hosts: The next important step in the testing approach is to run the virtual users on different host machines to reduce the load on the client 45    . The main intention of creating a scenario to simulate load on the server similar to the live/production environment. agents Directory structure creation for the storage of the scripts and results Installation of additional software if essential to collect the server statistics It is also essential to ensure the correctness of the environment by implementing the dry run. What should be the time interval between every batch of users? Execution for fixed users and reporting the status to the developers  The script should be initially executed for one user and the results/inputs should be verified to check it out whether the server response time for a transaction is less than or equal to the acceptable limit (bench mark). How many users should be activated at a particular point of time as a batch? 2. machine by sharing the resources of the other machines.Setting up of the test environment    The installation of the tool. user inputs etc. collect the data related to database etc. If the results are found adequate the execution should be continued for  Enhancement of the script to support multiple users  The variables like logins. • Users: The number of users who need to be activated during the execution of the scenario. 
• Ramping: in the live environment, not all the users log in to the application simultaneously, so user activation is staggered in batches. (Parameterization of logins is also essential, since in some applications no two users can log in with the same ID.)
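The ramping idea above can be sketched as a batch schedule; the user counts and interval are illustrative.

```python
# Sketch: activate virtual users in batches at a fixed interval instead of
# all at once, as in the ramping discussion above.

def ramp_schedule(total_users, batch_size, interval_s):
    """Return (start_time_in_seconds, users_in_batch) pairs."""
    schedule = []
    started = 0
    t = 0
    while started < total_users:
        batch = min(batch_size, total_users - started)
        schedule.append((t, batch))
        started += batch
        t += interval_s
    return schedule

print(ramp_schedule(25, 10, 30))  # [(0, 10), (30, 10), (60, 5)]
```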

The execution should then be repeated for different sets of users. At the end of every execution the results should be analysed:
• If the results are found satisfactory, the execution should be continued up to the decided load.
• If a stage is reached where the time taken by the server to respond to a transaction is above the acceptable limit, the inputs should be given to the developers.

Re-execution of the scenarios after the developers fine-tune the code
• After the fine-tuning, the scenarios should be re-executed for the specific set of users for which the response was inadequate.

Final report
At the end of the performance testing, a final report should be generated, comprising the following:
• Introduction – about the application.
• Objectives – as set/specified in the test plan.
• Approach – a summary of the steps followed in conducting the test.
• Analysis & Results – a brief explanation of the results and the analysis of the report, with comparison statistics if any.
• Conclusion – the report should conclude by stating whether the objectives set before the test were met or not.
• Annexure – can consist of a graphical representation of the data, with a brief description.

7 Life Cycle of Automation

Analyze the Application -> Select the Tool -> Identify the Scenarios -> Design/Record the Test Scripts -> Modify the Test Scripts -> Run the Test Scripts -> Find and Report the Defects

7.1 What is Automation?
• A software program that is used to test another software program. This is referred to as "automated software testing".

7.2 Why Automation?
• Avoid the errors that humans make when they get tired after multiple repetitions; the test program won't skip any test by mistake.
• Each future test cycle will take less time and require less human intervention.
• Required for regression testing.

7.3 Benefits of Test Automation:
• Allows more testing to happen
• Tightens/strengthens the test cycle
• Testing is consistent and repeatable
• Useful when new patches are released
• Makes configuration testing easier
• The test battery can be continuously improved

7.4 False benefits:
• Fewer tests will be needed
• It will be easier if it is automated
• It compensates for poor design
• No more manual testing

7.5 What are the different tools available in the market?
• WinRunner and Rational Robot
• SilkTest
• QA Run
• WebFT

8 Testing

8.1 Test Strategy
• A test strategy is a statement of the overall approach of testing to meet the business and test objectives. It identifies the methods, techniques and tools to be used for testing. It can be project-specific or organization-specific.
• It is a plan-level document and has to be prepared in the requirement stage of the project.
• Developing a test strategy which effectively meets the needs of the organization/project is critical to the success of the software development.
• An effective strategy has to meet the project and business objectives; defining the strategy upfront, before the actual testing, helps in planning the test activities.

A test strategy will typically cover the following aspects:
• Definition of test objectives

• Strategy to meet the specified objectives
• Overall testing approach
• Test environment
• Test automation requirements
• Metric plan
• Risk identification, mitigation and contingency plan
• Details of tool usage
• Specific document templates used in testing

8.2 Testing Approach
• The test approach will be based on the objectives set for testing, and will detail the way the testing is to be carried out:
− Types of testing to be done, viz. unit, integration and system testing
− The method of testing, viz. black-box, white-box, etc.
− Details of any automated testing to be done

8.3 Test Environment
• All the hardware and software requirements for carrying out testing shall be identified in detail. Any specific tools required for testing will also be identified.
• If the testing is going to be done remotely, then that has to be considered during estimation.

8.4 Risk Analysis
• Risk analysis should be carried out for the testing phase. Risk identification is accomplished by identifying causes-and-effects or effects-and-causes.
• The identified risks are classified into internal and external risks: internal risks are things that the test team can control or influence; external risks are beyond the control or influence of the test team.
• Once risks are identified and classified, the following activities will be carried out:
− Identify the probability of occurrence
− Identify the impact areas, if the risk were to occur
− Risk mitigation plan: how to avoid the risk
− Risk contingency plan: if the risk were to occur, what do we do?

8.5 Testing Limitations
• Exhaustive (total) testing is impossible in the present scenario.
• You cannot test a program completely: we can only test against the system requirements, which may not detect errors in the requirements themselves. Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.

• Even if you do find the last bug, you'll never know it.
• You will run out of time before you run out of test cases.
• You cannot test every path, every valid input or every invalid input.
• You cannot prove a program correct (because it isn't!).

8.6 Testing Objectives
• The purpose of testing is to find problems.
• The purpose of finding problems is to get them corrected.
• Time and budget constraints normally require very careful planning of the testing effort: a compromise between thoroughness and budget. Test results are used to make business decisions for release dates.

8.7 Testing Metrics
• Time: time per test case, time per test script, time per unit test, time per system test
• Sizing: function points, lines of code
• Defects: number of defects, defects per sizing measure, defects per phase of testing, defect origin, defect removal efficiency

Common metric formulas:

Defect Removal Efficiency = (number of defects found in producer testing) / (number of defects during the life of the product)

Size Variance = (actual size − planned size) / planned size

Delivery Variance = (actual end date − planned end date) / (planned end date − planned start date)

Effort Variance = (actual effort − planned effort) / planned effort

Productivity = size / effort

Review Efficiency = (number of defects found during the review time) / effort
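The ratio formulas above can be written out as functions; the inputs below are illustrative numbers, not data from the text.

```python
# The metric formulas above as functions.

def defect_removal_efficiency(found_in_testing, found_in_product_life):
    return found_in_testing / found_in_product_life

def size_variance(actual_size, planned_size):
    return (actual_size - planned_size) / planned_size

def effort_variance(actual_effort, planned_effort):
    return (actual_effort - planned_effort) / planned_effort

def productivity(size, effort):
    return size / effort

print(defect_removal_efficiency(90, 100))  # 0.9
print(size_variance(120, 100))             # 0.2
print(effort_variance(55, 50))             # 0.1
print(productivity(100, 50))               # 2.0
```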

8.7 Test Stop Criteria
• Minimum number of test cases successfully executed.
• Minimum number of defects uncovered (e.g. 16 defects per 1000 statements).
• Statement coverage reached.
• Testing becomes uneconomical.
• Reliability model satisfied.

8.8 Six Essentials of Software Testing
1. The quality of the test process determines the success of the test effort.
2. Prevent defect migration by using early life-cycle testing techniques.
3. The time for software testing tools is now.
4. A real person must take responsibility for improving the testing process.
5. Testing is a professional discipline requiring trained, skilled people.
6. Cultivate a positive team attitude of creative destruction.

8.9 What are the five common problems in the software development process?
Poor requirements: If the requirements are unclear, incomplete, too general and not testable, there will be problems.
Unrealistic schedules: If too much work is crammed into too little time, problems are guaranteed.
Inadequate testing: No one will know whether the system is good or not until customers complain or the system crashes.
Futurities: Requests to pile on new features after development is underway. Extremely common.
Miscommunication: If the developers do not know what is needed, or customers have erroneous expectations, problems are guaranteed.

8.10 Give me five common solutions to the problems that occur during software development.
Solid requirements: Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable.
Realistic schedules: Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
Adequate testing: Do testing that is adequate. Start testing early on, retest after fixes or changes, and plan for sufficient time for both testing and bug fixing.
Firm requirements: Avoid new features. Stick to the initial requirements as much as possible.
Communication: Require walkthroughs and inspections when appropriate.

8.11 What should be done when there is not enough time for testing?
Use risk analysis to determine where testing should be focused:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

8.12 How do you know when to stop testing?
Common factors in deciding when to stop are:
• Deadlines, e.g. release deadlines or testing deadlines
• Test cases completed with a certain percentage passed
• Test budget has been depleted
• Coverage of code, functionality or requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends

8.14 Why does software have bugs?
• Miscommunication or no communication
• Software complexity
• Programming errors
• Changing requirements
• Time pressures
• Poorly documented code

8.15 Different Types of Errors in Software
• User interface errors
• Error handling errors
• Boundary-related errors
• Calculation errors
• Initial and later states
• Control flow errors
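The stop factors in section 8.12 can be combined into a simple release checklist. This is a hypothetical sketch: the threshold values (85% coverage, 95% pass rate, zero open Sev1 defects) are invented examples, not figures from the document.

```python
# Illustrative stop-testing checklist based on section 8.12's common factors.
# All thresholds are assumptions for the example.

def should_stop_testing(coverage, pass_rate, open_sev1, budget_left, deadline_passed):
    """Return (stop?, reasons) given coverage/pass-rate fractions and counts."""
    reasons = []
    if deadline_passed:
        reasons.append("release deadline reached")
    if budget_left <= 0:
        reasons.append("test budget depleted")
    if coverage >= 0.85 and pass_rate >= 0.95 and open_sev1 == 0:
        reasons.append("coverage and pass-rate targets met with no open Sev1 defects")
    return (len(reasons) > 0, reasons)

stop, why = should_stop_testing(coverage=0.90, pass_rate=0.97,
                                open_sev1=0, budget_left=20, deadline_passed=False)
print(stop, why)
```

In practice the decision is a business judgment; a checklist like this only makes the trade-offs explicit.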

• Errors in handling or interpreting data
• Race conditions
• Load conditions
• Hardware errors
• Source, version and ID control errors
• Testing errors

9 Roles and Responsibilities

9.1 Test Manager
• Single point of contact between the onsite and offshore teams
• Prepare the project plan
• Test management and test planning
• Interact with the onsite lead, client QA manager and team management
• Work allocation to the team
• Review and approval of test plan / test cases
• Review of test scripts / code
• Approve completion of integration testing
• Test coverage analysis
• Co-ordination with onsite for issue resolution
• Monitoring the deliverables
• Verify readiness of the product for release through release review
• Obtain customer acceptance on the deliverables
• Performing risk analysis when required
• Reviews and status reporting
• Authorize intermediate deliverables and patch releases to the customer

9.2 Test Lead
• Resolves technical issues for the product group
• Provides direction to the team members
• Performs activities for the respective product group
• Ensure tests are conducted as per plan
• Reports status to the offshore test manager

9.3 Test Engineer
• Development of test cases and scripts
• Test execution
• Result capturing and analysing
• Conduct system / regression tests
• Follow the test plans, scripts, etc. as documented
• Check tests are correct before reporting software faults
• Defect reporting and status reporting
• Assess risk objectively
• Prioritize what you report
• Communicate the truth

9.4 How to Prioritize Tests
We cannot test everything.

There is never enough time to do all the testing you would like, so prioritize tests. Possible ranking criteria (all risk based):
• Test where a failure would be most severe.
• Test where failures would be most visible.
• Test the most complex areas.
• Test the areas with the most problems in the past.
• Test the areas changed most often.
• Test what is most critical to the customer's business, and take the help of the customer in understanding what is most important to him.
Note: If you follow the above, then whenever you stop testing, you have done the best testing in the time available.

10 How can we improve the efficiency in testing?
• Recent years have seen a lot of outsourcing in the testing area. It is the right time to think about and create processes to improve the efficiency of testing projects.
• The best team will produce the most efficient deliverables. The team should contain 55% hard-core test engineers, 30% domain knowledge engineers and 15% technology engineers.
• How did we arrive at these figures? Past projects have shown that 50-60 percent of test cases are written on the basis of testing techniques, 28-33% of test cases cover the domain-oriented business rules, and 15-18% are technology-oriented test cases.

[Chart: Testing vs. Domain vs. Technology – Testing 55%, Domain 30%, Technology 15%]

Software testability is simply how easily a computer program can be tested. A set of program characteristics that lead to testable software:
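The risk-based ranking criteria in section 9.4 can be sketched as a weighted score per test area, so the areas are attacked in priority order. Everything here — the score names, weights and sample areas — is an invented illustration, not part of the document's method.

```python
# Hypothetical risk-based prioritization: score each area (1-5) on the
# section 9.4 criteria and test in descending order of total score.

def priority_score(area):
    """Weighted sum of the risk-based ranking criteria for one test area."""
    weights = {"severity": 3, "visibility": 2, "change_freq": 2, "past_problems": 1}
    return sum(weights[k] * area[k] for k in weights)

areas = [
    {"name": "Payments",  "severity": 5, "visibility": 4, "change_freq": 3, "past_problems": 4},
    {"name": "Reporting", "severity": 2, "visibility": 3, "change_freq": 1, "past_problems": 2},
]
areas.sort(key=priority_score, reverse=True)
print([a["name"] for a in areas])   # highest-risk area is tested first
```

Whatever scoring scheme is used, the point of the note above holds: if testing stops early, the highest-risk areas have already been covered.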

11 Software Development Process Maturity Levels

CMM Levels
• CMM = Capability Maturity Model, developed by the SEI. It is a model of five levels of organizational 'maturity' that determine effectiveness in delivering quality software. It is geared to large organizations, such as large U.S. Defense Department contractors; however, many of the processes involved are appropriate to any organization, and if reasonably applied can be helpful.
• The Software Engineering Institute uses a conceptual framework based on industry best practices to assess the process maturity, capability and performance of a software development organization. This framework is called the Capability Maturity Model (CMM).
• Organizations can receive CMM ratings by undergoing assessments by qualified auditors.

The Capability Maturity Model defines five levels of process maturity:
1. Initial (worship the hero)
2. Repeatable (plan the work)
3. Defined (work the plan)
4. Managed (measure the work)
5. Optimized (work the measures)

The extent of implementation for a specific Key Process Area is evaluated by assessing:
1. Commitment to perform (policies and leadership)
2. Ability to perform (resources and training)
3. Activities performed (plans and procedures)
4. Measurement and analysis (measures and status)
5. Verification of implementation (oversight and quality assurance)

The Initial Process (Level 1)
The Initial (ad hoc) process level is unpredictable and often very chaotic. At this stage, the organization typically operates without formalized procedures, cost estimates, project plans or schedule tracking. Organizations at the ad hoc process level can improve their performance by instituting basic project controls. The most important are project management, management oversight, quality assurance and change control. The fundamental role of the project management system is to ensure effective control of commitments. This requires adequate preparation, clear responsibility, a public declaration and a dedication to performance.

The Repeatable Process (Level 2)
The repeatable process has one important strength that the ad hoc process does not: it provides control over the way the organization establishes its plans and commitments. This control provides such an improvement over the ad hoc process level that the people in the organization tend to believe they have mastered the software problem.

The Repeatable level has 6 Key Process Areas:
♦ Requirements management
♦ Software project planning
♦ Software project tracking and oversight
♦ Software subcontract management
♦ Software quality assurance
♦ Software configuration management

A quality assurance group is charged with assuring management that software work is done the way it is supposed to be done. A suitably disciplined software development organization must have senior management oversight. This includes review and approval of all major development plans prior to their official commitment. Also, a quarterly review should be conducted of facility-wide process compliance, installed quality performance, cost trends and computing service. Sufficient resources are needed to monitor the performance of all key planning, implementation and verification activities.

The Defined Process (Level 3)
With the defined process, the organization has achieved the foundation for major and continuing progress. A procedure is established for defining a software development process architecture, or development life cycle, that describes the technical and management activities required for proper execution.

The Defined level has 7 Key Process Areas:
♦ Organization process focus
♦ Organization process definition
♦ Training program
♦ Integrated software management
♦ Software product engineering
♦ Inter-group coordination
♦ Peer reviews

Level 4: Managed
Metrics are used to track productivity, processes and products. Project performance is predictable, and quality is consistently high.

The Managed level has 2 Key Process Areas:
♦ Quantitative process management
♦ Software process management

Level 5: Optimized
The focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

The Optimized level has 3 Key Process Areas:
♦ Defect prevention
♦ Technology change management
♦ Process change management

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed:
Level 1: 27%, Level 2: 39%, Level 3: 23%, Level 4: 4%, Level 5: 5%.
Only 120 organizations worldwide are at CMM Level 5.
Note: CMM is being used in over 5000 organizations worldwide.

CMM-I (Capability Maturity Model Integration): CMM-I is an advanced version of the Capability Maturity Model (CMM). It has four components:
1. Software Engineering
2. Systems Engineering
3. Integrated Product and Process Development
4. Software Acquisition
There is a significant difference between CMM and CMM-I. Two major additions are:
1. Risk Identification
2. Decision Analysis

SEI-CMM (Software Engineering Institute Capability Maturity Model): a framework for software development.

P-CMM (People Capability Maturity Model): focuses on people-related processes, such as performance management, staffing, recruitment, training and development, and the interoperability between different roles within a services organization.

ISO (International Organisation for Standardization)
• The ISO 9001:2000 standard concerns quality systems that are assessed by outside auditors, and it applies to many kinds of production and manufacturing organizations, not just software.
• It covers documentation, design, development, production, testing, installation, servicing and other processes.

Automated Testing Software testing that utilizes a variety of tools to automate the testing process and when the importance of having a person manually testing is 57 . If carried out by a skilled tester. Sometimes ad hoc testing is referred to as exploratory testing. this will be the only kind of testing that can be performed. Sometimes. The full set of standards consists of ♦ IEEE (Institute of Electrical and Electronics Engineers') IEEE/ANSI Standard 829: IEEE Standard for Software Test Documentation IEEE/ANSI Standard 1008: IEEE Standard of Software Unit Testing IEEE/ANSI Standard 730: IEEE Standard for Software Quality Assurance Plans. if testing occurs very late in the development cycle. Testing Types Term Acceptance Testing Definition Testing the system with the intent of confirming readiness of the product and customer acceptance. Alpha Testing Testing after code is mostly complete or contains most of the functionality and prior to users being involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department. it can often find problems that are not caught in regular testing. Q9001-2000 Management Requirements Q9000-2000 Management Fundamentals Vocabulary - Quality Systems: Quality Systems: and ♦ - ♦ Q9004-2000 Quality Management Systems: Guidelines for Performance Improvements. With some projects this type of testing is carried out as an adjunct to formal testing. Ad Hoc Testing Testing without a formal test plan or outside of a test plan. Sometimes a select group of users are involved.

Beta Testing: Testing after the product is code complete. Betas are often widely distributed, or even distributed to the public at large, in hopes that they will buy the final product when it is released.

Black Box Testing: Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations, as well as on different operating systems and software.

Functional Testing: Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.

Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work, or where the government regulates the products, as in medical devices.
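The black-box definition above — test cases derived only from a specification, never from the code — can be illustrated with a tiny example. The function under test is a stand-in invented for this sketch; the cases come solely from the stated rule, exactly as a black-box test would.

```python
# Illustrative black-box test. Specification: "a leap year is divisible by 4,
# except century years, which must be divisible by 400." The cases below are
# derived from that sentence alone, with no knowledge of the implementation.

def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Specification-derived cases: typical values plus the tricky century rule.
spec_cases = {2024: True, 2023: False, 1900: False, 2000: True}
for year, expected in spec_cases.items():
    assert is_leap_year(year) == expected, f"spec violated for {year}"
print("all black-box cases passed")
```

Note how the 1900/2000 cases probe the spec's exception clause — a fault of omission there would be caught without ever reading the code.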

Installation Testing: Testing with the intent of determining if the product will install on a variety of platforms, and how easily it installs.

Integration Testing: Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. Testing completed as a part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)

Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Performance Testing: Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

Pilot Testing: Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Typically involves many users, is conducted over a short period of time, and is tightly controlled. Often considered a Move-to-Production activity for ERP releases, or a beta test for commercial products. (see beta testing)

Regression Testing: Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.

Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

System Integration Testing Testing a specific hardware/software installation. This is typically performed on a COTS (commerical off the shelf) system or any other system comprised of disparent parts where custom configurations and/or unique installations are the norm. Bug Report Template 60 . Testing White Box Testing Testing in which the software tester has knowledge of the inner workings. The organization and management of individuals or groups doing this work is not relevant. or at least its purpose. This term is often applied to commercial products such as internet applications. User Acceptance See Acceptance Testing. structure and language of the software. 12. (contrast with independent verification and validation) Stress Testing Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner.

Sample Bug Report

Bug Report:
• Bug ID: Auto generated
• Title: Password field is not encrypted
• Area: Login Window
• Type: Code defect
• Environment: OS: Windows 2000; Browser: IE 6.0
• Test Case ID: 1103
• Build: build001
• Opened by: Abc
• Opened on: 2.2.22
• Assigned to:
• Status: Active
• Attachment: screensnap.doc

Steps to Repro:
• Open an Internet Explorer window
• Enter www.hotmail.com in the address bar
• Click on the Go button
• Enter the login (say xyz@hotmail.com) in the login text box
• Enter text in the password text box

Expected Result:
• The password field should be encrypted and the text should be displayed as '*'.

Actual Result:
• The text appeared in clear in the password text box.

Other Items of the Bug Template:
• Resolved by
• Resolve date
• Build
• Resolution
• Closed by
• Closed date
• Attachments
• Documents
• Related bugs

Status Report:

Test Case Status:
Total test cases: 50; executed: 40; passed: 34; failed: 6; blocked: 0

Bug Statistics:
           Sev1  Sev2  Sev3  Sev4  Total
Active       0     2     0     1      3
Resolved     0     2     1     0      3
Closed       1     3     4     1      9
Total        1     7     5     2     15
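A bug tracker stores each report with roughly the fields of the template above. The sketch below models them as a small data structure; the field names follow the sample report, while the types, defaults and status flow are assumptions for illustration.

```python
# Hypothetical data-structure sketch of the bug-report template above.
# Types, defaults and the Active -> Resolved -> Closed flow are assumptions.

from dataclasses import dataclass, field

@dataclass
class BugReport:
    bug_id: int                  # auto-generated in a real tracker
    title: str
    area: str
    severity: int                # 1 (most severe) .. 4 (minor)
    steps_to_repro: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    status: str = "Active"       # Active -> Resolved -> Closed

bug = BugReport(
    bug_id=1103,
    title="Password field is not encrypted",
    area="Login Window",
    severity=2,
    steps_to_repro=["Open browser", "Go to login page", "Type a password"],
    expected_result="Password displayed as '*'",
    actual_result="Password displayed in clear text",
)
print(bug.status)   # a newly opened defect starts in the Active state
```

Aggregating such records by severity and status is exactly how the bug-statistics table in the status report is produced.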

analysis.places where the specification is not fulfilled. or evaluation. just inside/outside boundaries. Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test. Also known as closed-box testing. mini-mum. Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Checksheet: A form used to record data as it is gathered. Affinity Diagram: A group process that takes large amounts of language data. This is also known as glass-box or open-box testing. such as a list developed by brainstorming. Bottom-up Testing: An integration testing technique that tests the low-level components first using test drivers for those components that have not yet been developed to call the low-level components for test. Cause-effect Graphing: A testing technique that aids in selecting. and ensures that resources are conserved. policies. 62 . Brainstorming: A group process for generating creative and diverse ideas. and error values. it serves as the “eyes and ears” of management. and divides it into categories. in a systematic way. and procedures. Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause. Structural testing is sometimes referred to as clear-box testing. Audit: An inspection/assessment activity that verifies compliance with plans. Black box testing indicates whether or not a program meets required specifications by spotting faults of omission -.13 Testing Glossary Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the system. Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system. Audit is a staff function. 
It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications. Automated Testing: That part of software testing that is assisted with software tool(s) that does not require operator input. Alpha Testing: Testing of a software product or system conducted at the developer’s site by the end user. Clear-box Testing: Another term for white-box testing. Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once. since “white boxes” are considered opaque and do not really permit visibility into the code. Black-box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. typical values. a high-yield set of test cases that logically relates causes to effects to produce test cases. Boundary values include maximum.
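The boundary value analysis entry above lists exactly which inputs to pick: minimum, maximum, just inside/outside the boundaries, typical and error values. As an illustrative sketch (the function and range are invented), those selections for an integer input range can be generated mechanically:

```python
# Illustrative boundary-value selection for an integer input range [lo, hi],
# following the glossary entry: extremes, just inside/outside, typical value.

def boundary_values(lo, hi):
    """Return the classic boundary-value-analysis test inputs for [lo, hi]."""
    return {
        "min": lo, "min_plus": lo + 1, "below_min": lo - 1,   # lower boundary
        "max": hi, "max_minus": hi - 1, "above_max": hi + 1,  # upper boundary
        "typical": (lo + hi) // 2,                            # nominal value
    }

print(boundary_values(1, 100))
```

The out-of-range values (`below_min`, `above_max`) double as the "error values" the entry mentions — inputs the program should reject gracefully.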

Client: The end user that pays for the product received, and receives the benefit from the use of the product.

Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

Customer (end user): The individual or organization, internal or external to the producing organization, that receives the product.

Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.

Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points of executing the source program.

Debugging: The act of attempting to determine the cause of the symptoms of malfunctions detected by testing or by frenzied user complaints.

Defect: NOTE: Operationally, it is useful to work with two definitions of a defect: 1) From the producer's viewpoint: a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product. 2) From the end user's viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.

Defect Analysis: Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

Defect Density: Ratio of the number of defects to program length (a relative number).

Desk Checking: A form of manual static analysis, usually performed by the originator. Source code, documentation, etc. is visually checked against requirements and standards.

Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.

Dynamic Testing: Verification or validation performed by executing the system's code.

Error: 1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and 2) a mental mistake made by a programmer that may result in a program fault.

Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

Execution: The process of a computer carrying out an instruction or instructions of a computer program.
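Two of the entries above are simple computations. Defect density is defects divided by program length (commonly normalized per thousand lines of code), and cyclomatic complexity is conventionally computed from the control-flow graph as M = E − N + 2P (edges, nodes, connected components). The sample figures below are invented for illustration.

```python
# Sketch of the two size-related glossary metrics. Sample numbers are
# hypothetical; the KLOC normalization is a common convention, not mandated
# by the glossary entry.

def defect_density(defects, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's count of linearly independent paths: M = E - N + 2P."""
    return edges - nodes + 2 * components

print(defect_density(30, 15000))    # 2.0 defects per KLOC
print(cyclomatic_complexity(9, 7))  # a flow graph with 9 edges, 7 nodes -> 4
```

A rising defect density across releases is the kind of signal the defect analysis entry says should direct process improvement efforts.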

Exhaustive Testing: Executing the program with all possible combinations of values for program variables.

Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

Failure-directed Testing: Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.

Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

Fault-based Testing: Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults — typically, frequently occurring faults.

Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.

Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.

Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Heuristics Testing: Another term for failure-directed testing.

Histogram: A graphical description of individual measured values in a data set, organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set, along with information regarding the average and variation.

Hybrid Testing: A combination of top-down testing with bottom-up testing of prioritized or available components.

Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

Infeasible Path: A program statement sequence that can never be executed.

Inputs: Products, services, or information needed from suppliers to make a process work.

Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.

2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

Integration: The process of combining software components or hardware components, or both, into an overall system.

Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested, or additional processes running concurrently with the software being tested on the same platform.

IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.

Mean: A value derived by adding several quantities and dividing the sum by the number of these quantities.

Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained by measuring.

Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.

Non-intrusive Testing: Testing that is transparent to the software under test, i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
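The mutation testing entry above can be made concrete with a miniature example. A "mutant" is the program with one small change (here, one operator flipped); a test set is considered thorough for that mutant if at least one test fails against it — it "kills" the mutant. The functions and test cases are invented for this sketch.

```python
# Illustrative mutation test: can this test set discriminate the program
# from a slight variant, as the glossary entry describes?

def max_of(a, b):            # original program
    return a if a >= b else b

def max_of_mutant(a, b):     # mutant: '>=' changed to '<='
    return a if a <= b else b

def run_tests(fn):
    """Return True only if every test case passes against implementation fn."""
    cases = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]
    return all(fn(*args) == expected for args, expected in cases)

print(run_tests(max_of))         # True  -> original passes
print(run_tests(max_of_mutant))  # False -> the mutant is killed
```

A mutant that survives every test would reveal a gap in the test set — the measure of thoroughness the technique provides.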

Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.
Operational Testing: Testing performed by the end user on software in its normal operating environment.
Outputs: Products, services, or information supplied to meet end user needs.
Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.
Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.
Peer Reviews: A methodical examination of software work products by the producer's peers to identify defects and areas where changes are needed.
Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).
Problem: Any deviation from defined standards. Same as defect.
Procedure: The step-by-step method followed to ensure that standards are met.
Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.
Process Improvement: To change a process to make the process produce a given product faster, more economically, or of higher quality. Some process changes may require the product to be changed. The defect rate must be maintained or reduced.
Product: The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.
Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.
Productivity: The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).
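Path coverage testing, as defined above, groups paths into classes and tests one representative of each. The sketch below illustrates this with a made-up function: `shipping_cost` and its rates are hypothetical, chosen only so that two independent branches yield four path classes.

```python
# Path coverage sketch: two independent branches give four logical paths;
# one test input is chosen per path class.

def shipping_cost(weight, express):
    cost = 5.0 if weight <= 1.0 else 9.0   # branch 1: light vs heavy parcel
    if express:                            # branch 2: optional express surcharge
        cost += 4.0
    return cost

# One representative (input, expected) pair per path class:
path_cases = {
    "light/normal":  ((0.5, False), 5.0),
    "light/express": ((0.5, True),  9.0),
    "heavy/normal":  ((2.0, False), 9.0),
    "heavy/express": ((2.0, True), 13.0),
}

for name, (args, expected) in path_cases.items():
    assert shipping_cost(*args) == expected, name
```

Note how quickly paths multiply: each added independent branch doubles the number of classes, which is why grouping paths into classes is essential in practice.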

Proof Checker: A program that checks formal proofs of program properties for logical correctness.
Prototyping: Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or the design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.
Qualification Testing: Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.
Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to "quality means meets requirements." NOTE: Operationally, the word quality refers to products.
Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.
Quality Control (QC): The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.
Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced.
Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.
Reliability: The probability of failure-free operation for a specified period.
Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.
Review: A way to use the diversity and power of a group of people to point out needed improvements in a product or confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.
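Random testing, as defined above, draws inputs at random and needs a way to judge the outputs; a common approach (assumed here, not part of the definition) is to compare against a trusted oracle. `my_sort` is a hypothetical component under test, with Python's built-in `sorted()` as the oracle.

```python
# Random testing sketch: feed randomly chosen inputs to the component under
# test and compare its output against a trusted oracle.
import random

def my_sort(xs):
    """Component under test: a simple insertion sort."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

random.seed(42)  # fixed seed so any failure is reproducible
for _ in range(200):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert my_sort(data) == sorted(data), data
```

Fixing the random seed keeps the run repeatable; letting it vary per run (and logging it) trades repeatability for broader input coverage over time.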

Run Chart: A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured, for the purpose of suggesting an assignable cause rather than random variation.
Scatter Plot (correlation diagram): A graph designed to show whether there is a relationship between two changing factors.
Semantics: 1) The relationship of characters or a group of characters to their meanings, independent of the manner of their interpretation and use. 2) The relationships between symbols and their meanings.
Software Characteristic: An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).
Software Feature: A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).
Software Tool: A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.
Standardize: Procedures are implemented to ensure that the output of a process is maintained at a desired level.
Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.
Statement Coverage Testing: A test method satisfying coverage criteria that requires each statement be executed at least once.
Statement of Requirements: The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.
Static Testing: Verification performed without executing the system's code. Also called static analysis.
Statistical Process Control: The use of statistical techniques and tools to measure an ongoing process for change or stability.
Structural Coverage: This requires that each pair of module invocations be executed at least once.
Structural Testing: A testing method where the test data is derived solely from the program structure.
Stub: A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.
Supplier: An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.
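A stub, per the definition above, minimally simulates a called component that is not yet integrated. The sketch below assumes a hypothetical design where the collaborator is passed in as a parameter; `report_total`, `tax_service_stub`, and the flat 10% rate are all invented for illustration.

```python
# Stub sketch: report_total() depends on a tax service that is not yet
# integrated, so a minimal stub stands in for it during top-down testing.

def tax_service_stub(amount):
    """Minimal stand-in for the unintegrated tax component: flat 10%."""
    return amount * 0.10

def report_total(amount, tax_fn):
    """High-level component under test; its collaborator is injected."""
    return round(amount + tax_fn(amount), 2)

# The high-level logic can be tested before the real tax service exists.
assert report_total(100.0, tax_service_stub) == 110.0
```

When the real component is integrated later, the same test can be rerun with the real function in place of the stub, which is exactly the progression top-down testing describes.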

Syntax: 1) The relationship among characters or groups of characters, independent of their meanings or the manner of their interpretation and use; 2) the structure of expressions in a language; and 3) the rules governing the structure of the language.
System: A collection of people, machines, and methods organized to accomplish a set of specified functions.
System Simulation: Another name for prototyping.
System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.
Technical Review: A review that focuses on the content of the technical material being reviewed.
Test Bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component. 2) A suite of test programs used in conducting the test of a component or system.
Test Case: The definition of test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes an identified set of information about observable states, conditions, events, and data, including inputs and expected outputs.
Test Development: The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.
Test Executive: Another term for test harness.
Test Harness: A software tool that enables the testing of software components; it links test capabilities to perform specific tests, accepts program inputs, simulates missing components, compares actual outputs with expected outputs to determine correctness, and reports discrepancies.
Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.
Test Plan: A formal or informal plan to be followed to assure the controlled testing of the product under test.
Test Procedure: The formal or informal procedure that will be followed to execute a test. This is usually a written document that allows others to execute the test with a minimum of training.
Testing: Any activity aimed at evaluating an attribute or capability of a program or system to determine that it meets its required results. The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.
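A test harness can be tiny: the sketch below follows the definition above by accepting program inputs, comparing actual with expected outputs, and reporting discrepancies. `run_cases` and the sample cases are illustrative inventions, not a real tool.

```python
# Test-harness sketch: drive a component with tabulated test cases,
# compare actual vs. expected output, and report discrepancies.

def run_cases(component, cases):
    """Each case is (name, args, expected); returns a list of failure reports."""
    failures = []
    for name, args, expected in cases:
        actual = component(*args)
        if actual != expected:
            failures.append(f"{name}: expected {expected!r}, got {actual!r}")
    return failures

cases = [
    ("adds",       (2, 3),  5),
    ("adds zeros", (0, 0),  0),
    ("negative",   (-2, 5), 3),
]

failures = run_cases(lambda a, b: a + b, cases)
assert failures == []   # a correct component produces no discrepancy reports
```

Each row of `cases` is also a minimal test case in the sense defined above: an identified name plus inputs and expected outputs.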

Top-down Testing: An integration testing technique that tests the high-level components first, using stubs for lower-level called components that have not yet been integrated and that simulate the required actions of those components.
Unit Testing: The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or whether its implemented structure matches the intended design structure.
User: The end user that actually uses the product received.
V-Diagram (model): A diagram that visualizes the order of testing activities and their corresponding phases of development.
Validation: The process of evaluating software to determine compliance with specified requirements.
Verification: The process of evaluating the products of a given software development activity to determine correctness and consistency with respect to the products and standards provided as input to that activity.
Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as when walking through code line by line with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.
White-box Testing: Testing approaches that examine the program structure and derive test data from the program logic. Also known as clear-box, glass-box, or open-box testing. White-box testing determines whether program-code structure and logic are faulty. The test is accurate only if the tester knows what the program is supposed to do; he or she can then see if the program diverges from its intended goal. White-box testing does not account for errors caused by omission, and all visible code must also be readable.
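Unit testing and white-box testing combine naturally: the unit is a single function, and the cases are chosen from its internal logic so that every branch is driven. This sketch uses Python's standard `unittest` module; `leap_year` is an illustrative example function, not from the glossary.

```python
# Unit-testing sketch with the standard unittest module; the cases are chosen
# white-box style to exercise every branch of leap_year's logic.
import unittest

def leap_year(y):
    """A year is a leap year if divisible by 4, except centuries not divisible by 400."""
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))   # divisible by 100 but not by 400

    def test_divisible_by_400(self):
        self.assertTrue(leap_year(2000))

    def test_ordinary_year(self):
        self.assertFalse(leap_year(2023))

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Note that these tests still cannot catch errors of omission (say, a forgotten calendar rule): white-box cases are derived from the code that exists, which is exactly the limitation stated in the definition above.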