WHAT IS SOFTWARE TESTING ?
Testing is executing a program with the intention of finding defects.
Fault: a condition that causes the software to fail to perform its required function.
Error: the difference between the actual output and the expected output.
Failure: the inability of a system or component to perform the required function according to its specification.
WHY S/W TESTING ?
• To discover defects.
• To avoid the user detecting problems.
• To prove that the s/w has no defects.
• To learn about the reliability of the software.
• To ensure that the product works as the user expects.
• To stay in business.
• To avoid being sued by customers.
• To detect defects early, which helps in reducing the cost of fixing those defects.
WHY EXACTLY IS TESTING DIFFERENT FROM QA/QC ?
Testing is the process of creating, implementing and evaluating tests. Testing measures software quality. Testing can find faults; when they are removed, software quality is improved. Simply put, testing means "quality control".
Quality control measures the quality of a product. Quality assurance measures the quality of the processes used to create a quality product.
Quality Control: the process of inspections, walkthroughs and reviews.
Inspection: an inspection is more formalized than a walkthrough, typically with a group of people including a moderator, a mediator, a reader and a recorder to take notes. The subject of the inspection is typically a document, such as a requirements specification or a test plan, and the purpose is to find problems and see what is missing, not to fix anything. The primary purpose of an inspection is to detect defects at different stages during a project.
Walkthrough: an informal meeting. The purpose of the meeting is defined, but the members come without any preparation. The author describes the work product in an informal meeting to peers or superiors to get feedback, or to inform them about or explain the work product.
Reviews: review means re-verification. Reviews have been found to be extremely effective for detecting defects, improving productivity and lowering costs. They provide good checkpoints for management to study the progress of a particular project. Reviews are also a good tool for ensuring quality control. In short, they have been found to be extremely useful by a diverse set of people, have found their way into the standard management and quality-control practice of many institutions, and their use continues to grow.
Quality Assurance: quality assurance measures the quality of the processes used to create a quality product. Software QA involves the entire s/w development process: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
AREAS OF TESTING:
1. Black box testing
2. White box testing
3. Grey box testing
1. Black Box Testing
Black box testing is also called functionality testing. In this testing, testers are asked to test the correctness of the functionality with the help of inputs and outputs. Black box testing is not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
Approach:
• Equivalence Class
• Boundary Value Analysis
• Error Guessing
Equivalence Class
• For each piece of the specification, generate one or more equivalence classes.
• Label the classes as "valid" or "invalid".
• Generate one test case for each invalid equivalence class.
• Generate test cases that cover as many valid equivalence classes as possible.
Eg: In LIC there are different types of policies:
Policy type   Age
1             0-5 years
2             6-12 years
3             13-21 years
4             21-40 years
5             40-60 years
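The partitioning steps above can be sketched in code. This is a minimal illustration, not from the text: the function name and return values are invented, and since the source table's age bands overlap at 21 and 40, the sketch assumes non-overlapping bands (22-40 and 41-60) for the last two policies.

```python
# Equivalence-class sketch for the LIC policy table above.
# Bands are assumed non-overlapping; names are illustrative only.

def policy_for_age(age):
    """Return the policy type for an age, or None for an invalid input."""
    bands = [(0, 5, 1), (6, 12, 2), (13, 21, 3), (22, 40, 4), (41, 60, 5)]
    for low, high, policy in bands:
        if low <= age <= high:
            return policy
    return None  # invalid equivalence class (e.g. negative age, age > 60)

# One representative test value per valid equivalence class:
valid_cases = {3: 1, 10: 2, 18: 3, 30: 4, 50: 5}
# One test case per invalid equivalence class:
invalid_cases = [-1, 61]

for age, expected in valid_cases.items():
    assert policy_for_age(age) == expected
for age in invalid_cases:
    assert policy_for_age(age) is None
```

Each assert exercises exactly one class, which is the point of the technique: one representative value stands in for every value in its class.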
Here we test each and every point. For example, for the 0-5 band we write test cases for 0, 1, 2, 3, 4 and 5. We divide which ages come under which policy and write TCs for the valid and invalid classes.
Boundary Value Analysis
• Generate test cases for the boundary values:
• Minimum value, minimum value + 1, minimum value - 1
• Maximum value, maximum value + 1, maximum value - 1
Eg: In LIC,
When a user applies for type-5 insurance, the system asks for the age of the customer. Here the age limit is from 40 yrs to 60 yrs, so we test just the boundary values of 40-60:
Minimum = 40
Minimum + 1 = 41
Minimum - 1 = 39
Maximum = 60
Maximum + 1 = 61
Maximum - 1 = 59
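The six boundary values listed above can be sketched as checks against a validation function. The function `is_valid_age` is a hypothetical stand-in for the system's age check, not something from the text.

```python
# Boundary-value sketch for the type-5 policy (ages 40-60) described above.
# is_valid_age is an invented validation function for illustration.

def is_valid_age(age, minimum=40, maximum=60):
    return minimum <= age <= maximum

# The six boundary values: min, min+1, min-1, max, max+1, max-1.
boundary_cases = {
    39: False,  # minimum - 1, just outside
    40: True,   # minimum
    41: True,   # minimum + 1
    59: True,   # maximum - 1
    60: True,   # maximum
    61: False,  # maximum + 1, just outside
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```

Off-by-one defects cluster at exactly these six values, which is why the technique tests the boundaries rather than arbitrary interior points.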
Here we write test cases for this step only.
Error Guessing: generate test cases against the specification.
Eg: The type-5 policy accepts ages 40-60 only, but here we write test cases that go against that, such as 30, 20, 70 and 65.
WHITE BOX TESTING:
White box testing is also called structural testing. It is based on knowledge of the internal logic of the application's code. Tests are based on coverage of code: statements, branches, paths, conditions and loops. Structure = 1 entry + 1 exit, with certain constraints.
Why do we go for white box testing ? Black box testing finds defects only from the outside; white box testing covers internal code paths that black box tests may miss.
Approach:
• Basis Path Testing
  o Cyclomatic complexity (McCabe complexity)
• Structure Testing
  o Condition testing
  o Dataflow testing
  o Loop testing
GREY BOX TESTING
This is a combination of both black box and white box testing. The tester should have knowledge of both the internals and the externals of the function: good knowledge of white box testing and complete knowledge of black box testing. Grey box testing is especially important with web and internet applications, because the internet is built around loosely integrated components that connect via relatively well-defined interfaces.
PHASES OF TESTING – V MODEL
'V' stands for verification and validation. This model defines a co-existence relation between the development process and the testing process: each development work product is verified, and a matching test phase validates it.
BRS (verification) -- Acceptance Test (validation)
SRS (verification) -- System Test (validation)
Design (verification) -- Integration Test (validation)
Build (verification) -- Unit Test (validation)
It is a suitable model for large-scale companies to maintain the testing process.
Drawback: cost and time.
PHASES ARE
1) Unit Testing
2) Integration Testing
3) System Testing
4) User Acceptance Testing
1) Unit Testing
The main goal is to test the internal logic of the module. In most cases the developer will do this; in unit testing both black box and white box testing are conducted by developers. It depends on the LLD and follows white box testing techniques: basic path testing, loop coverage and program technique testing. All field-level validations are expected to be tested at this stage, and the tester is supposed to check each and every micro function.
Approach:
i. Error guessing
ii. Equivalence class
iii. Boundary value analysis
2) Integration Testing
The primary objective of integration testing is to discover errors in the interfaces between modules / sub-systems. Many unit-tested modules are combined into sub-systems; the goal is to see if the modules can be integrated properly. It follows white box testing techniques to verify the coupling of the corresponding modules, and is conducted by test engineers.
Approach:
i. Top-down approach: testing the main module without the completed sub modules. The temporary programs we use instead of sub modules are called stubs. This is used for new systems.
ii. Bottom-up approach: testing sub modules without the completed main module. The temporary programs we use instead of the main module are called drivers. This is used for existing systems.
3) System Testing
The primary objective of system testing is to discover errors when the system is tested as a whole; it is also called end-to-end testing. The main goal is to see if the s/w meets its requirements. The tester is expected to test from login to logout, covering various business functionalities across the app server and database server. It depends on the SRS and follows black box testing techniques.
Approach:
• Identify the end-to-end business life cycle.
• Design the test data.
• Execute the business test cases.
• Optimize the end-to-end business life cycle.
4) Acceptance Testing
Acceptance testing is done to get the acceptance from the client. The client uses the system against the business requirements, and the client side tests with the real-life data of the client.
Approach:
• Build a team with real-time users, functional users and developers.
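The stubs and drivers mentioned under integration testing can be sketched as follows. This is a minimal illustration with invented module names: a "main module" that delegates to an interest-calculating sub-module.

```python
# Sketch of a stub (top-down) and a driver (bottom-up) for integration
# testing. All function names and values here are illustrative only.

def interest_stub(principal, rate):
    """Stub: stands in for an unfinished sub-module (top-down approach)."""
    return 0.0  # canned value, so the main module can be tested now

def main_module(principal, rate, interest_fn):
    """Main module under test; its sub-module is passed in."""
    return principal + interest_fn(principal, rate)

def real_interest(principal, rate):
    """A finished sub-module, tested bottom-up via a driver."""
    return principal * rate

def driver():
    """Driver: temporary caller that exercises the sub-module alone."""
    return real_interest(1000.0, 0.05)

# Top-down: the main module is tested with the stub in place of the
# sub-module, so its own logic can be verified before integration.
assert main_module(1000.0, 0.05, interest_stub) == 1000.0
# Bottom-up: the sub-module is tested through the driver, with no
# main module needed yet.
assert driver() == 50.0
```

The stub returns a canned value so the caller can be exercised early; the driver supplies the call the missing main module would have made.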
WHAT IS A TEST CASE ?
• A test case is a description of what is to be tested, what data to be used and what actions to be done to check the actual result against the expected result.
• A test case is a document that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly.
• A test case is simply a test with formal steps and instructions.
• Test cases are valuable because they are repeatable and reproducible under the same/different environments.
WHAT ARE THE CHARACTERISTICS OF A GOOD TEST CASE ?
A good test case should have the following:
• TC should start with "what you are testing".
• TC should be independent.
• TC should not contain "if" statements.
• TC should be uniform. Eg: <Action Buttons>, "Links".
WHAT ARE THE ITEMS OF A TEST CASE ?
Test case items are:
• Test case number (a unique number)
• Pre-condition (the assertion (declaration) about the input condition is called the pre-condition)
• Description (what data to be used, what data to be provided and what actions to be done)
• Expected output (the assertion about the expected final state of the program is called the post-condition)
• Actual output (whatever the system displays)
• Status (pass/fail)
• Remarks
CAN THESE TEST CASES BE REUSED ?
Yes, test cases can be reused. Test cases developed for functionality testing can be used for integration/system/regression testing, and for performance testing with few modifications.
ARE THERE ANY ISSUES TO BE CONSIDERED ?
Yes, there are a few issues:
• All the TCs should be traceable.
• There should not be too many duplicate test cases.
• Outdated test cases should be cleared off.
• All the test cases should be executable.
FURRPSC MODEL: (Types of Testing)
F - Functionality Testing
U - Usability Testing
R - Reliability Testing
R - Regression Testing
P - Performance Testing
S - Scalability Testing
C - Compatibility Testing
1) Functionality Testing
To confirm that all the requirements are covered. Functional requirements specify which output should be produced from a given input; they describe the relationship between the input and output of the system. A major part of black box testing is functional testing.
Eg: Here we test:
• Input domain -- whether the application takes the right input values or not.
• Error handling -- whether the application reports wrong data or not.
• URL checking -- (for web applications only) whether all links are working correctly or not.
Testing approach:
• Equivalence class
• Boundary value analysis
• Error guessing
2) Usability Testing
To test the ease (comfort, facility) and user-friendliness of the system.
Approach: qualitative and quantitative, with a heuristic checklist.
Classifications of checking:
• Accessibility
• Clarity of communication
• Consistency
• Navigation
• Design & maintenance
• Visual representation
Qualitative approach:
i. Each and every function should be available from all the pages of the site.
ii. The user should be able to submit a request within 4-5 actions.
iii. A confirmation message should be displayed for each submit.
Quantitative approach: the average verdict of 10 different people should be considered as the final result.
Eg: Some people may feel the system is more user-friendly if the submit button is on the left side of the screen; at the same time, others may feel it is better if the submit button is placed on the right side.
3) Reliability Testing
Defines how well the software meets its requirements. The objective is to find the mean time between failures / the time available under a specific load pattern, and the mean time for recovery.
Eg: 23 hours/day availability and 1 hour for recovery (system). Citibank has 4 servers in each region; it changes servers every 6 hrs.
Approach: RRT (Ration Real-time tool)
4) Regression Testing
To check that the new functionalities have been incorporated correctly without failing the existing functionalities. The bugs need to be communicated and assigned to developers who can fix them. After a problem is resolved, the fixes should be re-tested, and a determination made regarding requirements for regression testing, to check that the fixes did not create problems elsewhere.
Approach: automation tool.
5) Performance Testing
The primary objective of performance testing is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volume on a production-sized database.
Objectives:
• Assessing the system capacity for growth.
• Identifying weak points in the architecture.
• Detecting obscure bugs in the software.
• Emulating peak load.
Performance parameters:
• Request-response time
• Transactions per second
• Turnaround time
• Page download time
• Throughput
Approach: classification of performance testing:
• Load test
• Volume test
• Stress test
Stress testing: finding the break point of the application, i.e. the maximum number of users that the application can handle at the same time. To find the threshold point we may use this test.
Approach: RCQE (repeatedly working on the same functionality).
Volume testing: execution of our application under huge amounts of resources is called volume testing.
Approach: data profile, critical query execution.
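The performance parameters listed above (response time, throughput) can be sketched with a simple timing loop. This is a toy illustration: `handle_request` stands in for a real transaction, and the sleep duration is an invented placeholder.

```python
# Minimal sketch of measuring response time and throughput under
# repeated load. handle_request simulates a real transaction.

import time

def handle_request():
    time.sleep(0.001)  # placeholder for real transaction work
    return "ok"

def measure(n_requests):
    """Run n requests back-to-back and derive basic performance numbers."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request()
    elapsed = time.perf_counter() - start
    return {
        "requests": n_requests,
        "total_seconds": elapsed,
        "avg_response": elapsed / n_requests,       # request-response time
        "throughput_per_sec": n_requests / elapsed, # transactions/sec
    }

stats = measure(50)
# The average response time cannot be shorter than the simulated work.
assert stats["avg_response"] >= 0.001
```

A real load test would issue requests concurrently and against a production-sized database; the arithmetic for deriving the parameters stays the same.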
Load testing: testing with the load that the customer wants (not necessarily all at the same time). The load on the application is increased gradually, checking the performance, until the customer's required load is reached (the customer will give the maximum number of users, browsers, etc.).
Approach: performance tools, load profile.
6) Scalability Testing
To find the maximum number of users the system can handle.
Classification:
• Network scalability
• Server scalability
• Application scalability
7) Compatibility Testing
How a product performs over a wide range of hardware, software and network configurations, and how to isolate the specific problems.
• Whether our application runs on all customer-expected platforms or not. Platforms means the required system software to run our application, such as the operating system, compilers, interpreters and browsers.
Environment selection:
• Understanding the end user's application environment.
• Selection of the operating system.
• Importance of selecting both old browsers and new browsers.
Test bed creation: partition of the hard disk.
WHAT IS THE SOFTWARE LIFE CYCLE ?
The life cycle begins when an application is first conceived (imagined) and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration testing, maintenance, updates, re-testing and phase-out.
WHEN SHOULD WE START DESIGNING TEST CASES / TESTING ?
The V model is the most suitable way to decide when to start writing test cases and conducting testing.
WHY SHOULD WE PRIORITIZE TESTS ?
We can't test everything. There is never enough time to do all the testing you would like, so what testing should you do? Prioritize tests, so that whenever you stop testing, you have done the best testing in the time available.
• Time and budget constraints normally require very careful planning of the testing effort.
• Compromise between thoroughness and budget.
• Test results are used to make business decisions for release dates.
Tester responsibilities:
• Follow the test plans, scripts etc., as documented.
• Report faults objectively and factually.
• Check that tests are correct before reporting s/w faults.
• Assess risk objectively.
• Prioritize what you report.
• Communicate the truth.
Test stop criteria:
• Maximum number of test cases successfully executed.
• Uncover a minimum number of defects (16/1000 statements).
• Statement coverage.
• Testing becomes uneconomical.
• Reliability model.
Testing limitations:
• We can only test against system requirements.
  o May not detect errors in the requirements.
  o Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.
• Exhaustive (total) testing is impossible in the present scenario.
Tips: possible ranking criteria (all risk-based):
• Test where a failure would be most severe or technically critical.
• Test where failures would be most visible.
• Test what is most critical to the customer's business.
• Areas changed most often.
• Areas with most problems in the past.
• Most complex areas.
• Take the help of the customer in understanding what is most important to him.
SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC):
Software: software is a collection/set of instructions, programs and documents.
Before starting the analysis we first check the feasibility of the project/work/system. If we feel it is feasible, then we go to the SDLC phases. In feasibility we check the below functions:
• Finance feasibility
• Cost feasibility
• Resource feasibility
• Ability to accept
SDLC includes 4 phases:
Analysis
Design
Coding
Testing
Analysis:
i. Requirements analysis is done to understand the problem the software system is to solve. Analysis is about identifying what is needed from the system.
ii. Understanding the requirements of the system is a major task.
iii. Once the requirements are understood, they must be specified in a document.
iv. The main goal of the requirements specification is to produce the SRS document.
Design:
i. The first step of this phase is moving from the problem domain to the solution domain.
ii. The purpose of the design is to plan a solution to the problem specified by the requirements document.
iii. Once the design is complete, most of the major decisions about the system have been made.
iv. The output of this phase is the design document. This document is similar to a blueprint.
Coding:
i. The goal of the coding phase is to translate the design into code.
ii. The coding affects both testing and maintenance. Because the testing and maintenance costs of s/w are much higher than the coding cost, the goal of coding should be to reduce the testing and maintenance effort: well-written code can reduce both.
Testing:
i. Testing is the major quality-control measure used during s/w development. Its basic function is to detect errors in the s/w.
ii. Once the code is ready, computer programs are available that can be executed for testing purposes; different levels of testing are used.
iii. The starting point of testing is unit testing: a module is tested separately. This is done by the coder himself, simultaneously with the coding of the module.
iv. After this, modules are gradually integrated into subsystems, which are then integrated into the entire system. We do integration tests.
v. System testing: the system is tested against the requirements, to see if all the requirements are met as specified by the documents.
vi. Acceptance testing: done on the client side with the real-life data of the client.
TYPES OF SOFTWARE MODELS
1. Waterfall Model: it includes all phases of SDLC. This is the simplest process model.
O/P in the waterfall model: requirements document, project plan, system design document, detailed design document, review reports, test plan and test reports, final code, software manuals.
The phases run strictly in sequence: Requirement Analysis -> Design -> Code -> Test.
Drawback: once the requirements are frozen they cannot be changed, i.e. changes cannot be made after the requirements are frozen.
Uses: it is well suited for routine types of projects where the requirements are well understood, and for small projects.
2. Prototype Model: in this model the requirements are not frozen before design or coding can proceed. The prototype is developed based on the currently known requirements. It is a sample of how the actual system looks.
3. Iterative Model: in this model we can make changes at any level. It is like a continuous model: all four phases of SDLC (Analysis -> Design -> Code -> Test) take place again in each iteration.
4. Spiral Model: in this model the system is divided into modules, and each module follows the phases of SDLC (Analysis -> Design -> Code -> Test). It is a good and successful model.
TEST LIFE CYCLE (TLC)
TLC PHASES:
1. System study
2. Scope/Approach/Estimation
3. Test Plan Design
4. Test Case Design
5. Test Case Review
6. Test Case Execution
7. Defect Handling
8. GAP Analysis
1. System study: we study the particular s/w or project/system.
• Domain: there may be different types of domains, like banking, insurance, finance, real-time, ERP, marketing, manufacturing etc.
• Software: front end / back end / process.
  Front end: GUI. Eg: VB, D2K, Siebel.
  Back end: Oracle, MS Access, SQL Server, Sybase, DB2.
  Process: languages. Eg: C, C++, Java.
• Hardware: servers, internet, intranet applications.
2. Scope/Approach/Estimation:
Scope: what is to be tested and what is not to be tested. For each module in the software/system, pick one priority: High / Medium / Low.
Approach: the test life cycle (all the phases of TLC).
Estimation: LOC (lines of code) / F.P (functional points) per resource. A functional point is the number of lines written for a micro function; 1 F.P = 10 lines of code. Estimation considers:
• No. of modules in the software/system
• No. of pages of the software/system
• No. of resources
• No. of days to be taken to develop the software/system
3. Test Plan Design:
• About the client/company (details of the company).
• Reference documents used to design the document.
• Summary of the application (overview).
• For each type of testing:
  o Definition
  o Technique
  o Start criteria
  o Stop criteria
• Resources, including roles/responsibilities.
• Defects
• Schedules
• Risks / contingencies / mitigation (how much we can recover)
• Deliverables, and to whom.
4. Test Case Design: (the heart of testing)
• A test case is a description of what is to be tested, what data to be used and what actions to be done to check the actual result against the expected result.
• A test case is simply a test with formal steps and instructions.
• Test cases are valuable because they are repeatable, reproducible under the same/different environments, and easy to improve upon with feedback.
Test case items are: TC no., pre-condition, description, expected output, actual output, status, remarks.
5. Test Case Review: review means re-verification of the test cases; review comments are included in the review format. The goal is First Time Right (FTR).
TYPES OF REVIEWS:
• Peer-to-peer review (same level)
• Team lead review
• Team manager review
REVIEW PROCESS:
• Take a demo of the functionality.
• Go through the use case / functional specification.
• Try to see the TCs and find the gap between test cases vs. use cases.
• Submit the review report.
6. Test Case Execution: test case execution mainly includes 3 things:
i. Input: test cases, test data, review comments, SRS, BRS, system availability, data availability, database, review doc.
ii. Process: test it.
iii. Output: raise the defect; take a screenshot and save it.
7. Defect Handling: identify the following things in defect handling:
• Defect No./Id
• Description
• Origin
• TC id
• Severity
  o Critical
  o Major
  o Medium
  o Minor
  o Cosmetic
• Priority
  o High
  o Medium
  o Low
• Status (open / closed)
Following is the flow of defect handling:
• Raise the defect.
• Review it internally.
• Submit it to the developer.
• We have to declare the severity of the defect, and after that declare the priority.
• According to priority, we re-test the defect.
In most cases, defects are reported by the testing team.
8. GAP Analysis: finding the difference between the client requirements and the application developed.
Deliverables:
• Test plan
• Test scenarios
• Defect reports
• BRS vs. SRS, SRS vs. test cases, test cases vs. defects.
WHAT IS A DEFECT ?
In computer technology, a defect is a coding error in a computer program. It is defined by saying that "a software error is present when the program does not do what its end user reasonably expects it to do."
WHO CAN REPORT A DEFECT ?
Anyone who is involved in the software development life cycle, or who is using the software, can report a defect.
TEST PLAN DESIGN:
What is a test plan: a software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The completed document will help people outside the test group understand the "why and how" of product validation.
A short list of people expected to report bugs: testers / QA engineers, developers, technical support, end users, sales and marketing engineers.
TYPES OF DEFECTS:
• Cosmetic flaw
• Data corruption
• Data loss
• Documentation issue
• Incorrect operation
• Installation problem
• Missing feature
• Slow performance
• Unexpected behavior
• Unfriendly behavior
HOW TO DECIDE THE SEVERITY OF THE DEFECT:
High -- A defect occurred due to the inability of a key function to perform. The problem causes the system to hang, or the user to be dropped out of the system. There is no acceptable work-around. A response or action plan should be provided within 3 working days.
Medium -- A defect occurred which severely restricts the system, such as the inability to use a major function of the system. There is an acceptable work-around for the defect, and the problem does not inhibit the testing of other functions. A response or action plan should be provided within 5 working days.
Low -- A defect occurred which places a minor restriction on a function that is not critical. The defect should be responded to within 24 hrs, and the situation should be resolved before test exit.
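The severity rules above can be encoded as a small lookup so that triage is mechanical. The response windows come from the text; the structure and names are illustrative only.

```python
# Sketch encoding the severity rules above as a lookup table.
# Response windows follow the text; the field names are invented.

RESPONSE_PLAN = {
    "high":   {"action_plan_days": 3, "workaround": False},
    "medium": {"action_plan_days": 5, "workaround": True},
    "low":    {"respond_within_hours": 24},
}

def response_for(severity):
    """Return the response/turnaround rule for a severity level."""
    return RESPONSE_PLAN[severity.lower()]

# High severity: action plan within 3 working days, no work-around.
assert response_for("High")["action_plan_days"] == 3
assert response_for("High")["workaround"] is False
# Medium severity: 5 working days, with an acceptable work-around.
assert response_for("medium")["workaround"] is True
```

Note that, as the next section explains, severity drives the response window but priority (decided separately) drives the fix order.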
DEFECT SEVERITY Vs. DEFECT PRIORITY:
Severity: how much the defect affects the application.
Priority: the relative importance of the defect, i.e. how fast the developer has to take up the defect.
All the high severity defects should be fixed first; the general rule is that the order of fixing defects depends on severity. This may not be the same in all cases: sometimes, even though the severity of a bug is high, it may not be taken as high priority; at the same time, a low severity bug may be considered high priority.
WHAT KIND OF TESTING SHOULD BE CONSIDERED ?
1. BLACK BOX TESTING: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
2. WHITE BOX TESTING: based on knowledge of the internal logic of the application's code. Tests are based on coverage of code statements, branches, paths, conditions, loops etc.
3. INTEGRATION TESTING: testing of combined parts of an application to determine whether they function together correctly.
4. FUNCTIONAL TESTING: black box type of testing. This type of testing should be done by testers. This does not mean that the programmers should not check that their code works before releasing it.
5. SYSTEM TESTING: black box type of testing that is based on the overall requirements specifications; covers all combined parts of a system.
6. REGRESSION TESTING: re-testing after fixes or modifications. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
7. ACCEPTANCE TESTING: final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
8. RECOVERY TESTING: testing how well a system recovers from crashes, hardware failures or other catastrophic (sudden calamity) problems.
9. SECURITY TESTING: how well the system protects against unauthorized internal or external access.
10. COMPATIBILITY TESTING: testing how well software performs in a particular hardware / software / network environment.
11. ALPHA TESTING: testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
12. BETA TESTING: testing when development and testing are essentially completed, and final bugs and problems need to be found before the final release. Typically done by end-users or others, not by programmers or testers.
13. SANITY TESTING: done before testing proper; we want to check whether the build released by the development team is stable enough (able) to conduct complete testing or not.
14. SMOKE TESTING: after testing, checking whether the major, medium or critical functions are closed or not.
15. MONKEY TESTING: testing like a monkey, with no proper approach: take any function and test it. Coverage of the main activities during testing is called monkey testing (e.g. if given one day for testing).
16. MUTANT TESTING:
We inject a defect into the application and test whether our test cases detect it.
17. AD-HOC TESTING: testing in a short-cut way, not following the sequential order mentioned in the test cases or test plan.
18. BIG BANG TESTING: (informal testing) a single stage of testing after completion of the entire coding is called big bang testing (no reviews, i.e. direct system testing).
19. BIG BANG THEORY: an approach to integration when checking the errors between modules or sub-modules.
20. PATH TESTING: to check every possible condition with at least one navigation of the flow.
Eg: a login screen (User Name, Password, OK button) is exercised along each path: valid name with valid password, valid name with invalid password, and so on.
SOFTWARE QUALITY:
• Meet customer requirements.
• Meet customer expectations.
• Possible cost.
• Time to market.
BRS: it specifies the needs of the customer; the total business logic document.
SRS: it specifies the functional specifications to develop.
HLD: high-level design document.
It specifies the interconnection of modules.
LLD: low-level design document. It specifies the internal logic of sub-modules.
TESTING TEAM:
• Quality Control
• Quality Analyst
• Test Manager
• Test Lead
• Test Engineers
REVIEWS DURING ANALYSIS:
Conducted by business analysts. Verifies completeness and correctness of BRS & SRS:
• Are they the right requirements ?
• Are they complete ?
• Are they reasonable ?
• Are they achievable ?
• Are they testable ?
REVIEWS DURING DESIGN:
Conducted by designers. Verifies completeness and correctness of HLD & LLD:
• Is the design good ?
• Is the design complete ?
• Is the design possible ?
• Does the design meet the requirements ?
WHY DOES S/W HAVE BUGS:
• Programming errors -- programmers, like anyone else, can make mistakes; the result is a bug.
• Changing requirements.
• Poorly documented code -- it is tough to maintain and modify code that is badly written or poorly documented.
• Software development tools -- visual tools, class libraries, compilers, scripting tools etc. often introduce their own bugs or are poorly documented, resulting in added bugs.
SEI: Software Engineering Institute. Initiated by the U.S. defense department to help improve software development processes.
CMM: Capability Maturity Model, developed by the SEI. It is a model of 5 levels of organizational maturity that determine effectiveness in delivering quality software.
ANSI: American National Standards Institute.
WHAT IS SOFTWARE QUALITY ?
Quality s/w is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.
WHAT IS VERIFICATION & VALIDATION ?
VERIFICATION: typically involves reviews and meetings to evaluate (estimate, calculate) documents, plans, code, requirements and specifications. This can be done with checklists, issue lists, walkthrough and inspection meetings.
VALIDATION: typically involves actual testing, and takes place after verifications are completed.
SEVERITY: the relative impact on the system, i.e. how far the application is affected by the defect (low, medium, high, critical).
PRIORITY: the relative importance of the defect, i.e. the preference given to fixing the defect (low, medium, high).
WHICH LIFE CYCLE METHOD IS FOLLOWED IN YOUR ORGANIZATION ?
Now we are using the V model, and we also include some other methods, like prototype and spiral, in a single application.
WILL AUTOMATED TESTING TOOLS MAKE TESTING EASIER ?
Possibly. For a small project, the time needed to learn and implement them may not be worth it; for larger projects or ongoing long-term projects, they can be valuable.
WHAT CAN BE DONE IF REQUIREMENTS ARE CHANGING CONTINUOUSLY ?
Use rapid prototyping whenever possible, to help customers feel sure of their requirements and minimize changes. The project's initial schedule should allow for some extra time corresponding with the possibility of changes. Focus less on detailed test plans and test cases, and more on ad-hoc testing.
HOW CAN IT BE KNOWN WHEN TO STOP TESTING ?
This can be difficult to determine. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines etc.)
• Test cases completed with a certain percentage passed.
• Test budget depleted (used up).
• Bug rate falls below a certain level.
• Beta or alpha testing period ends.
WHAT'S THE ROLE OF DOCUMENTATION IN QA ?
Critical. QA practices should be documented such that they are repeatable: specifications, designs, business rules, configurations, code changes, test plans, inspection reports etc.
WHAT'S A TEST CASE ?
A test case is a document that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly.
WHAT MAKES A GOOD TEST ENGINEER ?
A good test engineer has a "test to break" attitude (approach, manner), an ability to take the point of view of the customer, a strong desire for quality, and attention to detail.
WEB TEST TOOLS:
To check that links are valid, HTML code usage is correct, client-side and server-side programs work, and a web site's interactions are secure.
WHAT IS THE DIFFERENCE BETWEEN A PRODUCT AND A PROJECT ?
PRODUCT:
Developing a product without interaction with any particular client before the product release.
PROJECT: developing a product based on a specific client's needs or requirements.
WHEN DO YOU START WRITING TEST CASES ?
Once the requirements are frozen, we begin writing test cases.
WHAT IS CONFIGURATION MANAGEMENT ?
It is version control. It covers the process used to control, co-ordinate and track the requirements, documentation, problems faced, change requests, design, the tools to be used, the changes made, and who made the changes.
WHAT IS TEST STRATEGY ?
Applying a type of testing technique to explore the maximum number of bugs.
TESTING TECHNIQUE: a way of executing and preparing the test cases.
TESTING METHODOLOGIES: a way of developing the tests.
WHAT IS A TEST PROCEDURE ?
Execution of one or more test cases.
WHAT ARE THE DEFECT PARAMETERS ?
There are 5 parameters:
• Source
• Error Description
• Status
• Priority
• Severity
WHAT IS A TRACEABILITY MATRIX ?
It maps the test requirements to the test case IDs, to check whether the coverage is fulfilled or not.
WHAT'S THE DIFFERENCE BETWEEN IST & UAT ?
Particulars   IST                          UAT
Acronym       Integration System Testing   User Acceptance Testing
Base Doc's    Functional Specification     Business Requirements
Location      Off site                     On site
Data          Simulated                    Live data
Purpose       Validation & Verification    User needs
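The traceability matrix described above can be sketched as a simple mapping from requirements to test case IDs, which makes coverage gaps mechanical to find. The requirement and test case IDs here are invented examples.

```python
# Sketch of a traceability matrix: each test requirement is mapped to
# the test case IDs that cover it. All IDs are invented examples.

matrix = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
    "REQ-03": [],  # no test case yet: a coverage gap
}

# Coverage check: any requirement with no mapped test case is a gap.
uncovered = [req for req, tcs in matrix.items() if not tcs]
assert uncovered == ["REQ-03"]

# The reverse lookup answers "which requirement does TC-003 verify?"
covering = {tc: req for req, tcs in matrix.items() for tc in tcs}
assert covering["TC-003"] == "REQ-02"
```

In practice the matrix usually lives in a spreadsheet or test-management tool, but the check is the same: every requirement row must have at least one test case column filled in.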