
YRK - Fairly Contradictory

Copyright @ www.yrk-emperoroftechnology.blogspot.com

SOFTWARE TESTING
Software Testing: Testing is a process of executing a program with the intent of finding an error. OR Testing is a process of trying to discover every conceivable fault or weakness in the work product. 1. Testing can make sure that the product is as per the specifications. 2. Testing helps in modifying features to make the product more usable & friendly. 3. Testing can provide an indication of the software's reliability & quality. 4. Testing starts from requirements & goes till maintenance. It is impossible to test every single combination and permutation of data inputs and interfaces that cause a system to react. The most practical means of testing is to determine the ways in which the system is most likely to be used.

Role of a Software Tester: The role of a Software Tester is to find defects & make sure that they get fixed as early as possible. At every point of time the Tester has to prove the Developer wrong.

Skills required by a Tester: A Tester should have good communication skills, good observation skills, people-handling skills, good grasping power, patience, creativity in terms of identifying problematic areas etc.

Why does software have bugs? a) Programming errors, b) Limitations of language, c) Miscommunication or no communication, d) Changing requirements, e) Software complexity, f) Ego problems, g) Poorly documented code, h) Time pressure etc.

Does every software project need a Tester? If the project is a short-term, small, low-risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then testers may not be required for the project to succeed.

Verification: Verification involves reviews and meetings (informal & formal) to evaluate documents, plans, code, requirements and specifications. This can be done with checklists and issue lists. Verification is done to ensure that the software meets the required specifications. Verification is a QA/Static/Preventive process.
[QA means monitoring, like an audit; Verification is its baseline.] Verification means: are we building the system right / product right? Verification takes place during every phase of the software development life cycle. 1) Review: It is a process or meeting during which a work product or set of work products is presented to project personnel, managers, users or other interested parties for comment or approval. 2) Walkthrough: A walkthrough is an informal meeting for evaluation or informational purposes. For this the development & testing teams sit together; little or no preparation is usually required. 3) Inspection: An inspection is a formal meeting, typically with 3-8 people including a moderator, reader, and recorder to take notes. For this type of meeting attendees should prepare by reading through the document. It is the most cost-effective method of ensuring quality. Validation:


Validation involves actual testing and takes place after verifications are completed. Validation is done to ensure that the software meets the requirements of the customer. Validation is a QC/Dynamic/Detective & Corrective process. [QC means actual testing] Validation means: are we building the right system / right product? Validation occurs at the end with user acceptance testing; Verification and Validation together occur throughout the life cycle.

Quality: * Quality is conformance to specifications. * Quality is fitness for use. * Quality is a degree of excellence. * Quality is an attribute of something. * A product is a quality product if it is defect free. For the Producer/Programmer/Manufacturer, a quality product is one which accomplishes the requirements of the Customer/User. For the Customer/Client/User, a quality product is one which is fit for use. [Irrespective of whether requirements were met]

Quality Attributes: 1) Correctness: Agreement of program code with specifications, independent of the actual application of the software system. 2) Reliability: It is the probability that the system accomplishes a function for a specified number of input trials under specified input conditions in a specified time interval. 3) User friendliness: a) Adequacy, b) Learnability [design of UI, clarity & simplicity, user manual], c) Robustness. 4) Maintainability: a) Readability, b) Extensibility, c) Testability. 5) Efficiency: It is the ability of a software system to accomplish its purpose with the best possible utilization of all necessary resources [time, storage, transmission channels, & peripherals etc.]. 6) Portability: It is the ease with which a software system can be adapted to run on computers other than the one for which it was designed.

Quality Assurance: It involves the entire software development process: monitoring & improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.
QA is a Static/Verification/Preventive/Monitoring process.

Quality Control: Here the organization's focus is on testing a group of quality-related attributes such as Correctness, Security, Portability, Interoperability, Usability, and Maintainability. QC is a Dynamic/Validation/Detective & Corrective/Actual Testing process.

QA: Static | Verification | Preventive | Monitoring process
QC: Dynamic | Validation | Detective & Corrective | Actual Testing process

Procedure: It is the step-by-step method followed to ensure that standards are met. Process: It is the work effort that produces a product. Productivity: It is the ratio of the output of a process to its input.


Software Engineering: It is the systematic development of software. As per the IEEE 1990 standard, it is the application of a systematic, disciplined, quantitative approach to the development, operation & maintenance of software, i.e. the application of engineering to software. Software Engineering addresses systematic procedure as well as the technical, managerial & administrative issues involved in software development.

Phases in Software Development Life Cycle: 1) Requirements Engineering: Problem Definition => Software Requirements Engineering => Validated SRS Document, 2) Design: SRS => Design => Validated design document, 3) Implementation: Validated design document => Implementation => Source Code, 4) Testing: Source Code => Testing => Test Output, 5) Maintenance: Version 1.0 => Version 1.1 => Version 1.2 => Version 1.3 => Version 2.0. One of the important reasons for the software crisis is the lack of thrust on software testing.

Work Product: SRS, Design document, & Source code etc.

Criteria for success of a project: 1) The software must meet all the quality requirements, 2) The software must be developed within the timeframe, 3) The software must be developed within budget, 4) Relations between the team members should be cordial during the project execution & after the project is completed.

Characteristics of a Software Product: 1) Operational characteristics: [specify the requirements during operation or usage] Correctness, Usability/Learnability, Integrity, Efficiency, Reliability, Safety, & Security etc. 2) Transition characteristics: [specify the requirements for usage in other hardware/software environments] Portability, Reusability, & Interoperability etc. 3) Revision characteristics: [specify the requirements for making changes to the software easy] Maintainability, Testability, Flexibility, Scalability, Extensibility, & Modularity etc.
Software Life Cycle: The life cycle begins when an application is first conceived and ends when it is no longer in use.

Software Development Life Cycle [SDLC]: 1) System study, 2) Requirement Analysis, 3) Designing, 4) Coding, 5) Testing, 6) Maintenance, 7) Implementation etc.

Software Testing Life Cycle [STLC]: The STLC mirrors the SDLC in a V shape. The left-hand (development) side is QA/Static/Verification; the right-hand (testing) side is QC/Dynamic/Validation. Each development work product is verified by a review and validated by a corresponding level of testing:
Requirements (Requirements Review) <=> User Acceptance Testing
HLD (HLD Review) <=> System Testing
LLD (LLD Review) <=> Integration Testing
Coding <=> Unit Testing
[The System Build is taken through Integration & System Testing; the Release Build of the software goes for User Acceptance Testing.]
Software Development Life Cycle Models: 1) Waterfall Model: It is most widely used for commercial software development projects. Phases and their outputs: a) Requirements Specification: SRS Document, Draft User Manual, Maintenance Plan; b) System Design & Software Design: System Design Document, Hardware Design Document, Software Design Document, Interface Design Document, Unit Test Plan, System Test Plan; c) Implementation & Unit Testing: Program Code, Unit Test Report; d) Integration & System Testing: System Test Report, Final User Manual; e) Operation and Maintenance: maintenance cost (minimal if the software is without any defects). Advantages: 1) In this model, a visible output is available at each stage, so the progress of the project will be evident to management. 2) Project monitoring is easy because of the visible output. Disadvantages: 1) If the client wants the developer to evolve the specifications in a gradual manner, this model is not suitable. 2) Prototyping Model: 3) Evolutionary Development Model: 4) Spiral Model: 5) Rapid Application Development [RAD] / Synchronize & Stabilize Model:
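The Implementation & Unit Testing phase above produces a Unit Test Report. As a hedged sketch of what a programmer-level unit test could look like, here is a minimal example in Python; the function `apply_discount` and its limits are hypothetical, not part of the original notes:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price - price * percent / 100

class TestApplyDiscount(unittest.TestCase):
    def test_typical_value(self):
        # Expected result from the specification: 10% off 200.0 is 180.0.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        # Boundary of the valid range: 0% discount leaves the price unchanged.
        self.assertEqual(apply_discount(99.0, 0), 99.0)

    def test_invalid_percent_rejected(self):
        # Invalid input must be rejected, not silently computed.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite; the runner output serves as the "unit test report".
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Note how each test checks the unit against its specification rather than its implementation, which is what makes the report meaningful during Integration & System Testing later.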

Cost of Quality: 1) Failure Cost: the cost of fixing the bugs. 2) Appraisal Cost: the cost of assessing whether the software has any bugs. 3) Prevention Cost: the cost of modifying the process to avoid bugs.

Test Cycle: A Test Cycle is the period in which the product is tested and defects are verified.

Test Case: A Test Case is a document that describes an input, action or event, and an expected response, to determine whether a feature of an application is working correctly or not. It contains particulars such as Test Case Identifier, Objective, Steps, Input Data & Expected Result etc.

Test Plan: A Test Plan is a document that describes the objective, scope, approach, and focus of all software testing efforts. OR A Test Plan is a document that describes the objective, scope, approach, methodology to be used, tasks to be performed, resources, schedules, risks, and dependencies.

Test Script: A test script commonly refers to the automated test procedure used with a testing tool.

Test Specification: A test specification defines exactly what tests will be performed and what their scope and objective will be.

Test Suite:


A Test Suite is a group/set/collection of test cases.

Test Bed: A Test Bed is nothing but the pre-requisite environment for testing.

Bug/Defect: A Bug/Defect is a manifestation/sign/appearance of an error in software. OR A defect is a product requirement that has not been met. OR A defect is a variance from expectations.

Principles of Defect Management: 1) The primary goal is to prevent defects. 2) The defect management process should be risk driven. 3) Defect measurement should be integrated into the development process. 4) The QA analyst should look for trends & perform a root cause analysis to identify special & common cause problems. 5) Defect information should be used to improve the process.

Severity: Severity is the impact or effect of an error on the software or application. Severity status: 1-Low, 2-Medium, 3-High, 4-Very High, 5-Urgent. [According to TestDirector]

Priority: Priority of a Bug/Defect indicates how fast it should get fixed. Priority depends on two factors: 1) the impact of the error on business, 2) how much the user is going to use the functionality [where the error occurs].

Defect Density: Defect Density is the number of defects per thousand lines of code [KLOC - Kilo Lines Of Code]. Six Sigma corresponds to 3.4 defects per million opportunities.

Defect Removal Efficiency: It is the percentage of defects removed before delivery out of the total defects found (before and after delivery).

A Good Test: A Good Test is one which reveals an error.

Debugging: Debugging is the process of finding and removing the causes of failures in the software.

Bebugging: Bebugging is intentionally adding known defects to the software by seeding.

Metric: A metric is a quantifiable measurement of the software product, process, or project that is directly observed, calculated, or predicted. 1) Person Month: 2) Product Metrics: Size of project: KDSI/KLOC [Kilo/Thousand Delivered Source Instructions / Kilo Lines of Code]: Small (<= 2 KDSI), Intermediate (> 2 & <= 8 KDSI), Medium (> 8 & <= 32 KDSI), Large (> 32 & <= 128 KDSI), Very Large (> 128 KDSI). 3) Productivity Metrics:


For a Programmer: number of lines written by the programmer in 1 hour, i.e. DSI per hour, OR number of defects removed per hour by the programmer. Time required for execution of a project (in hrs.) = Total KDSI of the project / Average KDSI per hour.

Testing Strategies: 1) White Box / Glass Box / Code Based / Structural Testing: It is the testing of a function with knowledge of the internal structure of the program. It is usually done at the coding stage. 2) Black Box / Closed Box / Requirement Based / Functional Testing: It is the testing of a function without knowing the internal structure of the program. 3) Gray Box Testing: It is the combination of White Box & Black Box testing.

Types of Testing: 1. Unit Testing: It is the micro scale of testing; it is used to test particular functions or code modules. It is done by the programmer & not by the testers, as it requires detailed knowledge of the internal program design & code. 2. Integration Testing: It is the testing of combined parts of an application to determine whether they function together correctly or not. OR It is the testing in which different parts of the system are combined together and the focus is only on the integrated part or integration point. The parts can be code modules, individual applications, client & server applications on a network etc. This type of testing is especially relevant to client/server and distributed systems. Incremental Integration Testing: It is continuous testing of an application as new functionality is added. 3. Functionality Testing: It is the testing done to ensure that the system meets its specified functional requirements. It is a Black Box type of testing. 4. System Testing: It is the testing of an integrated or whole system to verify that it meets the specified requirements. It is a negative type of testing because it is aimed at showing that the software does not work. It is a Black Box type of testing. a) Usability Testing: It is the testing done to ensure that the system is user friendly.
[Easy to use, easy to learn, look & feel, navigation, help etc.] b) Compatibility Testing: It is the testing done to ensure that the system is compatible with all software platforms [like different OS, different versions of a specific OS, and different application software]. c) Configuration Testing: It is the testing done to ensure that the system is compatible with all hardware platforms [like different processors, HDD, & RAM sizes]. d) Performance Testing: It is the testing done to ensure the time response of the system against a large amount of data during a short time period. e) Load / Volume Testing:


It is the testing done to ensure the capacity of the system against unusually heavy or peak load. f) Stress Testing: It is the testing done to ensure the system's response after lowering its resources. g) Security Testing: It is the testing done to ensure that the system meets its specified security objectives. OR It is how well the system protects against unauthorized internal or external access, willful damage & so on. It may require sophisticated testing techniques. h) Recovery Testing: It is the testing done to ensure the system's ability to recover from disasters or varying degrees of failure. i) Installability Testing: It is the testing done to ensure that the system follows the installation procedure correctly. OR It is the testing done to ensure the features of the installer. j) Uninstallability Testing: It is the testing done to ensure that the system follows the uninstallation procedure correctly. k) Maintainability Testing: It is the testing done to ensure that the system meets its maintainability objectives. l) Portability Testing: It is the testing done to ensure that the system is compatible with all software & hardware platforms. It is the combination of Compatibility & Configuration Testing.

5. Alpha Testing: It is the testing of an application when development is nearing completion; minor design changes may still be made as a result of this testing. It is done by the end user or others & not by programmers or testers. 6. Beta Testing: It is the testing of an application when development & testing are essentially completed & final bugs or problems need to be found before the final release. It is done by the end user and not by programmers or testers. It is done at the user's site. 7. Regression Testing: It is retesting after modifications of the software or its environment, to check that any changed functionality does not affect the unchanged functionality.
Automated testing tools are especially used for this type of testing. 8. User Acceptance Testing: It is the final testing based on the specifications of the end user or customer. It is done at the developer's premises or the client's premises.

Different types of Testing: 1. Ad-hoc / Monkey / Gorilla Testing: It is the testing in which no test cases are designed; the tester goes according to his domain knowledge. Here the tester must have domain knowledge. 2. Exploratory Testing: It is the testing in which no test cases are designed; the tester goes according to his imagination and creativity in terms of finding out problems with the product.


3. Top-down and Bottom-up Testing: In top-down testing the highest-level modules are tested first, whereas in bottom-up testing the lower-level modules are tested first. 4. Boundary Value Testing: It is the testing which checks a feature using the values just below or above its lower limit and upper limit. It can be used as a Black Box testing technique. 5. Branch Testing: It is the testing done to ensure the coverage criterion that for each decision point each possible branch is executed at least once. 6. Component Testing: It is the testing of an individual software component. 7. Conversion Testing: It is the testing of programs or procedures used to convert data from an existing system to another system. 8. Isolation Testing: It is the component testing of an individual component in isolation from surrounding components. 9. Feature Testing: It is the testing in which test case selection is based on the analysis of the specification of the component without reference to its internal working. 10. Arc Testing: It is a test case design technique for a component in which test cases are designed to execute branch outcomes. 11. Domain Testing: It is a test case design technique for a component in which test cases are designed to execute representatives from equivalence classes. 12. Exhaustive Testing: It is a test case design technique in which the test suite includes all combinations of input values and preconditions for component variables. 13. Path Testing: It is a test case design technique in which test cases are designed to execute paths of a component. 14. Cosmetic Testing: It is a type of Usability testing, i.e. look and feel, for Web based applications.

Software Change / Configuration Management [SCM]: It is an independent process which can be introduced at any time in the defect life cycle. It is the combination of a few processes like Identifying Objects [1. Basic Objects like Labels, Text Boxes, & Buttons and 2. Aggregate Objects like collections of Basic Objects (e.g. Menus, Forms & Frames)], Change Control, Configuration Audit, Status Report, and Version Control. [According to SEI (Software Engineering Institute) Version Control is a part of SCM but


according to IEEE (Institute of Electrical and Electronics Engineers) Version Control is not a part of SCM.]

SCM Process: A change requirement comes from the user => it goes to the CCB (Change Control Board, containing senior people of the organization) => they do analysis (Risk Analysis and Cost-Benefit Analysis) => they accept some or all requirements => inform the client => after approval from the client, the accepted request is sent as an ECO (Engineering Change Order) => the developer decides the module as per the ECO => Check Out => Configuration Audit => Check In => Configuration Audit => Status report of the Configuration Audit => Read Me File. Generally either the Check Out process or the Check In process, or both, take place in the industry.

Version Control: If there are changes like structure and functionality in the product then a whole version change takes place, e.g. Version 1.0 to Version 2.0. And if there are minor changes in the present version then a small version change takes place, e.g. Version 1.0 to Version 1.1. SCM Tools: VSS [Visual SourceSafe], SVN [Subversion] etc. Configuration Management = Change Control. Change Control/Version Control: It allows storing all revisions of everything which is written during the development & maintenance of a software application.

Equivalence Class Partitioning: It is a black box testing technique. It is used when we have a large amount of input data. In ECP, data are divided into classes which are equal in terms of one of their characteristics. Generally data is divided into Valid and Invalid classes. There is no specific formula for defining classes. These classes define the boundaries used when we go for Boundary Value Analysis. ECP and BVA are related to each other but not a part of each other. ECP is useful to reduce data and time and to cover maximum functions for testing.

Traceability Matrix: It is the mapping of requirements to test cases, to check whether for every requirement there is a test case written or not. The basic purpose is to ensure requirement coverage.
[Matrix: A matrix is a table containing rows & columns.]

Stubs & Drivers: 1. These are dummy programs written by the developer when some of the functions are not yet ready. 2. Drivers are calling programs and Stubs are called programs. 3. Drivers are used to take the inputs, perform some actions and get the output. 4. Stubs are used in the Top-Down [incremental] approach and Drivers are used in the Bottom-Up approach. 5. Drivers can be real or dummy but Stubs must be dummy.

Testing Process: FSD / SRS / Use Cases => Prepare Test Cases => Review (Peer / Lead / Senior) => Execution => Waiting for the next QA drop (development of the Product / Software)
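A hedged sketch of the stubs & drivers described above: the unit under test depends on a rate-lookup function that is not yet implemented, so a stub (the called program) stands in for it while a driver (the calling program) feeds inputs and checks outputs. All function names and values here are hypothetical:

```python
# Stub: a dummy "called program" returning a canned value instead of
# performing a real lookup, because the real function is not ready yet.
def get_exchange_rate_stub(currency):
    return 1.10  # fixed, known rate for testing

# Unit under test: converts an amount using whatever rate lookup it is given.
def convert(amount, currency, rate_lookup):
    return round(amount * rate_lookup(currency), 2)

# Driver: the "calling program" that takes inputs, performs the action,
# and gets (and checks) the output.
def driver():
    result = convert(100.0, "EUR", get_exchange_rate_stub)
    print("converted:", result)
    assert result == 110.0
    return result

driver()
```

Because the stub returns a known value, the driver can assert an exact expected output even though the real dependency does not exist yet, which is exactly why stubs must be dummy while drivers may be real or dummy.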
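The Equivalence Class Partitioning and Boundary Value Analysis described above can be sketched as follows; the input field accepting integers from 1 to 100 and the `is_valid` check are illustrative assumptions, not part of the original notes:

```python
LOWER, UPPER = 1, 100  # assumed specification limits for a hypothetical field

def is_valid(value):
    """Hypothetical system under test: accept values within [1, 100]."""
    return LOWER <= value <= UPPER

# ECP: instead of testing all 100 valid inputs, pick one representative
# per class (one valid class, two invalid classes).
equivalence_classes = {
    "invalid_below": -5,   # class: value < 1
    "valid":         50,   # class: 1 <= value <= 100
    "invalid_above": 250,  # class: value > 100
}

# BVA: the classes define the boundaries; test just below, at, and just
# above each limit.
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

for name, value in equivalence_classes.items():
    print("ECP", name, value, "->", is_valid(value))
for value in boundary_values:
    print("BVA", value, "->", is_valid(value))
```

This illustrates why ECP and BVA are related but not parts of each other: ECP reduces the data to one representative per class, while BVA separately probes the edges those classes define.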


FSD [Functional Specification Document]: It contains the behavior of every functionality of an application in detail. It is also called a High Level Document, because everything is mentioned in it. SRS [System Requirement Specification]: It contains details only about the functionality. Use Cases: A use case is a narrative document that describes a sequence of events of an actor (an external agent) using a system to complete a process.

Testing of Windows Applications: A Windows application does not require a browser to run. A Windows application requires a system server as well as an application server to run. Types of testing for Windows applications: Functional, Usability, Performance, Security, Compatibility, Configuration, Smoke/Sanity, Regression Testing etc. a) Functionality Testing: i) Core/Primary Functions: testing of functions which are present in menus. ii) Secondary Functions: testing of shortcuts [Ctrl+C, Ctrl+V] and hot keys [Alt+O (numeric character)]. b) Usability Testing: i) GUI objects, ii) text written on GUI objects, iii) messages displayed for the user by the system, iv) user friendliness, v) a counter should appear while typing characters in a specific text box. c) Performance Testing: time response. d) Load Testing: e) Stress Testing: f) Security Testing: i) login of different users. g) Compatibility Testing: i) forward & backward compatibility, ii) various platforms like different OS or different versions of a specific OS. h) Configuration Testing: various configurations of Processor, HDD, RAM etc.

Testing of Web based Applications: A Web based application requires a browser to run. A Web based application requires a Web server as well as an application server to run. Types of testing for Web based applications: Functional, Usability, Performance, Security, Compatibility Testing etc. a) Functional Testing: i) Links: text links, image links; ii) Search Engine: searching related to that particular website, outside-domain searching.
b) Usability Testing: It is cosmetic testing, i.e. look & feel. c) Performance Testing: time response. d) Load Testing: e) Stress Testing: f) Security Testing: https:// i.e. Hyper Text Transfer Protocol Secure, SSL [Secure Sockets Layer]. g) Compatibility Testing: testing on different browsers like IE, Opera, Netscape Navigator, and Mozilla Firefox etc. h) Configuration Testing: It is not essential.

Things to remember while testing a Windows based application: 1. Ensure the alignment of all the Labels, Text Boxes, & Action Buttons present on the screen. 2. Ensure the indication present for mandatory fields. 3. Ensure the data size of all the text boxes using the assigned limits for characters [alphabets, numbers, and special characters etc.]. 4. Ensure all the shortcut keys assigned for performing different operations.


5. Ensure the status of the focus after opening each screen of the application. [Focus should be present on the first control of the screen.] 6. Ensure the tab order on each screen of the application.

WinRunner vs. QTP:
- WinRunner uses TSL script; QTP uses VBScript.
- WinRunner does not support OOPS; QTP supports OOPS.
- Both are used for Functionality & Regression Testing.
- WinRunner does not handle application crashes; QTP handles application crashes through its Recovery Manager.
- Two recording modes are available in WinRunner; three recording modes are available in QTP.
- WinRunner's file extension is .exe; QTP's file extension is .vbs.
- WinRunner 6.0 does not support Database Testing & Web Testing; WinRunner 7.0 supports Database Testing but not Web Testing; WinRunner 8.0 supports Web Testing. QTP supports Web Testing.
- WinRunner does not support XML files; QTP supports XML files.
- WinRunner has only 1 screen; QTP has 3 screens: a) Active screen, b) Action screen, c) Data table.

Advantages of Testing Tools: a) They provide improvement in the quality & reliability of the software. b) They save the time, effort & money needed for testing. c) They provide a systematic approach to the testing process.

How to select a testing tool: a) If the software is likely to be modified, then regression testing is a must; hence we need to go for a Regression Testing Tool. b) If the software is a C/S application or a web application, then load testing is a must; hence we need to go for a Performance Testing Tool. c) If we need to test the source code, then we need to go for a Source Code Testing Tool. d) If the software project is very large and tracking the bugs is a major issue, then we need to go for a Bug Tracking Tool. e) If the organization has process-oriented testing approaches, then we need to go for a Test Management Tool. f) If project teams are at different places, we need to go for a Web Enabled Testing Tool.

Test Automation: Test Automation is a testing tool, but one that runs without our intervention; it is just another class of software testing tools. If these tools could be combined, started & run


with little or no intervention from us, they could run our test cases, look for bugs, analyze what they see, & log the results. That is software test automation. Advantages: a) It can speed up the time it takes to run our test cases. b) It can make us more efficient by giving us more time for test planning and test case development. c) It is accurate, precise, & relentless etc. Keystroke & mouse-action record & playback is the simplest type of automation that can effectively find bugs. A testing tool helps us test, making it easier for us to perform a manual testing task; test automation is a test tool that runs without our intervention.
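The run-analyze-log loop described above can be sketched as a tiny unattended harness. This is an illustrative assumption, not any particular tool's API: the system under test (`add`) and the test-case list are hypothetical:

```python
# Minimal unattended test harness: run a list of test cases against the
# system under test, compare actual vs. expected, and log each result
# without human intervention.

def add(a, b):  # hypothetical system under test
    return a + b

# Each case: (identifier, input data, expected result) - the same
# particulars a written test case document would carry.
test_cases = [
    ("TC01", (2, 3), 5),
    ("TC02", (-1, 1), 0),
    ("TC03", (0, 0), 0),
]

def run_suite(cases):
    results = []
    for case_id, args, expected in cases:
        try:
            actual = add(*args)
            status = "PASS" if actual == expected else "FAIL"
        except Exception as exc:  # a crash is logged, not fatal to the run
            actual, status = exc, "ERROR"
        results.append((case_id, status, actual))
        print(case_id, status, actual)  # the harness's log
    return results

results = run_suite(test_cases)
```

Real functional/regression tools add GUI recording and playback on top of exactly this loop; the analyze-and-log core stays the same.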

Functional/Regression Testing Tools: These are black box testing tools. They are used to test the functionality of the software automatically by recording GUI operations and automatically replaying these operations to carry out unattended testing. Features: a) Data-driven testing, b) Recovery management. e.g.: Mercury Interactive's WinRunner, Segue Software's SilkTest, and IBM Rational's Robot etc.

Performance/Load Testing Tools: These tools are used to carry out testing by simulating multiple users on one or a few machines. These tools are required for the testing of C/S applications, distributed applications and websites. e.g.: Mercury Interactive's LoadRunner, Segue Software's SilkPerformer, and IBM Rational's Performance Tester etc.

Source Code Testing Tools: These tools are used to carry out white box testing. They test statement coverage, path coverage, and branch coverage. They are also available for profiling, calculating coding metrics etc.

Test Management Tools: These tools facilitate process-oriented test management by providing facilities such as test scheduling, generation of test cases, generation of test reports, bug tracking etc. e.g.: Mercury Interactive's TestDirector, Segue Software's SilkPlan Pro, and IBM Rational's Test Manager etc.

Q. What is Usability Testing? Explain the different tools that are available for usability and accessibility. What is the role played by these tools in testing usability?
Ans.: Usability Testing: It is the testing done to ensure that the system is user friendly. [Easy to use, easy to learn, look & feel, navigation, help etc.] Important traits of a good UI: a) Follows standards and guidelines, b) Flexible, c) Intuitive, d) Comfortable, e) Useful, & f) Consistent etc.

Accessibility Testing / Testing for the Disabled: Visual, hearing, motion, & cognitive impairments are the four types of disabilities that could affect software usability.
While testing accessibility-enabled software, the keyboard, mouse, sound, & display are the areas to which we need to pay close attention.

