http://www.etestinghub.com/index.php

Introduction To Software Testing
In this section we will discuss the basics of Software Testing: what is Software Testing, why do we use it, and why is it necessary? Firstly, we will come across the different terminology used throughout Software Testing; professional testers are all pretty much agreed on these basic ideas. Secondly, we take a look at the need for proper Software Testing: what errors are and how they get into the software, the life cycle of Software Testing, and the different Software Testing types. We also look at the cost of getting it wrong and show why exhaustive Software Testing is neither possible nor practical. Finally, we describe a fundamental test process for Software Testing, based on industry standards, and underline the importance of planning tests and determining expected results in advance of test execution.

- Understand basic testing terminology in Software Testing.
- Understand why Software Testing is necessary.
- Be able to define error, fault and failure in Software Testing.
- Appreciate why errors occur and how costly they can be in Software Testing.
- Understand that you cannot test everything and that Software Testing is therefore a risk management process.
- Understand the fundamental test process of Software Testing.
- Understand that developers and testers have different mindsets.
- Learn how to communicate effectively with both developers and testers.
- Find out why you cannot test your own work.
- Understand the need for regression testing in Software Testing.
- Understand the importance of specifying your expected results in advance.
- Understand how and why tests should be prioritized in Software Testing.

Software Testing-What is Software Testing?
Actually, what is Software Testing? This is the most important question to start with. Why test your software? Even the most carefully planned and designed software cannot possibly be free of defects. Your goal as a quality engineer is to find these defects, which requires creating and executing many tests. For Software Testing to be successful, you should start the testing process as soon as possible. Each new version must be tested in order to ensure that "improvements" do not generate new defects.

If you begin Software Testing only shortly before an application is scheduled for release, you will not have time to detect and repair many serious defects. Thus, by testing ahead of time, you can prevent problems for your users and avoid costly delays.

Let us derive the main definition of Software Testing.

I. Identifying the defects. What does "defect" mean in Software Testing? The main purpose of Software Testing is to identify defects.
- Defect: a flaw in a component or system that can cause the component or system to fail to perform its required function. See also fault, failure, error and bug.
- Fault: a fault is similar to a defect.
- Failure: the deviation of the component or system from its expected delivery, service or result.
- Error: a human action that produces an incorrect result.
- Bug: a bug is similar to a defect.

II. Isolating the defects. Isolating means separating or dividing out the defects. The isolated defects are collected in the Defect Profile. What is a Defect Profile Document?
a. A Defect Profile is a document with many columns in Software Testing.
b. It is a template provided by the company.

III. Subjecting the defects to rectification. The Defect Profile is subjected to rectification, that is, it is sent to the developer.

IV. Making sure the defects are rectified. After getting the Defect Profile back from the developer, make sure all the defects are rectified before calling the product a quality product. What is quality in testing? Quality is defined as justification of user requirements, or satisfaction of user requirements.

When all 4 steps are completed, we can say that Software Testing is complete. Now let us write a proper definition of testing:

SOFTWARE TESTING, MAIN DEFINITION: Software Testing is the process in which defects are identified, isolated and subjected to rectification, and in which we finally make sure that all the defects are rectified, in order to ensure that the product is a quality product.
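To make the error, fault and failure terms above concrete, here is a minimal Python sketch; the add function and its test are hypothetical, invented purely for illustration. A human error introduces a fault into the code, and executing the faulty component produces an observable failure:

```python
# Hypothetical illustration: a human ERROR (typing "-" instead of "+")
# introduces a DEFECT (fault) into the component ...
def add(a, b):
    return a - b  # fault: should be a + b

# ... and executing the defective component produces a FAILURE:
# a deviation from the expected result.
def test_add():
    expected = 5          # expected result, decided in advance
    actual = add(2, 3)    # actual result produced by the component
    assert actual == expected, f"failure: expected {expected}, got {actual}"

try:
    test_add()
except AssertionError as failure:
    print(failure)  # prints: failure: expected 5, got -1
```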

Objective of Software Testing

- Understand the difference between verification and validation testing activities.
- Understand what benefits the V model offers over other models.
- Be aware of other models in order to compare and contrast.
- Understand that the cost of fixing faults increases as you move the product towards live use.
- Understand what constitutes a master test plan in Software Testing.
- Understand the meaning of each testing stage in Software Testing.

Software Testing
Here is some of the terminology that has to be learned as part of Software Testing. First of all, let us see the major difference between a project and a product in Software Testing:
1. Project: exact rules, i.e. the customer's requirements, must be followed.
2. Product: this is based on general requirements, i.e. on our own requirements.
3. Quotation in Software Testing: estimating the cost of the project.
4. Bidding of the project: consider, for example, a bank that wants to automate its procedures; it would call for bids from various IT development companies.

5. Key Process Areas in Software Testing: instead of repeating the same process again and again, common process areas are kept aside and reused. Consider, for example, that the login page for different projects would be the same.
6. Project Initiation Note (PIN) in Software Testing: this is nothing but a mail to the Company Director.

Software Testing-Company Structure

This is the basic structure that is followed by most companies:
- D: Director
- CEO: Chief Executive Officer
- QM: Quality Manager
- TM: Technical Manager
- HTL: Head Team Leader
- QA: Quality Assurance

Now let us see the categories of human resources in Software Testing:
- SE: Software Engineer
- SSE: Senior Software Engineer
- PM: Project Manager
- TL/PL: Team Leader/Project Leader
- TL: Test Leader
- QTL: Quality Team Leader
- QL: Quality Leader
- TE: Test Engineer
- STE: Senior Test Engineer

Software Testing-Testing Process
1. The application we are testing is called the Application Under Test (AUT).
2. The application is divided into 2 parts: a) Structural and b) Functional.
3. The Structural part is called Invisible.
4. The Functional part is called Visible.
5. The Structural part is tested by the Developer.
6. The Functional part is tested by the Test Engineer.

Fundamental Test Process in Software Testing
The fundamental test process in Software Testing comprises planning, specification, execution, recording and checking for completion.
- Test specification in Software Testing (sometimes referred to as test design) involves designing test conditions and test cases using recognized test techniques identified at the planning stage. Here it is usual to produce a separate document or documents that fully describe the tests that you will carry out. It is important to determine the expected results prior to test execution.
- Test execution involves actually running the specified test on a computer system, either manually or by using an automated test tool.
- Test recording involves keeping good records of the test activities that you have carried out. The versions of the software you have tested and the test specifications are recorded along with the actual outcomes of each test.
- Checking for test completion involves looking at the previously specified test completion criteria to see if they have been met. If not, some tests may need to be re-run, and in some instances it may be appropriate to design some new test cases to meet a particular coverage target.

You will find organizations that have slightly different names for each stage of the process, and you may find some processes that have just a few stages; however, you will find that all good test processes adhere to this fundamental structure.

Successful tests detect faults
- As the objective of a test should be to detect faults, a successful test is one that does detect a fault. This is counter-intuitive, because faults delay progress, and a successful test is therefore one that may cause delay.
- The successful test reveals a fault which, if found later, may be many more times costly to correct, so in the long run it is a good thing.

Meaning of completion or exit criteria in Software Testing
Completion or exit criteria are used to determine when testing (at any stage) is complete. These criteria may be defined in terms of cost, time, faults found or coverage criteria.

Coverage criteria in Software Testing
Coverage criteria are defined in terms of items that are exercised by test suites, such as branches, user requirements, most frequently used transactions, etc.

Testing and Expected Results in Software Testing
The specification of expected results in advance of test execution is perhaps one of the most fundamental principles of testing computer software. If this step is omitted, the human subconscious desire for tests to pass will be overwhelming, and a tester may perhaps interpret a plausible, yet erroneous, result as a correct outcome. Even with a quick and dirty ad-hoc test it is advisable to write down beforehand what you expect to happen. If you are unable to determine expected results for a particular test that you had in mind, then it is not a good test, as you will not be able to (a) determine whether it has passed or not, and (b) repeat it. As you will see when designing tests using black box and white box techniques, there is ample room within the test specification to write down your expected results, and therefore there is no real excuse for not doing it. This may all sound pretty obvious, but many test efforts have floundered by ignoring this basic principle.

"The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong does go wrong it usually turns out to be impossible to get at or repair."
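As an illustration of this principle, a test specification can record the expected result of every case before anything is executed. The following Python sketch is hypothetical; the test ids and the component under test are invented for the example:

```python
# Expected results are recorded in the test specification BEFORE
# any test is executed, so a plausible-but-wrong actual result
# cannot be rationalised into a "pass" after the fact.
test_specification = [
    # (test id, inputs, expected result -- specified in advance)
    ("TC-01", (2, 3), 5),
    ("TC-02", (0, 0), 0),
    ("TC-03", (-1, 1), 0),
]

def execute(component, spec):
    for test_id, args, expected in spec:
        actual = component(*args)
        verdict = "PASS" if actual == expected else "FAIL"
        print(f"{test_id}: expected={expected} actual={actual} -> {verdict}")

execute(lambda a, b: a + b, test_specification)  # component under test
```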

Software Testing-Testing Requirements

Software testing is not an activity to take up when the product is ready. Effective Software Testing begins with a proper plan from the user requirements stage itself. Software testability is the ease with which a computer program can be tested, and metrics can be used to measure the testability of a product. The requirements for effective Software Testing are given in the following subsections.

Operability in Software Testing: "The better the software works, the more efficiently it can be tested."
1. The system has few bugs (bugs add analysis and reporting overhead to the test process)
2. No bugs block the execution of tests
3. The product evolves in functional stages (allows simultaneous development and testing)

Observability in Software Testing: "What is seen is what is tested."
1. Distinct output is generated for each input
2. System states and variables are visible or queriable during execution
3. Past system states and variables are visible or queriable (e.g. transaction logs)
4. All factors affecting the output are visible
5. Incorrect output is easily identified
6. Internal errors are automatically detected through self-testing mechanisms
7. Internal errors are automatically reported
8. Source code is accessible

Controllability in Software Testing: "The better the software is controlled, the more the testing can be automated and optimised."
1. All possible outputs can be generated through some combination of input
2. All code is executable through some combination of input
3. Software and hardware states can be controlled directly by testing
4. Input and output formats are consistent and structured
5. Tests can be conveniently specified, automated, and reproduced

Decomposability in Software Testing: "By controlling the scope of testing, problems can be isolated quickly and smarter testing can be performed."
1. The software system is built from independent modules
2. Software modules can be tested independently

Simplicity in Software Testing: "The less there is to test, the more quickly it can be tested."
1. Functional simplicity
2. Structural simplicity
3. Code simplicity

Stability in Software Testing: "The fewer the changes, the fewer the disruptions to testing."
1. Changes to the software are infrequent
2. Changes to the software are controlled
3. Changes to the software do not invalidate existing tests
4. The software recovers well from failures

Understandability in Software Testing: "The more information we have, the smarter we will test."
1. The design is well understood
2. Dependencies between internal, external and shared components are well understood
3. Changes to the design are communicated
4. Technical documentation is instantly accessible
5. Technical documentation is well organized
6. Technical documentation is specific and detailed
7. Technical documentation is accurate

Software Testing-Why is Software Testing necessary?

In this section we will discuss the necessity of Software Testing. Errors, faults and failures occur while developing a software project; this is an unavoidable fact of life. Why do we make errors that cause faults in computer software, leading to potential failure of our systems? Well, firstly, we are all prone to making simple human errors, and this is compounded by the fact that we all operate under real world pressures such as tight deadlines, budget restrictions, conflicting priorities and so on.

We can illustrate these points with the true story of the Mercury spacecraft. The computer program aboard the spacecraft contained the following statement, written in the FORTRAN programming language:

DO 100 I = 1.10

The programmer's intention was to execute the succeeding statements up to line 100 ten times, creating a loop in which the integer variable I was used as the loop counter, starting at 1 and ending at 10. Unfortunately, what this code actually does is assign the decimal value 1.1 to a variable, and it does that once only; therefore the remaining code is executed once, and not 10 times within a loop. The correct syntax for what the programmer intended is:

DO 100 I = 1,10

As a result of this small mistake, the spacecraft went off course and the mission was aborted (at considerable cost!). So a small mistake made a very big difference.

Cost of errors in Software Testing

The cost of an error can vary from nothing at all to large amounts of money, and even loss of life. The aborted Mercury mission was obviously very costly, but surely this is just an isolated example? Or is it? There are hundreds of stories about failures of computer systems that have been attributed to errors in the software. A few examples are shown below: a nuclear reactor was shut down because a single line of code was coded as X = Y instead of X = ABS(Y), i.e. the absolute value of Y irrespective of whether Y was positive or negative.

Software Testing and quality

Software Testing identifies faults whose removal increases the software quality by increasing the software's potential reliability. Software Testing is the measurement of software quality: we measure how closely we have achieved quality by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability, testability, etc.

Reliability in Software Testing

Reliability is the probability that software will not cause the failure of a system for a specified time under specified conditions. Measures of reliability include MTBF (mean time between failures) and MTTF (mean time to failure), as well as service level agreements and other mechanisms.

How much Software Testing is enough?

It is difficult to determine how much Software Testing is enough. Software Testing is always a matter of judging risks against the cost of extra testing effort. Planning the test effort thoroughly before you begin, and setting completion criteria, will go some way towards ensuring the right amount of Software Testing is attempted. Assigning priorities to tests will ensure that the most important tests have been done should you run out of time.

Exhaustive testing: why not test everything?

It is now widely accepted that you cannot test everything in Software Testing. Exhausted testers you will find, but exhaustive testing you will not. Complete Software Testing is neither theoretically nor practically possible. Consider a 10 character string: it has 2^80 possible input streams and corresponding outputs. If you executed one test per microsecond, it would take approximately 4 times the age of the Universe to test this completely.

Software Testing and risk

How much testing would you be willing to perform if the risk of failure were negligible? Alternatively, how much Software Testing would you be willing to perform if a single defect could cost you your life's savings, or, even more significantly, your life?

Software Testing can be performed in either of two ways:
1. Conventional: Software Testing is started after the coding.
2. Unconventional: Software Testing is done from the initial phase.

Before that, we need to know a couple of main definitions regarding the terminology used in a company.
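The arithmetic behind the 10 character example can be checked directly. The sketch below assumes 8-bit characters, one test per microsecond, and a universe age of roughly 13.8 billion years; with these round figures the answer comes out near 3 universe-ages, the same order of magnitude as the figure quoted above:

```python
# A 10-character string of 8-bit characters allows 2**80 possible inputs.
possible_inputs = 2 ** 80

# Assume one test per microsecond, as in the text.
seconds = possible_inputs / 1_000_000
years = seconds / (60 * 60 * 24 * 365.25)

AGE_OF_UNIVERSE_YEARS = 13.8e9  # assumed round figure
print(f"years of testing needed: {years:.2e}")
print(f"multiples of the universe's age: {years / AGE_OF_UNIVERSE_YEARS:.1f}")
```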

PROJECT: exact rules, i.e. the customer's requirements, must be followed.
PRODUCT: based on general requirements, i.e. on our own requirements.

Software Testing-Testing Life Cycles

Before going into the Testing Life Cycle in Software Testing, we need to know how the software is developed, i.e. its life cycle. The Software Development Life Cycle (SDLC) is also called the Product Development Life Cycle (PDLC) in Software Testing. The SDLC has 6 phases:
a) Initial Phase
b) Analysis Phase
c) Design Phase
d) Coding Phase
e) Testing Phase
f) Delivery & Maintenance Phase

Now let us discuss each phase in detail.

a) Initial Phase in Software Testing:
(i) Gathering the requirements: The Business Analyst (BA) goes to the client and gathers the information of the company through a predefined template. He collects all the information about what has to be developed, in how many days, and all the basic requirements of the company. As proof of his collection of information, the Business Analyst prepares one document, called either:
- BDD: Business Development Design
- BRS: Business Requirement Specification
- URS: User Requirement Specification
- CRS: Customer Requirement Specification
All are the same document under different names.
(ii) Discussing the financial terms and conditions: The Engagement Manager (EM) discusses all the financial matters.

b) Analysis Phase in Software Testing:
In this phase the BDD document is taken as the input, and 4 steps are done:
(i) Analyse the requirements: all the requirements are analysed and studied.
(ii) Feasibility study: feasibility means the possibility of developing the project.

(iii) Deciding the technology in Software Testing: deciding which technology has to be used, for example SUN or Microsoft technology.
(iv) Estimation in Software Testing: estimating the resources, for example time, number of people, etc. During the Analysis Phase the Project Manager prepares the PROJECT PLAN. The output document of this phase is the Software Requirements Specification (SRS), and this document is prepared by a Senior Analyst.

c) Design Phase in Software Testing:
The designing is done at 2 levels:
(i) High Level Designing: the project is divided into a number of modules. The High Level Designing is done by the Technical Manager (TM) or Chief Architect (CA), who prepares the Technical Design Document or Detail Design Document.
(ii) Low Level Designing: the modules are further divided into a number of submodules. The Low Level Designing is done by the Team Lead (TL).

d) Coding Phase in Software Testing:
In this phase the developers write the programs for the project by following the coding standards, and prepare the source code.

e) Testing Phase in Software Testing:
1. In the first phase, when the BDD is prepared, the Test Engineer studies the document and sends a Review Report to the Business Analyst (BA). A Review Report in Software Testing is nothing but a document prepared by the Test Engineer while studying the BDD document; the points which he cannot understand or which are not clear are written in that report and sent to the BA.
2. The Test Engineer then writes the Test Cases for the application.
3. The testing people prepare a document called the Defect Profile Document.
4. In Manual Testing the product would be up to 50% defect free, while in Automation Testing it would be up to 93% defect free.
5. The project is also tested by the client; this is called User Acceptance Testing.

f) Delivery & Maintenance Phase in Software Testing:
1. After the project is done, a mail is sent to the client mentioning the completion of the project. This is called the Software Delivery Note.
2. The project is installed in the client environment and testing is done there; this is called Port Testing in Software Testing. While installing, if any problem occurs, the maintenance people write a Deployment Document (DD) to the Project Manager (PM).
3. After some time, if the client wants some changes in the software, the software changes are done by Maintenance.
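The tutorial describes the Defect Profile as a company-provided template with many columns. As a sketch only, the Python record below uses hypothetical column names, since the real template varies from company to company:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectRecord:
    """One row of a Defect Profile; the columns here are hypothetical,
    since the real template is provided by each company."""
    defect_id: str
    description: str
    module: str
    severity: str          # e.g. "critical", "major", "minor"
    status: str = "open"
    found_on: date = field(default_factory=date.today)

# Isolated defects are collected into the Defect Profile document ...
defect_profile = [
    DefectRecord("D-001", "Login accepts an empty password", "Login", "critical"),
    DefectRecord("D-002", "Total mis-rounds to 2 decimals", "Billing", "minor"),
]

# ... which is then sent to the developers for rectification.
for defect in defect_profile:
    defect.status = "sent for rectification"
```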

Software Testing-Testing Life Cycles

The internal processes in each of the following software lifecycle stage descriptions are: the Kickoff Process, the Informal Iteration Process, the Formal Iteration Process, the In-stage Assessment Process, and the Stage Exit Process.

Kickoff Process in Software Testing:
- Each stage is initiated by a kickoff meeting, which can be conducted either in person or by Web teleconference.
- The purpose of the kickoff meeting is to review the output of the previous stage, examine the anticipated activities and required outputs of the current stage, review the current project schedule, go over any additional inputs required by that particular stage, and review any open issues.
- All project participants are invited to attend the kickoff meeting for each stage.
- The Primary Developer Representative is responsible for preparing the agenda and materials to be presented at this meeting.

Informal Iteration Process:
- Most of the creative work for a stage occurs here.
- Participants work together to gather additional information and refine stage inputs into draft deliverables.
- Activities of this stage may include interviews, meetings, the generation of prototypes, and electronic correspondence or official memoranda. All of these communications are deemed informal and are not recorded as minutes or documents of record. The intent here is to encourage, rather than inhibit, the communication process.
- This process concludes when the majority of participants agree that the work is substantially complete and it is time to generate draft deliverables for formal review and comment.

Formal Iteration Process:
- In this process, draft deliverables are generated for formal review and comment.

- Each draft deliverable is given a version number and placed under configuration management control. Each deliverable was introduced during the kickoff process, and is intended to satisfy one or more outputs for the current stage.
- As participants review the draft deliverables, they are responsible for reporting errors found, and any concerns they may have, to the Primary Developer Representative via electronic mail. There are no formal check off / signature forms for this part of the process; the intent here is to encourage review and feedback.
- The Primary Developer Representative in turn consolidates these reports into a series of issues associated with a specific version of a deliverable. The person in charge of developing the deliverable works to resolve these issues, then releases another version of the deliverable for review. This process iterates until all issues are resolved for each deliverable.
- At the discretion of the Primary Developer Representative and the Primary End-user Representative, certain issues may be reserved for resolution in later stages of the development lifecycle. These issues are disassociated from the specific deliverable and tagged as "open issues." Open issues are reviewed during the kickoff meeting for each subsequent stage.
- Once all issues against a deliverable have been resolved or moved to open status, the final (release) draft of the deliverable is prepared and submitted to the Primary Developer Representative.
- When final drafts of all required stage outputs have been received, the Primary Developer Representative reviews the final suite of deliverables, reviews the amount of labor expended against this stage of the project, and uses this information to update the project plan. The project plan update includes a detailed list of tasks, their schedule, and the estimated level of effort for the next stage.
- The stages following the next stage (out stages) in the project plan are updated to include a high level estimate of schedule and level of effort, based on current project experience. Out stages are maintained at a high level in the project plan and are included primarily for informational purposes, because direct experience has shown that it is very difficult to accurately plan detailed tasks and activities for out stages in a software development lifecycle.
- The updated project plan and schedule is a standard deliverable for each stage of the project. The Primary Developer Representative circulates the updated project plan and schedule for review and comment.

- Once the project plan and schedule have been finalized, all final deliverables for the current stage are made available to all project participants, and the Primary Developer Representative initiates the next process.

In-stage Assessment Process:
- This is the formal quality assurance review process for each stage in Software Testing.
- The process is initiated when the Primary Developer Representative schedules an in-stage assessment with the independent Quality Assurance Reviewer (QAR), a selected End-user Reviewer (usually a Subject Matter Expert), and a selected Technical Reviewer.
- These reviewers formally review each deliverable to make judgments as to the quality and validity of the work product, as well as its compliance with the standards defined for deliverables of that class.
- Each reviewer follows a formal checklist during their review, indicating their level of concurrence with each review item in the checklist. Deliverable class standards are defined in the software quality assurance section of the project plan; refer to the software quality assurance plan for this project for deliverable class standards and associated review checklists.
- The QA Reviewer is tasked solely with verifying the completeness and compliance of the deliverable against the associated deliverable class standard. The QAR may make recommendations, but cannot raise formal issues that do not relate to the deliverable standard.
- The End-user Reviewer is tasked with verifying the completeness and accuracy of the deliverable in terms of desired software functionality.
- The Technical Reviewer determines whether the deliverable contains complete and accurate technical information.
- Any issues raised by the reviewers against a specific deliverable will be logged and relayed to the personnel responsible for generation of the deliverable. Once all issues for the deliverable have been addressed, the deliverable will be resubmitted to the reviewers for reassessment, and the revised deliverable will be released to project participants for another formal review iteration.
- A deliverable is considered to be acceptable when each reviewer indicates substantial or unconditional concurrence with the content of the deliverable and the review checklist items.
- Once all three reviewers have indicated concurrence with the deliverable, the Primary Developer Representative will release a final in-stage assessment report and initiate the next process.

Stage Exit Process in Software Testing:
- The stage exit is the vehicle for securing the concurrence of principal project participants to continue with the project and move forward into the next stage of development.
- The purpose of a stage exit is to allow all personnel involved with the project to review the current project plan and stage deliverables, to provide a forum to raise issues and concerns, and to ensure an acceptable action plan exists for all open issues.
- The process begins when the Primary Developer Representative notifies all project participants that all deliverables for the current stage have been finalized and approved via the In-Stage Assessment report.
- The Primary Developer Representative then schedules a stage exit review with, as a minimum, the project executive sponsor and the Primary End-user Representative. All interested participants are free to attend the review as well. This meeting may be conducted in person or via Web teleconference.
- The stage exit process ends with the receipt of concurrence from the designated approvers to proceed to the next stage. This is generally accomplished by entering the minutes of the exit review as a formal document of record, with either physical or digital signatures of the project executive sponsor, the Primary End-user Representative, and the Primary Developer Representative.

The initial steps to get a project: one of our organization's Business Analysts goes to the client place, collects all the requirements, and negotiates with the client regarding the project. Once it is approved, he prepares documents such as the Project Proposal, Statement of Work, User Requirements Document and Business Rules. These are the initial documents for any project.

Software Testing-Testing Types

1. What different testing approaches are there in Software Testing?
A: Each of the following represents a different testing approach:
1. Black box testing
2. White box testing
3. Unit testing
4. Incremental testing
5. Integration testing
6. Functional testing
7. System testing
8. End-to-end testing
9. Sanity testing
10. Regression testing
11. Acceptance testing
12. Load testing
13. Performance testing
14. Usability testing
15. Install/uninstall testing
16. Recovery testing
17. Security testing
18. Compatibility testing
19. Exploratory testing
20. Ad-hoc testing
21. User acceptance testing
22. Comparison testing
23. Alpha testing
24. Beta testing
25. Mutation testing

2. What is black box testing in Software Testing?
A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software. You CAN learn to do black box testing, with little or no outside help.

3. What is open box testing in Software Testing?
A: Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

4. What is glass box testing in Software Testing?
A: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

5. What is unit testing in Software Testing?
A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is performed after the expected test results are met or differences are explainable/acceptable.

6. What is system testing in Software Testing?
A: System testing is black box testing, performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios that occur in a "simulated real life" test environment, and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved; for a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.
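As a small illustration of the unit level, here is a hypothetical unit test written with Python's built-in unittest module; the function under test is invented for the example, and each test specifies its expected result in advance:

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Component under unit test (hypothetical example)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    # Expected results are specified in advance for each input.
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```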

7. What is integration testing in Software Testing?
A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

8. What is parallel/audit testing in Software Testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system, to verify the new system performs the operations correctly. Another definition: with parallel testing, users can easily choose to run batch tests or asynchronous tests depending on the needs of their test systems. Testing multiple units in parallel increases test throughput and lowers a manufacturer's cost of test.

9. What is end-to-end testing in Software Testing?
A: Similar to system testing, the "macro" end of the test scale is testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.

10. What is functional testing in Software Testing?
A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

11. What is usability testing in Software Testing?
A: Usability testing is testing for "user-friendliness". Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

12. What is regression testing in Software Testing?
A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for. Another definition: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
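The baseline idea behind regression testing can be sketched as follows; the baseline file name, its JSON format, and the component are all hypothetical choices made for illustration:

```python
import json

def run_suite(component, inputs):
    """Execute the fixed regression inputs and collect actual outputs."""
    return {str(i): component(i) for i in inputs}

def compare_with_baseline(results, baseline_path="baseline.json"):
    """Report every discrepancy between this release and the baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    # Any discrepancy means the release has "undone" previous behaviour.
    return {case: (baseline.get(case), actual)
            for case, actual in results.items()
            if baseline.get(case) != actual}

# Usage sketch (assumes a previously saved baseline.json):
# discrepancies = compare_with_baseline(run_suite(my_component, range(10)))
# Every entry in `discrepancies` must be highlighted and accounted for.
```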

13. What is load testing in Software Testing?
A: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads to determine at what point the system's response time degrades or fails. Another definition: load testing simulates the expected usage of a software program by simulating multiple users that access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers.

14. What is sanity testing in Software Testing?
A: Sanity testing is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc. Another definition: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a "sane" enough condition to warrant further testing in its current state.

15. What is performance testing in Software Testing?
A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements. The term is often used interchangeably with "stress" and "load" testing. Ideally, performance testing (and any other type of testing) is defined in the requirements documentation or in the QA or Test Plans.

16. What is installation testing in Software Testing?
A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.

17. What is security/penetration testing in Software Testing?
A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
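A very small sketch of the load testing idea follows, simulating concurrent users with Python threads against a stand-in request function; real load tests would use dedicated tools such as the LoadRunner tool mentioned later in this tutorial:

```python
import time
import threading
from concurrent.futures import ThreadPoolExecutor

# A shared resource creates contention, so response times really do
# grow as the simulated load increases.
db_lock = threading.Lock()

def simulated_request(user_id: int) -> float:
    """Hypothetical stand-in for one user's request to the system."""
    start = time.perf_counter()
    with db_lock:            # pretend every request needs the database
        time.sleep(0.001)    # pretend the server does some work
    return time.perf_counter() - start

def load_test(concurrent_users: int) -> float:
    """Return the average response time under the given load."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(simulated_request, range(concurrent_users)))
    return sum(times) / len(times)

# Increase the load step by step to observe response times degrading.
for users in (10, 50, 100):
    print(f"{users} concurrent users -> avg response {load_test(users):.4f}s")
```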

18. What is recovery/error testing in Software Testing?
A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

19. What is compatibility testing in Software Testing?
A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

20. What is acceptance testing in Software Testing?
A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

21. What is alpha testing in Software Testing?
A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers or software QA engineers. Another definition: alpha testing is the final testing before the software is released to the general public. First, the software is tested by in-house developers (and this is called the first phase of alpha testing); they use either debugger software or hardware-assisted debuggers, and the goal is to catch bugs quickly. Then the software is handed over to the software QA staff (and this is called the second stage of alpha testing) for additional testing in an environment that is similar to the intended use.

22. What is beta testing in Software Testing?
A: Beta testing is testing an application when development and testing are essentially completed, and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not by programmers, software engineers, or test engineers. Another definition: following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

23. What is comparison testing in Software Testing?
A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

28. bots. completenes. Another Definition Term often used interchangeably with 'load' and 'performance' testing. What is incremental testing in Software Testing? A: Incremental testing is partial testing of an incomplete product. Load testing generally stops short of stress testing. is often used synonymously with stress testing. and volume testing. etc. reliability testing. During stress testing. when a web server is stress tested. The term. though there is gray area in between stress testing and load testing. Actually. is often used synonymously with stress testing.extraordinary operating conditions. large complex queries to a database system. During stress testing. and various denial of service tools. heavy repetition of certain actions or inputs. What is software testing? A: Software testing is a process that identifies the correctness. in order to observe any negative results. the load is so great that errors are the expected results. Stress testing tests the stability of a given system or entity. input of large numerical values. Load testing generally stops short of stress testing. and volume testing. For example. It can find defects. The term. performance testing. . performance testing. Also used to describe such tests as system functional testing while under unusually heavy loads. load testing. but cannot prove there are no defects. Load testing generally stops short of stress testing. What is the difference between reliability testing and load testing in Software Testing? A: Load testing is a blanket term that is used in many different ways across the professional software testing community. What is the difference between volume testing and load testing in Software Testing? A: Load testing is a blanket term that is used in many different ways across the professional software testing community. and volume testing. though there is gray area in between stress testing and load testing. 29. reliability testing. It tests something beyond its normal operational capacity. the load is so great that errors are the expected results. load testing. The goal of incremental testing is to provide an early feedback to software developers. What is the difference between performance testing and load testing in Software Testing? A: Load testing is a blanket term that is used in many different ways across the professional software testing community. 25. 27. The term. testing aims to find out how many users can be on-line. testing cannot establish the correctness of software. performance testing. During stress testing. though there is gray area in between stress testing and load testing. For example. the load is so great that errors are the expected results. is often used synonymously with stress testing. without crashing the server. using scripts. 26. load testing. and quality of software. reliability testing. at the same time. a web server is stress tested.

30. What is the difference between alpha and beta testing in Software Testing?
A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by the public, a few select prospective customers, or the general public.

31. What is clear box testing in Software Testing?
A: Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic. You CAN learn clear box testing, with little or no outside help.

32. What is closed box testing in Software Testing?
A: Closed box testing is the same as black box testing: a type of testing that considers only externally visible behavior, neither the code itself nor the "inner workings" of the software.

33. What is boundary value analysis in Software Testing?
A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.

34. What is gamma testing in Software Testing?
A: Gamma testing is testing of software that has all the required features, but did not go through all the in-house quality checks. Cynics tend to refer to such software releases as "gamma testing".

35. What is functional testing in Software Testing?
A: Functional testing is the same as black box testing: a type of testing that considers only externally visible behavior, neither the code itself nor the "inner workings" of the software.

36. What is ad hoc testing in Software Testing?
A: Ad hoc testing is the least formal testing approach. Another definition: similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

37. What is incremental integration testing in Software Testing?
A: Incremental integration testing is continuous testing of an application as new functionality is added. It requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. It is done by programmers or by testers.

38. What is automated testing in Software Testing?
A: Automated testing is a formally specified and controlled testing approach in which tests are executed with tool support.

39. What is bottom-up testing in Software Testing?
A: Bottom-up testing is a technique for integration testing. The objective of bottom-up testing is to call low-level components first, for testing purposes. In other words, with bottom-up testing, low-level components are tested first, and not vice versa. A test engineer creates and uses test drivers for components that have not yet been developed.

40. What are the parameters of performance testing in Software Testing?
A: The term "performance testing" is often used synonymously with stress testing, load testing, reliability testing, and volume testing. Performance testing is a part of system testing, but it is also a distinct level of testing. Performance testing verifies loads, volumes, and response times.

41. When do you choose automated testing in Software Testing?
A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile. Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task. You can learn to use automated tools, with little or no outside help.

42. What is the difference between system testing and integration testing in Software Testing?
A: System testing is high level testing, and integration testing is a lower level of testing. Integration testing is completed first, not system testing. For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements, whereas the purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.

43. How do you perform integration testing in Software Testing?
A: First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.
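The driver idea behind bottom-up testing can be sketched like this: the low-level component is real, while the higher-level caller that does not yet exist is replaced by a simple test driver (all names and values here are hypothetical):

```python
# Low-level component, already developed and under test.
def calculate_tax(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

# Test driver: stands in for the not-yet-developed billing module
# and calls the low-level component directly with known inputs.
def tax_driver():
    cases = [
        ((100.0, 0.10), 10.00),
        ((59.99, 0.20), 12.00),
    ]
    for (amount, rate), expected in cases:
        actual = calculate_tax(amount, rate)
        status = "PASS" if actual == expected else "FAIL"
        print(f"calculate_tax{(amount, rate)} = {actual} "
              f"(expected {expected}) {status}")

tax_driver()
```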

44. Is the regression testing performed manually in Software Testing?
A: It depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.

45. What is the objective of regression testing in Software Testing?
A: The objective of regression testing is to test that the fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for.

46. What is disaster recovery testing in Software Testing?
A: Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

47. Which of the tools should you learn?
A: Learn the most popular software tools (i.e. WinRunner, LoadRunner, the Rational tools, LabView, etc.), and pay special attention to LoadRunner and the Rational toolset.

48. What is exploratory testing in Software Testing?
A: Exploratory testing is often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

49. What is volume testing in Software Testing?
A: Volume testing involves testing a software or Web application using corner cases of "task size" or input data size. The exact volume tests performed depend on the application's functionality, its input and output mechanisms, and the technologies used to build the application. Sample volume testing considerations include, but are not limited to:
- If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file.
- If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data.
- If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request.
- If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submitting the form.
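The first two volume testing considerations above (an empty input file versus a huge one) can be sketched directly; the file names and the line-counting component are illustrative:

```python
import os

def count_lines(path: str) -> int:
    """Hypothetical component that reads a text file as input."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)

# Corner case 1: an empty input file.
with open("empty.txt", "w", encoding="utf-8"):
    pass

# Corner case 2: a very large input file (kept modest here; a real
# volume test might use hundreds of megabytes).
with open("huge.txt", "w", encoding="utf-8") as f:
    for i in range(1_000_000):
        f.write(f"record {i}\n")

print("empty:", count_lines("empty.txt"))   # expect 0
print("huge :", count_lines("huge.txt"))    # expect 1000000
print("size :", os.path.getsize("huge.txt"), "bytes")
```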

51. errors if any are eliminated. Any software product can be tested in one of the two ways: 1) Knowing the specific function the product has been designed to perform. The first test approach is called a)Black-box testingand the second is called b)White-box testing The attributes of both black-box and white-box testing can be combined to provide an approach that validates the software interface and also selectively assures that internal structures of software are correct. they don't crash. that is. Proper implementation requires large computational resources. tests can be conducted to ensure that the internal operation performs according to specification and all internal components are being adequately exercised and in the process. they don't lock up the system. and to find and correct the errors in it. architectures and applications but unique guidelines and approaches to testing are warranted in some cases. that they don't corrupt each other's files.UnConventional:-In this Testing is done from the Initial Phase. A method for determining if a set of test data or test cases is useful. etc. tests can be planned and conducted to demonstrate that each function is fully operational. 2) Knowing the internal working of a product. they don't consume system resources. and Client/Server Architectures.applications. to make sure they all get along together. 2.Conventional:-In this Testing is started after the Coding. This document covers Testing GUIs.Yes ----------------------------------------------------------------------------Integration Testing---------------Yes---------------------. All test cases shall be designed to find the maximum errors through their execution Testing methodologies are used for designing test cases. What is Mutation testing in Software Testing? . by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. The black-box and white-box testing methods are applicable across all environments.Yes ----------------------------------------------------------------------------- . they can share the printer peacefully. These methodologies provide the developer with a systematic approach for testing. The testing methodologies applicable to test case design in different testing phases are as given below: ---------------------------------------------------------------------------Types of Testing--------White-box Testing--------Black Box Testing ---------------------------------------------------------------------------Unit Testing --------------------. Test case design for software testing is as important as the design of the software itself. Software Testing -Testing Methods Software Testing can be performed in either the two types:1.

Nowadays one more testing methodology has emerged, called Grey-Box Testing, in which both Black Box and White Box testing are performed. It is also called a mixture of Black Box and White Box Testing: testing the application functionality and also testing the application structure come under Grey Box Testing.

Software Testing-Testing Methods-White Box Testing

White Box Testing:
a) White-box testing of software is designed for close examination of procedural detail. Providing test cases that exercise specific sets of conditions and/or loops tests the logical paths through the software.
b) Unfortunately, even for a few LOC, the number of paths becomes too large and presents certain logistic problems. Due to this, a limited number of important logical paths can be selected and exercised, and important data structures can be probed for validity.
c) White box testing is a test case design method that uses the control structure of the procedural design to derive test cases.
d) White box testing needs to be adopted under the unit level testing strategy. It can be adapted to a limited extent under integration testing if the situation warrants it.
e) It is the testing method in which the user tests the application structure and how it is acting.
f) Usually Developers perform the White Box Testing.

The test cases derived from white-box testing methods will:
1) Guarantee that all independent paths within a module have been exercised at least once
2) Exercise all logical decisions on their true and false sides
3) Execute all loops at their boundaries and within their operational bounds
4) Exercise internal data structures to ensure their validity

Basis path testing and control structure testing are some of the most widely used white-box testing techniques.
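As a sketch of points 1) to 3), consider this hypothetical function with one decision and one loop; the test cases are derived from the code's control structure rather than from its specification:

```python
def count_negatives(numbers):
    """Hypothetical component: returns how many values are negative."""
    negatives = 0
    for n in numbers:            # loop: exercised at 0 and >0 iterations
        if n < 0:                # decision: needs a true AND a false case
            negatives += 1
    return negatives

# White-box test cases chosen from the control structure:
assert count_negatives([]) == 0         # loop executes zero times
assert count_negatives([3]) == 0        # decision takes its false side
assert count_negatives([-1, 2]) == 1    # decision takes its true side
print("all structural cases exercised")
```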

Nowadays one more testing methodology has emerged, called Grey-Box Testing, in which both Black Box and White Box testing are performed. Testing the application functionality and also testing the application structure come under Grey Box Testing; it is also called a mixture of Black Box and White Box Testing.

Software Testing-Testing Methods-Black Box Testing

Black Box Testing:-
a) Black-box tests are used to demonstrate that the software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g. data files) is maintained. Tests are based on requirements and functionality.
b) It is the test method in which the user tests the application functionality; he need not bother about the application structure, because the customer always looks at the screens and how the application behaves.
c) Usually a Test Engineer will do the Black Box Testing.
d) Black-box testing uncovers errors of the following categories:
- Incorrect or missing functions
- Interface errors
- Errors in the data structures or external database access
- Performance errors
- Initialisation and termination errors
e) Black-box testing is applied during the later stages of testing, as it purposely disregards control structure and attention is focused on the problem domain. Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system; it enables the developer to derive sets of input conditions (test cases) that will fully exercise all functional requirements for a program. Test cases are to be designed to answer the following questions:
1) How is functional validity tested?
2) What categories of input will make good test cases?
3) Is the system particularly sensitive to certain input values?
4) How are the boundaries of data input isolated?
5) What data rates and data volumes can the system tolerate?
6) What effect will specific combinations of data have on system operation?
f) The following black-box testing methods are practically feasible and adopted depending on applicability:
1. Graph-based testing methods
2. Equivalence partitioning
3. Boundary value analysis
g) Black box testing (data driven or input/output driven) is not based on any knowledge of internal design or code. It is not an alternative to white box testing.

Presentation on Black Box Testing: here is a small presentation given by the famous Professor Cem Kaner.
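As a small illustration, the sketch below derives test cases purely from a stated requirement, with no reference to the implementation's internals; the requirement and values are hypothetical:

# Black-box sketch: the tester sees only the requirement, not the code.
# Hypothetical requirement: leap_year(y) returns True for years divisible
# by 4, except centuries, which must also be divisible by 400.

def leap_year(y):
    # Implementation details are opaque to the black-box tester.
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

# Test cases derived purely from the requirement's input/output behaviour.
requirement_cases = {2004: True, 1900: False, 2000: True, 2003: False}

for year, expected in requirement_cases.items():
    assert leap_year(year) == expected, f"failed for {year}"
print("all black-box tests passed")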

Equivalence Partitioning
This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. An equivalence class represents a set of valid or invalid states for input conditions. Equivalence classes may be defined according to the following guidelines:
- If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
- If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
- If an input condition is boolean, one valid and one invalid equivalence class are defined.

Test case design for Equivalence Partitioning:
- A good test case reduces by more than one the number of other test cases which must be developed
- A good test case covers a large set of other possible cases
- Classes of valid inputs
- Classes of invalid inputs

Boundary Value Analysis
This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning, since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include the values a and b and values just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.

Test case design for Boundary Value Analysis: situations on, above, or below the edges of input, output, and condition classes have a high probability of success.
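The following sketch applies both techniques to a hypothetical requirement (an input field accepting integers from 1 to 100); the class representatives and boundary values are chosen per the guidelines above:

# Equivalence partitioning and boundary value analysis for a
# hypothetical requirement: accept integers in the range 1..100.

LOW, HIGH = 1, 100

def accept(n):
    return LOW <= n <= HIGH

# Equivalence partitioning: one representative per class is enough.
valid_class = [50]            # the single valid class
invalid_classes = [-10, 250]  # below-range and above-range classes

# Boundary value analysis: values on, just below, and just above each edge.
bva_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

for n in valid_class:
    assert accept(n)
for n in invalid_classes:
    assert not accept(n)
assert [accept(n) for n in bva_values] == [False, True, True, True, True, False]
print("equivalence and boundary tests passed")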

Software Testing-Testing Models

This section will discuss various models for Software Testing. Definitions of these models will differ; however, the fundamental principles are agreed on by experts and practitioners alike. There are many models used to describe the sequence of activities that make up a Systems Development Life Cycle (SDLC); the SDLC is used to describe the activities of both development and maintenance work in Software Testing. Among these models are:
- Sequential (the traditional waterfall model)
- Incremental (the function by function incremental model)
- Spiral (the incremental, iterative, evolutionary, prototype, RAD model)
- V-model

These models would all benefit from earlier attention to the testing activity that has to be done at some time during the SDLC. Any reasonable model for the SDLC must allow for change, and the spiral approach allows for this with its emphasis on slowly changing (evolving) design: we have to assume change is inevitable and will have to design for change.

Software Testing-Testing Models-V Model

The V Model, while admittedly obscure, gives equal weight to testing rather than treating it as an afterthought. Initially defined by the late Paul Rook in the late 1980s, the V was included in the U.K.'s National Computing Centre publications in the 1990s with the aim of improving the efficiency and effectiveness of software development. It is accepted in Europe and the U.K. as a superior alternative to the waterfall model, yet in the U.S. the V Model is often mistaken for the waterfall. In fact, the V Model emerged in reaction to some waterfall models that showed testing as a single phase following the traditional development phases of requirements analysis, high-level design, detailed design and coding. The V shows the typical sequence of development activities on the left-hand (downhill) side and the corresponding sequence of test execution activities on the right-hand (uphill) side. The waterfall model did considerable damage by supporting

the common impression that testing is merely a brief detour after most of the mileage has been gained by mainline development activities. Many managers still believe this, even though testing usually takes up half of the project time.

Several testing strategies are available and lead to the following generic characteristics:
1) Testing begins at the unit level and works "outward" toward the integration of the entire system
2) Different testing techniques are appropriate at different points of the software development cycle

Testing is divided into five phases as follows:
a) Unit Testing
b) Integration Testing
c) Regression Testing
d) System Testing
e) Acceptance Testing
For projects with a tailored SDLC, the testing activities are also tailored according to the requirements and applicability.

The context of Unit and Integration testing changes significantly in Object Oriented (OO) projects. Class integration testing, based on sequence diagrams, state-transition diagrams, class specifications and collaboration diagrams, forms the unit and integration testing phase for OO projects; it identifies the integration of classes needed to implement certain functionality. The test case design for system and acceptance testing, however, needs to handle the OO-specific intricacies. The meaning of system testing and acceptance testing remains the same in the OO and Web-based application contexts also.

Relation Between Development and Testing Phases
Testing is planned right from the URD stage of the SDLC. The following table indicates the planning of testing at the respective stages.

The "V" diagram indicating this relationship is as follows.

DRE (Defect Removal Efficiency): DRE = A / (A + B), where A = defects found by the testing team and B = defects found by customer-side people during maintenance.

Refinement of the V-model: to decrease cost and time complexity in the development process, small-scale and medium-scale companies follow a refined form of the V-model.

Software Testing-Software Testing Phases

1. Unit Testing
As per the "V" diagram of the SDLC, testing begins with Unit testing. Unit testing focuses verification effort on the smallest unit of software design: the unit. The units are identified at the detailed design phase of the software development life cycle, and unit testing can be conducted in parallel for multiple units. Unit testing makes heavy use of White Box testing techniques, exercising specific paths in a unit's control structure to ensure complete coverage and maximum error detection. Five aspects are tested under Unit testing considerations:
- The module interface is tested to ensure that information properly flows into and out of the program unit under test.
- The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
- All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.
- Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
- And finally, all error-handling paths are tested.
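A minimal unit-test sketch using Python's standard unittest module is shown below; the unit is hypothetical, and the three cases touch the module interface, a boundary condition, and an error-handling path as described above:

import unittest

def average(numbers):
    if not numbers:
        raise ValueError("empty input")   # error-handling path
    return sum(numbers) / len(numbers)

class TestAverage(unittest.TestCase):
    def test_interface(self):                # information flows in and out
        self.assertEqual(average([2, 4]), 3)

    def test_boundary_single_element(self):  # smallest valid input
        self.assertEqual(average([7]), 7)

    def test_error_handling(self):           # error path exercised
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()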

Unit Testing (COM/DCOM Technology): The integral parts covered under unit testing will be:
- the Active Server Page (ASP) that invokes the ATL component (which in turn can use C++ classes)
- the actual component
- the interaction of the component with the persistent store or database, and the database tables
The driver for the unit testing of a unit belonging to a particular component or subsystem depends on the component alone. Wherever a User Interface is available, the UI called from a web browser will initiate the testing process. If a UI is not available, then appropriate drivers (code in C++, as an example) will be developed for testing.

Unit Test Coverage Goals:

Path Coverage: The path coverage technique is to verify whether each of the possible paths in each of the functions has executed properly. A path is a set of branches of possible flow. Since a loop introduces an unbounded number of paths, the path coverage technique employs a test that considers only a limited number of looping possibilities.

Statement Coverage: The statement coverage technique requires that every statement in the program be evoked at least once. The advantage is that this measure can be applied directly to object code and does not require processing source code.

Decision (Logic/Branch) Coverage: The decision coverage test technique seeks to identify the percentage of all possible decision outcomes that have been considered by a suite of test procedures. It requires that every point of entry and exit in the software program be invoked at least once, and that all possible outcomes for a decision in the program be exercised at least once.

Condition Coverage: This technique seeks to verify the accuracy of the true or false outcome of each Boolean sub-expression. It employs tests that measure the sub-expressions independently.

Multiple-Condition Coverage: This takes care of covering the different conditions, which are interrelated; it verifies coverage at a higher level than decision execution or individual Boolean expressions.
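To illustrate the difference between these coverage goals, here is a small sketch; the function is hypothetical, and the two test sets contrast decision coverage with multiple-condition coverage:

# Coverage sketch: decision coverage needs the whole 'if' to be both
# true and false; multiple-condition coverage needs every combination
# of the two sub-conditions.

def grant_discount(is_member, total):
    if is_member and total > 100:   # one decision with two conditions
        return True
    return False

# Decision coverage: one case making the decision true, one making it false.
decision_cases = [(True, 150), (False, 50)]

# Multiple-condition coverage: all four combinations of the two conditions.
multi_condition_cases = [(True, 150), (True, 50), (False, 150), (False, 50)]

for member, total in decision_cases + multi_condition_cases:
    expected = member and total > 100
    assert grant_discount(member, total) == expected
print("coverage-oriented tests passed")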

Each unit of functionality will be tested for the following considerations:
Type: Type validation takes into account things such as a field expecting alphanumeric characters not allowing user input of anything other than that.
Size: This validation ensures that the size limit for a float or variable character string input from the user does not exceed the size allowed by the database for the respective column.
Presence: This validation ensures that all mandatory fields are present; they should also be mandated by the database by making the column NOT NULL (this can be verified from the low-level design document).
Validation: This is for any other business validation that should be applied to a specific field or for a field that is dependent on another field, such as a duplicate check (e.g. range validation: body temperature should not exceed 106 degrees Celsius).
GUI based: In case the unit is UI based, GUI-related consistency checks like font sizes, window sizes, background color, and message & error boxes will be checked.

Unit testing would also include testing inter-unit functionality within a component. This will consist of two different units belonging to the same component interacting with each other; the functionality of such units will be tested with separate unit test(s).

2. Integration Testing
After unit testing, modules shall be assembled or integrated to form the complete software package as indicated by the high-level design. Integration testing is a systematic technique for verifying the software structure and sequence of execution while conducting tests to uncover errors associated with interfacing. Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. Integration testing is sub-divided as follows:

i) Top-Down Integration Testing: Top-down integration is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate to the main control module are incorporated into the structure in either a depth-first or breadth-first manner (a small sketch using a stub follows at the end of this subsection).

ii) Bottom-Up Integration Testing: Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e. modules at the lowest level in the program structure). Since modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available and the need for stubs is eliminated.

iii) Integration Testing for OO projects: Thread Based Testing follows an execution thread through objects to ensure that classes collaborate correctly. In thread based testing:
o A set of classes required to respond to one input or event for the system is identified
o Each thread is integrated and tested individually
o Regression testing is applied to ensure that no side effects occur
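Here is the promised sketch of top-down integration with a stub; the module names are hypothetical, and the stub returns a canned value in place of the real subordinate module:

# Top-down integration sketch: the main control module is tested first,
# with a stub standing in for a subordinate module not yet integrated.

def tax_service_stub(amount):
    # Stub: returns a canned value instead of the real tax calculation.
    return 0.10 * amount

def checkout(amount, tax_service):
    # Main control module under test; the subordinate is injected.
    return round(amount + tax_service(amount), 2)

# Driver: exercises the main module against the stub.
assert checkout(100.0, tax_service_stub) == 110.0
print("top-down integration test with stub passed")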

Use Based Testing
Use based testing evaluates the system in layers; the common practice is to employ the use cases to drive the validation process. In use based testing:
o Initially, independent classes (i.e. classes that use very few other classes) are integrated and tested
o This is followed by testing the next layer of (dependent) classes that use the independent classes; dependent classes are handled with this layered approach
o This sequence is repeated by adding and testing the next layer of dependent classes until the entire system is tested

Integration Testing for Web applications: Collaboration diagrams, screens and report layouts are matched to the OOAD, and the associated class integration test case report is generated.

3. Regression Testing
Each time a new module is added as part of integration testing, the software changes: new data flow paths may be established, new I/O may occur, and new control logic may be invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. Regression testing may be conducted manually, by re-executing a subset of all test cases. As integration testing proceeds, the number of regression tests can grow quite large; it is impractical and inefficient to re-execute every test for every program function once a change has occurred. Therefore, the regression test suite shall be designed to include only those tests that address one or more classes of errors in each of the major program functions. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
o A representative sample of tests that will exercise all software functions
o Additional tests that focus on software functions that are likely to be affected by the change
o Tests that focus on the software components that have been changed
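The selection of such a subset can be sketched as follows; the test catalog and the change set are hypothetical:

# Regression sketch: re-execute only the tests that exercise functions
# touched by a change, plus a representative sample.

test_catalog = {
    "test_login":    {"covers": {"auth"},      "representative": True},
    "test_logout":   {"covers": {"auth"},      "representative": False},
    "test_checkout": {"covers": {"billing"},   "representative": True},
    "test_report":   {"covers": {"reporting"}, "representative": False},
}

changed_functions = {"auth"}  # functions modified in the new build

regression_suite = [
    name for name, meta in test_catalog.items()
    if meta["covers"] & changed_functions or meta["representative"]
]
print("regression subset:", regression_suite)
# -> ['test_login', 'test_logout', 'test_checkout']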

4. System Testing
After the software has been integrated (constructed), sets of high-order tests shall be conducted. The purpose of system testing is to fully exercise the computer-based system; the aim is to verify that all system elements mesh properly, that the overall system function/performance is achieved, and to validate conformance against the SRS. System testing is categorized into the following 20 types, and the type(s) of testing shall be chosen depending on the customer / system requirements. The different types of tests that come under System Testing are listed below:

o Compatibility / Conversion Testing: In cases where the software developed is a plug-in into an existing system, the compatibility of the developed software with the existing system has to be tested. Likewise, the conversion procedures from the existing system to the new software are to be tested.

o Configuration Testing: Configuration testing includes either or both of the following:
- testing the software with the different possible hardware configurations
- testing each possible configuration of the software
If the software supports a variety of hardware configurations (e.g. different types of I/O devices, communication lines, memory sizes), then the software should be tested with each type of hardware device and with the minimum and maximum configurations. If the software itself can be configured (e.g. components of the program can be omitted or placed in separate processors), each possible configuration of the software should be tested.

o Documentation Testing: Documentation testing is concerned with the accuracy of the user documentation. This involves:
i) Review of the user documentation for accuracy and clarity
ii) Testing the examples illustrated in the user documentation by preparing test cases on the basis of these examples and testing the system

o Facility Testing: Facility testing is the determination of whether each facility (or functionality) mentioned in the SRS is actually implemented. The objective is to ensure that all the functional requirements documented in the SRS are accomplished.

o Installability Testing: Certain software systems will have complicated procedures for installing the system; for instance, the system generation (sysgen) process on IBM mainframes. The testing of these installation procedures is part of System Testing. Proper packaging of the application and the configuration of various third-party software and database parameter settings are some issues important for easy installation.

o Performance Testing: Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all phases of testing: even at the unit level, the performance of an individual module is assessed as white-box tests are conducted. However, performance testing is complete only when all system elements are fully integrated and the true performance of the system is ascertained as per the customer requirements.

o Performance Testing for Web Applications: The most realistic strategy for rolling out a Web application is to do so in phases. In the most basic terms, the final goal for any Web application set for high-volume use is for users to consistently have i) continuous availability and ii) consistent response times, even during peak usage times. Performance testing must be an integral part of designing, building, and maintaining Web applications. Performance testing has five manageable phases: i) architecture validation ii) performance benchmarking iii) performance regression iv) performance tuning and acceptance v) and the continuous performance monitoring necessary to control performance and manage growth. Automated testing tools play a critical role in measuring, predicting, and controlling application performance; there is a paragraph on automated tools available for testing Web applications at the end of this document.

o Procedure Testing: If the software forms a part of a large and not completely automated system, the interfaces of the developed software with the other components in the larger system shall be tested. These may include procedures to be followed by i) the human operator ii) the database administrator iii) the terminal user. These procedures are to be tested as part of System Testing.

However. The requirements stated in the SRS may include i) service aids to be provided with the system. diagnostic programs ii) the mean time to debug an apparent problem .o Recovery Testing:Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. re-initialisation. o Serviceability Testing: Serviceability testing covers the serviceability or maintainability characteristics of the software.. data recovery. where. the tester plays the role(s) of the individual who desires to penetrate the system. when and what information). one has to take into consideration Data Transfer Checksum. e. auto log out based on system specifications (e. check pointing mechanisms. 5 minutes of inactivity). For User security. storage-dump programs. Mean-time-to-failure (MTTF) is 20 hours. Encryption or use of digital certificates. If recovery is automatic (performed by the system itself).g. MD5 hashing on all vulnerable data and database integrity. display of user information on the UI can be taken care by designing to code programmatically. audit trail logs containing (who. For data security. o Security Testing:Security testing attempts to verify that protection mechanisms built into a system will protect it from improper penetration. and restart are each evaluated for correctness. During security testing.g. o Reliability Testing:The various software-testing processes have the goal to test the software reliability. the time required to repair is evaluated to determine whether it is within acceptable limits. o Security Testing (Web applications): In case of web applications. If recovery requires human intervention. Security testing involves designing test cases that try to penetrate into the system using all possible mechanisms. if the reliability factors are stated as say. why. one has take into account testing with appropriate firewall set-up. The "Reliability Testing" which is a part of System Testing encompasses the testing of any specific reliability factors that are stated explicitly in the SRS. encrypted passwords. it is possible to device test cases using mathematical models.

o Storage Testing: Storage testing is to ensure that the storage requirements are within the specified bounds: for instance, the amounts of primary and secondary storage the software requires and the sizes of the temporary files that get created.

o Stress Testing: Stress tests are designed to confront programs with abnormal situations: stress testing executes a system in a manner that demands resources in abnormal quantity, frequency or volume. Test cases may be tailored by keeping some of the following examples in view:
i) Input data rates may be increased by an order of magnitude to determine how input functions will respond
ii) Test cases that may cause excessive hunting
iii) Test cases that may cause thrashing in a virtual operating system may be designed
iv) Test cases that may create disk-resident data
v) Test cases that require maximum memory or other resources may be executed
To achieve this, the software is subjected to heavy volumes of data and the behaviour is observed.

o Stress Testing (Web applications): This refers to testing system functionality while the system is under an unusually heavy or peak load; it is similar to validation testing but is carried out in a "high-stress" environment. This requires some idea of the expected load levels of the Web application; one of the criteria for web applications would be the number of concurrent users of the application.

o Usability Testing: Usability testing is an attempt to uncover the software usability problems involving the human factor. Examples:
i) Is each user interface amicable to the intelligence, educational background, and environmental pressures of the end user?
ii) Are the outputs of the program meaningful, usable and easy to understand?
iii) Are the error messages meaningful and easy to understand?

o Usability Testing (Web Applications): The intended audience will determine the "usability" testing needs of the Web site. Additionally, such testing should take into account the current state of the Web and Web culture.

o Link testing (for web based applications): This type of testing determines if the site's links to internal and external Web pages are working. A Web site with many links to outside sites will need regularly scheduled link testing, because Web sites come and go and URLs change. Sites with many internal links (such as an enterprise-wide Intranet, which may have thousands of internal links) may also require frequent link testing (a small link-checking sketch follows at the end of this list).

o HTML validation (for web based applications): The need for this type of testing will be determined by the intended audience: the type of browser(s) expected to be used, and whether the site delivers pages based on browser type or targets a common denominator. There should be adherence to the HTML programming guidelines as defined in Qualify.

o Load testing (for web based applications): If there is a large number of interactions per unit time on the Web site, testing must be performed under a range of loads to determine at what point the system's response time degrades or fails. The Web server software and configuration settings, CGI scripts, database design, and other factors can all have an impact.

o Volume Testing: Volume Testing is to ensure that the software i) can handle the volume of data as specified in the SRS and ii) does not crash with heavy volumes of data, but gives an appropriate message and/or makes a clean exit. To achieve this, the software is subjected to heavy volumes of data and the behaviour is observed. Examples:
i) A compiler would be fed an absurdly large source program to compile
ii) A linkage editor might be fed a program containing thousands of modules
iii) An operating system's job queue would be filled to capacity
iv) If a software is supposed to handle files spanning multiple volumes, enough data is created to cause the program to switch from one volume to another
As a whole, the test cases shall try to test the extreme capabilities of the programs and attempt to break the program, so as to establish a sturdy system.
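Here is the promised link-testing sketch using Python's standard urllib; the URL list is hypothetical, and a real site would first crawl its own pages to collect the links:

import urllib.request
import urllib.error

links = ["https://example.com/", "https://example.com/no-such-page"]

for url in links:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status          # 200 means the link is alive
    except urllib.error.HTTPError as e:
        status = e.code                   # e.g. 404 for a broken link
    except urllib.error.URLError as e:
        status = f"unreachable ({e.reason})"
    print(url, "->", status)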

o Validation or functional testing (for web applications): This is typically a core aspect of testing, to determine if the Web site functions correctly as per the requirements specifications. Sites utilising CGI-based dynamic page generation or database-driven page generation will often require more extensive validation testing than static-page Web sites.

o Extensibility / Promote-ability Testing: Software can be moved from one run-time environment to another without requiring modifications to the software; e.g. the application can move from the development environment to a separate test environment.

5. Acceptance Testing
When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements as per the user requirement document (URD). Acceptance testing may be conducted by the customer, depending on the type of project and the contractual agreement. Acceptance tests are conducted at the development site or at the customer site, depending upon the requirements and mutually agreed principles.

Software Testing-Testing Models-Waterfall Model

The waterfall model derives its name from the cascading effect from one phase to the other, as illustrated in Figure 1. Note that this model is sometimes referred to as the linear sequential model or the software life cycle. In this model each phase has a well-defined starting and ending point, with identifiable deliveries to the next phase.

The model consists of six distinct stages, namely:

1. In the requirements analysis phase:
(a) The problem is specified along with the desired service objectives (goals)
(b) The constraints are identified

2. In the specification phase the system specification is produced from the detailed definitions of (a) and (b) above. This document should clearly define the product function. Note that in some texts the requirements analysis and specification phases are combined and represented as a single phase.

3. In the system and software design phase, the system specifications are translated into a software representation. The software engineer at this stage is concerned with:
a) Data structure
b) Software architecture
c) Algorithmic detail and
d) Interface representations
The hardware requirements are also determined at this stage, along with a picture of the overall system architecture. By the end of this stage the software engineer should be able to identify the relationship between the hardware, the software and the associated interfaces. Any faults in the specification should ideally not be passed 'downstream'.

4. In the implementation and testing phase the designs are translated into the software domain.
a) Detailed documentation from the design phase can significantly reduce the coding effort.
b) Testing at this stage focuses on making sure that any errors are identified and that the software meets its required specification.

5. In the integration and system testing phase all the program units are integrated and tested to ensure that the complete system meets the software requirements. After this stage the software is delivered to the customer. [Deliverable: the software product is delivered to the client for acceptance testing.]

6. The maintenance phase is usually the longest stage of the software's life. In this phase the software is updated to:
a) Meet the changing customer needs
b) Adapt to accommodate changes in the external environment
c) Correct errors and oversights previously undetected in the testing phases
d) Enhance the efficiency of the software

Observe that feedback loops allow for corrections to be incorporated into the model. For example, a problem/update in the design phase requires a 'revisit' to the specification phase. When changes are made at any phase, the relevant documentation should be updated to reflect that change.

Advantages of the Waterfall Model
a) Testing is inherent to every phase of the waterfall model
b) It is an enforced disciplined approach
c) It is documentation driven, that is, documentation is produced at every stage

Disadvantages of the Waterfall Model
The waterfall model is the oldest and the most widely used paradigm. However, many projects rarely follow its sequential flow, due to the inherent problems associated with its rigid format. Namely:
a) It only incorporates iteration indirectly, thus changes may cause considerable confusion as the project progresses.
b) As the client usually only has a vague idea of exactly what is required from the software product, this model has difficulty accommodating the natural uncertainty that exists at the beginning of a project.
c) The customer only sees a working version of the product after it has been coded. This may result in disaster if any undetected problems are precipitated to this stage.

Software Testing-Testing Models-Spiral Model

Developed by Barry Boehm in 1988, the spiral model provides the potential for rapid development of incremental versions of the software. In the spiral model, software is developed in a series of incremental releases; during early iterations, the incremental release might be a paper model or prototype. Each iteration consists of Customer Communication, Planning, Risk Analysis, Engineering, Construction & Release, and Customer Evaluation:

- Customer Communication: Tasks required to establish effective communication between developer and customer.
- Planning: Tasks required to define resources, timelines, and other project related information.
- Risk Analysis: Tasks required to assess both technical and management risks.
- Engineering: Tasks required to build one or more representatives of the application.
- Construction & Release: Tasks required to construct, test, install and provide user support (e.g. documentation and training).
- Customer Evaluation: Tasks required to obtain customer feedback based on evaluation of the software representations created during the engineering stage and implemented during the installation stage.

Software Testing-Testing Techniques

1. Code Walkthrough in Software Testing
A source code walkthrough is often called a technical code walkthrough or a peer code review. The typical scenario finds a developer inviting his technical lead, a database administrator, and one or more peers to a meeting to review a set of source modules prior to production implementation. Often the modified code is indicated after the fact on a hardcopy listing with annotations or a highlighting pen, or within the code itself with comments. A code walkthrough is an effective tool in the areas of quality assurance and education, and it should be a process with benefits for everyone involved; care must be taken to ensure that no hard feelings occur as a result.

Benefits
- Improved Code Quality is ensured by the enforcement of coding standards. The technical lead is assured of an acceptable level of quality and the database administrator is assured of an acceptable level of database performance. The developer is exposed to alternate methods and processes as the technical lead and database administrator suggest and discuss improvements to the code. The result is better performance of the developer, his programs, and the entire application.

2. Formal Technical Review in Software Testing
A formal technical review is conducted by the software quality assurance group. The review may consist of walk-throughs, code inspections or any other examination, and will typically last 2 hours. A review typically examines only a small part of the software project, and usually only one developer is responsible for the artifact. The artifact is examined on various levels, the first of which is for compliance with the requirements of the software; this includes things like function and logic as well as implementation. The artifact must also conform to the standards of the process used on the project, which ensures that all artifacts of the project are developed in a uniform manner. Since the purpose of a review is to find errors, a review can be difficult to control.

Despite all the benefits of source code walkthroughs, few organizations implement and enforce them as a shop standard. Many excuses are given, but each has a practical solution.

Further benefits of walkthroughs:
- Improved Application Performance is ensured by the review of all database access paths by the DBA and technical lead.
- Improved Developer Performance is ensured by the mentoring of the developer by the DBA and technical lead, the discussion of coding style and technique, and the improvement/removal of questionable coding practices. Peer reviews are an important component of a continuing training plan. How else can a developer hone his skills? Few training vendors offer formal sessions for developers with more than five years of experience, and fewer employers take advantage of those sessions.

Excuses
Here are a few of the many excuses offered for not enforcing code walkthroughs as a shop standard:
- Lack of Consistency: There must be consistency. If code is accepted by one reviewer but rejected by another, or accepted on one occasion but rejected on another, developers will become confused and frustrated; there must be consensus among the reviewers. Once standards are published, developers can ensure compliance, and reviewers can direct more attention to unusual and complex coding techniques, allowing a much more positive and less time-consuming review process.
- Volumes of data: Technical leads and database administrators are unwilling to spend time wading through large Natural and COBOL listings searching for a few simple source changes. Database administrators need to see quickly what database accesses were added or changed.
- Deleted code: Deleted code cannot be reviewed. Code converted to comments, but left in place, can render illegible an otherwise well-structured module.
- Manual effort: The amount of effort required by the developer to create useful documentation for a technical walkthrough is significant. The manual procedures involved are difficult, time-consuming, tedious, and error-prone.

- Developers are reluctant to be involved: Without training, a technical lead can allow a code walkthrough to degrade into something resembling a lynching; constructive criticism deteriorates into destructive criticism. The process must be considered by all parties an opportunity to train the developer, enlighten the technical lead and database administrator, and maintain or improve the quality and performance of the application system. It should be a win-win situation for all persons involved.

3. Code Review in Software Testing
Code Reviews are a great way to improve both your software and your developers. Traditionally, code reviews or peer reviews take place on a regular basis, once a week for instance. Developers swap code they produced during the week and go through a checklist to look for bugs, security problems, adherence to coding standards, performance issues, logic issues, etc. The developer then creates a report and goes over what he or she has found in the peer's code; in addition, each issue the reviewer finds has an associated document explaining why it is an issue and how to fix it. This process allows developers to learn the tricks other developers have attained over the years. Traditional code reviews certainly do a lot to improve the quality of the software developed, but they also take quite a bit of time. Many of the issues can be easily picked up by an automated code review tool such as CFDEV's tool for reviewing ColdFusion (CFML) code, which also allows you to easily write your own rules; most rules can be written in just 4 lines of CFML code. While automated code review tools can cut down the time it takes to review code, there are certain tasks that an automated tool just can't do, such as judging algorithm design or logic issues. To get the full benefits of code reviews you should still involve the human eye.

4. Code Inspection in Software Testing
Software inspections have long been considered to be an effective way to detect and remove defects from software. However, there are costs associated with carrying out inspections, and these costs may outweigh the expected benefits; it is important to understand the tradeoffs between them. We believe that these are driven by several mechanisms, both internal and external to the inspection process. Internal factors are associated with the manner in which the steps of the inspection are organized into a process (structure), as well as the manner in which each step is carried out (technique). External ones include differences in reviewer ability and code quality (inputs), and interactions with other inspections, personal calendars, the project schedule, and the developers themselves (environment). Most of the existing literature on inspections has discussed how to get the most benefit out of inspections by proposing changes to the process structure, but with little or no empirical work conducted to demonstrate how they worked better and at what cost. We hypothesized that these changes would affect the defect detection effectiveness of the inspection, but that any increase in effectiveness would have a corresponding increase in inspection interval and effort. We evaluated this hypothesis with a controlled experiment on a live development project using professional software developers. We found that these structural changes were largely ineffective in improving the effectiveness of inspections, but certain treatments dramatically increased the inspection interval. We also noted a large amount of unexplained variance in the data, suggesting that other factors must have a strong influence on inspection performance. On further investigation, we found that the inputs into the process (reviewers and code units) account for more of the variation than the original treatment variables, leading us to conclude that better techniques by which reviewers detect defects, not better process structures, are the key to improving inspection effectiveness.

Levels Of Software Testing
There are basically 5 levels of Software Testing:

1) Unit Testing in Software Testing:
1. The most 'micro' scale of testing: in this level the small functions which make up a module are tested. It is typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code, and it is not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses.
2. Done by Developers.
3. Input ------> Small Function ------> Output

2) Module Testing in Software Testing:
1. In this level the small functions and modules of the project are tested.
2. Done by the Team Lead.
3. Input ------> Small Function + Small Function + Small Function ------> Output

3) Integration Testing in Software Testing:
1. In this level all the modules which make up the application are tested: testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, or client and server applications on a network. This type of testing is especially relevant to client/server and distributed systems.
2. Input ------> Module ------> Output
3. Integration can be top-down or bottom-up:
- Top-down testing starts with main and successively replaces stubs with the real modules.
- Bottom-up testing builds larger module assemblies from primitive modules.
- Sandwich testing is mainly top-down, with bottom-up integration and testing applied to certain widely used components.

4) System Testing in Software Testing:
1. Done by the Test Manager.

5) User Acceptance Testing in Software Testing:
1. Conducting the testing in the presence of the customer is known as User Acceptance Testing.

Before that we need to know a couple of main definitions regarding the terminology used in the company:

PROJECT: one in which exact rules must be followed, that is, the customer's requirements.
PRODUCT: one which is based on general requirements, that is, on our own requirements.

Software Testing-Psychology of Software Testing

The purpose of this section is to explore the differences in perspective between tester and developer (buyer & builder) and explain some of the difficulties management and staff face when working together developing and testing computer software.

Different mindsets?
- We have already discussed that one of the primary purposes of testing is to find faults in software, i.e. it can be perceived as a destructive process.
- The development process, on the other hand, is a naturally creative one, and experience shows that staff working in development have a different mindset to that of testers.
- We would never argue that one group is intellectually superior to another, merely that they view systems development from another perspective.
- A developer is looking to build new and exciting software based on the user's requirements and really wants it to work (first time if possible). He or she will work long hours and is usually highly motivated and very determined to do a good job.
- A tester, however, is concerned that the user really does get a system that does what they want, is reliable and doesn't do things it shouldn't. He or she will also work long hours looking for faults in software, but will often find the job frustrating as their destructive talents take their toll on the poor developers.
- At this point there is often much friction between developer and tester: the developer wants to finish the system, but the tester wants all faults in the software fixed before their work is done.

In summary, Developers:
- Are perceived as very creative - they write code without which there would be no system!
- Are often highly valued within an organization.
- Are sent on relevant industry training courses to gain recognized qualifications.
- Are rarely good communicators (sorry guys)!
- Can often specialize in just one or two skills (e.g. VB, C++, JAVA, SQL).

In summary, Testers:
- Are perceived as destructive - only happy when they are finding faults!
- Are often not valued within the organization.
- Usually do not have any industry recognized qualifications, until now.
- Usually require good communication skills.
- Normally need to be multi-talented (technical, testing, tact & diplomacy, team skills).

Communication between developer and tester
It is vitally important that the tester can explain and report a fault to the developer in a professional manner to ensure the fault gets fixed. The tester must not antagonize the developer. Tact and diplomacy are essential, even if you've been up all night trying to test the wretched software.

Economics of Software Testing
This section looks at some of the economic factors involved in Software Testing activities. Major retailers and car manufacturers often issue product recall notices when they realize that there is a serious fault in one of their products; the fixing of the so-called millennium bug is probably one of the greatest product recall notices in history. Perhaps you can think of other examples. Boehm's research suggests that the cost of fixing faults increases dramatically as we move the software product towards field use. If a fault is detected at an early stage of design, it may be only design documentation that has to change, resulting in perhaps just a few hours' work. However, as the project progresses and other components are built based on the faulty design, more work is obviously needed to correct the fault once it has been found. This is because design work, coding and testing will have to be repeated for components of the system previously thought to have been completed. If faults are found in documentation, then development based on that documentation may generate many related faults which multiply the effect of the original fault.

Analysis of specifications during test preparation (early test design) often brings faults in specifications to light. This will also prevent faults from multiplying, i.e. if removed earlier they will not propagate into other design documents. In summary, we suggest that it is generally cost effective to use resources on testing throughout the project lifecycle, starting as soon as possible. The alternative is to potentially incur much larger costs associated with the effort required to correct and re-test major faults.

Although some research has been done to put forward the ideas discussed, few organizations are able to accurately compare the relative costs of Software Testing and the costs associated with re-work. Remember that the amount of resources allocated to testing is a management decision, based on an assessment of the associated risks.

Software Testing-Defect Profile

1. Bug: a fault in a program which causes the program to perform in an unintended or unanticipated manner.
2. Defect: non-conformance to requirements or the functional / program specification.
3. The Bug Report comes into the picture once the actual testing starts: if a particular Test Case's Actual and Expected Results mismatch, then we will report a Bug against that Test Case.
4. The name of the Bug Report file follows a naming convention like: Project Name -> Bug Report -> Ver No -> Release Date. For e.g.: Bugzilla Bug Report 1.3 01_12_04.
5. After seeing the name of the file, anybody can easily recognize that this is a Bug Report of such-and-such a project and such-and-such a version, released on the particular date. It reduces the complexity of opening a file and finding which project it belongs to.
6. It maintains the details of Project ID, Project Name, Version Number and Release Date at the top of the sheet.
7. For each bug it maintains: a) Bug ID b) Test Case ID c) Module Name d) Bug Description e) Reproducible (Y/N) f) Steps to Reproduce g) Summary h) Bug Status i) Severity j) Priority k) Tester Name l) Date of Finding m) Developer Name n) Date of Fixing.
8. For each bug we have a Life Cycle.
9. The first time the tester identifies a bug, he will give the status of that bug as "New".
10. Once the Developer Team Lead goes through the Bug Report, he will assign each bug to the concerned Developer and change the bug status to "Assigned".
11. After that the developer starts working on it, during which time he changes the bug status to "Open"; once it is fixed he will change the status to "Fixed".
12. In the next cycle we have to check all the Fixed bugs: if they are really fixed, then the concerned tester changes the status of that bug to "Closed", else he changes the status to "Reviewed-not-ok". Finally, "Deferred" is for those bugs which are going to be fixed in the next iteration.

See the following sample template used for Bug Reporting.

Bug ID: This column represents the unique bug number for each bug. Each organization follows its own standard to define the format of the Bug ID. It is used to track the bug at each level.

Test Case ID: This column gives the reference to the Test Case Document, i.e. against which test case the bug was reported. With this reference we can navigate very easily in the Test Case Document for more details.

Module Name: Module Name refers to the module in which the bug was raised. Based on this information we can estimate how many bugs there are in each module.

Bug Description: This gives the summary of the bug: what the bug is, and what actually happened instead of the expected result.

Summary: This column gives the detailed description of the bug. We can say it as navigation.

Reproducible: This column is very important for developers; based on it they know whether the bug can be reproduced or not. Simply, it is Yes or No. If it is reproducible then it is very easy for the developer team to debug it; otherwise they will try to find it out.

Steps to Reproduce: Only if the Reproducible column is Yes do we specify the Steps to Reproduce column; otherwise this column is null. It specifies the complete steps to produce that bug, which is very useful both for testers and developers to reproduce and debug the bug.

Bug Status: This column is very important in the Bug Report; it is used to keep track of the status of the bug:
1. New - given by the tester when he finds the bug
2. Assigned - given by the developer team lead after assigning it to the concerned developer
3. Open - given by the developer while he is fixing the bug
4. Fixed - given by the developer after he has fixed the bug
5. Closed - given by the tester if the bug is fixed in the new build
6. Reviewed-not-ok - given by the tester if the bug is not fixed in the new build
7. Deferred - the bug which is going to be fixed in the next iteration

Severity: This column tells the effect of that bug on the application; it is usually given by the testers, and various organizations follow different conventions for it. Here is a sample of severity based on its effect:
Very High - the tester will give this status when he is not able to continue his testing, e.g. the application is not opening
High - the tester will give this status if he is not able to test this module but can test some other module
Medium - the tester will give this status if he is not able to progress in the current module
Low - a cosmetic issue, such as a spelling mistake or a look-and-feel problem

Priority: This column is filled in by the Test Lead. He will consider the severity of the bug, the time schedule and the risks associated with the project (especially for that bug); based on that he will give the priority status: Very High, High, Medium or Low, considering all aspects.

Tester Name: This column is for the name of the tester who identified that particular bug. By using this column, developers can easily communicate with that particular tester if there is any confusion in understanding the Bug Description.

Date of Finding: This column contains the date when the tester reported the bug, so that we can get a report of how many bugs were reported on a particular day.

Developer Name: This column contains the name of the developer who fixed that particular bug. This information is very useful when a particular bug is marked fixed but is still there; testers can then communicate with the concerned developer to clear the doubt.

Date of Fixing: This column contains the date when the developer fixed the bug, so that we can get a report of how many bugs were fixed on a particular day.

See the Defect Profile document for the full template.
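The columns above can be sketched as a simple record; the field names mirror the template, while the values below are hypothetical and the status starts in the "New" state as described:

from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str            # unique, per each organization's convention
    test_case_id: str      # reference into the test case document
    module_name: str
    description: str
    reproducible: bool
    steps_to_reproduce: str
    status: str            # New/Assigned/Open/Fixed/Closed/Reviewed-not-ok/Deferred
    severity: str          # Very High/High/Medium/Low
    priority: str          # set by the test lead
    tester_name: str
    date_of_finding: str

bug = BugReport(
    bug_id="BG-001", test_case_id="TC-017", module_name="Login",
    description="Error message not shown for blank password",
    reproducible=True, steps_to_reproduce="1. Open login 2. Submit blank form",
    status="New", severity="Medium", priority="High",
    tester_name="A. Tester", date_of_finding="01_12_04",
)
print(bug.status)  # a newly reported bug always starts as 'New'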

Software Testing-Software Requirements Specification

1. Purpose. This document specifies the requirements for a system and the methods to be used to ensure that each requirement has been met.

2. Scope. This paragraph describes the scope of requirements covered by this document. It shall depict the context of the covered requirements with respect to other related and interfacing systems, to illustrate what requirements are not covered herein.

3. Documentation Conventions. This section provides a list of notational and other document conventions used within the document. Include a depiction of the symbology used in diagrams along with the meaning of each symbol. Provide a description of special text usage, such as fixed-width fonts and alert and warning icons.

4. System Identification and Overview. This paragraph shall briefly state the purpose of the system to which this document applies. It shall describe the general nature of the system; summarize the history of system development, operation, and maintenance; identify the project sponsor, acquirer, user, developer, and support agencies; and identify current and planned operating and user sites.

5. Required States and Modes:
a) If the system is required to operate in more than one state or mode having requirements distinct from other states or modes, this paragraph shall identify and define each state and mode.
b) Examples of states and modes include: idle, ready, active, degraded, emergency, backup, wartime, peacetime, training, alert, post-use analysis, installation and configuration.
c) The distinction between states and modes is arbitrary.
d) A system may be described in terms of states only, modes only, states within modes, modes within states, or any other scheme that is useful, without the need to create artificial distinctions.
e) If no states or modes are required, this paragraph shall so state.
f) If states and/or modes are required, each requirement or group of requirements in this specification shall be correlated to the states and modes.

6. Requirements
6.1. Functional and Performance.
a) This paragraph shall be divided into subparagraphs to itemize the requirements associated with each capability of the system.
b) A "capability" is defined as a group of related requirements. The word "capability" may be replaced with "function," "object," "subject," or another term useful for presenting the requirements.
c) This paragraph shall identify a required system capability and shall itemize and uniquely identify the requirements associated with the capability.

d) If the capability can be more clearly specified by dividing it into constituent capabilities, the constituent capabilities shall be specified in subparagraphs.
e) The requirements shall specify the required behavior of the system and shall include applicable parameters, such as response times, throughput times, other timing constraints, sequencing, accuracy, capacities (how much/how many), priorities, continuous operation requirements, and allowable deviations based on operating conditions.
f) The requirements shall include, as applicable, required behavior under unexpected, unallowed, or "out of bounds" conditions, requirements for error handling, and any provisions to be incorporated into the system to provide continuity of operations in the event of emergencies.

6.2. Organizational. This paragraph shall specify the system requirements, if any, included to accommodate the number, skill levels, duty cycles, training needs, locations, or other information about the personnel who will use or support the system. Examples include organizations, user roles responsible for performing functions, roles and other user attributes, the functional requirements that each role must execute, and access instructions for user roles.

6.3. Operations and Maintenance. This paragraph shall state requirements associated with operations and maintenance, such as system availability, installation and configuration, batch scheduling, backup and recovery, monitoring and tuning, auditing, support, enhancement, and defect repairs.

6.4. Human-Factors Engineering (Ergonomics).
a) This paragraph shall specify the system requirements, if any, included to accommodate the capabilities and limitations of humans.
b) Examples include requirements for the number of simultaneous users and for built-in help or training features.
c) Also included shall be the human factors engineering requirements imposed on the system.
d) These requirements shall include, as applicable, considerations for foreseeable human errors under both normal and extreme conditions, and specific areas where the effects of human error would be particularly serious.
e) Examples include requirements for the color and duration of error messages, the physical placement of critical indicators or keys, and the use of auditory signals.

6.5. Security and Privacy Protection. This paragraph shall specify the system requirements, if any, concerned with maintaining security and privacy. These requirements shall include, as applicable: the security/privacy environment in which the system must operate; the type and degree of security or privacy to be provided; the security/privacy risks the system must withstand; required safeguards to reduce those risks; the security/privacy policy that must be met; the security/privacy accountability the system must provide; and the criteria that must be met for security/privacy certification/accreditation.

6.6. System External Interface.
a) This paragraph shall identify the required external interfaces of the system (that is, relationships with other systems that involve sharing, providing or exchanging data).
b) The identification of each interface shall include an application-unique identifier and shall designate the interfacing systems by name, number, version, and documentation references.

c) The identification shall state which systems have fixed interface characteristics (and therefore impose interface requirements on interfacing systems) and which are being developed or modified (thus having interface requirements imposed on them).
d) One or more interface diagrams shall be provided to depict the interfaces.

6.7. (Application-unique identifier of interface).
a) This paragraph shall identify a system external interface by application-unique identifier, shall briefly identify the interfacing system, and shall be divided into subparagraphs as needed to state the requirements imposed on the system to achieve the interface.
b) Interface characteristics of the other systems involved in the interface shall be stated as assumptions or as "When [the system not covered] does this, the system shall...", not as requirements on the other systems.
c) This paragraph may reference other documents (such as data dictionaries, standards for communication protocols, and standards for user interfaces) in place of stating the information here.
d) The requirements shall include the following, as applicable, presented in any order suited to the requirements, and shall note any differences in these characteristics from the point of view of the interfacing system (such as different expectations about the size, frequency, or other characteristics of data elements):
1. Application-unique identifier of interface

More on Software Requirements Specification Document

Software Configuration Management

The procedure for managing the test object and related testware should be described. One of the issues that must be addressed in configuration management is that modified objects may only be installed in the test environment with the test team's permission. This is to prevent a test from failing because a different version of the object under test is unexpectedly being used. In addition, the way in which change requests are dealt with must be indicated. If this results in extra tests being required, the test team must be informed so that any new tests can be included in the test plan. The version management of the testware is ultimately the test team's responsibility. The Change Management documentation also has to be updated after action has been taken on the basis of the errors, since the documentation is intimately connected to the testware; otherwise, a bottleneck could be created in the project.

What is Software Configuration Management?
A current definition would say that SCM is the control of the evolution of complex systems. More pragmatically, it is the discipline that enables us to keep evolving software products under control, and thus contributes to satisfying quality and delay constraints. SCM emerged as a discipline soon after the so-called "software crisis" was identified, i.e. during the late 70s and early 80s, when it was understood that programming does not cover everything in Software Engineering (SE), and that other issues, like architecture, building, and evolution, were hampering SE development. SCM emerged as an attempt to address some of these issues; this is why there is no clear boundary to SCM topic coverage. In the early 80s SCM focussed on programming in the large (versioning, rebuilding, composition); in the 90s on programming in the many (process support, concurrent engineering); and in the late 90s on programming in the wide (web remote engineering).

Short History

In the 80s, the first systems were built in house and focussed closely on file control. Most of them were built as a set of Unix scripts over RCS (a simple version control tool) and Make (for derived object control). From this period we can mention DSEE, the only serious commercial product, which introduced the system model concept (an ancestor of Architecture Description Languages); NSE, which introduced workspace and cooperative work control; Adele, which introduced a specialized product model with automatic configuration building; and Aides de Camp (now TRUE software), which introduced the change set. Later (end 80s), it became clear that a major issue, if not the major one, is related to people.

The first real SCM products appeared in the early 90s. This generation included Clear Case (the DSEE successor), which introduced the virtual file system, and Continuus, which introduced, with Adele, explicit process support. These systems are much better: they often use a relational database but still rely on file control, and they provide workspace support and merging facilities. In the second half of the 90s, process support was added and most products matured. This period saw the consecration of SCM as a mature, reliable and essential technology for successful software development; many observers consider SCM one of the very few Software Engineering successes, and the SCM market was over $1 billion in sales in 1998. Continuus and Clear Case are currently the market leaders.

Currently, a typical SCM system tries to provide services in the following areas:
a. Managing a repository of components. There is a need for storing the different components of a software product, and all their versions, safely. This topic includes version management, product modeling and complex object management.
b. Helping engineers in their usual activities. SE involves applying tools to objects (files); SCM products try to provide engineers with the right objects, in the right location. This is often referred to as workspace control.
c. Compilation and derived object control, which is a major issue.
d. Process control and support. Traditionally, change control is an integral part of an SCM product; currently the tendency is to extend process support capability beyond these aspects.

Summary of Concepts: Most SCM products are based on a tiny core of concepts and mechanisms. Here is a summary of these concepts.

1. Versioning. In the early 70s, the first version control system appeared. The idea is simple: each time a file is changed, a revision is created. A file thus evolves as a succession of revisions, usually referred to by successive numbers (foo.1, foo.2, etc.). From any revision, a new line of change can then be created, leading to a revision tree; each line is called a branch. Three services were provided: history, delta, and multi-user management. History simply records when and by whom a revision was created, along with a comment. Deltas were provided because two successive revisions are often very similar (98% similar on average); the idea is to store only the differences (the 2% that differ), which vastly reduces the amount of required memory. Multi-user management consists in preventing concurrent changes from overlapping each other: a user who wants to change a file creates a copy and sets a lock on that file (check-out), and only that user can create a new revision for that file (check-in). Despite the fact that all this is 25 years old, it is still the basis of the vast majority of today's SCM systems.
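The delta idea is easy to demonstrate with Python's standard difflib module. This is only an illustration of the concept of storing differences between nearly identical revisions; it is not how RCS or any particular SCM tool actually encodes its deltas.

```python
import difflib

# Two successive revisions of the same file, as lists of lines.
rev1 = ["line one\n", "line two\n", "line three\n"]
rev2 = ["line one\n", "line 2\n", "line three\n", "line four\n"]

# Instead of storing rev2 in full, store only its difference from rev1.
delta = list(difflib.unified_diff(rev1, rev2, fromfile="foo.1", tofile="foo.2"))
print("".join(delta))   # the small delta stands in for the whole revision
```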

2. Product Model. From the beginning, the focus was on file control. Surprisingly, even today, the data model proposed by most vendors resembles a file system: files, plus a few attributes, and their last modification date. This is archaic and contrasts with today's data modeling. All attempts to do substantially better have so far failed; this is a consequence of a weak data model in which complex objects and explicit relationships are not available.

3. Composition. A configuration is often defined as a set of files which together constitute a valid software product. The question is twofold: (1) what is the real nature of a configuration, and (2) how to build it, prove its properties, and so on. In most systems a configuration is not an object, but "something" special. The traditional way to build a configuration is by changing an existing one. In the Adele system, a configuration is built by interpreting a semantic description which looks like a query: the system is in charge of finding, automatically, the needed components based on their attributes and their dependencies. In the change-set approach, a change, even if it involves many files, receives a logical name (like "FixBug243"), and a configuration can be produced as a set of change-sets to add to or remove from a base configuration (like "C2 = C1 + FixBug243 - Extention2", C1 being the base configuration and C2 the new one). No correctness criteria are available, and none of these approaches is available in the vast majority of today's systems.

4. Workspace Support. A workspace is simply a part of a file system where the files of interest (with respect to a given task, like debug, develop, etc.) are located, isolated from the outside world. The SCM system is responsible for providing the right files (often a configuration) in the right file system, for the task duration, and for saving the changes automatically when the job is done. The workspace acts as a sphere where the programmer can work (almost) independently. It is this service that really convinced practitioners that SCM was there to help them.

5. Engineers Support. Practitioners rejected the early systems because they were helping the configuration manager and bothering everybody else. A major move toward acceptance was to consider the software programmer as a major target customer: helping him/her in the usual SE activities became a basic service.

6. Building and Rebuilding. The aim of rebuilding is to reduce compilation time after a change, i.e. to recompile, automatically, only what is needed. Make is the ancestor of a large family of systems based on the knowledge of "dependencies" between files. Make proved to be extremely successful and versatile, but difficult to use and inadequate in many respects; most systems "only" generate the make files.
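The dependency-based rebuild idea can be sketched in a few lines of Python. This is a toy illustration of what Make-family tools do, under the assumption of a single target with a flat list of source dependencies; the file names are invented, and a real build tool tracks whole dependency graphs and the commands to run.

```python
import os

deps = {"app": ["app.c", "util.c"]}   # target -> source dependencies (assumed)

def needs_rebuild(target: str) -> bool:
    # Rebuild if the target is missing or any dependency is newer than it.
    if not os.path.exists(target):
        return True
    target_time = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_time for src in deps[target])

if needs_rebuild("app"):
    print("cc -o app app.c util.c")   # a real tool would invoke the compiler
```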

7. Cooperative Work Support. A workspace is a support for concurrent engineering, since many concurrent workspaces may contain and change the same objects (files). Thus there is a need for (1) resynchronizing objects and (2) controlling concurrent work. Resynchronizing means merging source files. The mergers found in today's tools simply compare, on a line-by-line basis, the two files to merge and a file that is historically common to both (the common ancestor). If a line is present in a file but not in the common ancestor, it was added and must be kept; if a line is present in the ancestor and not in the file, it was removed and must stay removed; and if changes occurred at different places, the merger is able to decide automatically what the merged file should be. This algorithm is simply a heuristic that outputs a set of lines with absolutely no guarantees about correctness. Nevertheless, mergers proved to work fine and to be very useful, and they became almost unavoidable. Controlling concurrent work means defining who can perform a change, when, and on which attribute of which object. It is one of the topics of process support that no tool, so far, really provides, although several approaches have been experimented with.
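A minimal sketch of that three-way merge heuristic, in Python: lines added in either version are kept, lines removed in either version stay removed. Like the heuristic it illustrates, it offers no correctness guarantee, and it deliberately sidesteps line ordering and conflict handling, which real mergers must deal with.

```python
def merge3(ancestor, ours, theirs):
    # Keep an ancestor line only if neither side removed it.
    kept = [line for line in ancestor if line in ours and line in theirs]
    # Keep any line added by either side.
    added = [line for line in ours + theirs if line not in ancestor]
    return kept + added

base = ["a", "b", "c"]
print(merge3(base, ["a", "b", "c", "d"], ["a", "c"]))   # ['a', 'c', 'd']
```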
8. Process Support. Process support means (1) the "formal" definition of what is to be performed on what (a process model), and (2) the mechanisms to help/force reality to conform to this model. Since SCM aims to control software product evolution, it is no surprise that many process models are based on State Transition Diagrams (STDs). An STD describes, for a product type, the legal succession of states (and optionally which actions produce the transitions), and thus describes the legal way for entities of that type to evolve; it is a product-centered modeling. Experience shows that complex and fine-grained process models can be defined that way, but also that STDs do not provide a global view of a process, and that large processes are difficult to define using (only) STDs. The alternative is the so-called activity-centered modeling, in which the activity plays the central role and models express the data and control flow between activities. This kind of modeling is preferred if a global view is required, if a large process is to be structured, or if products are not the main concern; but this approach lacks precision for product control. Experience has demonstrated that both are needed, but integration is not easy, and the few tools that intended to do so only propose two independent modelings; high-level process models mixing both are not currently available in commercial products.

Most useful / appreciated features: Clearly the number one was change control; then come the basics, like versioning, merging and workspace support. Almost no comments concerned the other basic aspects of SCM.

Worst aspect / most missing feature: Clearly the number one was better and more flexible process support. Then come, in differing order: global view, traceability, activity control, efficiency, incrementality, scalability, cross-platform capability, PDM compatibility, and interoperability with the other SE tools. Practitioners think tools are good and stable enough, but still lacking in these respects.

It is interesting to see that both the most appreciated and the most criticized features concern process support. It is likely that, in the near future, the distinctive features between tools will be their strength in process support, their support for concurrent, distributed and remote engineering, and their capability to grow with the company's needs and to inter-operate with the other company tools.

Software Testing-Requirements Traceability Matrix

What is the need for a Requirements Traceability Matrix in Software Testing?
An automation requirement in an organization initiates it to go for custom-built software. The client who ordered the product specifies his requirements to the development team, and the process of software development gets started. In addition to the requirements specified by the client, the development team may also propose various value-added suggestions that could be added to the software. But keeping track of all the requirements specified in the requirements document, and checking that all of them have been met by the end product, is a cumbersome and laborious process. The remedy for this problem is the Requirements Traceability Matrix.

What is a Requirements Traceability Matrix in Software Testing?
Requirements tracing is the process of documenting the links between the user requirements for the system you're building and the work products developed to implement and verify those requirements. These work products include software requirements, design specifications, software code, test plans and other artifacts of the systems development process. Requirements tracing helps the project team to understand which parts of the design and code implement the user's requirements, and which tests are necessary to verify that the user's requirements have been implemented correctly.

The Requirements Traceability Matrix (RTM) captures the complete user and system requirements for the system, or a portion of the system. It captures all requirements and their traceability in a single document, and is a mandatory deliverable at the conclusion of the lifecycle. The RTM is used to record the relationship of the requirements to the design, development, testing and release of the software, as the requirements are allocated to a specific release of the software. Changes to the requirements are also recorded and tracked in the RTM, which is maintained throughout the lifecycle of the release and is reviewed and baselined at the end of the release. It is a very useful document for tracking time, change management and risk management in software development.

The Requirements Traceability Matrix document is the output of the Requirements Management phase of the SDLC. Here I am providing a sample template of the Requirements Traceability Matrix, which gives a detailed idea of the importance of the RTM in the SDLC. The RTM template shows the mapping between the actual requirement and the user requirement/system requirement.

This is also the mapping between the actual requirement and the design specification, which helps us trace the changes that may happen with respect to the design document during the development of the application. For any change that happens after the system has been built, we can trace its impact on the application through the RTM. In any case, if you want to change a requirement in the future, you can use the RTM to make the respective changes and easily judge how many associated test scripts will change. Here we give each associated document a unique ID, so that the particular document can be traced easily.

The following information is provided for each requirement:
1. Requirement ID
2. Risks
3. Requirement Type (User or System)
4. Requirement Description
5. Trace to User Requirement / Trace from System Requirement
6. Trace to Design Specification
7. UT * Unit Test Cases
8. IT * Integration Test Cases
9. ST * System Test Cases
10. UAT * User Acceptance Test Cases
11. Trace to Test Script

The following is a sample template of the Requirements Traceability Matrix.

Requirements Traceability Matrix Template Instructions:
Introduction
This document presents the requirements traceability matrix (RTM) for the Project Name [workspace/workgroup] and provides traceability between the [workspace/workgroup] approved requirements, design specifications, and test scripts. The table below displays the RTM for the requirements that were approved for inclusion in [Application Name/Version].
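As an illustration of the field list above, one RTM row might look like the following. The column names follow the list; the values and ID formats are invented for the example, not part of any mandated template.

```python
# One illustrative Requirements Traceability Matrix row.
rtm_row = {
    "Requirement ID": "REQ-012",
    "Risks": "Medium",
    "Requirement Type": "User",
    "Requirement Description": "User can reset password via email",
    "Trace to Design Specification": "DS-4.2",
    "UT": "UT-031",          # unit test case covering this requirement
    "IT": "IT-009",          # integration test case
    "ST": "ST-044",          # system test case
    "UAT": "UAT-007",        # user acceptance test case
    "Trace to Test Script": "TS-password-reset",
}
```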

Software Testing-Requirements Traceability Matrix

Disadvantages of not using a Traceability Matrix
What happens if the traceability factor is not considered while developing the software?
a) The system that is built may not have the necessary functionality to meet the customers' and users' needs and expectations.
b) If there are modifications in the design specifications, there is no means of tracking the changes.
c) If there is no mapping of test cases to the requirements, it may result in missing a major defect in the system.
d) The completed system may have "extra" functionality that was not specified in the design specification, resulting in wastage of manpower, time and effort.
e) If the code components that constitute the customer's high-priority requirements are not known, the areas that need to be worked on first may not be known, thereby decreasing the chances of shipping a useful product on schedule.
f) A seemingly simple request might involve changes to several parts of the system, and if a proper traceability process is not followed, the work needed to satisfy the request may not be correctly evaluated.

On the other hand, with traceability in place, at any point of time in the development life cycle the status of the project, and the modules that have been tested, can easily be determined, thereby reducing the possibility of speculation about the status of the project.

Where can a Traceability Matrix be used? Is the Traceability Matrix applicable only for big projects?
The Traceability Matrix is an essential part of any software development process, and hence, irrespective of the size of the project, whenever there is a requirement to build software this concept comes into focus. The biggest advantage of the Traceability Matrix is backward and forward traceability: at any point of time, there is always the provision for checking which test case was developed for which design, and for which requirement that design was carried out.

Developing a Traceability Matrix
How is the Traceability Matrix developed? Based on the requirements, the design is carried out; based on the design, the code is developed; and based on these, the tests are created. In the design document, if there is a design description A, it can be traced back to Requirement Specification A, implying that design A takes care of requirement A. Similarly, in the test plan, test case A takes care of testing design A, which in turn takes care of requirement A, and so on. Such a kind of traceability, in the form of a matrix, is the Traceability Matrix.

There have to be references from the Design document back to the Requirement document, from the Test plan back to the Design document, and so on. It is a technique to support an objective of requirements management: it helps to ensure that no requirement is left uncovered (either un-designed or un-tested). Requirements traceability enhances project control and quality. It is a process of documenting the links between the user requirements for a system and the work products developed to implement and verify those requirements, to make certain that the application will meet end-users' needs.

Traceability Matrix in testing
Where exactly does the Traceability Matrix get involved in the broader picture of testing? The Traceability Matrix is created even before any test cases are written, because it is a complete list indicating what has to be tested. Sometimes there is one test case for each requirement, or several requirements can be validated by one test scenario; this purely depends on the kind of application that is available for testing. Usually, unit test cases will have traceability to the design specification, and system test cases / acceptance test cases will have traceability to the requirement specification.

Creating a Traceability Matrix
The Traceability Matrix gives a cross-reference between a test case document and the functional/design specification document. This document helps to identify whether the test case document contains tests for all the identified unit functions from the design specification. From this matrix we can collect the percentage of test coverage, taking into account the percentage of functionalities tested and not tested out of the total.
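That coverage percentage is straightforward to compute once the matrix exists. A minimal sketch, assuming the matrix is held as a mapping from requirement IDs to the test cases that cover them (the IDs are invented):

```python
# requirement -> test cases that trace to it; an empty list means uncovered
trace = {"REQ-1": ["TC-1", "TC-2"], "REQ-2": [], "REQ-3": ["TC-7"]}

covered = sum(1 for tcs in trace.values() if tcs)
coverage = 100.0 * covered / len(trace)
print(f"Test coverage: {coverage:.0f}%")   # Test coverage: 67%
```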

According to the above table, the requirement specifications are clearly spelt out in the Requirement column. The functional specification notified as BRD section 6.5.8 tells about the requirement that has been specified in the requirement document (i.e. it tells about the requirement for which a test case is designed). The test cases corresponding to the requirements are available in the Test Case column. With the help of the design specification, it is possible to drill down to the level of identifying the low-level design for the requirement specified. Based on the requirements, code is developed, and at any point of time the program corresponding to a particular requirement can easily be traced back.

Software Testing-Test Plan

High Level Test Plan
The Test Plan is the scheduler for the entire testing process. It describes the approach to all the development, unit, integration, system, qualification and acceptance testing needed to complete a project properly. We are introducing the concept of high-level test plans to show that there are a lot more activities involved in effective testing than just writing test cases. You should be aware that many people use the term "test plan" to describe a document detailing individual tests for a component of a system.

Why is this important? Establishing a test plan based on business requirements and the design specification is essential for the successful acceptance of a project's deliverables. Testing validates the requirements defined for the project's objectives and deliverables. The project Schedule & Task Plan and the project Staffing Plan need to account for testing requirements during the planning and execution phases of the project. It is important to note that the higher risk a project has, the greater the need for a commensurate amount of testing. Though IT project practices require testing throughout the execution phase of a project, undoubtedly the most important testing occurs at the end of development and prior to deployment.

Instructions
Prepare a Test Plan describing the scope, approach, resources, and schedule of intended testing activities, keeping in mind the process and stages for testing. Best practices dictate that testing be done early and often. Project managers need to think about the purpose of the testing. The actual test methods and techniques must be adapted to the type of project being developed and to the testing environment and tools that are available. The plan should describe the following elements:
1. Provide an overview: describe project objectives and background (providing some context for the testers), assumptions, known risks and contingencies. Define the Test Plan objectives, note any outstanding issues, provide testing references as required, and give a short system description.
2. Define the test scope (features to be tested, features not to be tested). Validate test requirements.
3. Describe the testing approach and test methodologies. Define and describe the test phases (for example, unit, integration, system and user interface test cases), with entrance and exit criteria.
4. Define the test environment (description of hardware, software, data sources, location, etc.).
5. Describe the test data (test cases, user acceptance test plans). Provide all test documents.
6. Define test control procedures: identify the definition, classification code and prioritization scheme for error tracking and resolution; tracking mechanisms for test results (for example, a test case validation log or test error log); status reports of testing; and a test outcome report at the end of testing detailing the overall results of testing progress.
7. Schedule testing tasks and make resource assignments (participants, staffing and training); define who will do each task.
8. Define the test approvals process and result distributions.

How to Scale
The size and nature of the project requirements should determine the scale of the test plan. Orderly test plans that specify the criteria for test passage or failure are critical to a project's success.

IEEE Standard for Software Test Documentation (ANSI/IEEE Standard 829-1983)
This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as: "A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning."

This standard specifies the following test plan outline:

Test Plan Identifier: A unique identifier.

Introduction
1. Summary of the items and features to be tested
2. Need for and history of each item (optional)
3. References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
4. References to lower level test plans

Test Items
1. Test items and their version
2. Characteristics of their transmittal media
3. References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
4. References to bug reports related to test items
5. Items which are specifically not going to be tested (optional)

Features to be Tested
1. All software features and combinations of features to be tested
2. References to test-design specifications associated with each feature and combination of features

Features Not to Be Tested
1. All features and significant combinations of features which will not be tested
2. The reasons these features won't be tested

Approach
1. Overall approach to testing
2. For each major group of features or combinations of features, specify the approach
3. Specify major activities, techniques, and tools which are to be used to test the groups
4. Specify a minimum degree of comprehensiveness required
5. Identify which techniques will be used to judge comprehensiveness
6. Specify any additional completion criteria
7. Specify techniques which are to be used to trace requirements
8. Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines

Item Pass/Fail Criteria
1. Specify the criteria to be used to determine whether each test item has passed or failed testing

Suspension Criteria and Resumption Requirements
1. Specify criteria to be used to suspend the testing activity
2. Specify testing activities which must be redone when testing is resumed

Test Deliverables
1. Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
2. Identify test input and output data
3. Identify test tools (optional)

Testing Tasks
1. Identify tasks necessary to prepare for and perform testing
2. Identify all task interdependencies
3. Identify any special skills required

Environmental Needs

1. Specify necessary and desired properties of the test environment: physical characteristics of the facilities, including hardware, communications and system software, the mode of usage (i.e. stand-alone), and any other software or supplies needed
2. Specify the level of security required
3. Identify special test tools needed
4. Identify any other testing needs
5. Identify the source for all needs which are not currently available

Testing is performed using hardware with the following minimum system requirements: 133 MHz Pentium; Microsoft Windows 98; 32 MB RAM; 10 MB available hard disk space; a display device capable of displaying 640x480 (VGA) or better resolution; Internet connection via a modem or network.

Responsibilities
1. Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
2. Identify groups responsible for providing the test items identified in the Test Items section
3. Identify groups responsible for providing the environmental needs identified in the Environmental Needs section

Staffing and Training Needs
1. Specify staffing needs by skill level
2. Identify training options for providing the necessary skills

Schedule
1. Specify test milestones
2. Specify all item transmittal events
3. Estimate the time required to do each testing task
4. Schedule all testing tasks and test milestones
5. For each testing resource, specify its periods of use

Test scheduling and status reporting are performed by the Project Lead and Project Administrator to monitor progress towards meeting product testing schedules and release dates, as well as to identify any project scheduling risks. Each build will be tested before the next subsequent build date. Software testing schedules will coincide with module development and release schedules.

Risks and Contingencies
1. Identify the high-risk assumptions of the test plan
2. Specify contingency plans for each

Approvals
1. Specify the names and titles of all persons who must approve the plan
2. Provide space for signatures and dates

Software Testing-Manual Testing

Mainly in Manual Testing the following documents are required:
i) Test Policy -- QC
ii) Test Strategy -- Company Level

iii) Test Factors -- QA
iv) Test Methodology -- TL

I) Test Policy: This is a company-level document and will be developed by QC people (at most management level). This document defines the "testing objective" in that organization. A small-scale company test policy covers:
- Testing Definition: Verification + Validation
- Testing Process: Proper planning before testing
- Testing Standard: Defects per 280 LOC / defects per 10 functional points
- Testing Measurements: QAM (Quality Assessment Measurements), TMM (Test Management Measurements), PCM (Process Capability Measurements)

II) Test Strategy: This is also a company-level document, developed by QA people. It defines the testing approach followed by the testing team.

Components in Test Strategy:
- Scope and Objective: About the testing need and its purpose
- Business Issues: Budget control for testing in terms of time and cost; of 100% project cost, 64% goes to development & maintenance and 36% to testing
- Test Approach: Defines the mapping between development stages and testing issues, i.e. the Test Matrix (TM) / Test Responsibilities Matrix (TRM)

& PCM y Defect Reporting and Tracking: Required negotiations between testing team and development team y Risks & Mitigations: Possible risks and mitigations to solve( risks indicates a future failure) y Change and Configurations Management: .. Test case etc. y Roles & Responsibilies: Names of jobs in testing team and their responsibility y Communication and Status Reporting: Require negotiations between two consecutive job in testing team y Automation Testing Tools: Need of automation in our organization level project testing y Testing Measurements and Metrics: QAM. TMM.y Test Deliverables: Required documents to prepare during testing of a project Ex: Test Methodology. Test Plan.

- Training Plan: The need for training of testers before the start of every project's testing

III) Test Factors
To define quality software, the quality analyst defines 15 testing issues. A test factor or issue means a testing issue to apply on the software to achieve quality. The test factors are:
- Authorization: Whether a user is valid or not to access the application
- Access Control: Whether a valid user is authorized to access specific services
- Audit Trail: Metadata about user operations
- Continuity of Processing: Inter-process communication (IPC) during execution
- Correctness: Meeting customer requirements in terms of inputs and outputs
- Coupling: Co-existence with other existing software
- Ease of Use: User-friendliness of the screens
- Ease of Operation: Installation, uninstallation, dumping, exporting, etc.
- File Integrity: Creation of internal files (e.g. backup)

Functional or requirement testing y Access Control : Security Testing .y Reliability: Recover from abnormal situation y Portable : Run on different plat forms y Performance : speed of processing y Service Levels : order of services y Methadology : Follow standards y Maintainable : Long time serviceable to customers 2.Test Factors VS Black Box Testing Techniques y Authorization : Security Testing.if there is no separate team then Functional or Requirement testing y Audit Trail : Functionality or requirements. error handling testing y Correctness : Functionality or requirements testing y Continuity of processing : execution testing . operation testing (white Box) .

QA or PM depends on below factors.Configuration testing y Performance: Load & Stress .Storage & Data volume testing y Service Level : Functionality or requirements testing y Maintainable: Compliance Testing y Methodology: Compliance Testing IV)Test Methodology: It is a project level document and developed by QA(Quality Assurance Leader) or PMProject manager. error handling testing y Realiability : Recovery . y Step1 : . It is a refinement form of Test Strategy.Stress testing y Portable: Compatability.y Coupling : Intersystems testing y Ease of use : Usability testing y Ease of operate : Installation testing y File Integrate : Recovery . To prepare test methodology .

- Step 1: Determine the project type, such as Traditional, Outsourcing or Maintenance (depending on the project type, QA decreases the number of columns in the TRM)
- Step 2: Determine the application requirements (depending on the application requirements, QA decreases the number of rows in the TRM)
- Step 3: Determine the tactical risks (depending on the risks, QA decreases the number of factors in the selected list)
- Step 4: Determine the scope of the application (depending on expected future enhancements, QA adds back some of the deleted factors to the TRM)
- Step 5: Finalize the TRM for the current project
- Step 6: Prepare the system test plan (defining the schedule for the above finalized approach) -- the Test Lead will do this
- Step 7: Prepare module test plans, if required

Software Testing-Manual Testing

v) Test Process
vi) Test Plan

tools and technology) This testing process developed by HCL and approved quality analyst form of India.vii)Test Design Test Script viii)Test Execution V)Testing Process PET Process: (Process experts. It is a refinement of V.Modal to define testing .



VI) Test Planning: After completion of test initiation, the TL of the testing team concentrates on test planning, to define "what to test", "how to test", "when to test" and "who will test". The test plan author follows the below workbench (process) to prepare the test plan document.

1. Team formation: In general, the test planning process starts with testing team formation. In this step the test plan author depends on the below factors:
a) Availability of testers
b) Test duration
c) Availability of test environment resources

Case Study: Test duration for C/S, Web and ERP projects -- 3 to 5 months of functional & system testing; team size (developers to testers) 3:1.

2. Identify tactical risks: After completion of team formation, the test plan author studies the possible risks that may arise during the testing of the project, for example:
Risk 1: Lack of knowledge of the domain
Risk 2: Lack of budget (time)
Risk 3: Lack of resources
Risk 4: Delays in delivery
Risk 5: Lack of development process rigor (seriousness of the development team)
Risk 6: Lack of test data (sometimes the test engineer conducts ad hoc testing)
Risk 7: Lack of communication

3. Prepare Test Plan: After completion of team formation and risk analysis, the test plan author prepares the test plan document in IEEE format.

FORMAT:
1) Test Plan ID: Unique number
2) Introduction: About the project and the test team
3) Test Items: Modules/features/services/functions
4) Features to be tested: Modules responsible for preparing test cases

5) Features not to be tested: Which ones, and why not
6) Approach: Required list of testing techniques (depends on the TRM)
7) Feature Pass or Fail Criteria: When a feature passes, and when a feature fails
8) Suspension Criteria: Possible abnormal situations that may arise during the testing of the above features
9) Testing Tasks (prerequisites): Necessary tasks to do before the start of every feature's testing
10) Test Deliverables: Required test documents to prepare during testing
11) Test Environment: Required hardware and software, including testing tools
12) Staff and Training Needs: Names of the selected test engineers
13) Responsibilities: Work allocation
14) Schedule: Dates & times
15) Risks and Mitigations
16) Approvals: Signatures of the test plan author and the QA or PM

Review Test Plan: After completion of test plan preparation, the test plan author reviews the document for completeness and correctness. In this review, the responsible person conducts coverage analysis based on:
a) BRS & SRS based coverage
b) Risk-based coverage
c) TRM-based coverage

VII) Test Design: After completion of test plan finalization, the selected test engineers get involved in the required training sessions to understand the business logic. This training is provided by a business analyst, functional lead or business consultant. After completion of the required training sessions, the test engineers prepare test cases for their responsible modules. There are 3 methods to prepare core-level test cases (UI, functionality, input domain, error handling, and manual support testing):
1) Business logic based test case design (80%)
2) Input domain based test case design (15%)
3) User interface based test case design (5%)

1. Business Logic: In general, functionality and error handling test cases are prepared by test engineers based on the use cases in the SRS. A use case describes how a user uses a specific functionality in our application. A test case describes a test condition to apply on the application to validate it. To prepare this type of test case from use cases, we can follow the below approach.

Step 1: Collect the responsible use cases
Step 2: Select a use case and its dependencies:

2.1 Identify the entry condition (base state)
2.2 Identify the input required (test data)
2.3 Identify the exit condition (end state)
2.4 Identify the output and outcome (expected result)
2.5 Study the normal flow (call states)
2.6 Identify the alternative flows and exceptions

Step 3: Prepare test cases based on the above study
Step 4: Review the test cases for completeness and correctness

Usecase 1: From a use case and data model, a login process allows a userid and password. The userid takes alphanumerics in lowercase, from 4 to 16 characters long. The password allows alphabets in lowercase, from 4 to 8 characters long.

Test Case 1: Successful entry of userid
BVA (Size):
Min (4) -- Pass; Min+1 (5) -- Pass; Max-1 (15) -- Pass; Max (16) -- Pass; Min-1 (3) -- Fail; Max+1 (17) -- Fail
ECP (Type):
Valid: 0-9, a-z; Invalid: A-Z, special characters and blank spaces

Test Case 2: Successful entry of password
BVA (Size):
Min (4) -- Pass; Min+1 (5) -- Pass; Max-1 (7) -- Pass; Max (8) -- Pass; Min-1 (3) -- Fail; Max+1 (9) -- Fail
ECP (Type):
Valid: a-z; Invalid: A-Z, 0-9, special characters and blank spaces

Test Case 3: Successful login
Userid -- Password -- Criteria
Valid -- Valid -- Pass
Valid -- Invalid -- Fail
Invalid -- Invalid -- Fail
Valid -- Blank -- Fail
Blank -- Valid -- Fail
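The BVA rows for the userid field above can be checked mechanically. Here is a minimal sketch, assuming a hypothetical validator that enforces the stated rule (4-16 lowercase alphanumeric characters); the function names are invented for illustration.

```python
def bva_lengths(lo, hi):
    # Boundary value analysis: min, min+1, max-1, max pass; min-1, max+1 fail.
    return {"pass": [lo, lo + 1, hi - 1, hi], "fail": [lo - 1, hi + 1]}

def valid_userid(s):
    # Assumed rule from Usecase 1: 4-16 chars, lowercase alphanumeric only.
    return 4 <= len(s) <= 16 and s.isalnum() and s == s.lower()

for n in bva_lengths(4, 16)["pass"]:
    assert valid_userid("a" * n)
for n in bva_lengths(4, 16)["fail"]:
    assert not valid_userid("a" * n)
```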

Usecase 2: In an insurance application, a user can apply for different types of insurance. When the user selects type B insurance, the system asks for the age to be entered. The age value should be greater than 18 years and less than 60.

Test Case 1: Successful selection of type B insurance
Test Case 2: Successful focus to the age field when type B is selected
Test Case 3: Successful entry of the age value
BVA (Size):
Min (19) -- Pass; Min+1 (20) -- Pass; Max-1 (58) -- Pass; Max (59) -- Pass; Min-1 (18) -- Fail; Max+1 (60) -- Fail
ECP (Type):
Valid: 0-9; Invalid: a-z, A-Z, special characters and blank spaces

Usecase 3: In a shopping application, a customer can try to create a purchase order. The application takes an item number and a quantity. The item number allows alphanumerics from 4 to 6 characters long, and the quantity allows up to 10 items to purchase. After filling in the item number and quantity, our system returns the price of one item and the total amount.

Test Case 1: Successful item number
BVA (Size):
Min (4) -- Pass; Min+1 (5) -- Pass; Max-1 (5) -- Pass; Max (6) -- Pass; Min-1 (3) -- Fail; Max+1 (7) -- Fail
ECP (Type):
Valid: 0-9, a-z; Invalid: A-Z, special characters and blank spaces

Test Case 2: Successful selection of quantity

BVA (Size):
Min (1) -- Pass; Min+1 (2) -- Pass; Max-1 (9) -- Pass; Max (10) -- Pass; Min-1 (0) -- Fail; Max+1 (11) -- Fail
ECP (Type):
Valid: 0-9; Invalid: a-z, A-Z, special characters and blank spaces

Test Case 3: Successful calculation: Total = price * quantity

Usecase 4: In a banking application, the user can dial the bank using his personal computer. In this process the user can use a 6-digit password and the below fields:
Area code -- 3-digit number; blank allowed
Prefix -- 3-digit number, not starting with 0 or 1
Suffix -- 6 alphanumeric characters
Commands -- deposit, balance enquiry, mini statement, bill pay

Test Case 1: Successful entry of password
BVA (Size):
Min = Max (6) -- Pass; Min-1 (5) -- Fail; Max+1 (7) -- Fail
ECP (Type):
Valid: 0-9; Invalid: a-z, A-Z, special characters and blank spaces

Test Case 2: Successful area code
BVA (Size):
Min = Max (3) -- Pass; Min-1 (2) -- Fail; Max+1 (4) -- Fail
ECP (Type):
Valid: 0-9, blank; Invalid: a-z, A-Z, special characters

Test Case 3: Successful prefix
BVA (Range):
Min (200) -- Pass; Min+1 (201) -- Pass; Max-1 (998) -- Pass; Max (999) -- Pass; Min-1 (199) -- Fail; Max+1 (1000) -- Fail
ECP (Type):
Valid: 0-9; Invalid: a-z, A-Z, special characters and blank spaces

Test Case 4: Successful suffix

BVA (Size):
Min = Max (6) -- Pass; Min-1 (5) -- Fail; Max+1 (7) -- Fail
ECP (Type):
Valid: 0-9, a-z; Invalid: A-Z, special characters and blank spaces

Test Case 5: Successful commands, such as deposit, balance enquiry, etc.
Test Case 6: Successful dialing with valid values
Test Case 7: Unsuccessful dialing without filling in all field values (except the area code)
Test Case 8: Successful dialing without filling in the area code

Test Case Format: During test design, test engineers prepare test case documents in IEEE format:
1) Test Case ID: Unique name or number
2) Test Case Name: Name of the test condition
3) Feature to be tested: Module, feature, service or component
4) Test Suite ID: Name of the batch in which this case is a member
5) Priority: Importance of the test case (P0 -- basic functionality; P1 -- general functionality, e.g. input domain, error handling, compatibility, intersystem testing, etc.)
6) Test Environment: Required hardware and software, including testing tools
7) Test Effort (person/hr): Time to execute this case, e.g. 20 min
8) Test Duration: Date & time
9) Test Setup: Necessary tasks to do before starting this case's execution
10) Test Procedure*: Step-by-step procedure from base state to end state, with columns: Step No, Action, Input Required, Expected Result, Defect ID, Comments, Test Design
11) Test Case Pass or Fail Criteria: When this case passes, and when this case fails

Note: In general, test engineers prepare test case documents with the step-by-step procedure only (i.e. the 10th field only). Example: prepare a test case document for a successful mail reply.

Step No -- Action -- Input Required -- Expected Result
1 -- Log on to the site -- Valid UID and PWD -- Inbox page appears
2 -- Click the inbox link -- (none) -- Mail box appears
3 -- Select a received mail subject -- (none) -- Mail message appears
4 -- Click reply -- (none) -- Compose window appears, addressed to the received mail's sender
5 -- Enter a new message and click save -- (none) -- Acknowledgement from the web server

2. Input Domain Based Test Case Design: Use cases describe functionality in terms of inputs, flow and outputs, but use cases are not responsible for defining the size and type of input objects. For this reason, test engineers also read the LLDs (data models or ER diagrams). To study a data model, a test engineer follows the below approach.

Step 1: Collect the data models of the responsible modules

Step 2: Study the data model to understand every input attribute in terms of size, type and constraints
Step 3: Identify the critical attributes, which participate in data retrieval and data manipulation
Step 4: Identify the non-critical attributes, which are just input/output types

Ex: AC No, Account Name, Balance -- critical; Address -- non-critical

Step 5: Prepare data matrices for every input attribute in terms of BVA & ECP:

Input Attribute -- ECP (Valid / Invalid) -- BVA (Min / Max)

Ex 1: From a use case, a bank application provides a fixed deposit form. From the data model, that form consists of the below fields:
Customer Name: Alphabets in lowercase, with _ allowed in the middle
Amount: 1500 to 100000
Tenor: Up to 12 months
Interest: Numeric with decimals

From this use case, if the tenor is greater than 10 months, our system allows only an interest rate greater than 10%. Prepare the test case document from the above scenario.

Test Case 1: Successful entry of customer name
Test Case 2: Successful entry of amount

Test Case 3: Successful entry of tenor
Test Case 4: Successful entry of interest
Test Case 5: Successful fixed deposit operation with all valid values

Test Case 6: Unsuccessful operation because the tenor is greater than 10 months while the interest is less than 10%
Test Case 7: Unsuccessful operation due to not filling in all field values
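The cross-field rule behind Test Case 6 is worth spelling out in code. A minimal sketch, assuming a hypothetical validator for the fixed deposit form above (the function name and exact bounds interpretation are illustrative):

```python
def valid_deposit(amount, tenor, interest):
    # Field-level rules from the data model (assumed inclusive bounds).
    if not (1500 <= amount <= 100000 and 1 <= tenor <= 12):
        return False
    # Cross-field rule: tenor over 10 months requires interest over 10%.
    if tenor > 10 and interest <= 10.0:
        return False
    return True

assert valid_deposit(2000, 11, 10.5)        # Test Case 5 style: all valid
assert not valid_deposit(2000, 11, 9.0)     # Test Case 6: must be rejected
```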

3. User Interface Based Test Case Design: To conduct user interface testing, test engineers prepare UI test cases based on our organization's user interface rules, global UI conventions (Microsoft's 6 rules) and the interests of the customer-site people. Examples:
1) Spelling check
2) Graphics check
3) Meaningful error messages
4) Meaningful help documents (manual support testing)
5) Accuracy of data displayed (e.g. a. Amount, b. DOB, c. DOB in dd/mm/yy format)
6) Accuracy of data in the database as a result of user input: i) form, ii) table, iii) report
7) Accuracy of data in the database as a result of external factors, e.g. imported files

Test case selection review: After completion of writing all possible test cases, the test lead and test engineers concentrate on a test case selection review, for completeness and correctness. In this review, the test lead applies coverage analysis on the cases:
a) BR-based
b) Use case based
c) Data model based
d) UI-based
e) TRM-based

At the end of this review, the TL creates the Requirements Traceability Matrix. This is also known as the Requirements Validation Matrix (RVM). It is the mapping between the BRS and the prepared test cases.

VII) Test Execution
After completion of writing all possible test cases for the responsible modules, and their review, the testing team concentrates on test execution to detect defects in the build.

1. Test Execution Levels

2) Test Execution vs Test Cases:
Level 0 -- All P0 test cases
Level 1 -- All P0, P1 & P2 test cases, as batches
Level 2 -- Selected P0, P1 & P2 test cases, with respect to the modification
Level 3 -- Selected P0, P1 & P2 test cases, with respect to the build
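To make the level distinctions concrete, here is a small sketch of selecting cases per level, assuming each test case carries a priority tag and the set of modules it touches (the tagging scheme and IDs are invented for illustration):

```python
cases = [
    {"id": "TC-1", "priority": "P0", "modules": {"login"}},
    {"id": "TC-2", "priority": "P1", "modules": {"billing"}},
    {"id": "TC-3", "priority": "P2", "modules": {"login"}},
]

def level0(cases):
    # Sanity pass: basic functionality only.
    return [c for c in cases if c["priority"] == "P0"]

def level2(cases, modified_modules):
    # Regression pass: cases touching the modified modules.
    return [c for c in cases if c["modules"] & modified_modules]

print([c["id"] for c in level2(cases, {"login"})])   # ['TC-1', 'TC-3']
```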

3) Build Version Control: The testing team receives the build from the development team through the below process. From the above model, the testing team receives the build from development through the File Transfer Protocol (FTP). For version control, the development team uses Visual SourceSafe. To distinguish between old and modified builds, development uses a unique version numbering system; this system is understandable to the test engineers.

4) Level 0: After receiving the initial build from the development team, the testing team covers the basic functionality of that build to estimate its stability for complete testing. During this testing, the testing team applies the below factors to check whether the build is stable for complete testing or not:
- Understandable
- Operable
- Observable
- Consistent
- Simple
- Controllable
- Maintainable
- Automatable

These are the testability factors used to do sanity testing. Sanity testing is also known as testability testing; it also goes by names such as BVT (Build Verification Test), octagonal testing and smoke testing.

5) Test Harness (ready for testing): Test harness = Test environment + Test bed.

6) Test Automation: After receiving a stable build from development, the testing team concentrates on test automation, to create automated test scripts where possible. From the above model, test engineers follow selective automation: for repeatable and critical test cases only.

7) Level 1 (Comprehensive Testing): After receiving a stable build from the development team and completing the possible automation, the testing team concentrates on test execution to detect defects. They execute tests as batches; every test batch consists of a set of dependent test cases. A test batch is also known as a test suite or test set. During this test case execution, whether manual or automated, test engineers report defects to the development team and create a "Test Log". It consists of 3 types of entries:
- Passed: all expected results equal the actual results
- Failed: at least one expected result varies from the actual result
- Blocked: due to the failure of a parent test

Level 2 (Regression Testing): During comprehensive test execution, the testing team receives modified builds. After bug resolution, the testing team re-executes their previous tests on the modified build, to ensure the bug fix works and to check for possible side effects, before concentrating on the remaining comprehensive testing. This type of re-execution of tests is called regression testing.

Note: If the development team releases modified builds due to project requirement changes, the test engineer executes all P0, all P1 and carefully selected P2 test cases.

Software Component Testing
Component testing is described fully in BS 7925; be aware that component testing is also known as unit testing, module testing or program testing. The definition from BS 7925 is simply "the testing of individual software components". Usually, white box (structural) testing techniques are used to design test cases for component tests, but some black box tests can be effective as well. Component testing has traditionally been carried out by the programmer. This has proved to be less effective than if someone else designs and runs the tests for the component. "Buddy" testing, where two developers test each other's work, is more independent and often more effective. However, the component test strategy should describe what level of independence is applicable to a particular component.

Software Testing-User Acceptance Testing

Overview of User Acceptance Testing
1. User Acceptance Testing (UAT) is the formal means by which the company ensures that the new system actually meets the essential user requirements.
2. User Acceptance Testing is a key feature of project implementation.
3. This UAT Plan describes the test scenarios, test conditions, and test cycles that must be performed to ensure that acceptance testing follows a precise schedule, and that the system is thoroughly tested before release.
4. Each module implemented will be subject to one or more user acceptance tests before sign-off.

5. In UAT, the software is tested for compliance with business rules as defined in the Software Requirement Specifications and the Detailed Design documents.
6. The acceptance procedure ensures the intermediate or end result supplied meets the users' expectations by asking questions such as:
- Is the degree of detail sufficient?
- Are the screens complete?
- Is the content correct from the user's point of view?
- Are the results usable?
- Does the system perform as required?
7. UAT also allows designated personnel to observe how the application will behave under business functional operational conditions.

User Acceptance Testing Team
The UAT team will be assigned the following tasks:
- Verify the completeness and accuracy of the business functionality provided in the application (Screens, Reports, Interfaces).
- Verify the functionality of the application to ensure that users are comfortable with the application.

Software Testing-User Acceptance Testing
Scope of User Acceptance Testing (UAT)
User Acceptance Testing addresses the broadest scope of requirements; therefore, the UAT must cover the following areas:

- Functional Requirements: ensure all business functions are performed as per the business rules, as stated in the Business Functional Specifications and the Detailed Design documents.
- Interface Requirements: ensure all business systems linked to the software system in UAT pass and receive data or control as defined in the requirements specification.
- Operational Requirements: ensure requirements for data capture, data processing, data distribution, and data archiving are met.

Even if software passes functional testing, it must still be tested to see how it will perform in the business environment before release for general use. During UAT, the way the software is intended to perform and behave upon release for general use is assessed. This includes the:
1. Accuracy of successful completion of business processes
2. Accuracy and utility of user documentation and procedures
3. Quality and accuracy of data being produced
4. Release and installation procedures
5. Configuration management issues

The user, with limited help from the developers, is responsible for:
1. Planning tests
2. Executing tests
3. Reporting and clearing incidents

Objectives of UAT
User Acceptance Testing determines the degree to which the application actually meets the agreed functional specifications. It confirms whether the software provides new business improvements and whether existing processes continue to work correctly.

Software Testing-User Acceptance Testing
User Acceptance Testing Process
For purposes of this planning document, User Acceptance Testing has been divided into four major functions:
- Planning
- Execution
- Follow-Up
- Re-Test

a) Planning: The goal of a UAT (User Acceptance Testing) Plan is to identify the essential elements of the software to be tested. A UAT Plan delineates high-level testing procedures and outlines the tests to be conducted.

b) Execution: The application will be verified / tested using the Acceptance Test Feedback Form against the following:
- Is the degree of detail for business functionality sufficient?
- Do the screens completely capture business functionality?
- Is the business functionality content correct from the user's point of view, as recorded in the reference documents?
- Does the system's business functionality perform as required?

Execution of the UAT Plan will be completed by performing the following tests:
1. Requirements Testing

2. Test Case Creation
3. Business Functional Requirements Testing
4. Documentation Testing
5. Verification of Online Help
6. Interface Testing

1) Requirements Testing: The purpose of Requirements Testing is to validate that the system meets all business functional requirements. During UAT, the company will ensure that each Business Functional Requirement has been tested. This validation shall involve Test Case creation as well as Business Functional Requirement Testing.
2) Test Case Creation: Test case data shall be created on manual forms for data entry on crucial screens in order to cover all attributes of UAT testing. The UAT team will provide test cases for UAT testing. Each test case includes the steps necessary to perform the test and the expected results, and contains (or refers to) any data needed to perform the test and to verify that it works (see the illustrative example after this list).
3) Business Functional Requirements Testing: Business Functional Requirements testing information shall be created based on the Functional Requirements contained in the System Requirements Specification document. However, it is the tester's responsibility to ensure that Business Functional Requirements Testing occurs.
4) Documentation Testing: Documentation Testing ensures that the hard copy and online documentation are understandable and accurate.
5) Verification of Online Help: Online Help shall be verified to confirm that it:
a) corresponds to the user documentation;
b) corresponds to the screens presented in the application;
c) directs the user to the appropriate help on the desired page when the user clicks the page-level help icon available on each screen.
6) Interface Testing: Interface Testing validates that the application interfaces correctly with external systems and databases.
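For illustration only, a UAT test case entry might look like the following; the screen names and data values are hypothetical:

    Test Case ID: UAT-042
    Objective: Verify that a new customer record can be saved from the
               Customer Details screen.
    Steps: 1) Open Customer Details. 2) Enter name and address. 3) Click Save.
    Test Data: Name = "J. Smith"; Address = "12 High Street".
    Expected Result: A confirmation message is displayed and the record can
                     be retrieved via customer search.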

Software Testing-User Acceptance Testing
3) Follow-Up
- It isn't sufficient just to find an error. The tester must also record the conditions prior to starting a test, the actions taken during the test, and what results occurred, and must be able to repeat the problem.
- The tester must produce physical evidence, for example screen prints.
- If testing problems cannot be reproduced, the tester will only be able to record the problem for historical purposes, in the event it is reproduced in the future.
- To ensure adequate control on the clearance of errors and to improve management forecasting of UAT completion, each incident must be recorded separately by the tester using the Acceptance Test Feedback Form.

Reporting
- As well as logging problems that result from the discovery of defects, testers will also encounter test incidents caused by the need for clarification or the need for enhancements. Ambiguities in specifications are common and may not be discovered until UAT. These are clarified and resolved between the teams involved, and agreed to be either a software defect to be cleared now, a clarification to be applied to the specifications, or an enhancement to be provided in some future release.
- Test results of User Acceptance Testing will reflect certain items requiring change in specifications or functionality. These items will be raised as change requests to be registered in the change control process.

4) Re-Testing
- The application / artifact released to the UAT team for retesting shall be tested again for the points submitted in the feedback sheets.
- The development team clears errors, incorporating changes and enhancements of the software, and provides a new release to the UAT team.
- Once all the feedback has been resolved, the fine-tuned application / artifact shall be released again to the UAT team for additional testing. This process is repeated until all reported incidents are resolved to the UAT organization's satisfaction.
- Detailed checklists will be prepared for testing. Checklists shall be prepared by the UAT team prior to commencement of these tests, and shall list each parameter toward which the artifact is being tested. However, user acceptance testing will be done against detailed test cases prepared by the UAT team prior to commencement of this phase.
- All user acceptance testing and feedback shall be captured on the Review or Test Activity Record (Appendix B).
- Detailed regression testing shall occur as per the Integration / Regression Test Plan to be produced for the integration test phase.

OUTPUT
The following will form the outputs of the testing activities and will be stored electronically in the "Test Results" folder secured in an archive:
a) Verified and approved test case records
b) Test results captured on test activity records
c) Screen dumps of error screens
d) Updated checklists

Software Testing-User Acceptance Testing Feedback Form

Software Testing-Automation Testing
Welcome to all those interested in Software Testing, which is emerging as a radical boost to the software industry.

Software Testing-Manual Vs Automation

Software Testing-Automation Tools
Automation Testing of a project is done through Automation Testing Tools.

Benefits of Software Automation Testing
1. Automated testing increases the significance and accuracy of testing and results in greater test coverage.
2. Test automation enables one to achieve detailed product testing with a significant reduction in test cycle time.
3. Rapid validation of software changes with each new release of an application is possible.
4. Automated testing eliminates the time constraints associated with manual testing; scripts can be executed at any time without human intervention.
5. Automation eliminates many of the mundane functions associated with regression testing.
6. Automated test scripts are re-usable and can be used across varying scenarios and environments.
7. Automated testing offers a level of consistency which is not achievable through the use of manual testing.
8. The efficiency of automated testing incorporated into the product lifecycle can generate sustainable time and money savings.
9. Better, faster testing.
10. Enhanced productivity.

There are many automation tools in use, developed by different companies, mainly:
a. Mercury Interactive
b. IBM

Mercury Interactive is developed by Mercury International Corporation. The main tools from this company are:
1. WinRunner
2. TestDirector
3. LoadRunner
4. Silk Test
5. Quick Test Professional

The tools from IBM are:
1. Rational Robot

Software Testing Tools--WinRunner
WinRunner is the most used automated software testing tool. The main features of WinRunner are:
- Developed by Mercury Interactive.
- A functionality testing tool.
- Supports client/server and web technologies such as VB, VC++, D2K, Java, HTML, Power Builder, Delphi, and Siebel (ERP).
- To support .NET, XML, SAP, PeopleSoft, Oracle Applications, and Multimedia, we can use QTP.
- WinRunner runs on Windows only; XRunner runs only on UNIX and Linux.
- The tool is developed in C on the VC++ environment.
- WinRunner 7.0 follows auto-learning.
- To automate manual tests, WinRunner uses TSL (Test Script Language, which is like C).

The main testing process in WinRunner is:
1) Learning: recognition of the objects and windows in our application by WinRunner is called learning.
2) Recording: WinRunner records the manual business operation in TSL.
3) Edit Script: depending on the corresponding manual test, the test engineer inserts checkpoints into the recorded script.
4) Run Script: during test script execution, WinRunner compares the tester-given expected values with the application's actual values and returns results.
5) Analyze Results: the tester analyzes the tool-given results to concentrate on defect tracking if required.
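To make steps 2) and 3) concrete, here is a minimal sketch of what a recorded TSL script with an inserted checkpoint might look like. The window names, field names, and checklist file names are hypothetical, not taken from a real application:

    # Recorded business operation (TSL is a C-like scripting language).
    set_window("Login", 10);                  # wait up to 10 seconds for the window
    edit_set("Agent Name:", "mercury");       # recorded keyboard input
    password_edit_set("Password:", "encrypted_pwd");
    button_press("OK");

    # Checkpoint inserted by the test engineer while editing the script:
    # compare the object's current GUI state against expected values
    # captured earlier into the hypothetical checklist "list1.ckl".
    set_window("Flight Reservation", 10);
    obj_check_gui("Order No:", "list1.ckl", "gui1", 5);

Software Testing Tools-Test Director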

TestDirector, a software test automation tool, simplifies test management by helping you organize and manage all phases of the software testing process, including planning, creating tests, executing tests, and tracking defects. With TestDirector, you maintain a project's database of tests. From a project, you can build test sets: groups of tests executed to achieve a specific goal. For example, you can create a test set that checks a new version of the software, or one that checks a specific feature. As you execute tests, TestDirector lets you report defects detected in the software. Defect records are stored in a database where you can track them until they are resolved in the software.

You can include WinRunner automated tests in your project. WinRunner, Mercury Interactive's automated GUI testing tool, enables you to create and execute automated test scripts. TestDirector works together with WinRunner, as well as with third-party and custom testing tools, and offers integration with other Mercury Interactive testing tools (LoadRunner, Astra QuickTest, QuickTest 2000, Visual API, and XRunner). If you choose to automate a test, use WinRunner to create automated test scripts in Mercury Interactive's Test Script Language (TSL); TestDirector activates WinRunner, runs the tests, and displays the results. If you choose to perform a test manually, the test is ready for execution as soon as you define the test steps.

The TestDirector workflow consists of 3 main phases; in each phase you perform several tasks:
- Planning Tests
- Running Tests
- Tracking Defects

Planning Tests
Divide your application into test subjects and build a project.
1. Define your testing goals. Examine your application, system environment, and testing resources to determine what and how you want to test.
2. Define test subjects by dividing your application into modules or functions to be tested. Build a test plan tree that represents the hierarchical relationship of the subjects.
3. Define tests. Determine the tests you want to create and add a description of each test to the test plan tree.
4. Design test steps. Break down each test into steps describing the operations to be performed and the points you want to check. Define the expected outcome of each step.
5. Automate tests. Decide whether to perform each test manually or to automate it.
6. Analyze the test plan. Generate reports and graphs to help you analyze your test plan, and determine whether the tests in the project will enable you to successfully meet your goals.

Running Tests
Create test sets and perform test runs.
1. Create test sets by selecting tests from the project. A test set is a group of tests you execute to meet a specific testing goal.
2. Run the manual and/or automated tests in the test sets. Schedule test execution and assign tasks to testers.
3. Analyze the testing progress. Generate reports and graphs to help you determine the progress of test execution.

Tracking Defects
Report defects detected in your application and track how repairs are progressing.
1. Report defects detected in the software. Defects can be detected and reported by software developers, testers, and end users in all stages of the testing process. Using TestDirector, you can report flaws in your application and track data derived from defect reports. Each new defect is added to the defect database.
2. Review all new defects reported to the database and decide which ones should be repaired.
3. Analyze defect tracking. Generate reports and graphs to help you analyze the progress of defect repairs and to help you determine when to release the application. Test a new version of the application after the defects are corrected.

What is a Test Set?
After planning and creating a project with tests, you can start running the tests on your application. TestDirector helps you organize test runs by building test sets. A test set is a subset of the tests in your project, run together in order to achieve a specific goal. You build a test set by selecting tests from the test plan tree and assigning this group of tests a descriptive name. You can then run the test set at any time, on any build of your application. However, since a project database often contains hundreds or thousands of tests, deciding how to manage the test run process may seem overwhelming.

Do You Keep Track of Defects?
Locating and repairing software defects is an essential phase in software development. When a defect is detected in the software:
a) Send a defect report to the TestDirector database.
b) Review the defect and assign it to a member of the development team.
c) Repair the open defect.
d) Test a new build of the application after the defect is corrected. If the defect does not reoccur, change the status of the defect.
e) Generate reports and graphs to help you analyze the progress of the defects in your TestDirector project.

f) Reporting a New Defect

You can report a new defect at any stage of the testing process by adding a defect record to the project database. When you initially report a defect to the project database, you assign it the status New. Each defect is tracked through four stages: New, Open, Fixed, and Closed.

Software Testing Tools-Load Runner
To work with LoadRunner effectively, you need the following knowledge and skills:
- Components such as web servers, application servers, database servers, networks, and network elements such as load balancers. You need not have "guru" level knowledge of each of the components, but you should have operational knowledge and an understanding of the performance issues associated with them. For example, a load tester should know what multi-way joins, indexes, and spin counts are and what effect they have on a database server.
- The protocol(s) used between the client and server, such as HTTP/HTML, ODBC, SQL*NET, and DCOM.
- The LoadRunner script language is ANSI C, but the scripts are generated and manipulated by LoadRunner, so there is usually no need to directly edit the code; it helps to know the C language. There is also an icon script view which completely hides the C code.
- Load testing is not a heads-down coding exercise. Daily interaction with a variety of people requires good oral and written communication skills as well as good people skills. You will work with many parts of an organization to coordinate activities, schedules, and resources. If you prefer to sit in a cube by yourself, you should stay in functional testing or development.

These are some of the FAQs on LoadRunner:

1. When do you do load and performance testing?
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a single system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: What is the response time of the system? Does it crash? Will it work with different software applications and platforms? Can it hold so many hundreds and thousands of users? This is when we do load and performance testing.

2. Did you use LoadRunner? What version?
Yes. Version 7.2.

3. What is load testing?
Load testing is to test whether the application works fine with the loads that result from a large number of simultaneous users and transactions, and to determine whether it can handle peak usage periods.

4. What is performance testing?
Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi-user environment to determine the effect of multiple transactions on the timing of a single transaction.

5. Explain the load testing process.
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure that the test scenarios we develop will accomplish the load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain the tasks performed by each Vuser, the tasks performed by Vusers as a whole, and the tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of the machines, scripts, and Vusers that run during the scenario. We create scenarios using the LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and the percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario, where we define the goal that our test has to achieve; LoadRunner automatically builds the scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner's graphs and reports to analyze the application's performance.
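As a hedged illustration of Step 2 (creating Vusers), a minimal web Vuser action might look like the sketch below. LoadRunner scripts are ANSI C; the URL and transaction name here are placeholders, not from the article:

    /* Action.c -- one iteration of a web Vuser's business process. */
    Action()
    {
        /* Measure the home-page download as a named transaction. */
        lr_start_transaction("load_home_page");

        web_url("home",
            "URL=http://www.example.com/",
            LAST);

        lr_end_transaction("load_home_page", LR_AUTO);
        return 0;
    }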

For example. 6. etc.Why do you create parameters? Parameters are like script variables. They are used to vary input to the server and to emulate real users. one script can emulate many different users on the system. and the machines on which the virtual users run their emulations. LoadRunner Analysis and Monitoring.What is correlation? Explain the difference between automatic correlation and manual correlation? Correlation is used to obtain data which are unique for each run of the script and which are generated by nested queries. you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time. and received from.What Component of LoadRunner would you use to Play Back the script in multi user mode? The Controller component is used to playback the script in multi-user mode. and see the list of values which can be correlated. 13. 8.What Component of LoadRunner would you use to record a Script? The Virtual User Generator (VuGen) component is used to record a script. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point. It enables you to develop Vuser scripts for a variety of application types and communication protocols. In manual correlation. From this . This is done during a scenario run where a vuser script is executed by a number of vusers in a group. 9. 7. 11. and c) Insert the generated function calls into a Vuser script. Correlation provides the value to avoid errors arising out of duplicate values and also optimizing the code(to avoid nested queries). b) Generate the required function calls. VuGen monitors the client end of the database and traces all the requests sent to. 14. Here values are replaced by data which are created by these rules. to emulate peak load on the bank server. It can be application server specific. b)Better simulate the usage model for more accurate testing from the Controller. the Agent process. For example. in web based applications. This is when we set do load and performance testing.How do you find out where correlation is required? Give few examples from your projects? Two ways: First we can scan for correlations.Explain the recording mode for web vuser script? We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. the value we want to correlate is scanned and create correlation is used to correlate. LoadRunner Books Online. the actions to be performed. 10. We use VuGen to: a) Monitor the communication between the application and the server. VuGen creates the script by recording the activity between the client and the server. 12. For example.hold so many hundreds and thousands of users. a scenario defines and controls the number of users to emulate. Automatic correlation is where we set some rules for correlation. Controller.What are the components of LoadRunner? The components of LoadRunner are The Virtual User Generator.What is a scenario? A scenario defines the events that occur during each testing session. in order that they may simultaneously perform a task. a)Different sets of data are sent to the server each time the script is run.What is a rendezvous point? You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. the database server.

15. Where do you set automatic correlation options?
Automatic correlation, from the web point of view, can be set in the Recording Options, on the Correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for the correlation. Automatic correlation for a database can be done by using the Show Output window, scanning for correlation, picking the Correlate Query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just create a correlation for that value and specify how the value is to be created.

16. What is the function to capture dynamic values in the web Vuser script?
The web_reg_save_param function saves dynamic data information to a parameter.

17. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs?
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled.
Standard Log option: when you select Standard log, it creates a standard log of the functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled.
Extended Log option: select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios. When you copy a script to a scenario, logging is automatically disabled. We can specify which additional information should be added to the extended log using the Extended log options.

18. How do you debug a LoadRunner script?
VuGen contains two options to help debug Vuser scripts: the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution; the debug information is written to the Output window. We can manually set the message class within the script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

19. How do you write user-defined functions in LoadRunner? Give a few functions you wrote in your previous project.
Before we create a user-defined function, we need to create the external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we assign the user-defined function as a parameter. The function should have the following format:
__declspec(dllexport) char* <function name>(char*, char*)
GetVersion, GetCurrentTime, and GetPltform are some of the user-defined functions used in my earlier project.
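A hedged sketch of item 16: capturing a dynamic value with web_reg_save_param and reusing it in a later request. The boundaries, parameter name, and URLs are hypothetical:

    /* Register the capture BEFORE the request that returns the value. */
    web_reg_save_param("SessionId",
        "LB=session_id=",     /* left boundary of the dynamic value */
        "RB=&",               /* right boundary                     */
        LAST);

    web_url("login", "URL=http://www.example.com/login", LAST);

    /* The captured value is now used like any other parameter. */
    web_url("account",
        "URL=http://www.example.com/account?session_id={SessionId}",
        LAST);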

20. If you want to stop the execution of your script on error, how do you do that?
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section, and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the "Continue on error" option in the Run-Time Settings.

21. Where do you set iterations for Vuser testing?
We set iterations in the Run-Time Settings of VuGen. The navigation for this is: Run-Time Settings > Pacing tab, which has the iteration count, where we set the number of iterations.

22. What are the changes you can make in the run-time settings?
The run-time settings that we make are:
a) Pacing: includes the iteration count.
b) Log: under this we can disable logging, or choose between the Standard log and the Extended log.
c) Think Time: here we have two options, Ignore think time and Replay think time.
d) General: under the General tab we can set the Vusers to run as a process or as multithreading, and set whether each step is a transaction.

23. What is Ramp Up? How do you set this?
This option is used to gradually increase the number of Vusers (the load) on the server. An initial value is set, and a value to wait between intervals can be specified. To set Ramp Up, go to the 'Scenario Scheduling Options'.

24. What is the advantage of running the Vuser as a thread?
VuGen provides the facility to use multithreading, which enables more Vusers to be run per load generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, taking up a large amount of memory; this limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100), and each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

25. Explain the configuration of your systems.
The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match the overall system configuration, which includes the network infrastructure, the web server, the database server, and any other components that go with this larger system, so as to achieve the load testing objectives.

26. How do you perform functional testing under load?
Functionality under load can be tested by running several Vusers concurrently. By increasing the number of Vusers, we can determine how much load the server can sustain.

27. What is the relation between Response Time and Throughput?
The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and the highest response time would occur at approximately the same time.
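Returning to the lr_abort question above, here is a hedged sketch of aborting a Vuser on a specific error condition; the text check, transaction name, and URL are illustrative assumptions:

    Action()
    {
        int login_ok;

        /* Save the number of matches rather than failing the request. */
        web_reg_find("Text=Welcome", "SaveCount=welcome_count", LAST);

        lr_start_transaction("login");
        web_url("login", "URL=http://www.example.com/login", LAST);

        login_ok = atoi(lr_eval_string("{welcome_count}")) > 0;
        lr_end_transaction("login", login_ok ? LR_PASS : LR_FAIL);

        if (!login_ok) {
            lr_error_message("Login failed; aborting this Vuser.");
            lr_abort();   /* runs vuser_end, then status becomes "Stopped" */
        }
        return 0;
    }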

28. Explain Hits per Second.
The Hits per Second graph shows the number of hits made on the web server by Vusers during each second of the scenario run.

29. How do you identify the performance bottlenecks?
Performance bottlenecks can be detected by using monitors: application server monitors, web server monitors, database server monitors, and network monitors. They help in finding the troubled areas in our scenario which cause increased response time. The measurements made are usually performance response time, throughput, hits/sec, network delay graphs, etc.

30. If the web server, database, and network are all fine, where could the problem be?
The problem could be in the system itself, in the application server, or in the code written for the application.

31. How did you find web server related issues?
Using Web Resource monitors we can find the performance of web servers. Using these monitors we can analyze throughput on the web server, the number of hits per second, the number of HTTP responses per second, the number of downloaded pages per second, etc.

32. How did you find database related issues?
By running a database monitor, and with the help of the Data Resource Graph, we can find database related issues. For example, you can specify the resource you want to measure before running the Controller, and then you can see the database related issues.

33. What is think time? How do you change the threshold?
Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding. This delay is known as think time. The threshold level is the level below which recorded think time will be ignored; the default value is five (5) seconds. We can change the think time threshold in the Recording Options of VuGen.

34. What does the vuser_init action contain?
The vuser_init action contains procedures to log in to a server.

35. What does the vuser_end action contain?
The vuser_end section contains log-off procedures.

36. How did you plan the load? What are the criteria?
A load test is planned to decide the number of users, what kinds of machines we are going to use, and from where they are run. It is based on two important documents: the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us information on the number of users for a particular transaction and the time of the load; the peak usage and off-usage are decided from this diagram. The Transaction Profile gives us information about the transaction names and their priority levels with regard to the scenario we are deciding.

37. What is the difference between an Overlay graph and a Correlate graph?
Overlay graph: overlays the content of two graphs that share a common x-axis. The left Y-axis on the merged graph shows the current graph's values, and the right Y-axis shows the values of the graph that was merged.
Correlate graph: plots the Y-axis of two graphs against each other. The active graph's Y-axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph's Y-axis.

Explain all the web recording options.
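For illustration of the think-time item above, VuGen records user pauses as lr_think_time calls; the URLs and the 8-second pause below are hypothetical:

    web_url("view_report", "URL=http://www.example.com/report", LAST);

    /* The user paused ~8 seconds to read the page; the Run-Time Settings
       decide whether replay ignores, replays, or scales this value. */
    lr_think_time(8);

    web_url("next_page", "URL=http://www.example.com/report?page=2", LAST);

38. What is the difference between the standard log and the extended log?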

The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. This is mainly used during debugging, when we want information about: a) parameter substitution; b) data returned by the server; c) advanced trace.

39. What are the three sections of a Vuser script, and what is the purpose of each one?
1) vuser_init: used for recording the logon.
2) Actions: used for recording the business process.
3) vuser_end: used for recording the logoff.

40. For what purpose are Vusers created?
Vusers are created to emulate real users acting on the server for the purpose of load testing.

41. What is the purpose of a LoadRunner transaction?
To measure one or more steps/user actions of a business process.

42. What are the benefits of multiple Action files within a Vuser?
They allow you to perform different business processes in one Vuser to represent a real user who does the same thing. They let you build Vusers that emulate real users defined in the User Community Profile. They also allow you to record the login and logoff separately from the Action files, and thus to avoid iterating them.

43. How can you tell the difference between an integer value and a string value in a VuGen script?
Strings are enclosed in quotes; integers are not.

44. When would you parameterize a value rather than correlate queries?
Parameterize a value only when it is input by the user.

45. What are the four selection methods when choosing data from a data file?
Sequential, Random, Unique, and Same line as.

46. What is the easiest way to get measurements for each step of a recorded script? For the entire action file?
Enable automatic transactions (Run-Time Settings / Recording Options).

47. Explain the following functions:
a) lr_debug_message: sends a debug message to the output log when the specified message class is set.
b) lr_output_message: sends notifications to the Controller Output window and the Vuser log file.
c) lr_error_message: sends an error message to the LoadRunner Output window.
d) lrd_stmt: associates a character string (usually a SQL statement) with a cursor; this function sets a SQL statement to be processed.
e) lrd_fetch: fetches the next row from the result set.
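A brief hedged sketch of the three lr_ message functions from item 47; the message text and the {OrderId} parameter are illustrative assumptions:

    /* Appears only when the extended-log message class is enabled. */
    lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG,
                     "About to submit order %s", lr_eval_string("{OrderId}"));

    /* Notification to the Controller Output window and the Vuser log. */
    lr_output_message("Order %s submitted", lr_eval_string("{OrderId}"));

    /* Error message to the LoadRunner Output window. */
    lr_error_message("Order %s failed", lr_eval_string("{OrderId}"));

48. How can reusing the same data during iterative execution of a business process negatively affect load testing results?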

In reusing the same data for each iteration, the server recognizes that the same data is being requested and places it in its cache. The load test then produces performance results that are based on caching rather than on real server activity, so the test results do not reflect the performance that would be seen if real users were loading the system with different data.

49. How can caching negatively affect load testing results?
When data is cached in the server's memory, the server does not need to fetch it from the database during playback. This will not provide the correct results during the analysis of the load test.

50. Why should you run more Vusers than your anticipated peak load?
(1) To test the scalability of the system. (2) To see what happens when there is a spike in system usage.

51. Why is it recommended to add verification checks to your Vusers?
You would want to verify, using LoadRunner, that the business process is functioning as expected under load.

52. How can you determine which field is data dependent?
Rerecord the same script using different input values, then compare the two scripts.

53. When does VuGen record a web_submit_data instead of a web_submit_form? Why?
A web_submit_data is recorded when VuGen cannot match the action, method, data fields, and/or hidden data values with the page that is stored in the recording proxy cache. Because VuGen can parse only HTML, it cannot find all the properties of the HTTP request in memory; comparison failures are typically caused by something other than HTML setting the properties of the HTTP request. This results in the hard-coding of all the request information in a web_submit_data statement.

54. Where should the rendezvous be placed in the script?
The rendezvous should be placed immediately before the transaction where you want to create peak load; in this case, right before starting the UpdateOrder transaction.

55. If your Vuser script had two parameters, "DepartCity" and "ArrivalCity", how could you have the Vuser script return an error message which included the city names?
lr_error_message("The Vuser could not submit the reservation request for %s to %s", lr_eval_string("{DepartCity}"), lr_eval_string("{ArrivalCity}"));

56. What is the purpose of selecting "Show browser during replay" in the General Options settings?
This setting allows you to see the pages that appear during playback. It is useful for debugging your Vuser during the initial stages of Web Vuser creation.

57. For what purpose should you select "Continue on error"?
Set it only when making Execution Logs more descriptive or when adding logic to the Vuser.

58. What tools does VuGen provide to help you analyze Vuser run results?
The Execution Log, the Run-Time Viewer, and the Mercury Test Results window.

59. What do you need to do to be able to view parameter substitution in the Execution Log?
Check Extended log and Parameter substitution in the Run-Time Settings.

60. What is the difference between a manual scenario and a goal-oriented scenario? What goal-oriented scenarios can be created?
Manual scenario: its main purpose is to learn how many Vusers can run concurrently; it gives you manual control over how many Vusers run and at what times.

63. in terms of the number of hits.Goal oriented scenario: Goal may be throughput. b) Pages download per second graph The Pages Downloaded per Second graph shows the number of Web pages (y-axis) downloaded from the server during each second of the scenario run (x-axis). By having both the Controller and the Vusers on the same machine. Why wouldn t you want to run virtual users on the same host as the Load-Runner Controller or Database Server? Running virtual users on the same host as the LoadRunner Controller will skew the results so that they no longer emulate real life usage. the tester will not be able to determine the effects of the network traffic. This graph helps you evaluate . or number of concurrent Vusers LoadRunner manages Vusers automatically Different Goal Oriented Scenarios are: y Virtual Users y Hits per second y Transaction per second y Transaction Response time y Pages per minute 61. What are some of the factors that can cause differences in performance measurements? Different factors can effect the performance measurements including network traffic. CPU usage and caching. Explain the following: a) Hits per second graph The Hits per Second graph shows the number of HTTP requests made by Vusers to the Web server during each second of the scenario run. response time. This graph helps you evaluate the amount of load Vusers generate. What are some of the reasons to use the Server Resources Monitor? To find out how much data is coming from the cache To help find out what parts of the system might contain bottlenecks 64. the results will be slightly different. Each time you run the same scenario. 62.

c) Transaction Response Time (Under Load) graph: a combination of the Running Vusers and Average Transaction Response Time graphs; it indicates transaction times relative to the number of Vusers running at any given point during the scenario. This graph helps you view the general impact of Vuser load on performance time, and it is most useful when analyzing a scenario with a gradual load.
d) Transaction Response Time (Percentile) graph: analyzes the percentage of transactions that were performed within a given time range. This graph helps you determine the percentage of transactions that met the performance criteria defined for your system.
e) Network Delay Time graph: shows the delays for the complete path between the source and destination machines (for example, the database server and the Vuser load generator).

65. What protocols does LoadRunner support?
LoadRunner ships with support for the following protocols; other protocols are available but are not necessarily fully supported.

E-Business
- FTP
- LDAP
- Web/Winsocket Dual Protocol
- Palm
- SOAP
- Web (HTTP/HTML)

Wireless

- i-mode
- VoiceXML
- WAP

Streaming
- Media Player (MMS)
- Real

Mailing Services
- Internet Messaging (IMAP)
- MS Exchange (MAPI)
- POP3
- SMTP

Enterprise Java Beans
- Enterprise Java Beans (EJB)

- RMI-Java

Distributed Components
- COM/DCOM
- Corba-Java
- RMI-Java

Middleware
- Jacada
- Tuxedo 6
- Tuxedo 7

ERP
- Baan
- Oracle NCA
- PeopleSoft

- Siebel-Oracle
- Siebel-MSSQL
- Siebel-DB2 CLI
- SAP

Client/Server
- DB2 CLI
- Domain Name Resolution (DNS)
- Informix
- MS SQL Server
- ODBC
- Oracle (2-Tier)
- Sybase CtLib

- Sybase DbLib
- Windows Sockets

Legacy
- Terminal Emulation (RTE)

Custom
- C Vuser
- JavaScript Vuser
- Java Vuser
- VBScript Vuser
- VB Vuser

66. What can I monitor with LoadRunner?
LoadRunner ships with support for the following components; other monitors are available but are not necessarily fully supported.

Client-side Monitors
End-to-end transaction monitors provide end-user response times, hits per second, and transactions per second:
- Hits per Second
- HTTP Responses per Second

- Pages Downloaded per Second
- Throughput
- Transaction Response Time
- Transactions per Second (Passed)
- Transactions per Second (Failed)
- User-defined Data Points
- Virtual User Status
- Web Transaction Breakdown Graphs

Server Monitors
NT/UNIX/Linux monitors provide hardware, network, and operating system performance metrics, such as CPU, memory, and network throughput:
- NT server resources
- UNIX / Linux server monitor

Load Appliances Performance Monitors
- Antara.net

Application Deployment Solutions
- Citrix MetaFrame (available only for LoadRunner)

Network Monitors
- Network delay monitor: provides a breakdown of the network segments between client and server, as well as network delays.
- SNMP monitor: provides performance data for network devices such as bridges and routers.

Web Server Performance Monitors
Web server monitors provide performance data inside the web servers, such as active connections and hits per second:
- Apache
- Microsoft IIS
- iPlanet (NES)

Web Application Server Performance Monitors
Web application server monitors provide performance data inside the web application server, such as connections per second and active database connections:
- Allaire ColdFusion
- ATG Dynamo
- BEA WebLogic (via JMX)
- BEA WebLogic (via SNMP)
- BroadVision
- IBM WebSphere
- iPlanet Application Server
- Microsoft COM+ Monitor
- Microsoft Active Server Pages

- Oracle 9iAS HTTP Server
- SilverStream

Streaming Media Performance Monitors (available only for LoadRunner)
Streaming-specific monitors for measuring end-user quality on the client side:
- Microsoft Windows Media Server
- Real Networks RealServer

Firewall Server Resource Monitors
- CheckPoint FireWall-1

Database Server Resource Monitors
Database monitors provide performance data inside the database, such as active database connections, and help isolate performance bottlenecks on the server side:
- SQL Server
- Oracle
- DB2
- Sybase (available only for LoadRunner)

ERP Performance Monitors (available only for LoadRunner)
- SAP R/3 Monitor

Middleware Performance Monitors
These provide performance data inside a BEA Tuxedo application server, such as current requests in queue:
- Tuxedo
- IBM WebSphere MQ (MQSeries) (available only for LoadRunner)

In addition to these monitors, LoadRunner also supports user-defined monitors, which allow you to easily integrate the results from other measurement tools with LoadRunner data collection.

67. What is the current shipping version of LoadRunner?
7.8

68. How much memory is needed per user?
You can get some approximation of the memory needs by looking at the "LR 7.02 footprints.pdf" file located on the LoadRunner discussion group at groups.yahoo.com/group/LoadRunner/files.

69. How many users can I emulate with LoadRunner on a PC?
This depends greatly on the configuration of the PC (number of CPUs, CPU speed, memory, and operating system), the protocol(s) used, the size and complexity of the script(s), the frequency of execution (iteration pacing and think times), and the amount of logging.

70. What is the difference between LoadRunner and Astra LoadTest?
Astra LoadTest is another load testing tool from Mercury Interactive, built specifically for testing web applications, whereas LoadRunner supports web applications plus much more. Relative to LoadRunner, Astra LoadTest:
- supports only the HTTP and HTTPS protocols;
- uses the VBScript scripting language;
- has less functionality;
- is easier to learn;
- has a larger footprint (~5 MBytes);
- costs less.
Because it is built specifically for web applications, Astra LoadTest is the preferred tool for load testing web applications.

The exception is if the load testers are non-technical (a bad idea) or the load test project's budget is too limited to afford LoadRunner.

71. What is the relation between LoadRunner and Topaz?
Topaz is Mercury Interactive's line of products and hosted services for monitoring applications after deployment to production. The Topaz products are built with LoadRunner technology and use the same script recorder. Scripts built for load testing with LoadRunner can be used by Topaz for monitoring without modification.

72. How much does LoadRunner cost?
The main cost drivers for a LoadRunner license are the number of users to be simulated and the number and type of protocols used. You will need to talk to a sales representative to price out the various components. The total cost of LoadRunner typically runs from USD $50,000 to $100,000 or more. Maintenance cost is 18% of the total list price; the maintenance includes new LoadRunner releases, patches, phone support, and access to the support web site.

A quick-reference list of common LoadRunner questions:
1. What are the different Vuser types?
2. What are the different phases in LoadRunner?
3. What are the components of LoadRunner?
4. What are the changes you can make in run-time settings?
5. What is a scenario?
6. What is a transaction in LoadRunner?
7. What is the difference between iteration and Vusers?
8. What are the sections of a Vuser script?
9. What does vuser_init contain?
10. What does vuser_end contain?
11. What is a rendezvous point?
12. What are the 2 modes in LoadRunner?
13. Which analysis tools did you use?
14. Where do you set iterations for Vuser testing?
15. Did you use LoadRunner? What version?
16. Which tool did you use to record code in LoadRunner?
17. What is the relation between Response Time and Throughput?
18. Explain Throughput and Hits per Second.
19. Explain the configuration of your systems.
20. When did you decide to do load testing?
21. How did you plan the load? What are the criteria?
22. What is cross-scenario analysis?
23. Why do you create parameters?
24. What is correlation?
25. Why do you compare two scripts? How do you do that?
26. What are the different graphs in LoadRunner?
27. What are the different reports in LoadRunner?
28. When do you disable logging? When do you choose standard and extended logs?
29. What is the function of web_create_html_param?
30. How do you synchronize the scripts in LoadRunner?
31. What are the modes of logging in LoadRunner?
32. What are the types of extended logging?

33. How many times can you run a script?
34. How do we parameterize?
35. How do you plan a load test?
36. How do you identify the bottlenecks in a load test?
37. If the web server, database, and network are all fine, where could the problem be?
38. How would you know that it's a resource contention problem?
39. How did you report web server related issues?
40. How did you report database related issues?
41. What are the three main components in LoadRunner?
42. What are the three modes of LoadRunner logging?
43. Which mode would you use to debug a LoadRunner script?
44. What are the three types of extended log?
45. Where do you set how many times a LoadRunner script should repeat?
46. What steps would you use to replace data in a script with a parameter?
47. What are the LoadRunner for web correlation functions?
48. What is the LoadRunner for web statement used to perform a text check?
49. What is a rendezvous point?
50. What is think time? How do you change the threshold?
51. What is the difference between the standard log and the extended log?

Software Testing-Quality Assurance
What are the differences between QA and testing?
Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
Testing involves the operation of a system or application under controlled conditions and evaluating the results (e.g., 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.

Every software development, enhancement, or maintenance project includes some quality assurance activities. Even a simple, one-person development job has QA activities embedded in it, even if the programmer denies that "quality assurance" plays a part in what is to be done. Each programmer has some idea of how code should be written, and this idea functions as a coding standard for that programmer; programmers review their products to make sure they meet their personal standards. Similarly, each of us has some idea of how documentation should be written; this is a personal documentation standard, and we proofread and revise our documents accordingly. Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. Each programmer and writer tests or inspects his or her own work.

A project's formal QA program goes beyond these personal assurance processes: it involves planning and establishing project-wide standards, rather than relying on personal standards and processes. The extent and formality of project QA activities are decisions that the client, the project manager, and the QA department make based on their assessment of the project and its risks.

Software Quality Assurance involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards, and providing the software project and other appropriate managers with the results of these reviews and audits. The purpose of Software Quality Assurance is to provide management with appropriate visibility into the process being used by the software project and into the products being built.

Software Testing-About Quality Assurance
To establish the quality of the test process, we need to evaluate the test process itself against several properties of a quality standard. Each characteristic views the testing process from a different perspective.

a) Effectiveness: Given the amount of time and money available for testing, we have to prioritize the allocation of our testing effort. In the test strategy, the risks of the system under test are well defined, prioritized, and translated into test conditions and test cases using the right test techniques. By matching these test conditions to the requirements, the completeness of the test set is enhanced, and the importance of specific test results becomes apparent.

b) Efficiency: We want to test as much as possible in as short a time as possible. To do this, we must create a test set that realizes the coverage we need with a minimal number of test cases. We want to

eliminate test cases that overlap, since they have no added value. Instead, our goal is to ensure that every test case focuses on a unique aspect, or a set of related aspects. As with effectiveness, test techniques are very helpful tools in developing efficient testware.

c) Measurability: To determine whether a test has been successful, the test results must be measurable. This means that test cases and test scenarios should be written in such a manner that the result is quantifiable according to binary logic: the result of executing a test case should be either 1) passed or 2) not passed, and never 'maybe'.

d) Divisibility: There are many different aspects of an information system that could be tested. Not all aspects are equally important, especially not to different kinds of stakeholders. For example, end users are generally not concerned about software maintainability; their focus is on time-to-market. On the other hand, developers may be primarily interested in maintainability, as it defines their future workload. This means that there are different kinds of tests (white-box and black-box), which are undertaken by different stakeholders. It is important that the division of responsibilities concerning testing is made clear to all stakeholders before testing commences; otherwise completeness may not be achieved.

e) Maintainability: One of the main problems with testware is maintenance. After the initial project, the test set should be maintained so that it can be used for the next release of the information system being tested. Maintenance should be made as easy as possible, otherwise the test set will be neglected. Thus the structure of the test set should be as simple as possible and formally registered, so that if a specification changes, the affected test cases can also be noted. This not only means adding new test cases, but also removing those which are no longer needed. Maintainability has a big impact on reproducibility.

f) Reproducibility: Since we try to avoid developing throwaway software, we should also not develop throwaway testware. A lot of time and money is wasted when a test is not easily reproducible, since new tests must then be created. Because we want to ensure that our tests are replicable with the same test cases, a well-developed test set makes it possible to retest and regression test with a minimum of effort. Such tests are particularly well suited to automation, since a computer can, if required, repeat test cases again and again.

Importance of Software Testing Quality Assurance

Quality assurance is the planned and systematic set of activities that ensures that software processes and products conform to requirements, standards, and procedures. Processes include all of the activities involved in designing, developing, enhancing, and maintaining software. Products include the software, associated data, its documentation, and all supporting and reporting paperwork.

QA includes the process of assuring that standards and procedures are established and are followed throughout the software development lifecycle. Standards are the established criteria to which the software products are compared. Procedures are the established criteria to which the development and control processes are compared. Compliance with established requirements, standards, and procedures is evaluated through process monitoring, product evaluation, audits, and testing.

The three mutually supportive activities involved in the software development lifecycle are management, engineering, and quality assurance. Software management is the set of activities involved in planning, controlling, and directing the software project. Software engineering is the set of activities that analyzes requirements, develops designs, writes code, and structures databases. Quality Assurance ensures that the management and engineering efforts result in a product that meets all of its requirements.

GOALS OF QUALITY ASSURANCE

Software development, like any complex development activity, is a process full of risks. The risks are both technical and programmatic: risks that the software or website will not perform as intended or will be too difficult to operate/browse, modify, or maintain are technical risks, whereas risks that the project will overrun cost or schedule are programmatic risks. The goal of QA is to reduce these risks.

For example, coding standards are established to ensure the delivery of quality code. If no standards are set, there is a risk that the code will not meet the usability requirements and will need to be reworked. If standards are set but there is no explicit process for assuring that all code meets the standards, then there is a risk that the code base will not meet the standards. Similarly, the lack of an Error Management and Defect Life Cycle workflow increases the risk that problems in the software will be forgotten and not corrected, or that important problems will not get priority attention.

The QA process is mandatory in a software development cycle to reduce these risks and to assure quality in both the workflow and the final product. To have no QA activity is to increase the risk that unacceptable code will be released.

QA Activities and Deliverables Within the Delivery Lifecycle

Each of the five phases of the Project Delivery Lifecycle incorporates QA activities and deliverables that offset the risks of common project problems. This summary of the Project Delivery Lifecycle incorporates a high-level list of the QA activities and deliverables associated with each phase.

ASSESSMENT PHASE

The Assessment process consists of market research and a series of structured workshops in which the development and client teams participate to discuss and analyze the project objectives and develop a strategic plan for the effort. The products of these meetings, combined with market research, form the basis for the final output of the assessment: a tactical plan for realizing specific business and project objectives.

QA Deliverables
a) QA Editor submits revised and approved deliverable documents.

PLANNING PHASE

In the Planning phase, the team defines specific system requirements and develops strategies around the information architecture (static content and information flows) and the business functions that will be addressed.

QA Activities
a) Establishing Standards and Procedures: QA records the set requirements.
b) Planning (Test Matrix): QA develops a test matrix. QA confirms that all set requirements are testable and coincide with the project objectives.
c) Auditing Against Standards and Procedures: The QA editor edits the documents and confirms that they meet the objectives and the quality standards for documents.
d) Establishing Completion Criteria: QA records the completion criteria for the current phase.

QA Deliverables
a) QA submits an initial test matrix.
b) QA Editor submits revised and approved deliverable documents.

DESIGN PHASE

During the Design phase, the team identifies all of the necessary system components based on the requirements identified during the Assessment and Planning phases. The team then creates detailed design specifications for each component and for the associated physical data requirements.

QA Activities
a) Auditing Standards and Procedures: QA confirms that all designs meet the set requirements and notes any discrepancies. Additionally, QA identifies any conflicts or discrepancies between the final design of the system and the initial proposal for the system, and confirms that an acceptable resolution has been reached between the project team and the client.
b) Planning (QA Plan, QA Test Plan): QA begins developing the QA Plan, and revises the test matrix to reflect any changes and/or additions to the system.

QA Deliverables
a) QA presents the initial QA test plan.
b) QA submits a revision of the test matrix.

DEVELOPMENT PHASE

During the Development phase, the team constructs the components specified during the Design phase.

QA Activities
a) Planning (Test Cases): Using the test matrix, QA develops a set of test cases for all deliverable functionality for the current phase.
b) Prepare for Quality Assurance Testing.
c) QA confirms that all test cases have been written according to the guidelines set in the QA test plan.
d) Quality Assurance works closely with the Configuration Management group to prepare a test environment.

QA Deliverables
a) QA submits a set of Test Cases.
b) The QA environment is set up.
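The test matrix mentioned in the Planning and Development phases can be pictured as a simple requirements-to-test-cases mapping. The sketch below is illustrative only; the requirement and test case IDs are invented:

# Illustrative only: requirement and test case IDs are invented.
test_matrix = {
    "REQ-001": ["TC001", "TC002"],
    "REQ-002": ["TC003"],
    "REQ-003": [],  # recorded but not yet covered by any test case
}

# The QA planning check: every set requirement must be testable,
# i.e. mapped to at least one planned test case.
uncovered = [req for req, cases in test_matrix.items() if not cases]
if uncovered:
    print("Requirements without test cases:", uncovered)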

IMPLEMENTATION PHASE

In the Implementation phase, the team focuses on testing and review of all aspects of the system. The team will also develop system documentation and a training or market test plan in preparation for system launch.

QA Activities
a) QA Testing: QA executes all test cases in the QA testing cycle.

QA Deliverables
a) Test Results
b) Defect Reports

QA and the Project Delivery Lifecycle

COMMON PROJECT PROBLEMS
Like all software development activities, projects risk the following common technical and programmatic problems:
- Inaccurate understanding of the project requirements
- Inflexibility: inability to adapt to changing requirements
- Modules that do not work together
- Late discovery of serious project flaws
- Limited roll-back capabilities

ROOT CAUSES
Such problems often stem from the following root causes:
- Lack of communication
- Poor information work flow
- Lack of process for integration (the big picture)
- Lack of solid testing methodology
- Scant record of who changed what, when, or why
- Lack of processes
- Lack of standards and/or procedures

MISSING COMPONENTS
With the following programmatic components in place, the root causes of many common project problems may be corrected:
- Quality assurance
- Configuration management
- Version control
- Controlled testing environment
- Quality assurance testing
- Error management
- Standardized work flows for all of the components above

Software Testing-Quality Assurance-FAQ'S

15 point questions to plan for QA in your Project:
1) What is the scope of testing? (UI testing? Database testing? Multi-platform testing? API testing? Java classes testing? Test automation using a specific tool?)
2) What is the skill set required from the test team? (White Box skills or Black Box skills? What is the test tool? What trainings are required for building the skill set?)
3) What is the proposed architecture of the product (high-level architecture from the business perspective)?
4) Overview of the product functionality, with emphasis on the critical modules?
5) What is the test process followed by the client? Does the team need to follow the process of the client or use the software provider's test process?
6) What tailoring to the test process is required?
7) What are the specific tools used in the project (test automation tools, unit testing tools, build & deployment tools etc.)?
8) High-level overview of the test environment (what operating systems, what hardware, what databases, what app servers/web servers, what browsers, what software etc.)?
9) Who is responsible for test data? How do we generate test data?
10) What are the test reports / test deliverables due to each of the stakeholders (what templates, what format and what information to be included, frequency, who requires what report)?
11) What are the guidelines (guidelines for test coverage, guidelines for test case designing, guidelines for test case prioritizing, guidelines for test case execution, test automation guidelines, naming conventions etc.)?
12) What are the existing test cases that the customer has? What is the style? How is it designed? Does the team need to follow the same approach as that of the client, or could it make changes?
13) What is the build & deployment process? What is the involvement of the software provider's project team?
14) What is the issue/defect reporting process, and what are the relevant guidelines?
15) What are the customer contact points, and what is the escalation process? What is the communication protocol?

Software Testing-Quality Testing

QA TESTING
Software testing verifies that the software meets its requirements and that it is complete and ready for delivery.

OBJECTIVES OF QA TESTING
- Assure the quality of client deliverables.
- Design, assemble, and execute a full testing lifecycle.
- Report, document and verify code and design defects.
- Confirm the full functional capabilities of the final product.
- Confirm stability and performance (response time, etc.) of the final product.
- Confirm that deliverables meet client expectations/requirements.

PREPARING FOR QA TESTING
Prior to conducting formal software testing, QA develops testing documentation (including test plans, test specifications, and test procedures) and reviews the documentation for completeness and adherence to standards. QA confirms that:

- The test cases are testing the software requirements in accordance with test plans.
- The test cases are verifiable.
- The correct or "advertised" version of the software is being tested (by QA monitoring of the Configuration Management activity).

QA then conducts the testing in accordance with procedure, documents and reports defects, and reviews the test reports.

THE KEY TO PRODUCTIVE QA TESTING
It is crucial to recognize that all testing will be conducted by comparing the final product to the product's set requirements. Any functionality that does not meet the requirements will be recorded as a defect until a resolution is delivered. Product requirements must therefore state all functionality of the software and must be updated as changes are made.

TWELVE TYPES OF QA TESTING

1. Unit Testing (conducted by Development): Unit test case design begins after a technical review approves the high-level design. The unit test cases shall be designed to test the validity of the program's correctness. White box testing is used to test the modules and procedures that support the modules; the white box testing technique ignores the function of the program under test and focuses only on its code and the structure of that code. To accomplish this, a statement and condition technique shall be used. Test case designers shall generate cases that not only cause each condition to take on all possible values at least once, but that cause each such condition to be executed at least once. In other words:
- Each decision statement in the program shall take on a true value and a false value at least once during testing.
- Each condition shall take on each possible outcome at least once during testing.
(A small sketch of this technique appears after the Types of Verification discussion below.)

2. Configuration Management: The configuration management team prepares the testing environment.

3. Build Verification: When a build has met completion criteria and is ready to be tested, the QA team runs an initial battery of basic tests to verify the build.
- If all portions of the build pass for testing, the QA team will proceed with testing.
- If portions of the website are testable and some portions are not yet available, the QA team will proceed with testing of the available portions.
- If the build is not testable at all, then the QA team will reject the build, and the project manager, technical lead and QA team will reassign the build schedule and deliverable dates.

4. Integration Testing: Integration testing proves that all areas of the system interface with each other correctly and that there are no gaps in the data flow. The final integration test proves that the system works as an integrated unit when all the fixes are complete.

5. Functional Testing: Functional testing assures that each element of the application meets the functional requirements of the business as outlined in the requirements document/functional brief, system design specification, and other functional documents produced during the course of the project (such as records of change requests, feedback, and resolution of issues).

6. Non-functional Testing (Performance Testing): Non-functional testing proves that the documented performance standards or requirements are met. Examples of testable standards include response time and compatibility with specified browsers and operating systems. If the system hardware specifications state that the system can handle a specific amount of traffic or data volume, then the system will be tested for those levels as well.

7. Error Management: During the QA testing workflow, all defects will be reported using the error management workflow. Regular meetings will take place between QA, system development, interface development and project management to discuss defects, the priority of defects, and fixes.

8. Ad Hoc Testing: This type of testing is conducted to simulate actual user scenarios. QA engineers simulate a user conducting a set of intended actions and behaving as a user would in case of slow response, such as clicking ahead before the page is done loading.

9. Defect Fix Validation: If any known defects or issues existed during development, QA tests specifically in those areas to validate the fixes.

10. Regression Testing: Regression testing is performed after the release of each phase to ensure that there is no impact on previously released software. Regression testing cannot be conducted on the initial build because the test cases are taken from defects found in previous builds. Regression testing ensures that there is a continual increase in the functionality and stability of the software.

11. QA Reporting: QA states the results of testing, reports outstanding defects/known issues, and makes a recommendation for release into production.

12. Release into Production: If the project team decides that the build is acceptable for production, the configuration management team will migrate the build into production.

Software Testing-Testing Test Cases

- Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing: a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
- A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. You need to develop a test case for each test listed in the test plan.
- A Test Case will consist of information such as the requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
- Test cases should be written by a team member who understands the function or technology being tested, and each test case should be submitted for peer review.
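One possible way to hold the test case information listed above in a structured form is sketched below; the field names follow the bullets above rather than any formal standard, and the example values are invented:

# Illustrative only: field names follow the bullets above; values invented.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    objective: str
    prerequisites: list = field(default_factory=list)
    test_steps: list = field(default_factory=list)
    expected_outcome: str = ""
    test_environment: str = ""

tc = TestCase(
    case_id="TC001",
    objective="Verify login with a valid user",
    prerequisites=["User account exists", "Application is reachable"],
    test_steps=["Open login page", "Enter valid credentials", "Click OK"],
    expected_outcome="Home page is displayed",
    test_environment="Windows / Chrome",
)
print(tc.case_id, "-", tc.objective)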

Organizations take a variety of approaches to documenting test cases; these range from developing detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform the test and what data to use.

Most organizations prefer detailed test cases because determining pass or fail criteria is usually easier with this type of case. In addition, detailed test cases are reproducible and are easier to automate than descriptive test cases. This is particularly important if you plan to compare the results of tests over time, such as when you are optimizing configurations. On the other hand, detailed test cases are more time-consuming to develop and maintain, and test cases that are open to interpretation are not repeatable and can require debugging, consuming time that would be better spent on testing.

When planning your tests, remember that it is not feasible to test everything. Instead of trying to test every combination, prioritize your testing so that you perform the most important tests (those that focus on areas that present the greatest risk or have the greatest probability of occurring) first.

Once the Test Lead has prepared the Test Plan, the role of the individual testers starts with the preparation of Test Cases for each level of Software Testing (Unit Testing, Integration Testing, System Testing and User Acceptance Testing) and for each Module. To prepare these Test Cases, each organization uses its own standard template; an ideal template is provided below.

The name of the Test Case Document itself follows a naming convention, so that by seeing the name we can identify the Project Name, Version Number and Date of Release:

Project Name-----Test Cases-----Ver No-----Release Date

- The bolded words should be replaced with the actual Project Name, Version Number and Release Date. For e.g., Bugzilla Test Cases 1.2.0.3 01_12_04.
- On the top-left corner we have the company emblem, and we fill in details like Project ID, Project Name, Author of Test Cases, Date of Creation and Date of Release in this template.
- We maintain the fields Test Case ID, Test Case Name, Type of Test Case, Requirement Number, Version Number, Action, Expected Result, and then Cycle#1, Cycle#2, Cycle#3, Cycle#4 for each Test Case. Each Cycle is again divided into Actual Result, Status, Bug ID and Remarks.
- Test Case ID: To design the Test Case ID we also follow a standard. If a test case belongs to the application and is not specifically related to a particular Module, we number it TC001. If we are expecting more than one expected result for the same test case, we number them TC001.1, TC001.2, and so on. If a test case is related to a Module we name it M01TC001, and if the module has a sub-module we name it M01SM01TC001, so that we can easily identify to which Module and which sub-module it belongs.

One more advantage of this convention is that we can easily add new test cases without renumbering all the others, since the numbering is limited to that module.
- Test Case Name: This gives a more specific name, i.e. the name of the object to which the test case belongs, like a particular button or text box: for e.g., Login form, OK button. We can also give the navigation for this Test Case.
- Type of Test Case: It provides the list of the different types of Test Cases which are included in the Test Plan, like GUI, Functionality, Regression, Security, System, User Acceptance, Load, Performance etc. While designing Test Cases we select one of these options. The main objective of this column is that we can predict in total how many GUI or Functionality test cases there are in each Module, and based on this we can estimate the resources.
- Requirement Number: It gives the reference of the Requirement Number in the SRS/FRD to which the Test Case belongs. The advantage of maintaining it in the Test Case Document is that in the future, if a requirement changes, we can easily estimate how many test cases will be affected.
- Version Number: Under this column we specify the Version Number in which that particular test case was introduced, so that we can identify how many Test Cases there are for each Version.
- Action: This is a very important part of the Test Case because it gives a clear picture of what you are doing to the specific object. Based on the steps written here, we perform the operations on the actual application.
- Expected Result: This is the result of the above action. It specifies what the specification or user expects from that particular action. It should be clear, and for each expectation we sub-divide the Test Case, so that we can specify pass or fail criteria for each expectation.

Up to the above steps, we prepare the Test Case Document before seeing the actual application, based on the System Requirement Specification/Functional Requirement Document and Use Cases. After that, we send the document to the concerned Test Lead for approval. He reviews it for coverage of all user Requirements in the Test Cases and then approves it. Now we are ready for testing with this document, and we wait for the actual application.

Once the application arrives, we use the Cycle#1 fields. The number of Cycles is based on the organization: some organizations document three Cycles, some maintain the information for four Cycles. Only one Cycle is provided in this template, but you should add more cycles based on your requirements. Under each Cycle we have Actual, Status, Bug ID and Remarks.
- Actual: We test the actual application against each Test Case, and if it matches the Expected Result we record it as "As Expected"; otherwise we record what actually happened after performing those actions.
- Status: It simply indicates the Pass or Fail status of that particular Test Case. If Actual and Expected mismatch, then the Status is Fail; else it is Pass.
- Bug ID: This gives the reference of the Bug Number in the Bug Report, so that the Developer/Tester can easily identify the Bug associated with that Test Case. For Passed Test Cases the Bug ID should be null, and for Failed Test Cases the Bug ID should be the corresponding Bug ID in the Bug Report.
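The ID convention described above is mechanical enough to script. The following sketch is illustrative only and simply implements the naming rules as stated:

# Sketch of the ID convention described above: TC001 for application-
# level cases, M01TC001 for module cases, M01SM01TC001 for sub-module
# cases, and TC001.1, TC001.2 ... for multiple expected results.
def test_case_id(seq, module=None, submodule=None, expectation=None):
    parts = []
    if module is not None:
        parts.append(f"M{module:02d}")
        if submodule is not None:
            parts.append(f"SM{submodule:02d}")
    parts.append(f"TC{seq:03d}")
    case_id = "".join(parts)
    if expectation is not None:
        case_id += f".{expectation}"
    return case_id

print(test_case_id(1))                         # TC001
print(test_case_id(1, expectation=2))          # TC001.2
print(test_case_id(1, module=1))               # M01TC001
print(test_case_id(1, module=1, submodule=1))  # M01SM01TC001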

Test cases are:
- Effective: they find faults
- Exemplary: they represent others
- Evolvable: they are easy to maintain
- Economic: they are cheap to use

There are many test cases written, and here are some samples: Sample Test Cases for Calculator, Sample Test Cases To Verify The Functionality Of Home Page, Sample Test Cases for Login Button.

Software Testing-Testing Validation

Validation refers to a set of activities that ensure that software that has been built is traceable to the customer requirements. "Are we building the right product?" "Confirmation by examination and provisions of objective evidence that the particular requirements for a specific intended use are fulfilled."

There are several ways to accomplish validation, the most common being:

i) Inspection: Focused on meeting particular customer constraints. For example: an inspection of a machine to see that it will fit in the desired space, or an inspection of code modules to ensure their compliance with maintenance demands.

ii) Demonstration: Having the customer or a representative use the product to ensure it meets some minimum constraints (i.e., usability). For example: having pilots fly an aircraft before the customer signs off on the program. Demonstration can also be used to perform some acceptance tests where the product is running in the intended environment rather than in a test or development lab.

iii) Analysis: Using some form of analysis to validate that the product will perform as needed when demonstrating it is too costly, unsafe, or generally impractical. For example: using interpolation of performance load, based on the worst case that is feasible to generate, to validate a need that is more stringent than this worst case. If it can be shown that there is no scaling problem, this would be sufficient to validate the performance need.

iv) Prior data: When a component being used has already been validated for a previous project that had similar or stricter constraints. For example: using a well-known encryption component to meet security needs when the component has already been validated for tougher security requirements.
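Method iii (Analysis) can be made concrete with a small calculation. The sketch below extrapolates measured response times to a load that is impractical to generate; all figures are invented for illustration:

# Illustrative only: all figures are invented.
measured = [(100, 0.30), (200, 0.55), (400, 1.05)]  # (users, seconds)

# Fit a line t = a * users + b through the first and last measurements.
(u1, t1), (u2, t2) = measured[0], measured[-1]
a = (t2 - t1) / (u2 - u1)
b = t1 - a * u1

required_users, max_response = 1000, 3.0
predicted = a * required_users + b
print(f"predicted response at {required_users} users: {predicted:.2f}s")
# If the relationship stays linear (no scaling problem), the predicted
# 2.55s < 3.0s would be taken as evidence that the performance need is met.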

Early validation

Leaving validation until the end of the project severely increases the risk of failure; validation activities early in the project can reduce that risk. Early validation activities reveal:

i) Clarifications: Perhaps the most important purpose of early validation is to clarify the real meaning of requirements. The obvious cases are where requirements are incomplete. An issue is that no spec is totally complete, and it is assumed that the designer has a familiarity with the intended end-use environment. Early validation can get a response to various interpretations and provide more specifics in areas such as acceptable size, placement, or motion. One hint is extreme detail in the requirements, which may be a surrogate for "I want it to work like the old or another system."

ii) Drivers: Some requirements are more critical to the customer than others, and some have a larger cost or design impact on the product. With early validation you can uncover the customer's priorities and relate them to the development impact to identify the serious drivers.

iii) Additions: You can use early validation to discover and coordinate new requirements during the program. In other words, early validation of requirements can uncover missing requirements. Another use is to coordinate derived requirements with the customer.

iv) Hidden expectations: Discussions with the customer can reveal unstated expectations or assumptions about the design, particularly in a new environment that the designer is not familiar with. In this case, the need is often driven by the customer's lack of knowledge of the technologies being applied and their impact on the use of the product. However, the riskiest requirements are subjective ones; these include phrases such as "readable" or "user-friendly", or involve human interfaces in general.

Various approaches to early validation exist: very early validation of requirements closely parallels good requirements elicitation and analysis. Techniques for doing this include involving the user, site visits, or goal-based use cases.

Software Testing-Testing Verification

Verification refers to a set of activities that ensure that software correctly implements a specific function. "Are we building the product right?" "Confirmation by examination and provisions of objective evidence that specified requirements have been fulfilled."

Using the above definitions in software development: Validation, in its simplest terms, is the demonstration that the software implements each of the software requirements correctly and completely; in other words, the "right product was built." Verification is the activity which ensures the work products of a given phase fully implement the inputs to that phase, or "the product was built right."

Levels of Verification

There are four levels of verification:

i) Component Testing: Testing conducted to verify the implementation of the design for one software element (unit, module) or a collection of software elements.

ii) Integration Testing: An orderly progression of testing in which various software elements and/or hardware elements are integrated together and tested. This testing proceeds until the entire system has been integrated.

iii) System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.

iv) Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system.

Types of Verification

There are four types of verification that can be applied to the various levels outlined above:

i) Inspection: Typical techniques include desk checking, walkthroughs, software reviews, technical reviews, and formal inspections (e.g., the Fagan approach).

ii) Analysis: Mathematical verification of the test item, which can include estimation of execution times and estimation of system resources.

iii) Testing: Also known as "white box" or "logic driven" testing. Given input values are traced through the test item to assure that they generate the expected output values, with the expected intermediate values along the way. Typical techniques include statement coverage, condition coverage, and decision coverage.

iv) Demonstration: Also known as "black box" or "input/output driven" testing. Given input values are entered, and the resulting output values are compared against the expected output values. Typical techniques include error guessing, boundary-value analysis, and equivalence partitioning.

Explanation

The four methods for verification can be used at any of the levels, although some work better than others for a given level of verification. A logical approach to testing is to utilize the techniques and methods that are most effective at a given level. As an example, inspection is not applicable at the system level (you don't look at the details of code when performing system-level testing). On the other hand, the most effective way to find anomalies at the component level is inspection.

Component-level verification can easily get very expensive. Companies need to avoid making statements like "all paths and branches will be executed during component testing"; such statements make for a very expensive test program, as all code developed is then required to have one of the most labor-intensive types of testing performed on it. To minimize the costs of component verification, the V&V group develops rules for determining the type of verification method(s) needed for each of the software functions. As an example, a very low-complexity software function which is not on the safety-critical list may only need an informal inspection (walkthrough) performed. Other, complicated functions typically require white box testing, since it becomes difficult to determine how the functions work. We recommend performing inspections before doing the white box testing for a given module, as it is less expensive to find the errors earlier in development. The resulting V&V effort has become a significant part of the software development effort for a medical device.

One of the key pieces needed to demonstrate that the system is implemented completely is a Requirements Traceability Matrix (RTM), which documents each of the requirements traced to design items, code, and unit, integration and system test cases. The RTM is an easy and effective way of documenting what the requirements are, where they are implemented, and how you have tested them.
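To make the statement-and-condition idea from the white box discussions above concrete, here is a minimal Python sketch; the free_shipping function and its inputs are invented. The three cases give the decision both outcomes and give each individual condition each outcome at least once:

# Illustrative only: the function and its inputs are invented.
def free_shipping(total, is_member):
    if total >= 50 or is_member:  # one decision with two conditions
        return True
    return False

cases = [
    # (total, is_member, expected)
    (60, False, True),   # total >= 50 is True  -> decision True
    (10, True,  True),   # total >= 50 is False, is_member is True
    (10, False, False),  # total >= 50 is False, is_member is False
]
for total, member, expected in cases:
    assert free_shipping(total, member) == expected
print("decision and condition outcomes each covered at least once")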

version 1. Major part involves a total rewrite or rearchitecting of a software product.it is less expensive to find the errors earlier in the development.0 should be incompatible with version 1.6. major changes in design.8. The resulting V&V effort has become a significant part of the software development effort for a medical device. This number starts at 0 (zero). version 1. For example. Revision part:-A new revision part indicates a QFE (Quick Fix Engineering) release that is compatible with the previous version and that should be installed. You can also specify all four parts of the version number explicitly: All version numbering is based on external view. code.6. You're free to use whatever version numbering you'd like as you release new versions.2. unit. Minor part involves additions in features that require changes in documentation/external API. Build part:-A new build part indicates probable compatibility. version 2. 0 for the minor part. Version number has four parts: Major part . One of the key pieces to demonstrate that the system is implemented completely is a Requirements Traceability Matrix (RTM). This is any change that doesn't require documentation/external API changes. This means that each of the four parts can be any number in the range zero to 65. where are they implemented.13 might be a mandatory bug-fix upgrade to version 1. Version numbers are stored as a 128-bit number that is logically partitioned into four 32bit numbers. Setting the Version attribute to "1. and changes in platform fall into this category. which documents each of the requirements traced to design items.5. Changes in language.0. Getting more confusing by software companies all using different schemes.what are the requirements.0.12. Thus. Minor part :-A new Major or Minor part indicates that the new version is incompatible with the old one. However. not the internal production.0 is probably compatible with version 1. this is a number scheme that intended to reflect the release of a product. and how have you tested them. For example.7. and to come up with build and revision part numbers automatically. integration and system test cases. Software Testing-Versioning y y y y y Version numbering is a confusing topic. For example.0. You should change major version numbers whenever you introduce an incompatibility into your code. The RTM is an easy and effective way for documenting .536.*" tells use 1 for the major part.5. This number starts at 0 (zero).1.0.0. it's useful to establish standards for choosing new version numbers.0. This number starts at 1 (one). Meaning. recompilation does .5. Typically you should change minor version numbers when you introduce a service pack or a minor upgrade.

The terms "alpha" and "beta" shall not be used in the version number. These terms mean little these days, and they do not sort correctly (i.e. "1.0.0" sorts before "1.0.0 alpha", but should sort after it).
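The compatibility rules described above can be sketched in a few lines of code. This is an illustration of the scheme as stated in the text, not any particular platform's API:

# Sketch of the four-part scheme described above; the compatibility
# rules follow the text: Major/Minor change = incompatible, Build
# change = probably compatible, Revision change = QFE bug fix.
from typing import NamedTuple

class Version(NamedTuple):
    major: int
    minor: int
    build: int
    revision: int

    @classmethod
    def parse(cls, text):
        return cls(*(int(part) for part in text.split(".")))

def compatibility(old, new):
    if (new.major, new.minor) != (old.major, old.minor):
        return "incompatible"
    if new.build != old.build:
        return "probably compatible"
    if new.revision != old.revision:
        return "compatible (QFE bug-fix release)"
    return "same version"

old = Version.parse("1.6.0.12")
print(compatibility(old, Version.parse("2.0.0.0")))   # incompatible
print(compatibility(old, Version.parse("1.6.1.0")))   # probably compatible
print(compatibility(old, Version.parse("1.6.0.13")))  # compatible (QFE)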

Software Testing Development Life Cycle
Life Cycle of Software Testing Process

The following are some of the steps to consider:


- Obtain requirements, functional design, and internal design specifications and other necessary documents
- Obtain schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk aspects, set priorities, and determine scope and limitations of tests
- Determine test approaches and methods: unit, integration, functional, system, load, usability tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks and those responsible for tasks
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes (see the sketch after this list)
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through the life cycle
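For the equivalence class and boundary value step above, a small sketch may help; the accepted age range of 18 through 65 is an invented example:

# Illustrative only: the field and its 18-65 range are invented.
LOW, HIGH = 18, 65

equivalence_classes = {
    "below range (invalid)": 10,
    "inside range (valid)": 40,
    "above range (invalid)": 80,
}

# Classic boundary values: each edge plus its immediate neighbours.
boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

def accepts(age):
    return LOW <= age <= HIGH

for label, value in equivalence_classes.items():
    print(f"{label:>22}: accepts({value}) -> {accepts(value)}")
print("boundary inputs:", boundary_values)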

[Figure: the basic diagram of the Software Testing Development Life Cycle appears here in the original.]

Glossary of Terms Of Software Engineering
In this section we go through the glossary of Software Engineering terms.

ACCEPTANCE CRITERIA: The criteria that the software component, product, or system must satisfy in order to be accepted by the customer.

ACCEPTANCE PROCESS: The process used to verify that a new or modified software product is fully operational and meets the customer's requirements.

ACCEPTANCE TESTING: Formal testing conducted by the customer to determine whether or not a software product or system satisfies the documented acceptance criteria. Successful completion of acceptance testing defines the point at which the customer will accept the product as a successful implementation.

ACTIVITY: A major unit of work to be completed in achieving the objectives of a software project. An activity incorporates a set of tasks to be completed, consumes resources, and results in work products. An activity may contain other activities in a hierarchical manner. All project activities are described in the Project Plan.

ACTOR: A person or system that interacts with the software application in support of a specific process or to perform a specific operation or related set of operations.

ALGORITHM: A set of well-defined rules for the solution to a problem in a finite number of steps.

ANOMALY: A nice word for "bug." Anything observed in the operation of software that deviates from expectations based on design documentation or user references.

APPLICATION: One or more software executables designed to fulfill a specific set of business functions individually or in cooperation with other applications.

ASSESSMENT: A formal examination of a deliverable, generally by a quality assurance reviewer, for the presence of a specific set of attributes and structural elements. An assessment is not an in-depth examination of content, as the content of a deliverable may be outside the reviewer's domain of expertise.

ASSUMPTION: A condition that is generally accepted as truth without proof or demonstration.

ATTRIBUTE: A piece of information describing part of a particular entity.

AUDIT: An independent examination of software or software documentation to assess compliance with predetermined criteria.

AUTHENTICATION: The ability of each party in a transaction to verify the identity of the other parties.

BANDWIDTH: The capacity of a communications channel.

BASELINE: A set of software components and documents that has been formally reviewed and accepted, that serves as the basis for further development or current production, and which can be changed only through formal change control procedures.

BATCH PROCESSING: A method of collecting and processing data in which transactions are accumulated and stored until a specified time when it is convenient or necessary to process them as a group.

BUSINESS PROCESS COMPLEXITY: A project risk factor that takes into consideration the complexity of the business process or processes under automation. Project risk is considered low when all processes involve fairly simple data entry and update operations. Project risk is considered medium when a minority of the business processes under automation are complex, involving multiple steps, exchanges with external systems or significant validation/processing logic. Project risk is considered high when a majority of the business processes under automation are considered to be complex.

BUSINESS PROCESS MATURITY: A project risk factor that takes into consideration the maturity and stability of the business process or processes to be automated. Project risk is considered low when standard business processes that have been stable and in place for a significant period of time are being automated. Project risk is considered medium when one or more nonstandard but stable business processes, generally unique to the customer's situation, are being automated. Project risk rises significantly when the development team is attempting to automate one or more new or unusual business processes.

BUSINESS PROCESSES: The unique ways in which organizations coordinate and organize work activities, information, and knowledge to produce a product or service. For example, in a sales environment, the information used and steps taken to record a new customer order is considered a business process.

BUSINESS RULE: A logical or mathematical test that determines whether data entered in a database complies with an organization's method of conducting its operations. Generally implemented as a logical or mathematical test or calculation.

CLIENT: 1. The user point-of-entry for an application, normally a software executable residing on a desktop computer, workstation, or laptop computer. The user generally interacts directly only with the client, using it to input, retrieve, analyze and report on data. 2. A device or application that receives data from or manipulates a server device or application.

CODE REVIEW: A meeting at which source code is presented for review, comment, or approval.

COMPONENT: One of the parts that make up a system. A component may be hardware, software, or firmware and may be subdivided into other components.

COMPUTER SOFTWARE: Detailed, pre-programmed instructions that control and coordinate the work of computer hardware and firmware components in an information system.

COMPUTER-AIDED SOFTWARE ENGINEERING (CASE): The automation of step-by-step methodologies for software and systems development to reduce the amount of repetitive work required of the analyst or developer.

CONFIGURATION MANAGEMENT: A process that effectively controls the coordination and implementation of changes to software components.

CONSTRAINT: A restriction, limitation, or regulation that limits a given course of action.

CONTEXT DIAGRAM: An overview data flow diagram depicting an entire system as a single process with its major inputs and outputs.

CONVERSION: The process of changing from the old system to the new system.

CRITICAL SUCCESS FACTORS (CSFs): A set of specific operational conditions shaped by the business environment that are believed to significantly impact the success potential of an organization or business function. In a software development effort, critical success factors are composed of assumptions and dependencies that are generally outside the control of the development team.

CUSTOMER RESOURCES: The number of subject matter experts for each Use Case (UC) in an application under development. This project risk factor is considered low when more than one SME is available per UC. A high risk ensues when outside SMEs are involved with a software development effort.

DATA: Streams of raw facts representing events before they have been organized and arranged into a form that people can understand and use.

DATA DICTIONARY: A structured description of database objects such as tables, indexes, views and fields, with further descriptions of field types, default values and other characteristics.

DATA ENTITY: A data representation of a real-world object or concept, usually represented as a row in a database table, such as information about a specific Product in inventory.

DATA FLOW DIAGRAM: A primary tool in structured analysis that graphically illustrates a system's component processes and the flow of data between them.

DATA TYPE: A description of how the computer is to interpret the data stored in a particular field. Data types can include text or character string data, integer or floating point numeric data, dates, date/time stamps, true/false values, or Binary Large Objects (BLOBs) which can be used to store images, video, or documents.

DATABASE: A set of related data tables and other database objects, such as a data dictionary, that are organized as a group; a collection of data organized to service many applications at the same time.

DATABASE ADMINISTRATOR: Person(s) responsible for the administrative functions of

databases, such as system security, user access, performance and capacity management, and backup and restoration functions.

DATABASE MANAGEMENT SYSTEM (DBMS): Software used to create and maintain a database and enable individual business applications to extract the data they need without having to create separate files or data definitions for their own use.

DATABASE OBJECT: A component of a database, such as a table or view.

DEFAULT: An initial value assigned to a field by the application when a new database record is created. Used to facilitate data entry by pre-entering common values for the user.

DELIVERABLE: A specific work product, such as requirements or design documentation, produced during a task or activity to validate successful completion of the task or activity. Sometimes, actual software is delivered.

DESIGN ELEMENT: A specification for a software object or component that fulfills, or assists in the fulfillment of, a functional element. A part of the system design specification.

DESIGN STAGE: A stage in the software development lifecycle that produces the functional and system design specifications for the application under development.

DEVELOPER SKILLS/RESOURCES: The availability of developers and other resources with appropriate skills is a significant factor in project success. When developers and resources are readily available, the likelihood of project success is very high. Most development firms manage multiple projects, allowing some contention between projects for developers and other resources. This project risk factor is considered high when one or more developers with specific skill sets, or resources with specific capabilities, need to be acquired before the project can continue.

DOCUMENTATION: Information made available to: 1) assist end-users in the operation of a software application, generally in the form of on-line help, or 2) assist developers in locating the correct root procedure or method for a specific software function, generally in the form of an implementation map. Note that printed manuals are rarely delivered with software anymore; online documentation is more consistently available from within the application and is easier to use.

ENCRYPTION: The coding and scrambling of messages to prevent unauthorized access to or understanding of the data being stored or transmitted.

END-USER REVIEW: The review of a deliverable for functional accuracy by a Subject Matter Expert who is familiar with the software product under design or development.

ENTITY: A collection of attributes related to and describing a specific subject, such as Products.

ENTITY-RELATIONSHIP DIAGRAM: A diagram illustrating the relationship between various entities in a database.

EXECUTABLE: A binary data file that can be run by the operating system to perform a specific set of functions. In Windows, executables carry the extension .EXE and can be launched by double-clicking on them.

EXTERNAL INTERFACE: In database applications, an external interface is a defined process and data structure used to exchange data with other systems. For example, an order processing application may have an interface to exchange data with an external accounting system.

EXTERNAL INTERFACE COMPLEXITY: The level of complexity associated with an external interface. A simple interface is generally unidirectional, with limited, stable logic defining the structure of the exchanged data. A standard export from a database to a spreadsheet is considered a simple interface.
A complex interface may be bi-directional, or may have extensive, adaptive logic defining the structure of the exchanged data. The transmission of labor

data to a corporate payroll system, with its attendant validation and transaction confirmation requirements, is considered a complex interface.

FEASIBILITY STUDY: A process that determines whether the solution under analysis is achievable, given the organization's resources and constraints.

FIELD: Synonym for a data element that contains a specific attribute's value; a single item of information in a record or row.

FOCUS: The application object to which the user-generated input (usually keyboard and mouse) is directed.

FOREIGN KEY: A field or set of fields in a table whose value must match a primary key in another table when joined with it.

FORM: A screen formatted to facilitate data entry and review. Utilizes data entry fields, option selection tools, and control objects such as buttons and menu items.

FUNCTIONAL AREA: Any formally organized group focused on the development, execution, and maintenance of business processes in support of a defined business function.

FUNCTIONAL DESIGN STAGE: That stage of the software development lifecycle that focuses on the development and validation of designs for architecture, software components, data and interfaces. Often combined with the system design stage into a single stage for smaller applications.

FUNCTIONAL ELEMENT: A definition that specifies the actions that a software component, product, or system must be able to perform.

FUNCTIONAL TESTING: Also known as end-user testing. Testing that focuses on the outputs generated in response to selected inputs and execution conditions.

FUNCTION POINT ANALYSIS: A software measurement process that focuses on the number of inputs, outputs, queries, tables, and external interfaces used in an application. Used for software estimation and assessment of developer productivity.

GROUP: During report generation, one or more records that are collected into a single category, usually for the purpose of totaling. Also used to identify a collection of database users with common access privileges.

HARDWARE: Physical computer equipment and peripherals used to process, store, or transmit software applications or data.

HIERARCHICAL MENU: A menu with multiple levels, consisting of a main menu bar that leads to one or more levels of sub-menus from which choices or actions are made.

HYPERTEXT MARKUP LANGUAGE (HTML): A markup language that uses hypertext to establish dynamic links to other documents stored in the same or remote computers.

IMPLEMENTATION ELEMENTS: Specific software components created to fulfill a specific function defined in the functional and system design documents.

IMPLEMENTATION STAGE: A stage in the software development lifecycle during which a software product is created from the design specifications and testing is performed on the individual software units produced.

INCREMENTAL DEVELOPMENT: A software development technique where multiple small software development lifecycles are used to develop the overall software product in a modular fashion.

INDEX: A specialized data structure used to facilitate rapid access to individual database records or groups of records.

INFORMATION: Data that has been shaped into a form that is meaningful and useful to humans.

INFORMATION SYSTEM: Interrelated components working together to collect, process, store, and disseminate information to support decision-making, coordination, control, analysis, and/or visualization in an organization.

INHERITANCE: A feature of object-oriented programming where a specific class of objects receives the features of a more general class.

INITIAL DATA LOAD: When a new database application is first brought online, certain sets of data are preloaded to support operations. In some cases, a large amount of data is transferred from one or more legacy systems that the new database application is replacing. The initial data load figure is calculated as the sum of all records in operational and support data areas on day zero of the application's production lifecycle. This figure is used as a baseline for estimating development effort, server hardware requirements and network loads.

INSPECTION: Also termed desk checking. A quality assurance technique that relies on visual examination of developed products (usually source code or design documentation) to detect errors, violation of development standards, and other problems.

INSTALLATION STAGE: A software lifecycle stage that consists of the testing, training, and conversion efforts necessary to place the developed software application into production.

INTEGRITY: The degree to which a software component or application prevents unauthorized access to, or modification of, programs or data.

INTERFACE: A formal connection point defined between two independent applications for the purpose of data exchange.

INTERFACE TESTING: A testing technique that evaluates whether software components pass data and control correctly to one another.

INTERSECTION: A group of data elements included in two or more tables as part of a Join operation.

JOIN: A database operation or command that links the rows or records of two or more tables by one or more columns in each table.

JOINT APPLICATION DESIGN (JAD): A design technique that brings users and IT professionals into a facilitated meeting for the purpose of interactively designing an application.

KEY FIELD: A field used to identify a record or group of records by its value.

KEY PROCESS AREA: A software engineering process identified by the Software Engineering Institute Capability Maturity Model as essential to an organization's ability to develop consistently high-quality software products.

KILOBYTE (KB): One thousand bytes (actually 1024 storage positions). A unit of computer storage capacity.

KNOWLEDGE MANAGEMENT: The process of systematically managing and leveraging the stores of knowledge in an organization. This knowledge is generally stored as sets of documents or database records.

LIFECYCLE: A set of software development activities, or stages, that function together to guide the development and maintenance of software products. Each stage is finite in scope, requires a specific set of inputs, and produces a specific set of deliverables.

MAINTENANCE: The process of supporting production software to detect and correct faults, optimize performance, and ensure appropriate availability to end-users.

MASTER TABLE: A table containing data on which detail data in another table depends. Master tables have a primary key that's matched to a foreign key in a detail table and often have a one-to-many relationship with detail tables.

MEGABYTE (MB): Approximately one million bytes. Used as a measure of storage capacity.

MEGAHERTZ (MHZ): A measure of the clock speed of the CPU in a computer. One megahertz equals one million cycles per second.

METADATA: Data that describes the structure, organization, and/or location of data. In essence, metadata is "data about data."

METHODOLOGY: A set of processes, procedures, and standards that defines an engineering approach to the development of a work product.

METRICS: Numeric data representing measurements of business processes or database activity.

MILESTONE: In project management, a scheduled event of significance for which an individual or team is accountable. Often used to measure progress.

MODEL: An abstract representation that illustrates the components or relationships of a specified application or module.

MODULE: A functional part of an application that is discrete and identifiable with a specific subject.

MODULE TESTING: The process of testing individual software modules or sets of related modules to verify the implementation of the software.

MULTIUSER: Concurrent access to a single database by more than one user.

NORMALIZATION: The process of creating small stable data structures from complex groups of data during the design of a relational database.

OBJECT CODE: Program instructions that have been translated into machine language so that they can be executed by the computer.

OFFICE AUTOMATION SYSTEM (OAS): A combination of software applications, such as word processing, electronic mail, and calendaring, that is designed to increase the productivity of data workers in the office.

ONLINE ANALYTICAL PROCESSING (OLAP): A technology that operates on non-relational, multidimensional databases often known as data cubes. Data cubes are often created via specialized processing from relational databases. The objective of OLAP is to allow the end user to perform highly flexible analysis and reporting.

ONLINE TRANSACTION PROCESSING (OLTP): OLTP most commonly refers to large-scale database applications, such as order entry and payroll systems, which use transaction processing to assure data integrity, usually through the use of client workstations.

OPEN DATABASE CONNECTIVITY (ODBC): A set of software drivers and database functions that allow different applications to access client/server RDBMSs, desktop database files, text files, and spreadsheets for the purpose of exchanging and manipulating database information.

OPERATING SYSTEM: System software that manages and controls the activities of the computer. The operating system acts as the interface between applications and the computer hardware.

OPERATIONAL DATA AREA: A module in a database application that supports the maintenance of data associated with a major operational process. For example, in an order entry system, the maintenance of customer data and the entry of orders would be considered operational data areas.

OPERATIONAL TRANSACTION LOAD: The quantity of transactions per unit of time, for all users, in all operational data areas of a database application. This figure is commonly expressed as the number of transactions per day for all operational data areas, and is used to estimate server and network capacity requirements.

ORGANIZATION: A formally defined structure that takes resources from the environment and processes them to produce outputs.
OUTER JOIN: A SQL Join operation in which all rows of the joined tables are returned, whether or not a match is made between columns.
OUTER QUERY: A synonym for the primary query in a statement that includes a subquery.
PARAMETER: A value passed to an application to direct performance. For example, a query could be set up with a parameter that limits the returned records to those falling after a specific date. Changing the value of the parameter changes the returned selection of records.
PEER REVIEW: See Technical Review.
PERMISSIONS: A synonym for privileges.
PLANNING STAGE: The first stage in the software development lifecycle. During the planning stage, the needs and expectations of the customer are identified, the feasibility of the project is determined, and the Project Plan is developed.
PRIMARY KEY: A field or fields whose individual or combined values uniquely identify a record in a database.
PRIVILEGES: The authorities assigned to an end user by the database administrator or database owner to perform operations on data objects.
PROCEDURE: A written description or diagram of a course of action to be taken to perform a given task.
PRODUCTION: The time period after the new system is installed and any data conversion efforts are complete. The system is now being used for normal operations.
PROJECT: A concerted effort that is focused on developing or maintaining a specific software product or system. A project has a fixed scope.
PROJECT MANAGER: The individual with total business responsibility for all activities of a project, including resource utilization and schedule management.
PROJECT PLAN: A document that describes a technical and management approach to be followed for a project. The plan typically describes the methods to be used, the structure of the activities, and the project structure and initial schedule.
PROTOTYPING: The process of building an experimental system quickly and inexpensively for demonstration and evaluation so that end-users can better determine the requirements of an application.
PSEUDOCODE: A combination of programming language constructs and natural language used to define an algorithm or business rule. Pseudocode is often used as a communications bridge between end-users and analysts or programmers.
QUALITY: Satisfaction of customer criteria; conformance to design specifications.
QUALITY ASSURANCE ASSESSMENT: The assessment of a deliverable for the presence of required internal and supporting elements. Generally performed by a Quality Assurance Reviewer who is not a member of the development team or end user base.
QUERY: A statement structured to direct the retrieval or manipulation of data in a database.
RECORD: A group of related fields. A single row of a relational database table that contains each field defined for the table.
RELATIONAL DATABASE MANAGEMENT SYSTEM (RDBMS): An RDBMS is a database management application that can create, organize, and store data. The RDBMS treats data as if they were stored in two-dimensional tables. It can relate data stored in one table to data in another as long as the two tables share a common data element.
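The Parameter and Query entries work together in practice. Below is a minimal sketch using Python's standard sqlite3 module; the table, data, and cutoff date are invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no INTEGER, placed_on TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(100, "2008-01-15"), (101, "2008-06-20")])

# The date below is the parameter: changing its value changes the
# returned selection of records without changing the query text.
cutoff = "2008-03-01"
rows = conn.execute(
    "SELECT order_no FROM orders WHERE placed_on > ?", (cutoff,)
).fetchall()
print(rows)  # [(101,)]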

REFERENTIAL INTEGRITY: A set of rules governing the relationships between parent and child tables within a relational database that ensures data consistency.
REGRESSION TESTING: Structured retesting of a software component or application to verify that any modifications made have not caused unintended effects and that the software still complies with its specified requirements.
RELEASE VERSION: A software application or component that has been tested, found to be in compliance with design documentation, and placed into production. See Production.
RELIABILITY: The ability of a software application or component to perform its required functions under design-compliant conditions for a specified period of time.
REQUIREMENT: A condition or capability needed by the customer to solve a problem or achieve an objective. This condition or capability must be met or possessed by the developed software before it will be accepted by the customer.
REQUIREMENTS SPECIFICATION: A deliverable that specifies the manual and automated requirements for a software product in non-technical language that the customer and end-users can understand. The requirements specification focuses on what functions the application is to perform, not how those functions will be executed.
REQUIREMENTS STAGE: A stage in the software lifecycle that immediately follows the planning stage. During this stage, the requirements for a software product are defined and documented.
REQUIREMENTS TRACEABILITY MATRIX (RTM): A table or spreadsheet describing the relationships between application requirements, functional elements, design elements, implementation elements, and test cases. The RTM acts as a bridge between the different stages of the software development lifecycle, allowing implemented and tested elements to be traced all the way back to specifications.
RETIREMENT: Permanent removal of an application or software system from its operational environment.
REUSABILITY: The degree to which a software application, component, or other work product can be used in more than one computer program or software system.
REVERSE ENGINEERING: The process of examining an existing application that has characteristics that are similar to a desired application. Using the existing application as a guide, the requirements for the new application are defined and analyzed. From this point, the specifications are altered to comply with any new customer requirements and the new application is developed.
REVIEW: The examination of a deliverable for specific content by a reviewer with expertise in the domain of the deliverable. The reviewer examines the content for correctness, consistency, readability, completeness, accuracy, and testability.
RISK: The possibility of suffering loss.
RISK MANAGEMENT: An approach to problem analysis that is used to identify, prioritize, analyze, and control risks.
RULE: A specification that determines the data type and data value that can be entered in a column of a table. Rules are classified as validation rules and business rules.
SECURITY: Policies, procedures, and technical measures used to prevent unauthorized access, alteration, or destruction of information systems or data.
SELF-JOIN: A SQL Join operation used to compare values within the columns of one table. Self-joins join a table with itself, requiring that the table be assigned two different names, one of which must be an alias.
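The Self-Join entry's alias requirement is easiest to see in a runnable form. This is a small sketch (invented table and data) using Python's standard sqlite3 module:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT, manager_id INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, "Pat", None), (2, "Lee", 1), (3, "Kim", 1)])

# The table appears twice under two different names; "e" and "m" are
# the aliases the Self-Join definition requires.
rows = conn.execute("""
    SELECT e.name, m.name
    FROM emp AS e JOIN emp AS m ON e.manager_id = m.id
""").fetchall()
print(rows)  # [('Lee', 'Pat'), ('Kim', 'Pat')]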

SIMULTANEOUS USERS: A quantity of users and/or external systems connected to a multiuser database application for the purpose of exchanging and/or maintaining data.
SOFTWARE: Computer programs, procedures, and associated documentation pertaining to the operation of an application. The detailed instructions that control the operation of a computer system.
SOFTWARE DEVELOPMENT LIFECYCLE (SDLC): A set of activities, referred to as stages, arranged to produce a software product. The generally accepted stages for software development are Planning, Requirements, Design, Development, Integration & Test, Installation & Acceptance, and Maintenance.
SOFTWARE METRICS: Objective assessments (in the form of quantified measurements) of a software application or component.
SOFTWARE QUALITY ASSURANCE (SQA): A process designed to provide management with appropriate visibility into the software engineering processes being used by the project team. A formal process for evaluating the work products produced during the software development lifecycle.
SOURCE CODE: Software programming instructions written in a language readable by humans that must be translated into machine language before it can be executed by the computer.
SPECIFICATION: Documentation that describes software requirements, designs, or other characteristics in a complete, precise, and verifiable manner.
SPIRAL DEVELOPMENT MODEL: An iterative version of the waterfall software development model. Rather than producing the entire software product in one linear series of steps, the spiral development model implies that specific components or sets of components of the software product are brought through each of the stages in the software development lifecycle before development begins on the next set. Once one component is completed, the next component in order of applicability is developed. Often, a dependency tree is used where components that have other components dependent upon them are developed first.
STAGE: A partition of the software development cycle that represents a meaningful and measurable set of related tasks which are performed to obtain specific work products, or deliverables.
STAKEHOLDERS: Those individuals with decision-making authority over a project or group of projects.
STANDARD OPERATING PROCEDURES (SOPs): Precise, defined rules and practices developed by an organization to consistently manage normal operations.
STANDARDS: Approved reference models and protocols as determined by standard-setting groups to prescribe a disciplined, uniform approach to software development and maintenance.
STRUCTURED ANALYSIS: A top-down method for defining system inputs, processes, and outputs to build models of software products or systems. The four basic features in structured analysis are data flow diagrams, data dictionaries, procedure logic descriptions, and data store descriptions.
STRUCTURED QUERY LANGUAGE (SQL): A standard data definition and data manipulation language for relational database management systems.
SUBJECT MATTER EXPERT (SME): A person, generally a customer staff member, who is considered to be an expert in one or more operational processes that are the focus of an automation effort. SMEs are generally the primary sources of application requirements, and play very significant roles in the requirements, design, and testing stages of the software development lifecycle.
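The dependency tree mentioned in the Spiral Development Model entry can be made concrete with a short ordering sketch. This is a simplified illustration with invented component names, and it assumes the dependency tree has no cycles:

# Each component lists the components it depends on.
deps = {
    "reports": {"orders"},
    "orders": {"customers"},
    "customers": set(),
    "billing": {"orders", "customers"},
}

order = []
while deps:
    # Pick every component whose dependencies are all already built;
    # components that others depend on therefore come out first.
    ready = [c for c, needed in deps.items() if needed <= set(order)]
    if not ready:
        raise ValueError("dependency cycle detected")
    order.extend(sorted(ready))
    for c in ready:
        del deps[c]

print(order)  # ['customers', 'orders', 'billing', 'reports']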

SUBQUERY: Any SQL Select statement that's included (nested) within another Select, Insert, Update, or Delete statement, or nested within another subquery.
SUPPORT ACTIVITIES: Activities that make the delivery of the primary services or products of a firm possible. These activities typically focus on support of the organization's infrastructure, human resources, technologies, and logistics.
SUPPORT DATA AREA: A module in a database application that supports the maintenance of data used primarily for reference and support of operational data areas. For example, in an order entry system, the maintenance of lists of customer business types or order shipment methods would be considered support data areas.
SYSTEM: A collection of hardware, software, firmware, and documentation components organized to accomplish a specific function or set of related functions.
SYSTEM DESIGN STAGE: A stage in the software development lifecycle during which the requirements for the software product architecture, components, interfaces, and data structures are refined and expanded to the extent that the design is sufficiently complete to be implemented.
SYSTEM OWNER: The organizational unit or person that provides funding and has approval authority for the project. Also referred to as the customer or client. Typically, system owners are also system users.
SYSTEM SOFTWARE: Specialized programs that manage the resources of the computer, such as the central processor, communication links, and peripheral devices.
SYSTEM TESTING: Testing of the application or information system as a whole to determine if discrete modules will function together as planned and to evaluate compliance with specified requirements.
SYSTEMS ANALYSIS: The analysis of a problem that the organization will try to solve with an information system.
SYSTEMS ANALYSTS: Specialists who translate business problems and requirements into information systems requirements, acting as a bridge between the information systems department, the system owner, and end-users.
TABLE: A database object consisting of a group of rows (records) divided into columns (fields) that contain data or Null values. A table is treated as a database device or object.
TASK: The smallest accountable unit of work. A task is the lowest level of work division typically included in the Project Plan and Work Breakdown Structure. Related tasks are usually grouped to form activities.
TECHNICAL FEASIBILITY: Determines whether a proposed application can be implemented with the available hardware, software, and technical resources.
TECHNICAL REVIEW: The review of a deliverable for technical accuracy by a qualified developer who is familiar with the software product under design or development.
TECHNOLOGY: Software and hardware tools and services used to develop and support a database application. In general, a development team working with well-known, mature tools is substantially more likely to succeed than a team working with newly released, unfamiliar technology. Caveat: in rare cases, unique performance requirements dictate the use of new technology for the project to have any chance of success.
TEST CASE: A defined set of database records, test inputs, execution conditions, and anticipated results designed to exercise specific application components and verify compliance with design criteria and requirements. Contains detailed instructions for the set up, execution, and evaluation of the results for the test case.
TESTBED: A specific set of hardware, software, instrumentation, simulators, and other support elements needed to conduct a test of a database application.
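A test case's anatomy (defined inputs, execution conditions, and anticipated results) maps directly onto code. Here is a minimal sketch using Python's standard unittest module; the function under test and its discount rule are invented for illustration:

import unittest

def apply_discount(total, rate):
    """Function under test: hypothetical order-total discount rule."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(total * (1 - rate), 2)

class DiscountTestCase(unittest.TestCase):
    def test_expected_output_for_known_input(self):
        # Defined input, execution conditions, and anticipated result.
        self.assertEqual(apply_discount(200.0, 0.1), 180.0)

    def test_invalid_rate_is_rejected(self):
        # Verify the component rejects input outside design criteria.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 1.5)

if __name__ == "__main__":
    unittest.main()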

TEST ITEM: A software component that is the object of a test case.
TEST PLAN: A document that defines the preparations, test items, test data, and test cases for the series of tests to be performed on an application.
TEST REPORT: A document that contains a chronological record of the execution and results of the testing carried out for a software component or application.
TESTING STAGE: Also often referred to as the test and acceptance stage. A stage in the software development lifecycle where the components of a software product are executed under specified conditions, the results are observed and recorded, and an evaluation is made to determine whether or not the requirements have been met.
THREE-TIER: The architecture of a database application consisting of a front end, application server, and database server. The front end handles user input and output, the application server serves front end components and handles the business rules, while the database server manages the associated data storage as directed by the application server or front end client.
TIMESTAMP: A set of date and time data attributes applied to a disk file or database record when it is created or edited.
TRACEABILITY: The degree to which a relationship can be established between two or more products of the software development lifecycle.
TRANSACTION: A set of processing tasks that are treated as a single activity to perform a desired result. For example, a transaction would entail all the steps necessary to insert and/or modify values in multiple tables when a new invoice is created. If all of the record management activities are successful, the transaction is committed, or made permanent. If any of the record management tasks fail, the entire activity is canceled, or rolled back.
TRANSACTION ANALYSIS: A process used to divide complex data flow diagrams into smaller, individual data flow diagrams for each class of transaction that the application will process.
UNIT TESTING: The isolated testing of each logical path of a specific implementation element or groups of related elements. The expected output from the execution of the logical path is predefined to allow comparisons of the planned output against the actual output.
USABILITY: The ease with which a user can learn to operate a software application.
USE CASE: A description of a business process under automation, focused on how Actors (user and interfacing systems) interact with the process. Includes descriptions of the information the Actors send to the system, the data the Actors receive from the process, and the operations they perform using the system.
USER: Those individuals who use a specific software product or system. User activities can include data entry, queries and updates, the execution of batch operations, and the generation of reports.
USER INTERFACE: The part of the application through which the end-user interacts with the system.
USER MANUAL: A document that describes a software application in sufficient detail to enable an end-user to obtain desired results. Typically includes a tutorial, a description of functions and data structures, options, allowable inputs, expected outputs, and possible error messages.
VALIDATION: 1. The process of determining whether a value in a table's data cell fits within the allowable range or is a member of a set of acceptable values. 2. The process of evaluating software to ensure compliance with established requirements and design criteria.
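The commit-or-roll-back behavior in the Transaction entry can be demonstrated with Python's standard sqlite3 module. This is an illustrative sketch with invented tables; the simulated failure stands in for any failed record management task:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (no INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE invoice_line (invoice_no INTEGER, amount REAL)")

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("INSERT INTO invoice VALUES (1)")
        conn.execute("INSERT INTO invoice_line VALUES (1, 99.0)")
        raise RuntimeError("simulated failure in a later step")
except RuntimeError:
    pass

# Because one task failed, the entire activity was canceled.
print(conn.execute("SELECT COUNT(*) FROM invoice").fetchone())  # (0,)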

VERIFICATION: The process of evaluating an application to determine whether or not the work products of a stage of a software development lifecycle fulfill the requirements established during the previous stage.
VIRTUAL ORGANIZATION: An organization that uses networks to link people, assets, and ideas to create and distribute products and services without being limited to traditional organizational boundaries or physical location.
WALK-THROUGH: The review of a functional element, design element, or implementation element by a team of subject matter experts to detect possible errors, violations of development standards, and other problems.
WATERFALL DEVELOPMENT MODEL: A software development lifecycle in which each stage is dependent upon the outputs of the previous stage. Once a stage is completed, the next stage begins. Previously completed stages cannot be re-initiated. The vision is that of water flowing downhill.
WORK BREAKDOWN STRUCTURE (WBS): A listing of all activities and tasks related to those activities that make up a complete project. Generally described in outline form to show the hierarchical relationship between activities and tasks.
WORK PRODUCT: A specific document or software component that results from a project activity or task.

Testing Important Definitions

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying a product is accessible to people having disabilities (deaf, blind, mentally disabled, etc.).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Beta Testing: Testing of a pre-release version of a software product conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Defect: Nonconformance to requirements or functional/program specification.
Dynamic Testing: Testing software through executing it. See also Static Testing.
End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
Functional Testing: Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. See also Black Box Testing.
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Installation Testing: Confirms that the application under test installs, configures, and launches correctly on its target platforms.
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something, such as number of bugs per lines of code.
Monkey Testing: Testing a system or an application on the fly, i.e., just a few tests here and there to ensure the system or an application does not crash out.
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
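The Race Condition definition above can be reproduced in a few lines. This illustrative sketch uses two threads that read and write a shared counter with no lock moderating access; because thread scheduling varies, the outcome differs from run to run and a given run may even come out correct by luck:

import threading

counter = 0

def increment_many(times):
    global counter
    for _ in range(times):
        value = counter      # read the shared resource
        counter = value + 1  # write it back; no lock moderates access

threads = [threading.Thread(target=increment_many, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; interleaved read/write pairs can lose updates and
# leave the total short.
print(counter)

Concurrency testing, as defined above, is precisely about provoking and measuring this class of problem in application code.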

Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it's basically operational. See also Smoke Testing.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify performance problems that appear after a large number of transactions have been executed.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case: Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements being tested, test steps, verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
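A traceability matrix is often just a table, so even a toy representation shows its main payoff: finding requirements that no test exercises. The sketch below is illustrative only; the requirement and test case names are invented:

# Hypothetical matrix: each test requirement mapped to the test cases
# that exercise it.
rtm = {
    "REQ-01 password login": ["TC-101", "TC-102"],
    "REQ-02 order entry":    ["TC-201"],
    "REQ-03 audit report":   [],
}

# The matrix makes gaps auditable: requirements with no test cases.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements lacking test coverage:", uncovered)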

Show Stopper Bug: If a required feature of the release, whose test is defined in the Test Plan of the release, fails, then a bug should be opened in this category against the Proxy module of Vocal, and the release can't be made if it's not fixed. All memory leak and flakiness problems also go under this priority for the release. Bugs in both of the above categories are considered show stoppers, and a release can't be made until they are fixed.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

FAQ'S Of Software Testing

In this section we go through the list of FAQ'S.

Q1. What is verification? A: Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings. The techniques for verification are testing, inspection and reviewing.

Q2. What is validation? A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product. Validation typically involves actual testing and takes place after verifications are completed. The techniques for validation are testing, inspection and reviewing.

Q3. What is a walkthrough? A: A walkthrough is an informal meeting for evaluation or informational purposes. A walkthrough is also a process at an abstract level: the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walkthroughs is to ensure the code fits the purpose. Walkthroughs also offer opportunities to assess an individual's or team's competency.

Q4. What is an inspection? A: An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including a moderator, reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems are found during this preparation. The result of the meeting should be documented in a written report. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality.

Q5. What is quality? A: Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free. Customers of a software development project include end-users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization's management, test engineers, salespeople, software engineers, stockholders and accountants.

Q6. What is good code? A: Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

Q7. What is good design? A: Design could mean many things, but often refers to functional design or internal design. Good functional design is indicated by software functionality that can be traced back to customer and end-user requirements. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable and maintainable; is robust with sufficient error handling and status logging capability; and works correctly when implemented.

Q8. What is software life cycle? A: The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.

Q9. Why are there so many software bugs? A: Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.
1. There are unclear software requirements because there is miscommunication as to what the software should or shouldn't do.
2. Software complexity. All of the following contribute to the exponential growth in software and system complexity: Windows interfaces, client-server and distributed applications, data communications, enormous relational databases and the sheer size of applications.
3. Programming errors occur because programmers and software engineers, like everyone else, can make mistakes.
4. As to changing requirements, in some fast-changing business environments, continuously modified requirements are a fact of life. Sometimes customers do not understand the effects of changes, or understand them but request them anyway. The changes then require redesign of the software, rescheduling of resources, and some of the work already completed may have to be redone or discarded, and hardware requirements can be affected.

5. Time pressures can cause problems, too, because scheduling of software projects is not easy; it often requires a lot of guesswork, and when deadlines loom and the crunch comes, mistakes will be made.
6. Sometimes there is no incentive for programmers and software engineers to document their code and write clearly documented, understandable code. Sometimes developers get kudos for quickly turning out code, or they believe they cannot have job security if everyone can understand the code they write, or they believe if the code was hard to write, it should be hard to read. Code documentation is tough to maintain, and it is also tough to modify code that is poorly documented. The result is bugs.
7. Software development tools, including visual tools, class libraries, compilers, scripting tools, etc., can introduce their own bugs, too. Other times the tools are poorly documented, which can create additional bugs.
8. Bug tracking can result in errors because the complexity of keeping track of changes can itself lead to errors.

Q10. How do you introduce a new software QA process? A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, a serious management buy-in is required and a formalized QA process is necessary. For medium size organizations with lower risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep any bureaucracy from getting out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Q11. Give me five common problems that occur during software development. A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway, and poor communication.
1. Requirements are poorly written when requirements are unclear, incomplete, too general, or not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations, and therefore problems are guaranteed.

Q12. Do automated testing tools make testing easier? A: Yes and no. For small projects, the time needed to learn and implement them is usually not worthwhile. For larger projects, or ongoing long-term projects, they can be valuable. A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and comparing them to the logged results in order to check the effects of the change.

One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.), which can be a time-consuming task.

Q13. What makes a good test engineer? A: A good test engineer
1. has a "test to break" attitude,
2. takes the point of view of the customer,
3. has a strong desire for quality,
4. has an attention to detail,
5. is tactful and diplomatic,
6. has good communication skills, both oral and written, and
7. has previous software development experience.
Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful, as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming.

Q14. What makes a good QA engineer? A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Rob Davis understands the entire software development process and how it fits into the business approach and the goals of the organization. Rob Davis' communication skills and the ability to understand various sides of issues are important, too.

Q15. Give me five solutions to problems that occur during software development. A: Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend the design against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate. Make extensive use of e-mail, networked bug-tracking tools and tools of change management. Ensure documentation is available and up-to-date; use documentation that is electronic, not paper. Promote teamwork and cooperation.
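The record/playback idea from Q12 is simple enough to sketch in a few lines. The following toy harness is not any real tool's API; the actions, logged results, and the faked "current behavior" are all invented to show the replay-and-compare mechanism:

# A previously recorded script: each step names an action and the
# result that was logged when it was recorded.
recorded_script = [
    {"action": "open_order_form", "logged_result": "form displayed"},
    {"action": "enter_quantity:3", "logged_result": "total updated"},
    {"action": "submit_order",     "logged_result": "confirmation shown"},
]

def run_action(action):
    # Stand-in for driving the real GUI; here we fake current behavior.
    current_behavior = {
        "open_order_form": "form displayed",
        "enter_quantity:3": "total updated",
        "submit_order": "error dialog shown",  # a change broke this step
    }
    return current_behavior[action]

for step in recorded_script:
    actual = run_action(step["action"])
    status = "OK" if actual == step["logged_result"] else "MISMATCH"
    print(step["action"], "->", status)

The maintenance problem described above falls out of this structure directly: every change to the product can invalidate recorded steps, so the script list must be continually re-recorded or edited.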

FAQ'S Of Software Testing

Q16. What makes a good resume? A: On the subject of resumes, there seems to be an unending discussion of whether you should or shouldn't have a one-page resume. The followings are some of the comments I have personally heard: "Well, Joe Blow (car salesman) said I should have a one-page resume." "Well, I read a book and it said you should have a one page resume." "Gosh, I wish I could put my job at IBM on my resume but if I did it'd make my resume more than one page, and I was told to never make the resume more than one page long." "I can't really go into what I really did because if I did, it'd take more than one page on my resume." "I'm confused, should my resume be more than one page? I feel like it should, but I don't want to break the rules." Or, here's another comment: "People just don't read resumes that are longer than one page." I have heard some more, but we can start with these. So what's the answer? There is no scientific answer about whether a one-page resume is right or wrong. It all depends on who you are and how much experience you have, and that is also part of the problem. The first thing to look at here is the purpose of a resume. The purpose of a resume is to get you an interview. If the resume is getting you interviews, then it is considered to be a good resume. If the resume isn't getting you interviews, then you should change it. Generally speaking, your resume should tell your story. If you're a college graduate looking for your first job, a one-page resume is just fine. If you have a longer story, the resume needs to be longer. Short resumes, for people long on experience, are not appropriate; the real audience for these short resumes is people with short attention spans and low IQ. I assure you that when your resume gets into the right hands, it will be read thoroughly. The biggest mistake you can make on your resume is to make it hard to read. Why? One, there are lots of resumes out there these days. Two, resume readers do not like eye strain either; if the resume is mechanically challenging, they just throw it aside for one that is easier on the eyes. Three, in light of the current scanning scenario, scanners don't like odd resumes. Some candidates use a 7-point font so they can get the resume onto one page; big mistake, because small fonts can make your resume harder to read. Four, resume readers don't like to guess and most won't call you to clarify what is on your resume. Please put your experience on the resume so resume readers can tell when and for whom you did what. Five, more than one page is not a deterrent because many will scan your resume into their database. Once the resume is in there and searchable, you have accomplished one of the goals of resume distribution.

Q17. What makes a good QA/Test Manager? A: QA/Test Managers are familiar with the software development process; able to maintain enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to increase productivity; able to promote cooperation between Software and Test/QA Engineers; have the people skills needed to promote improvements in QA processes; have the ability to withstand pressures and say *no* to other managers when quality is insufficient or QA processes are not being adhered to; able to communicate with technical and non-technical people; as well as able to run meetings and keep them focused.

Q18. What is the role of documentation in QA? A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and determining what document will have a particular piece of information. Use documentation change management, if possible.

Q19. What about requirements? A: Requirement specifications are important, and one of the most reliable methods of ensuring problems in a complex software project is to have poorly documented requirement specifications. Requirements are the details describing an application's externally perceived functionality and properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. A non-testable requirement would be, for example, "user-friendly", which is too subjective. A testable requirement would be something such as "the product shall allow the user to enter their previously-assigned password to access the application". Care should be taken to involve all of a project's significant customers in the requirements process. Customers could be in-house or external and could include end-users, customer acceptance test engineers, customer contract officers, customer management, future software maintenance engineers, salespeople and anyone who could later derail the project; if his/her expectations aren't met, they should be included as a customer, if possible. In some organizations, requirements may end up in high-level project plans, functional specification documents, design documents, or other documents at various levels of detail. No matter what they are called, some type of documentation with detailed requirements will be needed by test engineers in order to properly plan and execute tests. Without such documentation there will be no clear-cut way to determine if a software application is performing correctly.

Q20. What is a test plan? A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q21. What is a test case? A: A test case is a document that describes an input, action, or event and its expected result, in order to determine if a feature of an application is working correctly. A test case should contain particulars such as
1. test case identifier,
2. test case name,
3. objective,
4. test conditions/setup,
5. input data requirements/steps, and
6. expected results.
Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

Q22. What should be done after a bug is found? A: When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

Q23. What is configuration management? A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them, and who makes the changes.

Q24. How do you know when to stop testing? A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
1. Deadlines, e.g. release deadlines, testing deadlines;
2. Test cases completed with certain percentage passed;
3. Test budget has been depleted;
4. Coverage of code, functionality, or requirements reaches a specified point;
5. Bug rate falls below a certain level; or
6. Beta or alpha testing period ends.

Q25. What if there isn't enough time for thorough testing? A: Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions (a small prioritization sketch follows Q26 below):
1. Which functionality is most important to the project's intended purpose?
2. Which functionality is most visible to the user?
3. Which functionality has the largest safety impact?
4. Which functionality has the largest financial impact on users?
5. Which aspects of the application are most important to the customer?
6. Which aspects of the application can be tested early in the development cycle?
7. Which parts of the code are most complex and thus most subject to errors?
8. Which parts of the application were developed in rush or panic mode?
9. Which aspects of similar/related previous projects caused problems?
10. Which aspects of similar/related previous projects had large maintenance expenses?
11. Which parts of the requirements and design are unclear or poorly thought out?
12. What do the developers think are the highest-risk aspects of the application?
13. What kinds of problems would cause the worst publicity?
14. What kinds of problems would cause the most customer service complaints?
15. What kinds of tests could easily cover multiple functionalities?
16. Which tests will have the best high-risk-coverage to time-required ratio?

Q26. What if the software is so buggy it can't be tested at all? A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process, such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, etc., managers should be notified and provided with some documentation as evidence of the problem.
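As promised in Q25, here is one simple way the risk-analysis checklist can be turned into a test ordering. This is only an illustrative sketch: the functional areas and the 1-5 impact/likelihood scores are invented, and real projects would derive them from the checklist answers above:

# Hypothetical risk scores: impact and likelihood on a 1-5 scale.
areas = [
    {"name": "payment processing", "impact": 5, "likelihood": 4},
    {"name": "report formatting",  "impact": 2, "likelihood": 3},
    {"name": "user login",         "impact": 5, "likelihood": 2},
    {"name": "help screens",       "impact": 1, "likelihood": 1},
]

# When time runs short, test the highest risk-score areas first.
for area in sorted(areas, key=lambda a: a["impact"] * a["likelihood"],
                   reverse=True):
    print(area["name"], area["impact"] * area["likelihood"])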

Q27. What can be done if requirements are changing continuously? A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally:
1. Ensure the code is well commented and well documented; this makes changes easier for the developers.
2. Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
3. In the project's initial schedule, allow for some extra time to commensurate with probable changes.
4. Move new requirements to a 'Phase 2' version of an application and use the original requirements for the 'Phase 1' version.
5. Negotiate to allow only easily implemented new requirements into the project; move more difficult, new requirements into future versions of the application.
6. Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are warranted; after all, that's their job.
7. Balance the effort put into setting up automated testing with the expected effort required to redo it to deal with changes.
8. Design some flexibility into automated test scripts.
9. Focus initial automated testing on application aspects that are most likely to remain unchanged.
10. Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
11. Design some flexibility into test cases; this is not easily done, and the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.
12. Focus less on detailed test plans and test cases and more on ad-hoc testing, with an understanding of the added risk this entails.

Q28. What if the project isn't big enough to justify extensive testing? A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the considerations listed under "What if there isn't enough time for thorough testing?" do apply. The test engineer then should do "ad hoc" testing, or write up a limited test plan based on the risk analysis.

Q29. What if the application has functionality that wasn't in the requirements? A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. However, this is not easily done. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk. Management should be made aware of any significant added risks as a result of the unexpected functionality.

Q30. What if organization is growing so fast that fixed QA processes are impossible? A: This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
1. Hire good people (i.e. hire Rob Davis).
2. Ruthlessly prioritize quality issues and maintain focus on the customer.
3. Everyone in the organization should be clear on what quality means to the customer.

FAQ'S Of Software Testing

Q31. What is software quality assurance? A: Software Quality Assurance, when performed by Rob Davis, is oriented to *prevention*. It involves the entire software development process. Prevention is monitoring and improving the process, making sure any agreed-upon standards and procedures are followed, and ensuring problems are found and dealt with. Software Testing, when performed by Rob Davis, is oriented to *detection*. Testing involves the operation of a system or application under controlled conditions and evaluating the results. Rob Davis can provide QA and/or Software QA. This document details some aspects of how he can provide software testing/QA service.

Q32. Why do you recommend that we test during the design phase? A: Because testing during the design phase can prevent defects later on. We recommend verifying three things:
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, starting state of each module and how to guarantee the state of each module).
3. Verify the design incorporates enough memory, I/O devices and quick enough runtime for the final product.

Q33. How can software QA processes be implemented without stifling productivity? A: Implement QA processes slowly over time. Use consensus to reach agreement on processes, and adjust and experiment as an organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process. However, no one, especially talented technical types, likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

Q34. How is testing affected by object-oriented designs? A: A well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

Q35. What is quality assurance?
A: Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates, and test readiness reviews. Rob Davis' QA service depends on the customers and projects; a lot will depend on team leads or managers, feedback to developers, and communications among customers, managers, developers, test engineers and testers. Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams, which include a mix of test engineers, testers and developers who work closely together, with overall QA processes monitored by project managers. It depends on what best fits your organization's size and business structure. Rob Davis can provide QA and/or Software QA. This document details some aspects of how he can provide software testing/QA service. For more information, e-mail rob@robdavispe.com.

Q36. Processes and procedures - why follow them?
A: Detailed and well-written processes and procedures ensure the correct steps are being executed to facilitate the successful completion of a task. They also ensure a process is repeatable. Once Rob Davis has learned and reviewed the customer's business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

Q37. Standards and templates - what is supposed to be in a document?
A: All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions.

Q38. What are the different levels of testing?
A: Rob Davis has expertise in testing at all the testing levels listed below. At each test level, he documents the results. Each level of testing is considered either black box or white box testing.

Q39. What is black box testing?
A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

Q40. What is white box testing?
A: White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.

Q41. What is unit testing?
A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is performed after the expected test results are met, or differences are explainable/acceptable.

Q42. What is functional testing?
A: Functional testing is black-box testing geared to the functional requirements of an application. Test engineers *should* perform functional testing.

Q43. What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system, to verify the new system performs the operations correctly.
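To make the parallel/audit reconciliation above concrete, here is a minimal sketch in Python (not part of the original FAQ); legacy_total and new_total are hypothetical stand-ins for the current and new systems:

    # Parallel/audit testing sketch: reconcile the new system's output
    # against the current (legacy) system's output for the same inputs.

    def legacy_total(items):          # hypothetical current system
        return sum(items)

    def new_total(items):             # hypothetical replacement under test
        total = 0
        for x in items:
            total += x
        return total

    def reconcile(test_inputs):
        discrepancies = []
        for items in test_inputs:
            expected = legacy_total(items)
            actual = new_total(items)
            if actual != expected:    # highlight and account for every mismatch
                discrepancies.append((items, expected, actual))
        return discrepancies

    if __name__ == "__main__":
        inputs = [[1, 2, 3], [], [10, -4]]
        print(reconcile(inputs) or "new system matches the current system")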

Q44. What is integration testing?
A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable, based on client input.

Q45. What is incremental integration testing?
A: Incremental integration testing is continuous testing of an application as new functionality is added, before all parts of the program are completed. This may require that various aspects of an application's functionality be independent enough to work separately, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.

Q46. What is system testing?
A: System testing is black box testing, performed by the Test Team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real life scenarios that occur in a "simulated real life" test environment, and tests all functions of the system that are required in real life. Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved; for a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels. System testing is deemed complete when actual results and expected results are either in line, or differences are explainable or acceptable, based on client input.

Q47. What is end-to-end testing?
A: Similar to system testing, end-to-end testing is the *macro* end of the test scale: testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.

Q48. What is sanity testing?
A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q49. What is regression testing?
A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

Q50. What is usability testing?
A: Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.
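As an illustration of the baseline comparison described under regression testing (Q49), here is a minimal Python sketch; the BASELINE values and compute_report function are hypothetical:

    # Regression testing sketch: expected results from the baseline are
    # compared to results of the software under test.

    BASELINE = {                      # hypothetical base-lined expected results
        "report_row_count": 42,
        "total_revenue": 1017.50,
    }

    def compute_report():             # hypothetical function under test
        return {"report_row_count": 42, "total_revenue": 1017.50}

    def regression_check():
        actual = compute_report()
        # any entry here means a change has "undone" previously working code
        return {key: (expected, actual.get(key))
                for key, expected in BASELINE.items()
                if actual.get(key) != expected}

    if __name__ == "__main__":
        print(regression_check() or "baseline intact")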

Q51. What is load testing?
A: Load testing is testing an application under heavy loads, such as the testing of a web site under a range of loads, to determine at what point the system's response time will degrade or fail.

Q52. What is performance testing?
A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.

Q53. What is installation testing?
A: Installation testing is testing full, partial, or upgrade install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed, following installation testing.

Q54. What is security/penetration testing?
A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q55. What is recovery/error testing?
A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q56. What is compatibility testing?
A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q57. What is comparison testing?
A: Comparison testing is testing that compares software weaknesses and strengths to those of competitors' products.

Q58. What is acceptance testing?
A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q59. What is alpha testing?
A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers, or software QA engineers.

Q60. What is beta testing?
A: Beta testing is testing an application when development and testing are essentially completed, and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

Q61. What testing roles are standard on most testing projects?
A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q62. What is a Test/QA Team Lead?
A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q63. What is a Test Engineer?
A: Test Engineers are engineers who specialize in testing. We, test engineers, create test cases, procedures, scripts and generate data. We execute test procedures and scripts, analyze standards of measurements, and evaluate results of system/integration/regression testing. We also:
1. Save money by discovering defects 'early' in the design process, before failures occur in production, or in the field.
2. Save the reputation of your company by discovering bugs and design flaws, before bugs and design flaws damage the reputation of your company.
3. Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and before employees get bogged down.
4. Help the work of your development staff, so the development team can devote its time to building up your product.
5. Improve problem tracking and reporting.
6. Maximize the value of your software.
7. Maximize the value of the devices that use it.
8. Give you the evidence that your software is correct and operates properly.
9. Provide documentation required by FDA, FAA, other regulatory agencies and your customers.
10. Speed up the work of the development staff.
11. Reduce your organization's risk of legal liability.
12. Promote continual improvement.

Q64. What is a Test Build Manager?
A: Test Build Managers deliver current software versions to the test environment, install the application's software and apply software patches, to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q65. What is a System Administrator?
A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches, to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q66. What is a Database Administrator?

A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches, to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q67. What is a Test Configuration Manager?
A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q68. What is a Technical Analyst?
A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q69. What is a test schedule?
A: The test schedule is a schedule that identifies all tasks required for a successful testing effort: a schedule of all test activities and resource requirements.

Q70. What is software testing methodology?
A: One software testing methodology is the use of a three-step process of: 1. Creating a test strategy; 2. Creating a test plan/design; and 3. Executing tests. This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and in the ongoing maintenance of his customers' applications.

Q71. What is the general testing process?
A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), the creation of a test plan/design (which usually includes test cases and test procedures), and the execution of tests.

Q72. How do you create a test strategy?
A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
1. A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
2. A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
3. Testing methodology. This is based on known standards.
4. Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
5. Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
1. An approved and signed-off test strategy document and test plan, including test cases.
2. Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q73. How do you create a test plan/design?
A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, the data to be used for testing, and the expected results, including database updates, file outputs and report results. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios, and may cover multiple test scenarios. Test procedures or scripts include the specific data that will be used for testing the process or transaction. It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing. Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application. Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope. Test data is captured and base-lined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment. Some output data is also base-lined for future comparison; base-lined data is used to support future application maintenance via regression testing. A pretest meeting is held to assess the readiness of the application and of the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
1. Approved Test Strategy Document.
2. Test tools, or automated test tools, if applicable.
3. Previously developed scripts, if applicable.
4. Test documentation problems uncovered as a result of testing.
5. A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.
Outputs for this process:
1. Approved documents of test scenarios, test cases, test conditions and test data.
2. Reports of software design issues, given to software developers for correction.

Q74. How do you execute tests?
A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase; checkpoint meetings are held daily, if required, to address and discuss testing issues, status and activities. The output from the execution of test procedures is known as test results.
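Before looking at how test results are evaluated (continued below), here is a minimal Python sketch of the test execution log mentioned above; the file name, procedure IDs and defect IDs are hypothetical:

    # Test execution log sketch: as each test procedure is performed, an
    # entry records the procedure and whether it uncovered any defects.
    import csv, datetime

    def log_execution(path, procedure_id, passed, defect_ids=()):
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.date.today().isoformat(),
                procedure_id,
                "PASS" if passed else "FAIL",
                ";".join(defect_ids),
            ])

    # Example entries for two procedures:
    log_execution("test_execution_log.csv", "TP-001", True)
    log_execution("test_execution_log.csv", "TP-002", False, ["DEF-017"])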

Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing. A pass/fail criteria is used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem, found during system testing, is defined in accordance to the customer's risk assessment and recorded in their selected tracking tool. Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead. After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance. The test team reviews test document problems identified during testing, and updates documents where appropriate.
Inputs for this process:
1. Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
2. Test tools, including automated test tools, if applicable.
3. Developed scripts.
4. Changes to the design, e.g. Change Request Documents.
5. Test data.
6. Availability of the test team and project team.
7. General and Detailed Design Documents, e.g. the Requirements Document and Software Design Document.
8. Software that has been migrated to the test environment, i.e. unit tested code, via the Configuration/Build Manager.
9. Test Readiness Document.
10. Document updates.
Outputs for this process:
1. Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off, with revised testing deliverables.
2. Changes to the code, also known as test fixes.
3. Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.
4. Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
5. Formal record of test incidents, usually part of problem tracking.
6. Base-lined package, ready for migration to the next level, also known as tested source and object code.
7. Document updates.

Q75. What testing approaches can you tell me about?
A: Each of the following represents a different testing approach: 1. White box testing; 2. Black box testing; 3. Incremental testing; 4. Integration testing; 5. Functional testing; 6. System testing; 7. End-to-end testing; 8. Sanity testing; 9. Regression testing; 10. Acceptance testing; 11. User acceptance testing; 12. Load testing; 13. Performance testing; 14. Usability testing; 15. Install/uninstall testing; 16. Recovery testing; 17. Security testing; 18. Compatibility testing; 19. Exploratory testing; 20. Ad-hoc testing; 21. Comparison testing; 22. Mutation testing; 23. Alpha testing; and 24. Beta testing.

Q76. What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and hardware) under extraordinary operating conditions. It tests something beyond its normal operational capacity, in order to observe any negative results. Stress testing tests the stability of a given system or entity. For example, when a web server is stress tested, using scripts, bots, and various denial of service tools, testing aims to find out how many users can be on-line, at the same time, without crashing the server.

Q77. What is load testing?
A: Load testing simulates the expected usage of a software program, by simulating multiple users that access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns, in order to test the system's response at peak loads.

Q78. What is the difference between performance testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing, though there is a gray area in between stress testing and load testing.

Q79. What is the difference between stress testing and load testing?
A: Load testing generally stops short of stress testing, though there is a gray area in between stress testing and load testing. During stress testing, the load is so great that errors are the expected results.
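A minimal Python sketch of the load-testing idea in Q77, simulating multiple concurrent users; service_call is a hypothetical stand-in for a real request to the system under test:

    # Load testing sketch: simulate expected usage by driving the system
    # with multiple concurrent users and timing each call.
    import time, random
    from concurrent.futures import ThreadPoolExecutor

    def service_call():               # hypothetical stand-in for a real request
        time.sleep(random.uniform(0.01, 0.05))
        return "ok"

    def one_user(n_requests):
        times = []
        for _ in range(n_requests):
            start = time.perf_counter()
            service_call()
            times.append(time.perf_counter() - start)
        return times

    if __name__ == "__main__":
        users, requests_each = 20, 10
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = list(pool.map(one_user, [requests_each] * users))
        latencies = [t for user in results for t in user]
        print("max response time: %.3fs" % max(latencies))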

Q80. What is the difference between volume testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing; during stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q81. What is the difference between reliability testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing; during stress testing, the load is so great that errors are the expected results, though there is a gray area in between stress testing and load testing.

Q82. What is incremental testing?
A: Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

Q83. What is software testing?
A: Software testing is a process that identifies the correctness, completeness, and quality of software. Actually, testing cannot establish the correctness of software: it can find defects, but cannot prove there are no defects.

Q84. What is automated testing?
A: Automated testing is a formally specified and controlled approach to formal testing. They use either debugger software, or hardware-assisted debuggers.

Q85. What is alpha testing?
A: Alpha testing is final testing before the software is released to the general public. First, the software is tested by in-house developers (and this is called the first phase of alpha testing). Then, the software is handed over to us, the software QA staff (and this is called the second stage of alpha testing), for additional testing in an environment that is similar to the intended use.

Q86. What is beta testing?
A: Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

Q87. What is the difference between alpha and beta testing?
A: Alpha testing is performed by in-house developers and software QA personnel. Beta testing is performed by the public, a few select prospective customers, or the general public.

Q88. What is clear box testing?
A: Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q89. What is boundary value analysis?
A: Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside boundaries, typical values, and error values. The expectation is that, if a
system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries.
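A minimal Python sketch of boundary value analysis as just described; the integer range is hypothetical:

    # Boundary value analysis sketch: choose values that lie along data
    # extremes for a field accepting integers in [minimum, maximum].
    def boundary_values(minimum, maximum):
        return [
            minimum - 1,                # just outside the lower boundary (error value)
            minimum,                    # on the lower boundary
            minimum + 1,                # just inside the lower boundary
            (minimum + maximum) // 2,   # a typical value
            maximum - 1,                # just inside the upper boundary
            maximum,                    # on the upper boundary
            maximum + 1,                # just outside the upper boundary (error value)
        ]

    print(boundary_values(1, 100))      # [0, 1, 2, 50, 99, 100, 101]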

Q90. What is black box testing?
A: Black box testing is a type of testing that considers only externally visible behavior. Black box testing considers neither the code itself, nor the "inner workings" of the software.

Q91. What is functional testing?
A: Functional testing is the same as black box testing: a type of testing that considers only externally visible behavior, and neither the code itself, nor the "inner workings" of the software.

Q92. What is closed box testing?
A: Closed box testing is the same as black box testing: a type of testing that considers only externally visible behavior, and neither the code itself, nor the "inner workings" of the software.

Q93. What is glass box testing?
A: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q94. What is open box testing?
A: Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

Q95. What is bottom-up testing?
A: Bottom-up testing is a technique for integration testing, in which low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes. A test engineer creates and uses test drivers for components that have not yet been developed.

Q96. What is ad hoc testing?
A: Ad hoc testing is the least formal testing approach.

Q97. What is gamma testing?
A: Gamma testing is testing of software that has all the required features, but did not go through all the in-house quality checks. Cynics tend to refer to software releases as "gamma testing".

Q98. What is software quality?
A: The quality of software varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability, and maintainability. See quality standard ISO 9126 for more information on this subject.

Q99. How do test case templates look like?
A: Software test cases are in a document that describes inputs, actions, or events, and their expected results, in order to determine if all features of an application are working correctly. Test case templates contain all particulars of every test case. Often these templates are in the form of a table. One example is a 6-column table, where column 1 is the "Test Case ID Number", column 2 is the "Test Case Name", column 3 is the "Test Objective", column 4 is the "Test Conditions/Setup", column 5 is the "Input Data Requirements/Steps", and column 6 is the "Expected Results".
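A minimal Python sketch of one row of the 6-column test case template just described; all field values are hypothetical:

    # Test case template sketch: one row of the 6-column table,
    # represented as a Python dictionary.
    test_case = {
        "Test Case ID Number": "TC-014",
        "Test Case Name": "Login with valid credentials",
        "Test Objective": "Verify a registered user can sign in",
        "Test Conditions/Setup": "User 'demo' exists; application running",
        "Input Data Requirements/Steps": "1. Open login page  "
            "2. Enter 'demo'/'secret'  3. Press Submit",
        "Expected Results": "Main page is displayed; no error message",
    }

    for column, value in test_case.items():
        print(f"{column}: {value}")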

Q100. What is the role of test engineers?
A: Test engineers speed up the work of the development staff, so the development team can devote its time to building up your product. We, test engineers, save your company money by discovering defects EARLY in the design process, before failures occur in production, or in the field. We save the reputation of your company by discovering bugs and design flaws before bugs and design flaws damage the reputation of your company, before users get discouraged, before shareholders lose their cool and before employees get bogged down. We assure the successful launch of the product by discovering bugs and design flaws. We maximize the value of the software, and the value of the devices that use it. We also improve problem tracking and reporting, give the company the evidence that the software is correct and operates properly, provide documentation required by FDA, FAA, other regulatory agencies and your customers, and reduce the risk of your company's legal liability. Lastly, we, test engineers, also promote continual improvement.

Q101. What is a test engineer?
A: Test engineers are engineers who specialize in testing. We, test engineers, create test cases, procedures, scripts and generate data. We execute test procedures and scripts, analyze standards of measurements, and evaluate results of system/integration/regression testing.

Q102. What is a QA engineer?
A: QA engineers are test engineers, but QA engineers do more than just testing. Good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important. We, QA engineers, are successful if people listen to us, if people use our tests, if people think that we're useful, and if we're happy doing our work.

Q103. What is the difference between a software fault and a software failure?
A: A software failure occurs when the software does not do what the user expects to see. A software fault, on the other hand, is a hidden programming error. A software fault becomes a software failure only when the exact computation conditions are met, and the faulty portion of the code is executed on the CPU. This can occur during normal usage; or when the software is ported to a different hardware platform; or when the software is ported to a different compiler; or when the software gets extended.

Q104. What is software failure?
A: Software failure occurs when the software does not do what the user expects to see.

Q105. What is a software fault?
A: Software faults are hidden programming errors.
Software faults are errors in the correctness of the semantics of computer programs.

Q106. What is the role of the QA engineer?
A: The QA engineer's function is to use the system much like real users would: find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

Q107. What are the responsibilities of a QA engineer?
A: Let's say an engineer is hired for a small software company's QA role, and there is no QA team. Should he take responsibility to set up a QA infrastructure/process, and for the testing and quality of the entire product? No, because taking this responsibility is a classic trap that QA people get caught in. Why? Because we QA engineers cannot assure quality, and QA departments cannot create quality. What we CAN do is detect lack of quality, and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label, and tell the developers that they are responsible for the quality of their own work. We need to offer to help with quality assessment only. The problem is, as soon as the developers learn that there is a test department, sometimes they will slack off on their testing. I would love to see QA departments staffed with experienced software developers who coach development teams to write better code. But I've never seen it. Instead of coaching, we, QA engineers, tend to be process people.

Q108. What metrics can be used in software development?
A: Metrics refer to statistical process control. The idea of statistical process control is a great one, but it has only a limited use in software development. On the positive side, one CAN use statistics; for example, statistics can be used to determine when to stop testing, i.e. when the bug rate falls below a certain level. On the negative side, statistical process control works only with processes that are sufficiently well defined AND unvaried. The problem is, most software development projects are NOT sufficiently well defined and NOT sufficiently unvaried. Statistics are excellent tools that project managers can use, e.g. bugs can be tracked so that they can be analyzed in terms of statistics. But, if these are project management tools, why should we label them quality assurance tools?

Q109. What metrics are used for bug tracking?
A: Metrics that can be used for bug tracking include: total number of bugs, total number of bugs that have been fixed, number of new bugs per week, and number of fixes per week. Metrics for bug tracking can be used to determine when to stop testing, for example, when the bug rate falls below a certain level, or when test cases are completed with a certain percentage passed.

Q110. How do you perform integration testing?
A: First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable, based on client input.
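To make the bug-tracking metrics in Q109 concrete, here is a minimal Python sketch; the bug records are hypothetical:

    # Bug tracking metrics sketch: derive weekly counts from a bug list.
    from collections import Counter

    bugs = [                          # hypothetical tracking-tool export
        {"id": 1, "opened_week": 14, "fixed_week": 15},
        {"id": 2, "opened_week": 15, "fixed_week": None},   # still open
        {"id": 3, "opened_week": 15, "fixed_week": 16},
    ]

    total = len(bugs)
    total_fixed = sum(1 for b in bugs if b["fixed_week"] is not None)
    new_per_week = Counter(b["opened_week"] for b in bugs)
    fixes_per_week = Counter(b["fixed_week"] for b in bugs
                             if b["fixed_week"] is not None)

    print(total, total_fixed, dict(new_per_week), dict(fixes_per_week))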

Q111. What is integration testing?
A: Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance to customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable, based on client input.

Q112. What metrics are used for test report generation?
A: Metrics refer to statistical process control. As noted above, statistics can be used, for example, to determine when to stop testing, i.e. when the bug rate falls below a certain level, or when test cases are completed with a certain percentage passed; but statistical process control works only with processes that are sufficiently well defined and unvaried, and most software development projects are not. The following describes some of the metrics used in quality assurance.
McCabe Metrics:
1. Cyclomatic Complexity Metric (v(G)). Cyclomatic Complexity is a measure of the complexity of a module's decision structure. It is the number of linearly independent paths, and therefore, the minimum number of paths that should be tested.
2. Essential Complexity Metric (ev(G)). Essential Complexity is a measure of the degree to which a module contains unstructured constructs. This metric measures the degree of structuredness and the quality of the code. It is used to predict the required maintenance effort and to help in the modularization process.
3. Integration Complexity Metric (S1). Integration Complexity measures the amount of integration testing necessary to guard against errors.
4. Pathological Complexity Metric (pv(G)). Pathological Complexity is a measure of the degree to which a module contains extremely unstructured constructs.
5. Design Complexity Metric (S0). Design Complexity measures the amount of interaction between modules in a system.
6. Module Design Complexity Metric (iv(G)). Module Design Complexity is the complexity of the design-reduced module, and reflects the complexity of the module's calling patterns to its immediate subordinate modules. This metric differentiates between modules that seriously complicate the design of any program they are part of, and modules that simply contain complex computational logic. It is the basis upon which program design and integration complexities (S0 and S1) are calculated.
7. Global Data Complexity Metric (gdv(G)). Global Data Complexity quantifies the cyclomatic complexity of a module's structure as it relates to global/parameter data. It can be no less than one and no more than the cyclomatic complexity of the original flowgraph.
8. Object Integration Complexity Metric (OS1). Object Integration Complexity quantifies the number of tests necessary to fully integrate an object or class into an OO system.
9. Actual Complexity Metric (AC). Actual Complexity is the number of independent paths traversed during testing.
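To make the cyclomatic complexity definition (item 1 above) concrete, a minimal Python sketch using the standard formula v(G) = E - N + 2P; the example edge and node counts are hypothetical. The data-related metrics continue below.

    # Cyclomatic complexity sketch: v(G) = E - N + 2P for a control-flow
    # graph with E edges, N nodes and P connected components
    # (P = 1 for a single module).
    def cyclomatic_complexity(edges, nodes, components=1):
        return edges - nodes + 2 * components

    # A hypothetical module whose flowgraph has 10 edges and 8 nodes:
    # v(G) = 10 - 8 + 2 = 4, i.e. at least four linearly independent
    # paths should be tested.
    print(cyclomatic_complexity(edges=10, nodes=8))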
McCabe Data-Related Software Metrics:

1. Data Complexity Metric (DV). Data Complexity quantifies the complexity of a module's structure as it relates to data-related variables. It is the number of independent paths through data logic, and therefore, a measure of the testing effort with respect to data-related variables.
2. Data Reference Metric (DR). Data Reference measures references to data-related variables independently of control flow. It is the total number of times that data-related variables are used in a module.
3. Data Reference Severity Metric (DR_severity). Data Reference Severity measures the level of data intensity within a module; a module is data intense if it contains a large number of data-related variables. It is an indicator of high levels of data-related code.
4. Data Complexity Severity Metric (DV_severity). Data Complexity Severity measures the level of data density within a module; a module is data dense if it contains data-related variables in a large proportion of its structures. It is an indicator of high levels of data logic in test paths.
5. Tested Data Complexity Metric (TDV). Tested Data Complexity quantifies the complexity of a module's structure as it relates to data-related variables. It is the number of independent paths through data logic that have been tested.
6. Tested Data Reference Metric (TDR). Tested Data Reference is the total number of tested references to data-related variables.
7. Maintenance Severity Metric (maint_severity). Maintenance Severity measures how difficult it is to maintain a module.
8. Global Data Severity Metric (gdv_severity). Global Data Severity measures the potential impact of testing data-related basis paths across modules. It is based on global data test paths.
McCabe Object-Oriented Software Metrics - Encapsulation:
1. Percent Public Data (PCTPUB). PCTPUB is the percentage of public and protected data within a class.
2. Access to Public Data (PUBDATA). PUBDATA indicates the number of accesses to public and protected data.
McCabe Object-Oriented Software Metrics - Polymorphism:
1. Percent of Unoverloaded Calls (PCTCALL). PCTCALL is the number of non-overloaded calls in a system.
2. Number of Roots (ROOTCNT). ROOTCNT is the total number of class hierarchy roots within a program.
3. Fan-in (FANIN). FANIN is the number of classes from which a class is derived.
McCabe Object-Oriented Software Metrics - Quality:
1. Maximum v(G) (MAXV). MAXV is the maximum cyclomatic complexity value for any single method within a class.
2. Maximum ev(G) (MAXEV). MAXEV is the maximum essential complexity value for any single method within a class.
3. Hierarchy Quality (QUAL). QUAL counts the number of classes within a system that are dependent upon their descendants.
Other Object-Oriented Software Metrics:
1. Depth (DEPTH). Depth indicates at what level a class is located within its class hierarchy.
2. Lack of Cohesion of Methods (LOCM). LOCM is a measure of how the methods of a class interact with the data in a class.

3. Weighted Methods Per Class (WMC). WMC is a count of the methods implemented within a class.
4. Response For a Class (RFC). RFC is a count of the methods implemented within a class, plus the number of methods accessible to an object of this class type due to inheritance.
5. Number of Children (NOC). NOC is the number of classes that are derived directly from a specified class.
Halstead Software Metrics:
1. Program Length. Program length is the total number of operator occurrences plus the total number of operand occurrences.
2. Program Volume. Program volume is the minimum number of bits required for coding the program.
3. Program Level and Program Difficulty. Program level and program difficulty measure how easily a program is comprehended.
4. Intelligent Content. Intelligent content shows the complexity of a given algorithm, independent of the language used to express the algorithm.
5. Programming Effort. Programming effort is the estimated mental effort required to develop a program.
6. Error Estimate. Error estimate calculates the number of errors in a program.
7. Programming Time. Programming time is the estimated amount of time needed to implement an algorithm.
Line Count Software Metrics:
1. Lines of Code
2. Lines of Comment
3. Lines of Mixed Code and Comments
4. Lines Left Blank

Q113. How do test plan templates look like?
A: The test plan document template helps to generate test plan documents that describe the objectives, scope, approach and focus of a software testing effort. A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product, and the completed document will help people outside the test group understand the why and how of product validation. Test document templates are often in the form of documents that are divided into sections and subsections. One example of this template is a 4-section document, where section 1 is the description of the "Test Objective", section 2 is the description of the "Scope of Testing", section 3 is the description of the "Test Approach", and section 4 is the "Focus of the Testing Effort". All documents should be written to a certain standard and template; standards and templates maintain document uniformity, help in learning where information is located, and ensure information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them, and he will also recommend improvements and/or additions.

Q114. What is a "bug life cycle"?
A: Bug life cycles are similar to software development life cycles. At any time during the

software development life cycle, errors can be made: during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, retesting and phase-out. The bug life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed, and the bug is no longer in existence.

Q115. When do you choose automated testing?
A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts. Another problem with such tools is the interpretation of the results (screens, data, logs, etc.), which can also be a time-consuming task.

Q116. What is the ratio of developers and testers?
A: This ratio is not a fixed one, but depends on what phase of the software development life cycle the project is in. When a product is first conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the product is near the end of the software development life cycle, this ratio tends to be 1:1, or even 1:2, in favor of testers.

Q117. What is your role in your current organization?
A: I'm a Software QA Engineer. I use the system much like real users would: I find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

Q118. Should I take a course in manual testing?
A: Learning how to perform manual testing is an important part of one's education. Education is sometimes provided on the job, by an employer, but I see no reason why one should skip an important part of an academic program.

Q119. How can I learn to use WinRunner, and should I sign up for a course at a nearby educational institution?
A: I suggest you read all you can, and that includes reading product description pamphlets, manuals, books, information on the Internet, and whatever else you can lay your hands on. Then the next step is getting some hands-on experience on how to use WinRunner. The cheapest, or free, education is sometimes provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many other software testing tools. If there is a will, there is a way! You CAN learn to use WinRunner, if you put your mind to it!
Q120. What should be done after a bug is found?
A: When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements for regression testing, to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it, and fix it.
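A minimal Python sketch of the complete information a problem-tracking entry should capture, per the answer above; all field values are hypothetical:

    # Bug report sketch: what a tracking tool should capture so developers
    # can understand, reproduce and fix the bug.
    bug_report = {
        "id": "DEF-017",                      # hypothetical identifier
        "summary": "Crash when saving an empty document",
        "steps_to_reproduce": [
            "Start the application",
            "Create a new document",
            "Press Save without typing anything",
        ],
        "expected_result": "Empty file is saved",
        "actual_result": "Application exits with an error",
        "severity": "high",                   # per the customer's risk assessment
        "status": "assigned",                 # new -> assigned -> fixed -> retested
    }

    print(bug_report["id"], "-", bug_report["summary"])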

Q121. How can I become a good tester with little or no cost to me?
A: The cheapest, or free, education is sometimes provided on the job, by an employer, while one is getting paid to do a job that requires the use of WinRunner and many other software testing tools. In lieu of a job, it is often a good idea to sign up for courses at nearby educational institutions; classroom education, especially non-degree courses in local community colleges, tends to be cheap.

Q122. What software tools are in demand these days?
A: The software tools currently in demand include LabView, LoadRunner, WinRunner, the Rational Tools, DOORS, CVS and PVCS, and there are many others. Which tools matter most depends on the end client, and their needs and preferences.

Q123. Which of these tools should I learn?
A: I suggest you learn the most popular software tools (i.e. LabView, LoadRunner, WinRunner, the Rational Tools, DOORS, CVS, etc.), and you want to pay special attention to LoadRunner and the Rational Toolset.

Q124. What is software configuration management?
A: Software Configuration Management (SCM) is the control of, and the recording of, changes that are made to the software and documentation throughout the software development life cycle (SDLC). SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries and patches; changes made to them; and who makes the changes. Rob Davis has experience with a full range of CM tools and concepts, and can easily adapt to an organization's software tool and process needs.

Q125. What are some of the software configuration management tools?
A: Software configuration management tools include Rational ClearCase, CVS, SCCS, PVCS, and many others. Rational ClearCase is a popular software tool, made by Rational Software, for revision control of source code. CVS, or "Concurrent Versions System", is a popular, open source version control system, based on "diff", used to keep track of changes in documents associated with software projects; CVS enables several, often distant, developers to work together on the same source code. SCCS is an original UNIX program, a competitor of CVS. Diff is a UNIX command that compares the contents of two files. PVCS is a document version control tool. DOORS, or "Dynamic Object Oriented Requirements System", is a requirements version control software tool.

Q126. What other roles are in testing?
A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System Administrators, Database Administrators, Technical Analysts, Test Build Managers and Test Configuration Managers. Depending on the project, one person can, and often does, wear more than one hat. For instance, we Test Engineers often wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager as well.

Q127. Which of these roles are the best and most popular?
A: As a yardstick of popularity, if we count the number of applicants and resumes, tester roles tend to be the most popular. Less popular are the roles of System Administrators, Test/QA Team Leads, and Test/QA Managers. The "best" job is the job that makes YOU happy; the best job is the one that works for YOU, using the skills, tools, and talents YOU have. To find the best job, you need to experiment and "play" different roles. Persistence, combined with experimentation, will lead to success.

Q128. What is documentation change management?
A: Documentation change management is part of configuration management (CM). CM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries and patches; changes made to them; and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

Q129. What is up time?
A: Up time is the time period when a system is operational and in service. Up time is the sum of busy time and idle time.

Q130. What is upwardly compatible software?
A: Upwardly compatible software is compatible with a later or more complex version of itself. For example, an upwardly compatible software is able to handle files created by a later version of itself, and not vice versa.

Q131. What's the difference between efficient and effective?
A: "Efficient" means having a high ratio of output to input; working or producing with a minimum of waste. For example, "an efficient engine saves gas". "Effective", on the other hand, means producing, or capable of producing, an intended result, or having a striking effect. For example, "for rapid long-distance transportation, the jet engine is more effective than a witch's broomstick".

Q132. What's the difference between priority and severity?
A: "Priority" is associated with scheduling, and "severity" is associated with standards. "Priority" means something is afforded or deserves prior attention: a precedence established by order of importance (or urgency). "Severity" is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behavior. The words priority and severity do come up in bug tracking. A variety of commercial problem-tracking/management software tools are available; these tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its 'severity', reproduce it, and fix it. The fixes are based on project 'priorities' and the 'severity' of bugs. The 'severity' of a problem is defined in accordance to the customer's risk assessment and recorded in their selected tracking tool. A buggy software can 'severely' affect schedules, which, in turn, can lead to a reassessment and renegotiation of 'priorities'.

Q133. What is the difference between verification and validation?
A: Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications; validation, on the other hand, evaluates the product itself.
changes made to them and who makes the changes. Validation. which. on the other hand. on the other hand. designs. an intended result.
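The priority/severity distinction of Q128 is easy to see in a bug record. The following minimal sketch in Python is illustrative only; the field names and the 1-to-5 scales are hypothetical, not taken from any particular tracking tool:

    from dataclasses import dataclass

    @dataclass
    class BugReport:
        summary: str
        severity: int  # 1 = cosmetic ... 5 = crash/data loss; from the customer's risk assessment
        priority: int  # 1 = fix whenever ... 5 = fix before the next build; from project scheduling

    # A severe bug is not always urgent, and an urgent bug is not always severe:
    typo = BugReport("Company name misspelled on splash screen",
                     severity=1,   # cosmetic flaw
                     priority=5)   # but embarrassing, so it gets fixed first
    crash = BugReport("Crash when exporting an empty report",
                      severity=5,  # serious failure
                      priority=2)  # rare path, so it can wait a build

    for bug in (typo, crash):
        print(f"{bug.summary}: severity={bug.severity}, priority={bug.priority}")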

Q134. What is upward compression?
A: In software design, upward compression means a form of demodularization, in which a subordinate module is copied into the body of a superior module.

Q135. What is user interface?
A: User interface is the interface between a human user and a computer system. It enables the passage of information between a human user and the hardware or software components of a computer system.

Q136. What is user documentation?
A: User documentation is a document that describes the way a software product or system should be used to obtain the desired results. Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.

Q137. What is a user friendly document?
A: A document is user friendly when it is designed with ease of use as one of the primary objectives of its design.

Q138. What is user friendly software?
A: A computer program is user friendly when it is designed with ease of use as one of the primary objectives of its design.

Q139. What is a user guide?
A: A user guide is the same as a user manual. It is a document that presents information necessary to employ a system or component to obtain the desired results.

Q140. What is a user manual?
A: A user manual is a document that presents information necessary to employ software or a system to obtain the desired results. Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.

Q141. What is the difference between user documentation and a user manual?
A: When a distinction is made between those who operate a computer system and those who use it for its intended purpose, separate user documentation and a separate user manual are created. Operators get user documentation, and users get user manuals.

Q142. What is usability?
A: Usability means ease of use; the ease with which a user can learn to operate, prepare inputs for, and interpret the outputs of a software product.

Q143. What is a utility?
A: A utility is a software tool designed to perform some frequently used support function; for example, a program to print files.

Q144. What is utilization?
A: Utilization is the ratio of the time a system is busy divided by the time it is available. Utilization is a useful measure in evaluating computer performance; a short worked example follows.
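As a quick worked example of the utilization formula in Q144, here is a minimal sketch in Python; the figures are invented for illustration:

    def utilization(busy_time: float, idle_time: float) -> float:
        """Ratio of busy time to total available time (busy + idle)."""
        available = busy_time + idle_time
        return busy_time / available

    # A system that was busy 18 hours out of a 24-hour day:
    print(f"{utilization(busy_time=18.0, idle_time=6.0):.0%}")  # prints 75%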

Q145. What is value trace?
A: Value trace is the same as variable trace. It is a record of the names and values of variables accessed and changed during the execution of a computer program.

Q146. What is a variable?
A: Variables are data items whose values can change. For example: "capacitor_voltage". There are local and global variables, and constants.

Q147. What is variable trace?
A: Variable trace is a record of the names and values of variables accessed and changed during the execution of a computer program; a short illustrative sketch appears after Q156.

Q148. What is a variant?
A: Variants are versions of a program. Variants result from the application of software diversity.

Q149. What is VDD?
A: VDD is an acronym. It stands for "version description document".

Q150. What is verification and validation (V&V)?
A: Verification and validation (V&V) is a process that helps to determine if the software requirements are complete and correct, if the software of each development phase fulfills the requirements and conditions imposed by the previous phase, and if the final software complies with the applicable software requirements.

Q151. What is a version description document (VDD)?
A: A version description document (VDD) is a document that accompanies and identifies a given version of a software product. Typically the VDD includes a description and identification of the software, identification of the changes incorporated into this version, and installation and operating information unique to this version of the software.

Q152. What is a software version?
A: A software version is an initial release (or re-release) of a software product, associated with a complete compilation (or recompilation) of the software.

Q153. What is a document version?
A: A document version is an initial release (or a complete re-release) of a document, as opposed to a revision resulting from issuing change pages to a previous release.

Q154. What is V&V?
A: V&V is an acronym for verification and validation.

Q155. What is a vertical microinstruction?
A: A vertical microinstruction is a microinstruction that specifies one of a sequence of operations needed to carry out a machine language instruction. Vertical microinstructions are short, 12 to 24 bit instructions; they are called vertical because they are normally listed vertically on a page. Several such microinstructions are required to carry out a single machine language instruction. Besides vertical microinstructions, there are horizontal and diagonal microinstructions as well.

Q156. What is a virtual address?
A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations. They allow those locations to be accessed as though they were part of main storage.
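A variable trace (Q147) is easy to improvise in Python. The sketch below uses the standard sys.settrace hook; the traced function and its variable names are hypothetical:

    import sys

    def trace_variables(frame, event, arg):
        # On every line executed in the traced code, record the names
        # and current values of the local variables in that frame.
        if event == "line":
            print(f"line {frame.f_lineno}: {dict(frame.f_locals)}")
        return trace_variables

    def charge():
        capacitor_voltage = 0.0           # the variable we want to watch
        for step in range(3):
            capacitor_voltage += 1.5
        return capacitor_voltage

    sys.settrace(trace_variables)         # turn tracing on
    charge()
    sys.settrace(None)                    # turn tracing off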

Q157. What is virtual memory?
A: Virtual memory relates to virtual storage. In virtual storage, portions of a user's program and data are placed in auxiliary storage, and the operating system automatically swaps them in and out of main storage as needed.

Q158. What is virtual storage?
A: Virtual storage is a storage allocation technique in which auxiliary storage can be addressed as though it were part of main storage. Portions of a user's program and data are placed in auxiliary storage, and the operating system automatically swaps them in and out of main storage as needed.

Q159. What is a waiver?
A: Waivers are authorizations to accept software that has been submitted for inspection and found to depart from specified requirements, but is nevertheless considered suitable for use "as is", or after rework by an approved method.

Q160. What is the waterfall model?
A: Waterfall is a model of the software development process in which the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase are performed in that order, probably with overlap, but with little or no iteration.

Q161. What models are used in software development?
A: In the software development process the following models are used: the waterfall model, the incremental development model, the rapid prototyping model, and the spiral model.

Q162. What are the phases of the software development process?
A: The software development process consists of the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase.

Q163. What is SDLC?
A: SDLC is an acronym. It stands for "software development life cycle".

Q164. What is the difference between system testing and integration testing?
A: System testing is high level testing, and integration testing is lower level testing. Integration testing is completed first, not system testing; in other words, upon completion of integration testing, system testing is started, and not vice versa. For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.

Q165. What are the parameters of performance testing?
A: The term "performance testing" is often used synonymously with stress testing, load testing, reliability testing, and volume testing.

Q166. Can you give me more information on software QA/testing, from a tester's point of view?
A: Yes, I can. You can visit my web site, and on pages www.robdavispe.com/free and www.robdavispe.com/free2 you can find answers to many questions on software QA, documentation, and software testing, from a tester's point of view. As to questions and answers that are not on my web site now, please be patient, as I am going to add more answers as soon as time permits.

Q167. What types of testing can you tell me about?
A: Each of the following represents a different type of testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing. Performance testing is a part of system testing, but it is also a distinct level of testing; it verifies loads, volumes, and response times, as defined by requirements. Related review techniques include PDRs, inspections, and walk-throughs.

Q168. What is the objective of regression testing?
A: The objective of regression testing is to test that fixes have not created any other problems elsewhere; in other words, the objective is to ensure the software has remained intact. A baseline set of data and scripts is maintained and executed, to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to the results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

Q169. What is disaster recovery testing?
A: Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

Q170. How do you check the security of your application?
A: To check the security of an application, we can use security/penetration testing. Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.

Q171. How do you test the password field?
A: To test the password field, we do boundary value testing; a short sketch follows Q174.

Q172. When testing the password field, what is your focus?
A: When testing the password field, one needs to verify that passwords are encrypted.

Q173. How do you conduct peer reviews?
A: The peer review, sometimes called PDR, is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including a test lead, a task lead (the author of whatever is being reviewed), and a facilitator (to make notes). The subject of the PDR is typically a code block, release, feature, or document, e.g. a requirements document or test plan. The purpose of the PDR is to find problems and see what is missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the documents before the meeting starts; most problems are found during this preparation. The result of the meeting should be documented in a written report. Preparation for PDRs is difficult, but PDRs are one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

Q174. What stage of bug fixing is the most cost-effective?
A: Bug prevention. Bug prevention is more cost-effective than bug detection.
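Boundary value testing of a password field (Q171) is easy to sketch. In the following minimal Python illustration, the validate_password function and its 8-to-16 character rule are hypothetical, standing in for whatever the requirements actually specify:

    def validate_password(password: str) -> bool:
        # Hypothetical rule: a valid password is 8 to 16 characters long.
        return 8 <= len(password) <= 16

    # Boundary value testing: probe just below, at, and just above each boundary.
    cases = {
        "a" * 7:  False,  # one below the minimum -> reject
        "a" * 8:  True,   # at the minimum        -> accept
        "a" * 16: True,   # at the maximum        -> accept
        "a" * 17: False,  # one above the maximum -> reject
    }

    for password, expected in cases.items():
        assert validate_password(password) == expected, f"failed at length {len(password)}"
    print("All boundary cases passed")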

Q175. Can you share with me your knowledge of software testing?
A: Surely I can. For my knowledge on software testing, visit my web site, and on pages www.robdavispe.com/free and www.robdavispe.com/free2 you will find answers to most questions on software testing. As to knowledge that is not on my web site at the moment, please be patient, as I am going to add more answers as soon as time permits.

Q176. How can I learn software testing?
A: I suggest you visit my web site, www.robdavispe.com/free and www.robdavispe.com/free2. I also suggest you get a job in software testing. Why? Because you can get additional, usually free, education on the job, while you are getting paid to do software testing. On the job you can use many software tools, including WinRunner, LoadRunner, LabView, and the Rational Toolset; the selection of tools will depend on the end client, their needs, and their preferences. I also suggest you sign up for courses at nearby educational institutes. Classroom education, especially non-degree courses in local community colleges, tends to be highly cost-effective.

Q177. Is the regression testing performed manually?
A: It depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.

Q178. What types of white box testing can you tell me about?
A: White box testing is a testing approach that examines the application's program structure and derives test cases from the application's program logic. Clear box testing is a white box type of testing. Glass box testing is also a white box type of testing. Open box testing is also a white box type of testing.

Q179. What types of black box testing can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality. Functional testing is a black box type of testing geared to the functional requirements of an application. System testing is also a black box type of testing. Acceptance testing is also a black box type of testing. Closed box testing is also a black box type of testing. Integration testing is also a black box type of testing.

Q180. Please give me others' FAQs on testing.
A: Visit my web site, and on pages www.robdavispe.com/free and www.robdavispe.com/free2 you can find answers to the vast majority of other testers' FAQs on testing. As to questions and answers that are not on my web site now, please be patient, as I am going to add more FAQs as soon as time permits.

Q181. What is your view of software QA/testing?
A: Software QA/testing is easy, if requirements are solid, clear, complete, detailed, cohesive, attainable, and testable, and if there is good communication. Software QA/testing is easy, if project schedules are realistic, and if adequate time is allowed for planning, design, testing, bug fixing, re-testing, changes, and documentation. Software QA/testing is easy, if testing is started early on, if fixes or changes are re-tested, and if sufficient time is planned for both testing and bug fixing. Software QA/testing is a piece of cake, if new features are avoided, and if one is able to stick to the initial requirements as much as possible.

Q182. What do we use for comparison?
A: Generally speaking, we compare the difference between two text files. Diff is a UNIX utility that compares the difference between two text files. Alternatively, when we write a software program to compare files, we compare two files bit by bit; a short sketch follows Q189.

Q183. How do you compare two files?
A: Use PVCS, SCCS, CVS, or "diff", a UNIX utility. Examples of such tools are Rational ClearCase, PVCS, SCCS, CVS, and DOORS. SCCS is an original UNIX program, based on "diff". CVS, a competitor of SCCS, enables several, often distant, developers to work together on the same source code. PVCS is a document version control tool.

Q184. How can I improve my career in software QA/testing?
A: Invest in your skills! Learn all you can! Visit my web site, and on www.robdavispe.com/free and www.robdavispe.com/free2 you will find answers to the vast majority of questions on testing. Get an education! Sign up for courses at nearby educational institutes. Take classes! Classroom education, especially non-degree courses in local community colleges, tends to be inexpensive. Get additional education on the job, while you are paid to do the job of a tester; free education is often provided by employers. On the job, often you can use many software tools, including WinRunner, LoadRunner, LabView, and the Rational Toolset. Find an employer whose needs and preferences are similar to yours. Improve your attitude! Become the best software QA/tester, and always strive to exceed the expectations of your customers!

Q185. How can I be a good tester?
A: We, good testers, take the customers' point of view. We are tactful and diplomatic. We have a "test to break" attitude, an attention to detail, a strong desire for quality, and good communication skills, both oral and written. Previous software development experience is also helpful, as it provides a deeper understanding of the software development process.

Q186. When is a process repeatable?
A: If we use detailed and well-written processes and procedures, we ensure the correct steps are being executed. This facilitates a successful completion of a task. This is also a way we ensure a process is repeatable.

Q187. What does a Test Strategy Document contain?
A: The test strategy document is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy, and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria, and risk assessment. Additional sections in the test strategy document include: a description of the required hardware and software components, including test tools (this information comes from the test environment, including test tool data); a description of the roles and responsibilities of the resources required for the test, and schedule constraints (this information comes from man-hours and schedules); the testing methodology (this is based on known standards); functional and technical requirements of the application (this information comes from requirements, change requests, and technical and functional design documents); and requirements that the system cannot provide, e.g. system limitations.

Q188. What is the difference between a software bug and a software defect?
A: A "software bug" is a *nonspecific* term that means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g. "software defect" and "software failure", are *more specific*. While the term bug has been a part of engineering jargon for many, many decades, there are many who believe the term "bug" was named after insects that used to cause malfunctions in electromechanical computers.

Q189. What is the reason we compare files?
A: Configuration management, revision control, requirement version control, or document version control. When we use "diff", we compare the difference between two text files.
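A program that compares two files bit by bit (Q182) can be written in a few lines. Here is a minimal sketch in Python using the standard filecmp module; the file names are hypothetical:

    import filecmp

    # shallow=False forces a byte-by-byte content comparison rather than
    # trusting the files' size and timestamp metadata.
    identical = filecmp.cmp("build_105/report.txt", "build_106/report.txt",
                            shallow=False)
    print("Files match" if identical else "Files differ")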

Q190. What is test methodology?
A: One test methodology is a three-step process: one, creating a test strategy; two, creating a test plan/design; and three, executing tests. This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his customers' applications.

Q191. How can I start my career in automated testing?
A: For one, get hands-on experience on how to use automated testing tools. Two, I suggest you read all you can, and that includes reading product description pamphlets, manuals, books, information on the Internet, and whatever other information you can lay your hands on. If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to use WinRunner and many other automated testing tools, with little or no outside help. Click on a link!
