What IS a test engineer? We, test engineers, are engineers who specialize in testing.

We, test engineers, create test cases, procedures and scripts, and generate test data. We execute test procedures and scripts, analyse standards of measurement, and evaluate the results of system, integration and regression testing.

What is the role of test engineers? We, test engineers, speed up the work of your development staff and reduce the risk of your company's legal liability. We give your company the evidence that the software is correct and operates properly. We also improve your problem tracking and reporting. We maximize the value of your software, and the value of the devices that use it. We also assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool, and before your employees get bogged down. We help the work of your software development staff, so your development team can devote its time to building up your product. We also promote continual improvement. We provide documentation required by the FDA, the FAA, other regulatory agencies, and your customers. We save your company money by discovering defects EARLY in the design process, before failures occur in production or in the field. We save your company's reputation by discovering bugs and design flaws before they can damage it.

What is a QA engineer? We, QA engineers, are test engineers, but we do more than just testing. Good QA engineers understand the entire software development process and how it fits into the business approach and the goals of the organization. Communication skills and the ability to understand various sides of issues are important. We, QA engineers, are successful if people listen to us, if people use our tests, if people think that we're useful, and if we're happy doing our work. I would love to see QA departments staffed with experienced software developers who coach development teams to write better code, but I've never seen it. Instead of coaching, we, QA engineers, tend to be process people.

What is the role of a QA engineer? The QA engineer's role is as follows: we, QA engineers, use the system much like real users would, find all the bugs, find ways to replicate the bugs, submit bug reports to the developers, and provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.

What are the responsibilities of a QA engineer? Let's say an engineer is hired for a small software company's QA role, and there is no QA team. Should he take responsibility for setting up a QA infrastructure and process, and for the testing and quality of the entire product? No, because taking this responsibility is a classic trap that QA people get caught in. Why? Because we, QA engineers, cannot assure quality, and QA departments cannot create quality. What we CAN do is detect lack of quality, and prevent low-quality products from going out the door. What is the solution? We need to drop the QA label, and tell the developers that they are responsible for the quality of their own work. The problem is, sometimes, as soon as the developers learn that there is a test department, they will slack off on their testing. We need to offer to help with quality assessment only.

What is software failure? Software failure occurs when the software does not do what the user expects to see.

What is the difference between software fault and software failure? Software failure occurs when the software does not do what the user expects to see.
A software fault, on the other hand, is a hidden programming error. A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This can occur during normal usage, when the software is ported to a different hardware platform, when the software is ported to a different compiler, or when the software gets extended.

What metrics are used for bug tracking? Metrics that can be used for bug tracking include the following: the total number of bugs, the total number of bugs that have been fixed, the number of new bugs per week, and the number of fixes per week. Metrics for bug tracking can be used to determine when to stop testing, for example, when the bug rate falls below a certain level.
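As a minimal illustration (a Python sketch with made-up bug records and an arbitrary threshold), these metrics can be computed from a simple list of bugs and used as a stop-testing check:

    from collections import Counter

    # Hypothetical bug records: (week_found, week_fixed or None if still open).
    bugs = [(1, 1), (1, 2), (2, 3), (2, None), (3, 3), (3, None), (4, None)]

    total_bugs = len(bugs)
    total_fixed = sum(1 for _, fixed in bugs if fixed is not None)
    new_per_week = Counter(found for found, _ in bugs)
    fixes_per_week = Counter(fixed for _, fixed in bugs if fixed is not None)

    # Example stop-testing rule: stop when the bug rate falls below a chosen level.
    STOP_THRESHOLD = 2  # new bugs per week; an arbitrary, project-specific value
    latest_week = max(new_per_week)
    print("Total bugs:", total_bugs, "Total fixed:", total_fixed)
    print("New bugs per week:", dict(new_per_week), "Fixes per week:", dict(fixes_per_week))
    if new_per_week[latest_week] < STOP_THRESHOLD:
        print("Bug rate is below the threshold; consider stopping testing.")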

What metrics can be used in software development? Metrics refer to statistical process control. The idea of statistical process control is a great one, but it has only limited use in software development. On the negative side, statistical process control works only with processes that are sufficiently well defined and unvaried, so that they can be analysed in terms of statistics. Still on the negative side, the problem is, most software development projects are NOT sufficiently well defined and NOT sufficiently unvaried. On the positive side, however, one CAN use statistics. Statistics are excellent tools that project managers can use. Statistics can be used, for example, to determine when to stop testing, i.e. test cases completed with a certain percentage passed, or when the bug rate falls below a certain level. But, if these are project management tools, why should we label them quality assurance tools?

How do you perform integration testing? To perform integration testing, first, all unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.

What is integration testing? Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.

What is bottom-up testing? Bottom-up testing is a technique for integration testing. A test engineer creates and uses test drivers for components that have not yet been developed, because, with bottom-up testing, low-level components are tested first. The objective of bottom-up testing is to call low-level components first, for testing purposes.

What metrics are used for test report generation? Metrics refer to statistical process control. The idea of statistical process control is a great one, but it has only limited use in software development. On the negative side, statistical process control works only with processes that are sufficiently well defined AND unvaried, so that they can be analysed in terms of statistics. The problem is, most software development projects are NOT sufficiently well defined and NOT sufficiently unvaried. On the positive side, one CAN use statistics. Statistics are excellent tools that project managers can use. Statistics can be used, for example, to determine when to stop testing, i.e. test cases completed with a certain percentage passed, or when the bug rate falls below a certain level. But, if these are project management tools, why should we label them quality assurance tools? The McCabe metrics, such as the cyclomatic complexity metric (v(G)), are among the metrics used in quality assurance.
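As a brief illustration, cyclomatic complexity can be computed from a program's control-flow graph as v(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. The Python sketch below uses a made-up control-flow graph of a single function containing one if/else decision:

    # Hypothetical control-flow graph of one function (so P = 1).
    # Nodes are basic blocks; edges are possible transfers of control.
    edges = [
        ("entry", "if"), ("if", "then"), ("if", "else"),
        ("then", "exit"), ("else", "exit"),
    ]
    nodes = {n for edge in edges for n in edge}

    P = 1                    # number of connected components (one function)
    E = len(edges)           # number of edges
    N = len(nodes)           # number of nodes
    v_of_G = E - N + 2 * P   # McCabe cyclomatic complexity

    print(f"E={E}, N={N}, P={P}, v(G)={v_of_G}")  # prints v(G) = 2 for one if/else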
What is a bug life cycle? Bug life cycles are similar to software development life cycles. At any point in the software development life cycle, errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out. The bug life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and is no longer in existence.

What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be retested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial
problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

What is the ratio of developers to testers? This ratio is not a fixed one, but depends on what phase of the software development life cycle the project is in. When a product is first conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favour of developers. In sharp contrast, when the product is near the end of the software development life cycle, just before alpha testing begins, this ratio tends to be 1:1, or even 1:2, in favour of testers.

What is alpha testing? Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by end-users or others, not programmers, software engineers, or test engineers.

What is software life cycle? The software life cycle begins when a software product is first conceived and ends when it is no longer in use. It includes phases like initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, re-testing and phase-out.

What are some of the software configuration management tools? Software configuration management tools include Rational ClearCase, DOORS, PVCS, CVS; and there are many others. Rational ClearCase is a popular software tool, made by Rational Software, for revision control of source code. DOORS, or Dynamic Object-Oriented Requirements System, is a requirements version control software tool. CVS, or "Concurrent Versions System", is a popular, open-source version control system that keeps track of changes in documents associated with software projects. CVS enables several, often distant, developers to work together on the same source code. PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX command that compares the contents of two files.

What is software configuration management? Software configuration management (SCM) is the control, and the recording of, changes that are made to the software and documentation throughout the software development life cycle (SDLC). SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, and changes made to them, and to keep track of who makes the changes. Rob Davis has experience with a full range of CM tools and concepts, and can easily adapt to an organization's software tool and process needs.

What's the difference between priority and severity? "Priority" is associated with scheduling, and "severity" is associated with standards. "Priority" means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). "Severity" is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behaviour.
The words priority and severity do come up in bug tracking. A variety of commercial, problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its 'severity', reproduce it and fix it. The fixes are based on project 'priorities' and the 'severity' of bugs. The 'severity' of a problem is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool. Buggy software can 'severely' affect schedules, which, in turn, can lead to a reassessment and renegotiation of 'priorities'.

Why are there so many software bugs? Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly
documented code and/or bugs in tools used in software development.

What is the difference between efficient and effective? "Efficient" means having a high ratio of output to input; working or producing with a minimum of waste. For example, "An efficient engine saves gas." Or, "An efficient test engineer saves time." "Effective", on the other hand, means producing, or capable of producing, an intended result, or having a striking effect. For example, "For rapid long-distance transportation, the jet engine is more effective than a witch's broomstick." Or, "For developing software QA test procedures, engineers specializing in software QA are more effective than engineers who are generalists."

What is verification? Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs and inspection meetings.

What is validation? Validation ensures that functionality, as defined in requirements, is the intended behaviour of the product; validation typically involves actual testing and takes place after verifications are completed.

What is the difference between verification and validation? Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself. The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements documents. The output of validation, on the other hand, is a nearly perfect, actual product.

What is documentation change management? Documentation change management is part of configuration management (CM). CM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

What is supposed to be in a document? All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help readers learn where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.

What is the role of documentation in QA? Documentation plays a critical role in QA. QA practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.

What is up time? Up time is the time period when a system is operational and in service. Up time is the sum of busy time and idle time.

What is utilization? Utilization is the time a system is busy divided by the time it is available.
Utilization is a useful measure in evaluating computer performance.

What is compatibility testing? Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

What is upwardly compatible software? Upwardly compatible software is compatible with a later or more complex version of itself. For example, upwardly compatible software is able to handle files created by a later version of itself.

What testing approaches can you tell me about? Each of the following represents a different testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

What is upward compression? In software design, upward compression means a form of demodularization, in which a subordinate module is copied into the body of a superior module.

What is a version? 'Software version' is an initial release (or re-release) of a software product, associated with a complete compilation (or recompilation) of the software.

What is a version description document? A version description document (VDD) is a document that accompanies and identifies a given version of a software product. Typically, the VDD includes a description and identification of the software, identification of changes incorporated into this version, and installation and operating information unique to this version of the software.

What are variants? Variants are versions of a program. Variants result from the application of software diversity.

What is usability? Usability means ease of use; the ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a software product.

What is usability testing? Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

What is user friendly software? A computer program is user friendly when it is designed with ease of use as one of the primary objectives of its design.

What is a user friendly document? A document is user friendly when it is designed with ease of use as one of the primary objectives of its design.

What is a user manual? A user manual is a document that presents information necessary to employ software or a system to obtain the desired results. Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.

What is the difference between user documentation and user manual? When a distinction is made between those who operate a computer system and those who use it for its intended purpose, separate user documentation and a separate user manual are created. Operators get user documentation, and users get user manuals.

What is user documentation? User documentation is a document that describes the way a software product or system should be used to obtain the desired results.

What is a user guide? A user guide is the same as a user manual. It is a document that presents information necessary to employ a system or component to obtain the desired results. Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.

What is the waterfall model? Waterfall is a model of the software development process in which the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase are performed in that order, probably with overlap, but with little or no iteration.

What models are used in software development? In the software development process, the following models are used: the waterfall model, the incremental development model, the rapid prototyping model, and the spiral model.

What is load testing? Load testing simulates the expected usage of a software program by simulating multiple users who access the program's services concurrently. Load testing is most useful and most relevant for multi-user systems and client/server models, including web servers. For example, the load placed on the system is increased above normal usage patterns, in order to test the system's response at peak loads.

What is the difference between stress testing and load testing? Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that the expected results are errors, though there is a grey area between stress testing and load testing.
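As a minimal illustration (a Python sketch using only the standard library; the URL and the number of simulated users are hypothetical), a load test can spawn many concurrent requests and record response times:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8000/"   # hypothetical service under test
    CONCURRENT_USERS = 50            # raised above normal usage to probe peak load

    def one_request(_):
        start = time.time()
        with urlopen(URL, timeout=10) as response:
            response.read()
        return time.time() - start   # response time in seconds

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        times = list(pool.map(one_request, range(CONCURRENT_USERS)))

    print(f"requests: {len(times)}, slowest: {max(times):.3f}s, "
          f"average: {sum(times) / len(times):.3f}s")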
What are the phases of the software development life cycle? The software development life cycle consists of the concept phase, requirements phase, design phase,
implementation phase, test phase, installation phase, and checkout phase.

What is a software fault? Software faults are hidden programming errors. Software faults are errors in the correctness of the semantics of computer programs.

What is system testing? System testing is black box testing, performed by the test team, and at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by SWQA to ensure all problems have been resolved. For a higher level of testing, it is important to understand unresolved problems that originate at the unit and integration test levels.

What is the difference between system testing and integration testing? System testing is high-level testing, and integration testing is lower-level testing. Integration testing is completed first, not system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa. For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real-life scenarios in a simulated real-life test environment. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.

What are the parameters of performance testing? The term 'performance testing' is often used synonymously with stress testing, load testing, reliability testing, and volume testing. Performance testing is a part of system testing, but it is also a distinct level of testing. Performance testing verifies loads, volumes, and response times, as defined by requirements.

What is performance testing? Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.

What is the difference between performance testing and load testing? Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is a grey area between stress testing and load testing.

What is the difference between volume testing and load testing?
Load testing is a blanket term that is used in many different ways across the professional software testing community. The term, load testing, is often used synonymously with stress testing, performance testing, reliability testing, and volume testing. Load testing generally stops short of stress testing. During stress testing, the load is so great that errors are the expected results, though there is a grey area between stress testing and load testing.

What is disaster recovery testing? Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

What is end-to-end testing? Similar to system testing, end-to-end testing is the *macro* end of the test scale: testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communication, or interacting with other hardware, applications, or systems.

How do you conduct peer reviews? The peer review, sometimes called a PDR (peer design review), is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including the test lead, the task lead (the author of whatever is being reviewed), and a facilitator (to make notes). The subject of the PDR is typically a code block, release, feature, or document. The purpose of the PDR is to find problems and see what is missing, not to fix anything. The result of the meeting is documented in a written report. Attendees should prepare for PDRs by reading through documents before the meeting starts; most problems are found during this preparation. On the positive side, the PDR is a cost-effective method of ensuring quality, since bug prevention is more cost-effective than bug detection.

What is wrong with PDRs? PDRs are IRS-style audits, where (from a tester's point of view) you are the "defendant", and everyone else in that conference room is your "prosecutor", "jury", and "judge", whose accusations and judgments are only about you and your work.

How do you check the security of your application? To check the security of an application, we can use security/penetration testing. Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or wilful damage. This type of testing usually requires sophisticated testing techniques.

What is security/penetration testing? Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or wilful damage. This type of testing usually requires sophisticated testing techniques.

What testing approaches can you tell me about? Each of the following represents a different testing approach: black box testing, white box testing, unit testing, incremental testing, integration testing, functional testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance testing, load testing, performance testing, usability testing, install/uninstall testing, recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc testing, user acceptance testing, comparison testing, alpha testing, beta testing, and mutation testing.

What stage of bug fixing is the most cost effective? Bug prevention techniques (i.e. inspections, peer design reviews, and walk-throughs) are more cost-effective than bug detection.

What is the objective of regression testing? The objective of regression testing is to test that the fixes have not created any other problems elsewhere. In other words, the objective is to ensure the software has remained intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

What types of white box testing can you tell me about? White box testing is a testing approach that examines the application's program structure, and derives test cases from the application's program logic. Clear box testing is a white box type of testing. Glass box testing is also a white box type of testing. Open box testing is also a white box type of testing.

What is white box testing? White box testing is based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths and conditions.

What is clear box testing? Clear box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

What is glass box testing? Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.

What is open box testing? Open box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
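As a small illustration of the white box approach (a Python sketch with a made-up function), test cases are derived from the code's internal logic so that every branch is exercised:

    def classify(order_total):
        """Hypothetical function under test: apply a discount rule."""
        if order_total >= 100:          # branch taken for large orders
            return "discount"
        return "no discount"            # branch taken for small orders

    # White box test cases are chosen from the code's logic, not from a
    # requirements document: one input per branch, plus the boundary value.
    assert classify(150) == "discount"     # covers the True branch
    assert classify(20) == "no discount"   # covers the False branch
    assert classify(100) == "discount"     # boundary of the decision
    print("All branches of classify() are covered.")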

What black box testing types can you tell me about? Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box testing is based on requirements and functionality. Functional testing is a black box type of testing geared to the functional requirements of an application. System testing is also a black box type of testing. Acceptance testing is also a black box type of testing. Closed box testing is also a black box type of testing. Integration testing is also a black box type of testing.

What is functional testing? Functional testing is a black box type of testing geared to the functional requirements of an application.

What is closed box testing? Closed box testing is the same as black box testing. Black box testing is a type of testing that considers only externally visible behaviour.

What is the difference between a software bug and software defect? 'Software bug' is a *non-specific* term that means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behaviour of a computer program. Other terms, e.g. 'software defect' and 'software failure', are *more specific*. While the term bug has been a part of engineering jargon for many decades, there are many who believe the term 'bug' is a reference to insects that used to cause malfunctions in electromechanical computers.

Why are there so many software bugs? Generally speaking, there are bugs in software because of unclear requirements, software complexity, programming errors, changes in requirements, errors made in bug tracking, time pressure, poorly documented code and/or bugs in tools used in software development.

What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check that the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

How do you compare two files? Use PVCS, SCCS, or "diff". PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility that shows the differences between two text files.

What is the reason we compare files? We compare files for configuration management, revision control, requirements version control, or document version control. Examples are Rational ClearCase, DOORS, PVCS, and CVS. CVS, for example, enables several, often distant, developers to work together on the same source code.

What do we use for comparison? Generally speaking, when we write a software program to compare files, we compare the two files bit by bit. For example, when we use "diff", a UNIX utility, we compare two text files.
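As a small Python illustration of file comparison (the file names and contents are hypothetical), the difflib module produces a unified diff much like the UNIX diff command:

    import difflib

    # Hypothetical file contents; in practice these would be read from disk,
    # e.g. with open("expected.txt").readlines().
    expected = ["total = 100\n", "status = PASS\n"]
    actual   = ["total = 100\n", "status = FAIL\n"]

    diff = difflib.unified_diff(expected, actual,
                                fromfile="expected.txt", tofile="actual.txt")
    print("".join(diff))   # empty output would mean the two files match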
What are some of the software configuration management tools? Software configuration management tools include Rational ClearCase, DOORS, PVCS, CVS; and there are many others. PVCS is a document version control tool, a competitor of SCCS. SCCS is an original UNIX program, based on "diff". Diff is a UNIX utility that shows the differences between two text files.

What does a test strategy document contain? The test strategy document is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyses the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the
test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:
- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:
- An approved and signed-off test strategy document and test plan, including test cases.
- Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

What is monkey testing? Monkey testing is random testing performed by automated testing tools (after the latter are developed by humans). These automated testing tools are considered "monkeys" if they work at random. We call them "monkeys" because it is widely believed that if we allow six monkeys to pound on six typewriters at random for a million years, they will recreate all the works of Isaac Asimov. There are "smart monkeys" and "dumb monkeys". "Smart monkeys" are valuable for load and stress testing, and will find a significant number of bugs, but they're also very expensive to develop. "Dumb monkeys", on the other hand, are inexpensive to develop and are able to do some basic testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and crashes, i.e. the bugs you least want to have in your software product. Monkey testing can be valuable, but it should not be your only testing.

What is stochastic testing? Stochastic testing is the same as "monkey testing", but stochastic testing is a more technical-sounding name for the same testing process. Stochastic testing is black box testing, random testing, performed by automated testing tools. Stochastic testing is a series of random tests over time. The software under test typically passes the individual tests, but our goal is to see if it can pass a large series of the individual tests.

What is automated testing? Automated testing is a formally specified and controlled approach to testing that is carried out by automated testing tools.

What is mutation testing? In mutation testing, our goal is to make the mutant software fail, and thus demonstrate the adequacy of our test cases. Step one: we create a set of mutants. Each mutant differs from the original software by one mutation, i.e. a single syntax change made to one of its program statements; each mutant contains one single fault. Step two: we apply test cases to the original software and to each mutant. Step three: we evaluate the results. Our test case is inadequate if both the original software and all the mutants generate the same output. Our test case is adequate if it detects faults in our software, or if at least one mutant generates a different output than the original software does for that test case.

What is the exit criterion? The exit criterion is a checklist, sometimes known as the "PDR sign-off sheet", i.e. a list of peer design review related tasks that have to be done by the facilitator or the attendees of the PDR, during the PDR or near its conclusion.
By having, and going through, a checklist, the facilitator can (a) verify that the attendees have inspected all the relevant documents and reports, (b) verify that all suggestions and recommendations for each issue have been recorded, and (c) verify that all relevant facts of the meeting have been recorded.

The facilitator's checklist includes the following questions:
- Have we inspected all the relevant documents, code blocks, or products?
- Have we completed all the required checklists?
- Have I recorded all the facts relevant to this peer review?
- Does anyone have any additional suggestions, recommendations, or comments?
- What is the outcome of this peer review?

At the end of the peer review, the facilitator asks the attendees of the peer review to make a decision as to
the outcome of the peer review. I.e., "What is our consensus?" Are we accepting the design (or document or code)? Or are we accepting it with minor modifications? Or are we accepting it after it is modified and approved through e-mails to the participants? Or do we want another peer review? This is a phase during which the attendees of the PDR work as a committee, and the committee's decision is final.

What is the entry criterion? The entry criterion is a checklist, or a combination of checklists, that includes the "developer's checklist", the "testing checklist", and the "PDR checklist". Checklists are lists of tasks that have to be done by developers, testers, or the facilitator at or before the start of the peer review. Using these checklists, before the start of the peer review, the developer, tester and facilitator can determine if all the documents, reports, code blocks or software products are ready to be reviewed, and if the peer review's attendees are prepared to inspect them. The facilitator can ask the peer review's attendees if they have been able to prepare for the peer review, and if they're not well prepared, the facilitator can send them back to their desks, and even ask the task lead to reschedule the peer review.

The facilitator's script for the entry criteria includes the following questions:
- Are all the required attendees present at the peer review?
- Have all the attendees received all the relevant documents and reports?
- Are all the attendees well prepared for this peer review?
- Have all the preceding life cycle activities been concluded?
- Are there any changes to the baseline?

What are the parameters of peer reviews? By definition, parameters are values on which something else depends. Peer reviews depend on the attendance and active participation of several key people: usually the facilitator, task lead, test lead, and at least one additional reviewer. The attendance of these four people is usually required for the approval of the PDR. Depending on your company's policy, other participants are often invited, but generally not required for approval. Peer reviews depend on the facilitator, sometimes known as the moderator, who controls the meeting, keeps the meeting on schedule, and records all suggestions from all attendees. Peer reviews greatly depend on the developer, also known as the designer, author, or task lead -- usually a software engineer -- who is most familiar with the project, and most likely able to answer any questions or address any concerns that may come up during the peer review. Peer reviews greatly depend on the tester, also known as the test lead or bench test person -- usually another software engineer -- who is also familiar with the project, and most likely able to answer any questions or address any concerns that may come up during the peer review. Peer reviews greatly depend on the participation of additional reviewers and additional attendees, who often make specific suggestions and recommendations, and ask the largest number of questions.

What types of review meetings can you tell me about? Of review meetings, peer design reviews are the most common. Peer design reviews are so common that they tend to replace both inspections and walk-throughs. Peer design reviews can be classified according to the 'subject' of the review. I.e., "Is this a document review, design review, or code review?" Peer design reviews can be classified according to the 'role' you play at the meeting.
I.e., "Are you the task lead, test lead, facilitator, moderator, or additional reviewer?" Peer design reviews can be classified according to the 'job title' of the attendees. I.e., "Is this a meeting of peers, managers, systems engineers, or system integration testers?" Peer design reviews can be classified according to what is being reviewed at the meeting. I.e., "Are we reviewing the work of a developer, tester, engineer, or technical document writer?"
Peer design reviews can be classified according to the 'objective' of the review. I.e., "Is this document for the file cabinets of our company, or those of the government (e.g. the FAA or FDA)?" PDRs of government documents tend to attract the attention of managers, and the meeting quickly becomes a meeting of managers.

How can I shift my focus and area of work from QC to QA? Number one, focus on your strengths, skills, and abilities! Realize that there are MANY similarities between Quality Control and Quality Assurance! Realize that you have MANY transferable skills! Number two, make a plan! Develop a belief that getting a job in QA is easy! HR professionals cannot tell the difference between quality control and quality assurance! HR professionals tend to respond to keywords (i.e. QC and QA), without knowing the exact meaning of those keywords! Number three, make it a reality! Invest your time! Get some hands-on experience! Do some QA work! Do any QA work, even if, for a few months, you get paid a little less than usual! Your goals, beliefs, enthusiasm, and action will make a huge difference in your life! Number four, I suggest you read all you can, and that includes reading product pamphlets, manuals, books, information on the Internet, and whatever information you can lay your hands on! If there is a will, there is a way! You CAN do it, if you put your mind to it! You CAN learn to do QA work, with little or no outside help!

What is the difference between build and release? Builds and releases are similar, because both are end products of software development processes, and both help developers and QA teams to deliver reliable software. A build means a version of the software, typically one that is still in testing. Usually a version number is given to a released product, but, sometimes, a build number is used instead. Difference number one: builds refer to software that is still in testing; releases refer to software that is usually no longer in testing. Difference number two: builds occur more frequently; releases occur less frequently. Difference number three: versions are based on builds, and not vice versa. Builds, or usually a series of builds, are generated first, as often as one build every morning, depending on the company, and then every release is based on a build, or several builds, i.e. the accumulated code of several builds.

What is acceptance testing? Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

What is CMM? CMM is an acronym that stands for Capability Maturity Model. The idea behind CMM is that, in future efforts to develop and test software, concepts and experiences do not always point us in the right direction; therefore we should develop processes, and then refine those processes. There are five CMM levels, of which Level 5 is the highest. CMM Level 1 is called "Initial". CMM Level 2 is called "Repeatable". CMM Level 3 is called "Defined". CMM Level 4 is called "Managed". CMM Level 5 is called "Optimized". There are not many Level 5 companies; most hardly need to be.
Within the United States, fewer than 8% of software companies are rated CMM Level 4 or higher. The U.S. government requires all companies with federal government contracts to maintain a minimum of a CMM Level 3 assessment. CMM assessments take two weeks. They're conducted by a nine-member team led by an SEI-certified lead assessor.

What are CMM levels and their definitions? There are five CMM levels, of which Level 5 is the highest.

CMM Level 1 is called "Initial". The software process is at CMM Level 1 if it is an ad hoc process. At CMM Level 1, few processes are defined, and success, in general, depends on individual effort and heroism.

CMM Level 2 is called "Repeatable". The software process is at CMM Level 2 if the subject company has some basic project management processes, in order to track cost, schedule, and functionality. Software processes are at CMM Level 2 if necessary processes are in place to repeat earlier successes on projects with similar applications, and if there are requirements management, project planning, project tracking, subcontract management, QA, and configuration management.

CMM Level 3 is called "Defined". The software process is at CMM Level 3 if the software process is documented, standardized, and integrated into a standard software process for the subject company, and if all projects use approved, tailored versions of the company's standard software process for developing and maintaining software. Software processes are at CMM Level 3 if there are process definition, training programs, process focus, integrated software management, software product engineering, intergroup coordination, and peer reviews.

CMM Level 4 is called "Managed". The software process is at CMM Level 4 if the subject company collects detailed data on the software process and product quality, and if both the software process and the software products are quantitatively understood and controlled. Software processes are at CMM Level 4 if there are software quality management (SQM) and quantitative process management.

CMM Level 5 is called "Optimized". The software process is at CMM Level 5 if there is continuous process improvement, and if there is quantitative feedback from the process and from piloting innovative ideas and technologies. Software processes are at CMM Level 5 if there are process change management, defect prevention, and technology change management.

What is the difference between bug and defect in software testing? In software testing, the difference between bug and defect is small, and depends on your company. For some companies, bug and defect are synonymous, while others believe bug is a subset of defect. Generally speaking, we, software test engineers, discover BOTH bugs and defects, before bugs and defects damage the reputation of our company. We, QA engineers, use the software much like real users would, to find BOTH bugs and defects, to find ways to replicate BOTH bugs and defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality. Therefore, we, software QA engineers, do not differentiate between bugs and defects. In our bug reports, we include BOTH bugs and defects, but, as I see it, there are only minor differences. Difference number one: in my bug reports I find the defects are easier to describe. Difference number two: in my bug reports I find it easier to write the descriptions of how to replicate the defects. Defects tend to require only a brief explanation.

What is grey box testing? Grey box testing is a software testing technique that uses a combination of black box testing and white box testing.
Grey box testing is not black box testing, because the tester does know some of the internal workings of the software under test. In grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the grey box testing, one takes a black box approach, applying inputs to the software under test and observing the outputs. Grey box testing is a powerful idea. The concept is simple: if one knows something about how the product works on the inside, one can test it better, even from the outside. Grey box testing is not to be confused with white box testing, i.e. a testing approach that attempts to cover the internals of the product in detail. Grey box testing is a test strategy based partly on internals. The testing approach is known as grey box testing when one has some knowledge, but not full knowledge, of the internals of the product one is testing. In grey box testing, just as in black box testing, you test from the outside of a product, just as you do with
black box testing, but you make better-informed testing choices because you know how the underlying software components operate and interact.

What is the difference between version and release? Both version and release indicate a particular point in the software development life cycle, or in the life cycle of a document. The two terms, version and release, are similar (i.e. mean pretty much the same thing), but there are minor differences between them. Version means a VARIATION of an earlier or original type; for example, "I've downloaded the latest version of the software from the Internet. The latest version number is 3.3." Release, on the other hand, is the ACT OR INSTANCE of issuing something for publication, use, or distribution; a release is also something thus released. For example, "A new release of a software program."

What is data integrity? Data integrity is one of the six fundamental components of information security. Data integrity is the completeness, soundness, and wholeness of the data that also complies with the intention of the creators of the data. In databases, important data -- including customer information, order databases, and pricing tables -- may be stored. In databases, data integrity is achieved by preventing accidental, deliberate, or unauthorized insertion, modification, or destruction of data.

How do you test data integrity? Data integrity testing should verify the completeness, soundness, and wholeness of the stored data, and testing should be performed on a regular basis, because important data can and will change over time. Data integrity tests include the following:
- Verify that you can create, modify, and delete any data in tables.
- Verify that sets of radio buttons represent fixed sets of values.
- Verify that a blank value can be retrieved from the database.
- Verify that, when a particular set of data is saved to the database, each value gets saved fully, and that truncation of strings and rounding of numeric values do not occur.
- Verify that the default values are saved in the database if the user input is not specified.
- Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.

What is data validity? Data validity is the correctness and reasonableness of data. Reasonableness of data means, for example, account numbers falling within a range, numeric data being all digits, dates having a valid month, day and year, and correct spelling of proper names. Data validity errors are probably the most common, and the most difficult to detect, data-related errors.

What causes data validity errors? Data validity errors are usually caused by incorrect data entries, when a large volume of data is entered in a short period of time. For example, 12/25/2005 is entered as 13/25/2005 by mistake. This date is therefore invalid.

How can you reduce data validity errors? Use simple field validation rules. Technique 1: If the date field in a database uses the MM/DD/YYYY format, then use a program with the following two data validation rules: MM should not exceed 12, and DD should not exceed 31. Technique 2: If the original figures do not seem to match the ones in the database, then use a program to validate data fields. Compare the sum of the numbers in the database data field to the original sum of numbers from the source. If there is a difference between the figures, it is an indication of an error in at least one data element.
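As a minimal Python sketch of both techniques (the field format, figures, and data are hypothetical):

    def valid_date(mm_dd_yyyy):
        """Technique 1: simple field validation rules for an MM/DD/YYYY date."""
        mm, dd, yyyy = (int(part) for part in mm_dd_yyyy.split("/"))
        return 1 <= mm <= 12 and 1 <= dd <= 31 and yyyy > 0

    assert valid_date("12/25/2005") is True
    assert valid_date("13/25/2005") is False   # month 13 fails the MM rule

    # Technique 2: compare a control total from the source against the database.
    source_figures = [100, 250, 75]
    database_figures = [100, 205, 75]          # one mistyped entry (250 -> 205)
    if sum(source_figures) != sum(database_figures):
        print("Sums differ: at least one data element is in error.")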

What is the difference between data validity and data integrity? Difference number one: Data validity is about the correctness and reasonableness of data, while data integrity is about the completeness, soundness, and wholeness of the data, which also complies with the intention of the creators of the data. Difference number two: Data validity errors are more common, while data integrity errors are less common. Difference number three: Errors in data validity are caused by HUMAN BEINGS -- usually data entry personnel -- who enter, for example, 13/25/2005 by mistake, while errors in data integrity are caused by BUGS in computer programs that, for example, cause the overwriting of some of the data in the database when somebody attempts to retrieve a blank value from the database.

What is structural testing? Structural testing is also known as clear box testing or glass box testing. Structural testing is a way to test software with knowledge of the internal workings of the code being tested. Structural testing is white box testing, not black box testing, since black boxes are considered opaque and do not permit visibility into the code.

What is the difference between static and dynamic testing? The differences between static and dynamic testing are as follows: Difference number 1: Static testing is about prevention, dynamic testing is about cure. Difference number 2: Static testing tools offer greater marginal benefits. Difference number 3: Static testing is many times more cost-effective than dynamic testing. Difference number 4: In the number of defects found, static testing beats dynamic testing by a wide margin. Difference number 5: Static testing is, therefore, the more effective of the two. Difference number 6: Static testing gives you comprehensive diagnostics for your code. Difference number 7: Static testing achieves 100% statement coverage in a relatively short time, while dynamic testing often achieves less than 50% statement coverage, because dynamic testing finds bugs only in parts of the code that are actually executed. Difference number 8: Dynamic testing usually takes longer than static testing, because dynamic testing may involve running several test cases, each of which may take longer than compilation. Difference number 9: Dynamic testing finds fewer bugs than static testing. Difference number 10: Static testing can be done before compilation, while dynamic testing can take place only after compilation and linking. Difference number 11: Static testing can find all of the following that dynamic testing cannot: syntax errors, code that is hard to maintain, code that is hard to test, code that does not conform to coding standards, and ANSI violations.

What is the definition of top down design? Top down design progresses from simple design to detailed design. Top down design solves problems by breaking them down into smaller, easier-to-solve subproblems, creates solutions to these smaller problems, and then tests them using test drivers. In other words, top down design starts the design process with the main module or system, and then progresses down to lower level modules and subsystems. To put it differently, top down design looks at the whole system, and then explodes it into subsystems, or smaller parts. A systems engineer or systems analyst determines what the top level objectives are, and how they can be met. He then divides the system into subsystems, i.e. breaks the whole system into logical, manageable-size modules, and deals with them individually.

What is the definition of bottom up design?
Bottom up design begins the design at the lowest level modules or subsystems, and progresses upward to the design of the main program, main module, or main subsystem. In software design - assuming that the data you start with is a pretty good model of what you're trying to do -
bottom up design generally starts with the known data (e.g. customer lists, order forms); then the data is broken into chunks (i.e. entities) appropriate for planning a relational database. This process reveals what relationships the entities have, and what the entities' attributes are. Bottom up design makes it easy to reuse code blocks; for example, many of the utilities you write for one program are also useful for programs you have to write later. Bottom up design also makes programs easier to read.

What is the difference between top down and bottom up design? Top down design proceeds from the abstract entity to the concrete design, while bottom up design proceeds from the concrete design to the abstract entity. Top down design is most often used when designing brand new systems, while bottom up design is sometimes used when one is reverse engineering a design, i.e. when one is trying to figure out what somebody else designed in an existing system. Bottom up design begins the design with the lowest level modules or subsystems, and progresses upward to the main program, module, or subsystem. With bottom up design, a structure chart is necessary to determine the order of execution, and the development of drivers is necessary to complete the bottom up approach. Top down design, on the other hand, begins the design with the main or top-level module, and progresses downward to the lowest level modules or subsystems. Real life is sometimes a combination of top down design and bottom up design. For instance, data modelling sessions tend to be iterative, bouncing back and forth between top down and bottom up modes, as the need arises.

What is a backward compatible design? A design is backward compatible if it continues to work with earlier versions of a language, program, code, or software. When the design is backward compatible, changes to signals or data do not break the existing code. For instance, a (mythical) web designer decides he should make some changes, because the fun of using JavaScript and Flash is more important to him than backward compatible design. Or, he decides he has to make some changes, because he doesn't have the resources to maintain multiple styles of backward compatible web design. Our mythical web designer's decision will inconvenience some users, because some of the earlier versions of Internet Explorer and Netscape will not display his web pages properly, as there are some serious improvements in the newer versions of Internet Explorer and Netscape that make the older versions of these browsers incompatible with, for example, DHTML. This is when we say, "His design doesn't continue to work with earlier versions of browser software; therefore his design is not backward compatible." On the other hand, if the same mythical web designer decides that backward compatibility is more important than fun, or if he decides that he does have the resources to maintain multiple styles of backward compatible code, then, obviously, no user will be inconvenienced, even when Microsoft and Netscape make serious improvements in their web browsers. This is when we can say, "Our mythical web designer's design is backward compatible."

What is smoke testing? Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. Smoke testing is also known as ad hoc testing, i.e. testing without a formal test plan. With many projects, smoke testing is carried out in addition to formal testing.
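The following is a minimal sketch, using Python's unittest module, of what such a check might look like. The tiny Application class is a stand-in invented for this sketch; in practice you would exercise the real build instead, and the check would cover whatever "start up and do one core operation" means for your product.

# A minimal, hypothetical smoke test sketch in Python.
import unittest

class Application:
    """Stand-in for the product under test (invented for this sketch)."""
    def start(self):
        self.running = True
        return self
    def create_record(self, name):
        return {"id": 1, "name": name}
    def shutdown(self):
        self.running = False

class SmokeTest(unittest.TestCase):
    """Not exhaustive; it only checks that the build does not 'smoke' end to end."""
    def test_start_one_core_operation_and_shutdown(self):
        app = Application().start()
        self.assertTrue(app.running)
        record = app.create_record("smoke-check")
        self.assertEqual(record["name"], "smoke-check")
        app.shutdown()
        self.assertFalse(app.running)

if __name__ == "__main__":
    unittest.main()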
If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing. Sometimes, if testing occurs very early or very late in the software development life cycle, this can be the only kind of testing that can be performed. Smoke tests are, by definition, not exhaustive, but, over time, you can increase your coverage of smoke testing. A common practice at Microsoft, and some other software companies, is the daily build and smoke test process. This means that every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested. Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale. Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke testing should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly. Without smoke testing, the daily build is just a time-wasting exercise. Smoke testing is the sentry that guards against errors in development and future problems during integration. At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.

What is the difference between monkey testing and smoke testing? Difference number 1: Monkey testing is random testing, while smoke testing is non-random testing that deliberately exercises the entire system from end to end, with the goal of exposing any major problems. Difference number 2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually. Difference number 3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers. Difference number 4: "Smart monkeys" are valuable for load and stress testing, but not very valuable for smoke testing, because they are too
expensive for smoke testing. Difference number 5: "Dumb monkeys" are inexpensive to develop and are able to do some basic testing, but, if we used them for smoke testing, they would find few bugs. Difference number 6: Monkey testing is not thorough testing, but smoke testing is thorough enough that, if the build passes, one can assume that the program is stable enough to be tested more thoroughly. Difference number 7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves, from something simple to something more thorough. Difference number 8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of hours.

Tell me about daily builds and smoke tests. The idea is to build the product every day, and test it every day. The software development process at Microsoft and many other software companies requires daily builds and smoke tests. According to their process, every day, every single file has to be compiled, linked, and combined into an executable program. And then the program has to be "smoke tested"; smoke testing is a relatively simple check to see whether the product "smokes" when it runs. Please note that you should add revisions to the build only when it makes sense to do so. You should establish a build group, and build daily; set your own standard for what constitutes "breaking the build"; create a penalty for breaking the build; and check for broken builds every day. In addition to the daily builds, you should smoke test the builds, and smoke test them daily. You should make the smoke test evolve as the system evolves. You should build and smoke test daily, even when the project is under pressure. Think about the many benefits of this process! The process of daily builds and smoke tests minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, improves morale, enforces discipline, and keeps pressure-cooker projects on track. If you build and smoke test DAILY, success will come, even when you're working on large projects!

What is the difference between build and release? Difference number 1: A build refers to software that is still in testing, while a release refers to software that is usually no longer in testing. Difference number 2: Builds occur more frequently, while releases occur less frequently. Difference number 3: Versions are based on builds, and not vice versa. Builds, or usually a series of builds, are generated first, as often as one build every morning (depending on the company), and then every release is based on one or more builds, i.e. the accumulated code of several builds.

What is the purpose of test strategy? Reason number 1: The number one reason for writing a test strategy document is to "have" a signed, sealed, and delivered, FDA (or FAA) approved document, where the document includes a written testing methodology, test plan, and test cases. Reason number 2: Having a test strategy does satisfy one important step in the software testing process. Reason number 3: The test strategy document tells us how the software product will be tested. Reason number 4: The creation of a test strategy document presents an opportunity to review the test plan with the project team. Reason number 5: The test strategy document describes the roles, responsibilities, and the resources required for the test, and the schedule constraints.
Reason number 6: When we create a test strategy document, we have to put into writing any testing issues requiring resolution (and usually this means additional negotiation at the project management level). Reason number 7: The test strategy is decided first, before lower level decisions are made on the test plan, test design, and other testing issues. What is a test strategy document? The test strategy document is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyses the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, and a list of related tasks, pass/fail criteria and risk assessment. What do you mean by "the process is repeatable"? A process is repeatable, whenever we have the necessary processes in place, in order to repeat earlier successes on projects with similar applications. A process is repeatable, if we use detailed and well-written processes and procedures. A process is repeatable, if we ensure that the correct steps are executed. When the correct steps are executed, we facilitate a successful completion of the task. Documentation is critical. A software process is repeatable, if there are requirements management, project planning, project tracking,
subcontract management, QA, and configuration management. Both QA processes and practices should be documented, so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, and user manuals should all be documented, so that the process is repeatable. Document files should be well organized. There should be a system for easily finding and obtaining documents, and for determining which document has a particular piece of information. We should use documentation change management, if possible. Once Rob Davis has learned and reviewed a customer's business processes and procedures, he will follow them. He will also recommend improvements and/or additions.

When is a process repeatable? When we use detailed and well-written processes and procedures, we ensure the correct steps are being executed. When we ensure the correct steps are being executed, we facilitate a successful completion of a task. When we document these processes and procedures, we ensure the process is repeatable.

What is the purpose of a test plan? Reason number 1: We create a test plan because preparing it helps us to think through the efforts needed to validate the acceptability of a software product. Reason number 2: We create a test plan because it can and will help people outside the test group to understand the why and how of product validation. Reason number 3: We create a test plan because, in regulated environments, we have to have a written test plan. Reason number 4: We create a test plan because the general testing process includes the creation of a test plan. Reason number 5: We create a test plan because we want a document that describes the objectives, scope, approach and focus of the software testing effort. Reason number 6: We create a test plan because it includes test cases, conditions, the test environment, a list of related tasks, pass/fail criteria, and risk assessment. Reason number 7: We create a test plan because one of the outputs of creating a test strategy is an approved and signed-off test plan document. Reason number 8: We create a test plan because the software testing methodology is a three-step process, and one of the steps is the creation of a test plan. Reason number 9: We create a test plan because we want an opportunity to review the test plan with the project team. Reason number 10: We create a test plan document because test plans should be documented, so that they are repeatable.

What is a software test plan? A software test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Give me one test case that catches all the bugs! If there were a "magic bullet", i.e. the one test case that has a good possibility of catching ALL the bugs, or at least the most important bugs, it would be a challenge to find it, because test cases depend on requirements; requirements depend on what customers need; and customers can have a great many different needs. As software systems are getting increasingly complex, it is increasingly challenging to write test cases.
It is true that there are ways to create "minimal test cases", which can greatly simplify the test steps to be executed. But writing such test cases is time-consuming, and project deadlines often prevent us from going that route. Often the lack of enough time for testing is the reason for bugs occurring in the field. However, even with ample time to catch the "most important bugs", bugs still surface with amazing spontaneity. The challenge is that developers do not seem to know how to avoid providing the many opportunities for bugs to hide, and testers do not seem to know where the bugs are hiding.

What is a test scenario? The terms "test scenario" and "test case" are often used synonymously. Test scenarios are test cases, or test scripts, and the sequence in which they are to be executed. Test scenarios are test cases that ensure that business process flows are tested from end to end. Test scenarios can be independent tests, or a series of tests that follow each other, where each test depends on the output of the previous one. Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios. It is the test team that, with the assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. Test procedures or scripts may cover multiple test
scenarios.

What is the difference between a test plan and a test scenario? Difference number 1: A test plan is a document that describes the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a document that describes both typical and atypical situations that may occur in the use of an application. Difference number 2: Test plans define the scope, approach, resources, and schedule of the intended testing activities, while test procedures define test conditions, data to be used for testing, and expected results, including database updates, file outputs, and report results. Difference number 3: A test plan is a description of the scope, approach, resources, and schedule of intended testing activities, while a test scenario is a description of test cases that ensure that a business process flow, applicable to the customer, is tested from end to end.

What is a test case? A test case is a document that describes an input, action, or event and its expected result, in order to determine whether a feature of an application is working correctly. A test case should contain particulars such as a test case identifier, test case name, objective, test conditions/setup, input data requirements/steps, and expected results. Please note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to completely think through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

How do you write test cases? When I write test cases, I concentrate on one requirement at a time. Then, based on that one requirement, I come up with several real-life scenarios that are likely to occur in the use of the application by end users. When I write test cases, I describe the inputs, action, or event, and their expected results, in order to determine whether a feature of an application is working correctly. To make the test case complete, I also add particulars, e.g. test case identifiers, test case names, objectives, test conditions (or setups), input data requirements (or steps), and expected results. Additionally, if I have a choice, I prefer writing test cases as early as possible in the development life cycle. Why? Because, as a side benefit of writing test cases, many times I am able to find problems in the requirements or design of an application, and because the process of developing test cases makes me completely think through the operation of the application. You can learn to write test cases! If there is a will, there is a way, and you CAN learn to write test cases with little or no outside help.

What is a parameter? In software QA or software testing, a parameter is an item of information - such as a name, a number, or a selected option - that is passed to a program by a user or another program. By definition, in software, a parameter is a value on which something else depends. Any desired numerical value may be given as a parameter. In software development, we use parameters when we want to allow a specified range of variables. We use parameters when we want to differentiate behaviour or pass input data to computer programs or their subprograms. Thus, when we are testing, the parameters of a test can be varied to produce different results, because parameters do affect the operation of the program receiving them.
Example 1: We use a parameter, such as temperature, that defines a system. In this definition, it is temperature that defines the system and determines its behaviour. Example 2: In the definition of the function f(x) = x + 10, x is a parameter. In this definition, x defines the f(x) function and determines its behaviour. Thus, when we are testing, x can be varied to make f(x) produce different values, because the value of x does affect the value of f(x). When parameters are passed to a function or subroutine, they are called arguments.

What is a variable? A variable is a data item whose value can change. One example is a variable we have named capacitor_voltage_10000, where capacitor_voltage_10000 can be any whole number between -10000 and +10000. Keep in mind that there are local and global variables.

What is a constant? In software, or in software testing, a constant is a meaningful name that represents a number, or string, that does not change. Constants are values that remain the same, i.e. constant, throughout the execution of a program. Why do developers use constants? Because if we have code that contains constant values that keep reappearing, or if we have code that depends on certain numbers that are difficult to remember, we can improve both the readability and maintainability of our code by using constants. To give you an example, let's suppose we declare a constant and we call it Pi. We set it to 3.14159265, and use it throughout our code. Constants, such as Pi, as the name implies, store values that remain constant throughout the execution of our program. Keep in mind that, unlike variables, which can be read from and written to, constants are read-only. Although constants resemble variables, we cannot modify them or assign new values to them, as we can to variables, but we can make constants public or private. We can also specify what data type they are.
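Here is a small Python sketch of the three terms side by side. All of the names and values are illustrative only; note that Python marks constants by naming convention (upper-case names) rather than enforcing them as read-only, unlike some other languages.

# A small illustrative sketch of a parameter, a variable, and a constant.

PI = 3.14159265          # a constant: a meaningful name for a value that never changes
MAX_VOLTAGE = 10000      # another constant, used to bound a variable below

def f(x):                # x is a parameter: the value passed in determines the result
    return x + 10

def circle_area(radius): # radius is also a parameter; PI is read but never reassigned
    return PI * radius * radius

capacitor_voltage = 9500     # a variable: its value can change while the program runs
if capacitor_voltage > MAX_VOLTAGE:
    capacitor_voltage = MAX_VOLTAGE

print(f(5))                        # 15: varying the parameter varies the output
print(f(7))                        # 17
print(round(circle_area(2.0), 4))  # uses the PI constant
print(capacitor_voltage)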

What is a requirements test matrix? The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project's life cycle. The requirements test matrix is a table where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column headers of the same table. The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality. The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort. The requirements test matrix, in turn, is a representation of user requirements aligned against system testing. Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.

Give me a requirements test matrix template! For a template, take a basic table that you would like to use for cross-referencing purposes. Step 1: Find out how many requirements you have. Step 2: Find out how many test cases you have. Step 3: Based on these numbers, create a basic table. Let's suppose you have a list of 90 requirements and 360 test cases. Based on these numbers, you want to create a table of 91 rows and 361 columns. Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers, and paste them into rows 2 through 91 of your table. Step 5: Focus on the first row of your table. One by one, copy all your 360 test case numbers, and paste them into columns 2 through 361 of your table. Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90 requirements they satisfy. If, for example, test case 64 satisfies requirement 12, then put a large "X" into cell 13-65 of your table... and there you have it; you have just created a requirements test matrix template.
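The same steps can also be scripted. The following Python sketch builds the same kind of table, scaled down to 3 requirements and 4 test cases so the output stays readable; the requirement and test case identifiers, and the coverage data, are invented for the example.

# A minimal sketch of the template-building steps above, scaled down.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = ["TC-1", "TC-2", "TC-3", "TC-4"]
covers = {("REQ-1", "TC-1"), ("REQ-1", "TC-3"), ("REQ-2", "TC-2"), ("REQ-3", "TC-4")}

# Steps 3-5: one header row of test cases, one header column of requirements.
header = [""] + test_cases
rows = [header]
for req in requirements:
    # Step 6: put an "X" wherever a test case satisfies a requirement.
    rows.append([req] + ["X" if (req, tc) in covers else "" for tc in test_cases])

for row in rows:
    print("".join(cell.ljust(8) for cell in row))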
What about requirements? Requirements are important details that describe an application's externally perceived functionality and properties. All requirements should be clear, complete, reasonably detailed, cohesive, attainable and testable. Care should be taken to involve all of a project's significant customers in the requirements process. Customers could be in-house or external, and could include end users, customer acceptance test engineers, testers, customer contract officers, customer management, future software maintenance engineers, salespeople, and anyone who could later derail the project if his or her expectations are not met; all of them should be included as customers, if possible. Some type of documentation with detailed requirements is needed by test engineers in order to properly plan and execute tests. Without such documentation there is no clear-cut way to determine whether a software application is performing correctly.

What is reliability testing? Reliability testing means designing reliability test cases, using accelerated reliability techniques (e.g. step-stress, test/analyse/fix, and continuously increasing stress testing techniques), AND testing units or systems to failure, in order to obtain raw failure time data for product life analysis. The purpose of reliability testing is to determine product reliability, and to determine whether the software meets the customer's reliability requirements. In the system test phase, or after the software is fully developed, one reliability testing technique we use is the test/analyse/fix technique, where we couple reliability testing with the removal of faults. When we identify a failure, we send the software back to the developers for repair. The developers build a new version of the software, and then we do another test iteration. We track failure intensity (e.g. failures per transaction, or failures per hour) in order to guide our test process, to determine the feasibility of the software release, and to determine whether the software meets the customer's reliability requirements.

Give me an example of reliability testing. Let's say we design, test, manufacture, and sell defibrillators. Our quantified reliability testing goal is: our defibrillator is considered sufficiently reliable if 10 (or fewer) failures occur per 1,000 shocks. We then use the test/analyse/fix technique, and we couple reliability testing with the removal of errors. When we identify a failed delivery of a shock, we send the software back to the developers for repair. The developers build a new version of the software, and then we deliver another 1,000 shocks into a dummy resistor load. We track failure intensity (i.e. failures per 1,000 shocks) in order to guide our reliability testing, to determine the feasibility of the software release, and to determine whether the software meets our reliability requirements.
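A small Python sketch of the bookkeeping in this example: it computes the failure intensity per 1,000 shocks over a few hypothetical test/analyse/fix cycles and compares it against the stated goal. The cycle counts are invented.

# Track failure intensity across test/analyse/fix cycles (invented data).
RELIABILITY_GOAL = 10          # at most 10 failures per 1,000 shocks

def failure_intensity(failures, shocks_delivered):
    """Failures normalized to a batch of 1,000 shocks."""
    return failures * 1000.0 / shocks_delivered

iterations = [                 # (shocks delivered, failed deliveries) per cycle
    (1000, 23),
    (1000, 14),
    (1000, 7),
]
for cycle, (shocks, failures) in enumerate(iterations, start=1):
    intensity = failure_intensity(failures, shocks)
    releasable = intensity <= RELIABILITY_GOAL
    print(f"cycle {cycle}: {intensity:.0f} failures per 1,000 shocks, release feasible: {releasable}")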

What is incremental testing? Incremental testing is partial testing of an incomplete product. The goal of incremental testing is to provide early feedback to software developers.

What is alpha testing? Alpha testing is the final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (and this is called the second phase of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.

What is beta testing? Following alpha testing, "beta versions" of the software are released to a group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.

What is gamma testing? Gamma testing is testing of software that does have all the required features, but that did not go through all the in-house quality checks. Cynics tend to refer to software releases as "gamma testing".

What do test case templates look like? Software test case templates are blank documents that describe inputs, actions, or events, and their expected results, in order to determine whether a feature of an application is working correctly. Test case templates contain all the particulars of test cases. For example, one test case template is in the form of a 6-column table, where column 1 is the "test case ID number", column 2 is the "test case name", column 3 is the "test objective", column 4 is the "test conditions/setup", column 5 is the "input data requirements/steps", and column 6 is the "expected results". All documents should be written to a certain standard and template. Why? Because standards and templates do help to maintain document uniformity. Also because they help you to learn where information is located, making it easier for users to find what they want. And also because, with standards and templates, information is not accidentally omitted from documents.
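As a rough sketch, the same 6-column template can be expressed as a small data structure, which makes it easy to keep the columns uniform across all test cases. The sample login test case below is invented for illustration.

# A minimal sketch of the 6-column test case template as Python data.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str            # column 1: test case ID number
    name: str               # column 2: test case name
    objective: str          # column 3: test objective
    conditions: str         # column 4: test conditions / setup
    steps: str              # column 5: input data requirements / steps
    expected_result: str    # column 6: expected results

example = TestCase(
    case_id="TC-012",
    name="Login with valid credentials",
    objective="Verify that a registered user can log in",
    conditions="User account 'qa_user' exists; application is running",
    steps="1. Open the login page  2. Enter user name and password  3. Press Login",
    expected_result="The user's home page is displayed",
)
print(example)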
