
Chapter One

Software Testing
To ensure a defect-free product, you need to test the software to find and eliminate defects. Testing is the process of executing a program with the intent of finding errors and of verifying that the program satisfies the specified requirements. Different people view testing from different perspectives, so its purpose can be stated in several ways. Testing is done for the following reasons:

- To detect errors in a software product
- To verify that a software product conforms to its requirements
- To establish confidence that a program or system does what it is supposed to do
- To evaluate an attribute or capability of a software product and determine that it meets the required results

Testing involves operating a system or application under controlled conditions and evaluating the results. Controlled conditions include both normal and abnormal conditions. Testing should intentionally attempt to provoke failures; this helps determine whether the software works as expected under all conditions. The purpose of software testing is to analyze a software product to determine the differences between its requirements and its actual behavior, and to evaluate whether the features and functionality of the product meet the specified requirements of the customer or user. Testing starts with the smallest complete unit of a product and ends with testing the entire system as a whole. The testing phase is divided into various sub-phases, each with a different focus and purpose, to ensure an organized approach to testing.

Software Development Life Cycle (SDLC)


Before software development was accepted as an engineering discipline, developing software was an ad hoc activity with no formal rules or standards. As a result, software projects faced serious problems in terms of schedule slippage, cost overrun, and inferior software quality. The Software Development Life Cycle (SDLC) was introduced to address these problems. SDLC is a disciplined and systematic approach that divides the software development process into phases, such as requirement, design, and coding. This phase-wise development process helps to track the schedule, cost, and quality of software projects. The six phases of SDLC are:

- Feasibility analysis: Includes analysis of project requirements in terms of input data and desired output, the processing required to transform input into output, cost-benefit analysis, and the project schedule. Feasibility analysis also covers the technical feasibility of a project in terms of available software tools, hardware, and skilled software professionals. At the end of this phase, a feasibility report for the entire project is created.
- Requirement analysis and specification: Includes gathering, analyzing, validating, and specifying requirements. At the end of this phase, the Software Requirement Specification (SRS) document is prepared. The SRS is a formal document that acts as a written agreement between the development team and the customer. It serves as input to the design phase and covers the functional, performance, software, hardware, and network requirements of the project.
- Design: Includes translating the requirements specified in the SRS into a logical structure that can be implemented in a programming language. The output of the design phase is a design document that acts as input for all subsequent SDLC phases.
- Coding: Includes implementing the design specified in the design document as executable programming language code. The output of the coding phase is the source code for the software, which acts as input to the testing and maintenance phases.
- Testing: Includes detecting errors in the software. The testing process starts with a test plan that identifies test-related activities, such as test case generation, testing criteria, and resource allocation for testing. The code is tested and mapped against the design document created in the design phase. The output of the testing phase is a test report containing the errors that occurred while testing the application.
- Maintenance: Includes implementing changes that the software might undergo over a period of time, or implementing new requirements after the software is deployed at the customer location. The maintenance phase also includes handling residual errors that may remain in the software even after the testing phase.

During the software development life cycle, different errors might get introduced into the software product. These errors can be of different types:

- Leakage errors: Errors that are not detected at a particular stage in the development life cycle and are carried forward to the next stage.
- New errors: In addition to the errors that leak into a stage from the previous stage, new errors may be introduced at every stage.
- Compatibility errors: Sometimes two or more modules in a program work correctly when run in isolation but show erroneous results when integrated. Such errors are called compatibility errors.

Chapter One Questions


1. What is software testing? Explain the purpose of testing.
Answer: Software testing is the process of executing a program with the intent of finding errors. It is used to ensure the correctness of a software product. Software testing is also done to add value to software by raising its quality and reliability. The main purpose of software testing is to demonstrate that the software works according to its specification and that the performance requirements are fulfilled. The test results indicate the reliability and quality of the software being tested.

2. Explain the origin of defect distribution in a typical software development life cycle.
Answer: In a software development life cycle, defects occur in every phase. However, most defects occur due to improper understanding of product requirements. Also, the requirements keep changing during the entire software development life cycle. Therefore, the maximum number of defects originates in the requirements phase. If a defect occurs during the requirements phase, it needs to be detected during that phase itself; otherwise, defects in the requirements phase lead to defects in the design and coding phases.

3. Explain the importance of testing. What happens if a software program is not tested before deployment?
Answer: Testing is important because it performs the following tasks:
- Detects errors in a software product
- Verifies that a software product conforms to its requirements
- Establishes confidence that a program or system does what it is supposed to do
- Evaluates an attribute or capability of a software product and determines that it meets the required results

If a software program is deployed without testing, there might be some bugs in the program, which are left undetected. These bugs can result in non-conformance of the software to its requirements.

FAQ
1. "Testing can show the presence of defects in software but cannot guarantee their absence." How can you minimize the occurrence of undetected defects in your software?
Answer: The occurrence of undetected defects can be minimized by starting testing early in the development life cycle and by designing test cases that adequately exercise each aspect of the program logic.

2. What activities are performed in the testing phase of SDLC?

Answer: The activities performed during the testing phase of SDLC are analyzing risk, planning, designing tests, executing tests, tracking defects, and reporting.

3. What could be the responsibilities of a software tester?
Answer: A software tester is responsible for the following activities during the testing process:
- Developing test cases and procedures
- Creating test data
- Reviewing analysis and design artifacts
- Executing tests
- Using automated tools for regression testing
- Preparing test documentation
- Tracking defects
- Reporting test results

4. How can you test World Wide Web sites?
Answer: World Wide Web sites are client/server applications, with the Web server as the server and the browser as the client. While testing Web sites, the main concern should be the interaction between the HTML pages, the applications running on the server side, and the Internet connections. You can use HTML testing, browser testing, or server testing to test Web sites.

5. What is the role of documentation in Quality Assurance?
Answer: Documentation plays a very critical role in Quality Assurance. Proper documentation of standards and procedures is essential to assure quality, because the SQA activities of process monitoring, product evaluation, and auditing rely on documented standards and procedures to measure project compliance. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, defect reports, and user manuals should be documented for clarity and common understanding among all involved parties.

6. Who participates in testing?
Answer: The people who participate in testing include the following:
- Customer
- End user
- Software developer
- Software tester
- Senior management
- Auditor

Chapter Two
Techniques for Dynamic Testing
Depending on the approach used for testing, dynamic testing techniques can be classified as follows:
- Black-box testing
- White-box testing
- Gray-box testing

Black-box testing techniques use functional test approaches, while white-box testing techniques use structural test approaches. Gray-box testing techniques use a combination of both structural and functional test approaches. Summarize the session by asking the students to differentiate between:
- Black-box testing and white-box testing
- Unit testing and integration testing
- Acceptance testing and performance testing
- Top-down testing and bottom-up testing

Test Approaches
The approaches used for testing can be broadly grouped into the following two categories:

- Static testing: Static testing is done to verify the conformance of a software system to its specification without executing the code. It involves analysis of the source text by individuals. Any document created as part of the software development process can be tested using this approach. Static testing is usually more cost-effective than other types of testing and allows defect detection to be combined with other quality checks. Static testing techniques include reviews and walkthroughs. You can discuss the effectiveness of static testing with figures such as: more than 60 percent of program errors can be detected by informal program inspections, and more than 90 percent of program errors may be detected using more rigorous mathematical program verification.
- Dynamic testing: Dynamic testing involves executing the source code to check whether it works as expected. Dynamic testing may be conducted using the following approaches:
  - Functional approaches
  - Structural approaches
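Static testing of source text can even be partially automated. The sketch below is a hypothetical Python example: it uses the standard `ast` module to flag bare `except:` clauses, a typical review-checklist item, without ever executing the code under analysis.

```python
import ast

# Source text under review (never executed, only parsed).
SOURCE = """
def load(path):
    try:
        return open(path).read()
    except:
        return None
"""

def find_bare_excepts(source):
    """Return line numbers of bare 'except:' clauses in the source text."""
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

print(find_bare_excepts(SOURCE))  # the bare except sits on line 5
```

The code under review is only parsed into a syntax tree, never run, which is exactly what distinguishes this check from dynamic testing.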

Functional Test Approaches

Functional test approaches focus on the functionality of a software product. If the functions that a product has been designed to perform are known, tests can be conducted to demonstrate that each function works according to specification, while at the same time searching for errors in each function. Functional approaches are more effective during the later stages of testing. Functional test approaches are sometimes also called black-box testing techniques and are useful for locating the following types of errors:
- Incorrect functionality
- Missing functionality
- Interface errors
- Performance errors
- Incorrect specifications
- Initialization errors
- Termination errors
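As an illustration of the black-box idea, the following hypothetical Python sketch tests a function purely against an assumed specification ("`grade(score)` returns \"pass\" for scores of 50 to 100 and \"fail\" for 0 to 49"), using boundary values and no knowledge of the implementation.

```python
def grade(score):
    """Implementation under test (the black-box tester need not see this)."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Boundary-value cases derived straight from the specification.
cases = {0: "fail", 49: "fail", 50: "pass", 100: "pass"}
for score, expected in cases.items():
    assert grade(score) == expected, (score, expected)
print("all black-box cases passed")
```

Note that the test cases were chosen from the specification alone; if the implementation changed internally, the same cases would still apply.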

The following are the advantages of using functional test approaches:
- They are very effective on large units of code.
- Testers do not need any knowledge of the implementation, including specific programming languages.
- Testers and programmers can be independent of each other.
- Tests are done from a user's point of view.
- Ambiguities or inconsistencies in the specifications can be easily identified.
- Test cases can be designed as soon as the specifications are complete.

Although functional test approaches are effective in uncovering a wide range of errors, they have several limitations:
- They may leave many program paths untested.
- They cannot be directed towards specific segments of code, which may be very complex and therefore more error-prone.
- The reason for a failure is not found.
- It is difficult to design tests without clear and concise specifications.

Structural Test Approaches

Functional test approaches have certain limitations and therefore cannot be used to identify all defects. These limitations can be overcome by using structural test approaches in conjunction with functional test approaches. Structural approaches focus on the internal working of a software product. If the internal working of a product is known, tests can be conducted to ensure that the internal operations perform according to the specification. Structural test approaches are sometimes also called white-box or glass-box testing techniques. Structural approaches look beyond the output of a program and are more effective than functional approaches on small individual modules.

The advantages of structural test approaches are:
- Useful in locating extra, non-specified functions that cannot be detected using functional approaches
- More effective than functional approaches on small individual modules

Although structural test approaches overcome the problems of functional test approaches, they cannot completely replace them, because structural test approaches have some disadvantages:
- The number of logical paths in a program is large. It is not practically possible to check all logical paths through a program because it takes a lot of time and effort; only a limited number of important logical paths can be tested.
- Structural approaches require the tester to have some knowledge of programming languages.
- Structural approaches do not ensure that user requirements are met.

Comparing the advantages and disadvantages of structural and functional testing, it can be concluded that a complete software examination requires both techniques, because they complement each other.

Identifying Test Approaches

Before delivering a software product to the customer, the software is subjected to a variety of tests.
These tests may be conducted using either structural or functional approaches. The tests that are conducted using structural approaches are called structural testing techniques; similarly, the tests that are conducted using functional approaches are called functional testing techniques. The following table shows the testing approaches that are applied during the four basic types of testing:

Type of Testing        Structural Approach    Functional Approach
Unit Testing           X
Integration Testing    X                      X
System Testing                                X
Acceptance Testing                            X

Unit testing uses structural approaches and is therefore a structural testing technique. Integration testing uses both structural and functional testing approaches. System testing and acceptance testing use functional approaches and are therefore functional testing techniques.

Structural Testing Techniques


Some of the structural testing techniques are discussed below:

- Stress testing: Involves testing the system in a manner that demands resources in abnormal quantity, frequency, or volume. This type of testing is usually done by simulating a production environment and is done for the following reasons:
  - To check that the system works efficiently when subjected to normal transaction volumes
  - To check that the system works efficiently when subjected to above-normal transaction volumes
  - To check that the system is capable of meeting expected turnaround times
  Stress testing attempts to break a system by overloading it with a large volume of transactions.
- Execution testing: Conducted to determine whether the system achieves the desired level of performance. It can also be used to determine response times and turnaround times. Execution testing can be performed using the actual system or a simulated model of the system. The objectives of execution testing include the following:
  - To determine response time
  - To determine turnaround time
  - To check whether hardware and software are being optimally utilized
- Recovery testing: Conducted to verify the ability of the system to recover from varying degrees of failure. This technique ensures that operations can be continued after a disaster. Recovery testing has the following objectives:
  - To check if adequate backups have been made
  - To ensure that the backup data is secure
  - To ensure that recovery procedures have been documented
  - To ensure that recovery tools are available
  Recovery testing can be done using either of two methods: by assessing the procedures, methods, tools, and techniques, or by introducing a simulated disaster.
- Operations testing: When an application is developed, it is tested and then integrated into the operating environment. The application is then executed by the operations staff using the normal operating procedures and documentation. Operations testing ensures that the operations staff can properly execute the application using these procedures and documentation.
- Compliance testing: Verifies whether the application is developed in accordance with information technology standards, procedures, and guidelines. Compliance testing may be done by a peer group of programmers who test a computer program line by line to ensure that it complies with programming standards.
- Security testing: Done to identify security defects in the software. Security defects are not as easily identifiable as other types of defects. The following are some objectives of security testing:
  - To ensure that security risks are identified
  - To ensure that security measures are implemented
  - To ensure that implemented security measures function properly
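A stress test in miniature might look like the following Python sketch, which floods a toy transaction processor (a hypothetical stand-in for a real system) with an above-normal volume of requests and then checks that no transaction was dropped or corrupted under load.

```python
import threading
import queue

# Toy "system under test": worker threads apply transactions to shared state.
requests = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        item = requests.get()
        if item is None:          # sentinel: shut this worker down
            break
        with lock:                # the "transaction"
            results.append(item * 2)

VOLUME = 10_000                   # deliberately above-normal load
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(VOLUME):
    requests.put(i)
for _ in threads:                 # one sentinel per worker
    requests.put(None)
for t in threads:
    t.join()

# Under stress, nothing may be lost or corrupted.
assert len(results) == VOLUME
assert sorted(results) == [i * 2 for i in range(VOLUME)]
print(f"processed {len(results)} transactions under stress")
```

A real stress test would also measure response and turnaround times against the expected limits; this sketch only checks correctness under volume.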

Functional Testing Techniques


Some of the functional testing techniques are discussed below.

- Requirements testing: Conducted to verify that a system performs correctly over a continuous period of time. The objectives of requirements testing are:
  - To ensure that the specified requirements are implemented
  - To ensure that the system works correctly for extended time periods
  - To check if processing is done using defined procedures
  - To check if processing is done in accordance with government rules
- Regression testing: When a change is made to one segment of a system, it may have a disastrous effect on another segment that had been thoroughly tested earlier. This cascading effect may be caused by any of the following reasons:
  - The changes were implemented incorrectly
  - The changes introduce new data or parameters that cause problems in a previously tested segment
  Regression testing retests previously tested segments to ensure that they function properly when a change is made to another part of the application.
- Error-handling testing: Done by a group of individuals who think negatively to anticipate what can go wrong with the system. This type of testing is used throughout the development life cycle. The impact of errors is identified at all stages in the development process, and appropriate action is taken to reduce those errors to an acceptable level.
- Manual-support testing: Involves testing the interface between users and the application system. The users must be able to enter transactions and obtain the results of processing. In addition, the users must be able to derive useful information from the results and use it to take necessary action.
- Intersystem testing: Applications are often interconnected with other applications. Intersystem testing is done to ensure that the interconnections between applications function correctly. This type of testing involves operating multiple systems in the test and ensuring that parameters and data are correctly passed between the applications.
- Control testing: Conducted to ensure that processing is performed in accordance with the intent of the management. Control testing involves testing the mechanisms that oversee the proper functioning of an application system, by methods such as audit trails, documentation, and backup and recovery.
- Parallel testing: When a new system is developed, it may be run in parallel with the old system so that performance and outcomes can be compared and corrected before deployment. This type of testing is called parallel testing.
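Regression testing can be sketched as re-running a recorded suite of previously passing cases after every change. The Python example below is hypothetical: `slugify()` and its suite stand in for any changed unit and the tests it passed earlier.

```python
def slugify(title):
    """Unit under change: turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Previously passing cases, kept as a regression suite.
REGRESSION_SUITE = [
    ("Hello World", "hello-world"),
    ("  Spaces   everywhere ", "spaces-everywhere"),
    ("already-slugged", "already-slugged"),
]

def run_regression_suite():
    """Return the list of (input, expected, actual) failures; empty if none."""
    return [(inp, exp, slugify(inp))
            for inp, exp in REGRESSION_SUITE
            if slugify(inp) != exp]

print(run_regression_suite())  # an empty list means no regressions
```

After any edit to `slugify()`, re-running the whole suite reveals whether the change broke behavior that was verified earlier.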

FAQ
1. What is unit testing?
Answer: Unit testing involves testing each individual unit of software to detect errors in its code.

2. What is integration testing?
Answer: Integration testing involves testing two or more previously tested and accepted units to demonstrate that they can work together when combined into a single entity.

3. What is system testing?
Answer: System testing is the process of testing a completely integrated system to verify that it meets specified requirements.

4. What is acceptance testing?
Answer: Acceptance testing is the process in which actual users test a completed information system to determine whether it satisfies its acceptance criteria.

5. What is a test plan?
Answer: A test plan is a document that describes the complete testing activity. The creation of a test plan is essential for effective testing and requires about one-third of the total testing effort. The test-planning phase also involves planning for cross-platform testing, which involves testing the developed software on different platforms to verify that it works as expected on all the desired platforms. During the test-planning phase, you must decide the platforms on which the software is to be tested.

6. When should you start testing?
Answer: Testing should be started as early as possible in the software development life cycle. The traditional "big bang" approach to testing, where all the testing effort is concentrated in the testing phase after development is complete, is more expensive than a continuous approach to testing. The cost of correcting a software defect increases in the later stages of software development.
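A minimal unit test, in the sense of the first FAQ answer, might look like the following Python sketch using the standard `unittest` module; `add_tax()` is a hypothetical unit used purely for illustration.

```python
import unittest

def add_tax(amount, rate=0.10):
    """Unit under test: return amount with tax applied, rounded to 2 decimals."""
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    """One unit, exercised in isolation from the rest of the system."""

    def test_default_rate(self):
        self.assertEqual(add_tax(100), 110.0)

    def test_explicit_rate(self):
        self.assertEqual(add_tax(200, rate=0.05), 210.0)

    def test_zero_amount(self):
        self.assertEqual(add_tax(0), 0.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTaxTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```

Integration testing would then combine `add_tax()` with other previously tested units (say, an order-total calculator) and test the combination as a single entity.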

Chapter Three
Program Quality Metrics
Program quality metrics provide a quantitative measure of program quality and are used to find the number of defects in a program. Various program quality metrics are:

- Lines of code: Represents the length of the source code of the software. Errors in the source code affect the quality of the software: the fewer the errors in the source code, the higher the quality of the software.
- Cyclomatic complexity: Measures the maximum number of independent executable paths in a program. Test cases are designed to test the execution of each independent path. The quality of the product is ensured if each path is successfully executed.
- Cohesion: Represents the degree to which the internal elements of a module are bound to each other. Modules should be highly cohesive to support a well-defined module concept. If each module performs its desired function, the quality of the software improves.
- Function points: Measures various parameters, such as the number of functions and the number of inputs and outputs in a program, during the requirements phase of the software development life cycle. Effective measurement of these parameters helps ensure the quality of the software.
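Cyclomatic complexity can be computed as the number of decision points plus one (equivalently, edges minus nodes plus two in the control-flow graph). The Python sketch below approximates this for one source fragment using the standard `ast` module; the set of node types counted as decisions is a simplifying assumption, not a full McCabe implementation.

```python
import ast

SOURCE = """
def triage(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        if x % 2:
            return "odd"
    return "even-ish"
"""

# Node types treated as decision points (a simplified choice).
DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp)

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(n, DECISIONS) for n in ast.walk(tree))
    return decisions + 1

print(cyclomatic_complexity(SOURCE))  # 3 decisions + 1 = 4
```

A complexity of 4 suggests that at least four test cases are needed to cover the independent paths through `triage()`.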

Types of Review
A review is a software quality assurance activity used to uncover errors in the logic or function of software. Apart from quality reviews, there are other types of reviews:

- Progress review: Provides information to the management about the progress of the software project. It involves both product and process review. A progress review is concerned with the software development cost, the development schedule, and the software quality assurance plan.
- Program inspection: Detects errors in the program design or code. It also checks whether standards are followed throughout the software development life cycle.

Describing ISO 9001 Requirements


The ISO 9001 standard specifies the guidelines for maintaining the quality system of an organization. The quality system of an organization applies to all the activities related to its product or services. The requirements of ISO 9001 are:

- Management responsibility: The management of an organization has the responsibility to:
  - Plan an effective quality policy.
  - Define the responsibility of all persons involved in quality planning.
  - Assign responsibility for the quality system to a management representative who is not involved in the development process. This helps the person work in an unbiased manner.
  - Check the effectiveness of the quality system by performing audit reviews.
- Quality system: The quality control system of an organization must be well documented and properly maintained.
- Contract reviews: Before taking on a contract, an organization must ensure that it is capable of carrying out the various obligations required.
- Design control: There should be proper control over the design process, including the coding phase. Design control focuses on the following issues:
  - Verification and confirmation of the design inputs.
  - Verification of the system design.
  - Verification of the quality of the design output.
  - Control over changes in the design.
- Document control: Focuses on the following issues:
  - Availability of proper procedures for document approval and removal.
  - Availability of configuration management tools for controlling document changes.
- Purchasing: An organization must check purchased materials for conformity to requirements.
- Process control: Requires proper management of the development team. The organization is also responsible for developing a quality plan that addresses the various quality requirements.
- Inspection and testing: Requires effective testing to be performed at various levels of software development.
- Corrective actions: Involves correcting the errors found, detecting the cause of their occurrence, and preventing similar future errors.
- Quality records: Involves recording the steps that are taken to control the quality of a process.
- Quality audits: The main purpose is to ensure that the quality system is effective.

Statistical Testing
Statistical testing is a software testing activity that estimates the number of undiscovered errors in a software product. It is performed by deliberately introducing (seeding) errors into the software and repeating the tests on the faulty copy. The proportion of seeded errors discovered by this process indicates the effectiveness of the testing activity. Statistical testing also identifies the parts of the software development process that introduce the most faults; such statistics help indicate whether the quality of the software development process is improving. Statistical testing focuses on the performance and reliability of software rather than on finding software errors. As errors are uncovered in the software, their primary cause should be corrected to improve the reliability of the software. Statistical testing helps to predict the quality of a software product. The quality assurance activities performed during the software development life cycle reduce the number of errors introduced, by performing precise reviews during the design and coding phases. Another approach for reducing errors is to perform a thorough testing activity. In spite of these approaches, it is difficult to produce a software product without any errors; statistical testing therefore provides a way to estimate the number of errors that remain, using records of previous tests. For example, suppose that 100 errors are discovered after performing unit testing on a software product. Then, 50 errors are intentionally introduced into the tested copy of the software. If the repetition of all the unit tests detects only 30 of the intentional errors, it implies that the unit testing is only 60% effective (30 of 50 seeded errors found). If the original software is assumed to have been tested with the same 60% effectiveness, the 100 discovered errors represent only 60% of its total errors, which means that approximately 67 errors are still undiscovered.
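The error-seeding arithmetic can be captured in a small function (a sketch of a Mills-style seeding estimator): effectiveness is the fraction of seeded errors found, and the real errors found are assumed to be that same fraction of all real errors.

```python
def estimate_undiscovered(found_real, seeded, found_seeded):
    """Estimate how many real errors remain undiscovered after testing."""
    effectiveness = found_seeded / seeded      # fraction of seeded errors found
    total_real = found_real / effectiveness    # implied total real errors
    return round(total_real - found_real)      # estimated remainder

# 100 real errors found, 50 seeded, 30 of the seeded ones rediscovered:
# effectiveness = 30/50 = 0.6, total ≈ 100/0.6 ≈ 167, so ≈ 67 remain.
print(estimate_undiscovered(found_real=100, seeded=50, found_seeded=30))
```

The estimate rests on the assumption that seeded errors are as hard to find as real ones, which is the main practical weakness of the technique.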

SCM Process
Software configuration management (SCM) is the process of systematically controlling the changes that take place during software development. It controls the changes in software deliverables, such as test cases, requirement specifications, source code, and design documents. SCM handles issues such as evaluating the impact of changes and the remedies to accommodate change requests. The SCM process controls changes in such a manner that they have minimal effect on the cost, schedule, and quality of the software product. Software engineers modify the software deliverables during the development phase; unless the deliverables are properly maintained, problems may occur. Software configuration management is required for:

- Avoiding inconsistencies when an object is replicated: When several developers are involved in developing a software product and one of them makes changes to his local copy, he must notify the other members. If this does not happen, the software becomes inconsistent, and when it is integrated, it does not work.
- Avoiding problems associated with concurrent access: This problem occurs when a developer modifies the parameters of a module designed by him but does not inform the other developers who need to interface their modules with the modified module. As a result, when the other developers run the module, it does not work because of a mismatch of parameters.
- Providing a stable development environment for software development.

Solutions to Chapter Three Questions

1. Quality and reliability are related concepts, but are fundamentally different in a number of ways. Discuss them. Answer: Reliability is a quality measure that cannot be measured with absolute certainty, but requires statistical and probabilistic methods for its measurement. It is concerned with measuring the probability of occurrence of failures. Reliability is measured by evaluating the frequency of failure, as well as severity of errors induced by the program. Software quality is the conformance to requirements, which are either explicit such as usability or implicit such as interoperability. Quality involves a number of factors, such as correctness, efficiency, portability, interoperability, and maintainability apart from reliability. 2. Can a program be correct and still not be reliable? Explain. Answer: It is possible that a program is correct, but it is not reliable. A program is correct if it behaves according to its stated functional specifications. If a program is correct, it does not ensure the nonoccurrence of a failure. Thus, if the consequence of a software error is not serious, incorrect software may be reliable. 3. Can a program be correct and still not exhibit good quality? Explain. Answer: It is not necessary that a correct program would exhibit good quality. For example, a program may be correct but the logic may be such that it does not efficiently utilize the system resources. As a result, the quality of the product may not be too good. A good quality program should utilize the system resources efficiently and the output should be produced in a minimum possible time. A good quality software must be delivered on time and within budget, and should be maintainable. 4. Explain in more detail, the review technique adopted in quality assurance. Answer: Various review techniques are used in quality assurance to review a software, its documentation, and the processes used to produce the software. 
The review process helps to check whether the project standards are followed and whether the software conforms to these standards. The conclusions at the end of the review process are recorded and passed to the document author for corrections. Apart from specifications, designs, and test plans, configuration management procedures and process standards can also be reviewed. Software quality assurance involves three types of reviews:
Code inspections
Walkthroughs
Round robin reviews

Code inspections are formal reviews that are conducted explicitly to find errors in a program. This process helps to check the software deliverables for technical accuracy and consistency. It also verifies whether the software product conforms to applicable standards. It involves a group of peers who first inspect the product individually and then get together in a formal meeting to discuss the defects found by each member and identify more defects. The inspection team includes the author, moderator, and reviewers. The moderator is responsible for planning and successfully executing the inspection process. The following activities need to be performed before an organization decides to introduce inspection in its software process:
Prepare a checklist of likely errors.
Define a policy stating that the inspection process is a part of verification and not a personal appraisal.
Invest in the training of inspection team leaders.

A walkthrough is an informal technique for analyzing the code. Its main purpose is to train the walkthrough attendees about the work product. A code walkthrough is conducted after the coding of the module is complete. The members of the development team select some test cases and simulate the execution of the code. In this process, the author describes the work product to all the members and gets feedback from them. The members also discuss the solutions to various problems detected.

FAQ
1. What is software quality assurance?

Answer: Software quality assurance is defined as a planned and systematic approach to the evaluation of the quality of a software product. It evaluates adherence to software product standards, processes, and procedures for software development.

2. What are the principal activities in software quality management? Explain.
Answer: Software quality management involves three activities:
Quality assurance: Establishes a structure of organizational standards and procedures that contribute to a high quality product.
Quality planning: Selects standards and procedures from the established structure and applies them to a particular software product.
Quality control: Ensures that there is no deviation from the established quality plan.

3. Explain what you mean by the correctness of a product.
Answer: Software is said to be correct if all the requirements specified in the Software Requirement Specification (SRS) document are implemented correctly.

4. What is the ISO 9000 standard?
Answer: The ISO 9000 standard specifies the guidelines for maintaining a quality system. In an organization, the quality system applies to all the activities that relate to its products and services. This standard addresses aspects such as responsibilities, procedures, and resources for implementing quality management.

5. Explain the need for an SQA plan.
Answer: The quality assurance activities performed by the software engineering team and the SQA group are governed by the SQA plan. The SQA plan helps to identify the following:
The evaluations that need to be performed.
The audits and reviews to be conducted.
The standards that are applicable to the product.
The procedures to be used for error detection and correction.
The documents to be produced by the SQA group.

7. When should you stop testing?
Answer: Testing is potentially endless. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. We cannot test until all the defects are detected and removed; at some point, we have to stop testing and ship the software. Testing is a trade-off between budget, time, and quality. The following are a few common factors in deciding when to stop testing:
The time allocated for testing is exhausted.
Test cases are completed with a certain percentage passed.
The test budget is depleted.
The coverage of code, functions, and requirements reaches a specified point.
The defect rate falls below a certain level.
The beta or alpha testing period ends.

Chapter-Four
Differentiating Between the Code Inspection and Walkthrough Processes
Following are the differences between the inspection process and the walkthrough process: The inspection process is performed on completed work products, whereas walkthroughs can be conducted at any stage of the software development cycle.

The inspection process does not necessarily require the involvement of the programmer, whereas walkthroughs are conducted by programmers. The inspection process requires that the participants be thorough with the work product before the process starts, whereas in walkthroughs, the participants gain knowledge of the work product during the process itself. The inspection process focuses only on finding errors in a program, whereas walkthroughs also involve discussions for identifying alternative solutions to the problems uncovered.

Roles of the Participants Involved in the Inspection Process


Program inspection is a review process that focuses on error detection rather than broader design issues. The errors may involve defects in the logic of a program, anomalies in the code, or noncompliance with organizational standards. Program inspection is a formal process and involves a team of four to five people. The participants include the author, moderator, reader, inspector, and scribe. Following is the role of each participant in the inspection process:
Author: Responsible for producing the software product. The author works with the moderator to assign roles to all the participants. The author is also responsible for fixing the defects identified during the inspection process.
Moderator: Responsible for planning and scheduling the events of an inspection process. The moderator also determines whether the preparation is sufficient to hold a meeting and, if not, reschedules the inspection process. The moderator delivers the inspection summary report to the organization's review coordinator.
Reader: Presents the work product to the inspection team to obtain comments from the inspectors.
Inspector: Examines the work product prior to the inspection meeting to identify inconsistencies, if any. During inspection, an inspector is responsible for identifying errors and suggesting improvements.
Scribe: Records the results of the inspection process.

Roles of the Participants Involved in Walkthrough Process


In a walkthrough process, the author is responsible for selecting the participants for the process. It is not necessary to assign specific roles for a walkthrough process. The responsibilities of the author are:
Selecting the review participants and obtaining their agreement on scheduling the process.
Distributing the work product to the participants before the process starts.
Describing the work product to the participants during the meeting.
Performing rework on the basis of the review comments.

The reviewers are responsible for identifying possible defects in the work product and providing suggestions for improving the product quality.

Criteria for Selecting the Black Box or White Box Testing Technique
Black box testing is performed by the quality team to ensure that the system meets the customer requirements and that the application performs its defined functionality. It does not ensure that each line of code is tested, nor does it help in identifying the efficiency of the code. On the other hand, white box testing is used to test all the lines in the code and to ascertain the code's efficiency. A lot of time is required to test a system using the white box testing method. White box testing cannot be used for identifying interface errors between subsystems. Also, white box testing cannot be used where the system has time constraints.

Solutions to Chapter Four Questions


1. Is code review relevant to software testing? Explain the process involved in a typical code review.

Answer: Code reviews are relevant to software testing as they detect defects in the code. Code reviews are performed after the defects found during code reading and static analysis are corrected. Code reviews enhance the reliability of the software and reduce the effort during testing. Before the code review starts, the design documents and the code to be reviewed are distributed to the review team members. The review team includes the programmer, the designer, and the tester. The errors uncovered during the process are recorded, and the process ends with an action plan. The programmer of the software product is responsible for correcting the discovered errors.

2. Explain the need for inspection and list the different types of code reviews.
Answer: The inspection process is needed for various purposes, as described below:
It helps to identify errors at an early stage.
It helps in identifying the most error-prone sections of the program at an early stage of the software development cycle.
The programmers receive feedback concerning their programming style and choice of algorithms and programming techniques.
Other participants gain programming knowledge by being exposed to another programmer's errors and programming style.

The different types of code reviews are:
Code inspections
Walkthrough process
Round robin reviews

3. Consider a program and perform a detailed review and list the review findings in detail.
Answer: We will review a program that first reads student data, such as the student name and roll number, stores the data in a file, and then reads the data back from the file.

struct student
{
    char name[20];
    int roll;
};
void main()
{
    struct student stud[10];
    fstream inputfile;
    char filename[10];
    int i,n;
    cout<<"Enter file name";
    cin>>fname;
    cout<<"\nNumber of student records to store:";
    cin>>n;
    cout<<"Enter student details";
    for(int x=0;x<n;x++)
    {
        cout<<"Student name:";
        cin>>stud[x].name;
        cout<<"Student roll number:";
        cin>>stud[x].roll;
    }
    inputfile.open(fname,ios::out);
    for(i=0;i<n;i++)
        inputfile<<stud[i].name<<stud[i].roll<<endl;
    infile.close();
    // Read student records from the file
    inputfile.open(fname,ios::out);
    i=0;
    while(!infile.eof())
    {
        infile>>stud[i].name>>stud[i].roll;
        ++i;
    }
    for(int j=0;j<n;j++)
        cout<<stud[j].name<<stud[j].roll<<endl;
    inputfile.close();
}

We formed an inspection team of four people to conduct a detailed review of the above program. The inspection team used a common checklist of errors for a detailed review of the program. The team searched for different types of errors in the program. The findings of the detailed review using the various checks are listed as follows:

Check for Data-Declaration Errors
Some of the variables used in the program have not been explicitly declared, and some variables that have been declared have not been used. The program declares a character array named filename to store the file name, but a different variable, fname, is used to input the file name from the user. The variable fname has not been declared in the program. The program has correctly declared an integer variable to input the number of student records to be stored in a file. The program declares a file type variable to perform file input-output operations.

Check for Data-Reference Errors
The file close() function references a file variable that has not been set in the program. The program declares an array to store ten student records, but no checks exist on the number of student records a user can input.

Check for Comparison Errors
These checks detect whether there is any comparison between variables having inconsistent data types. The program correctly compares the expression in the while loop. The program correctly performs the Boolean check operation to detect the end of the file.

Check for Control-Flow Errors
The for loops used to input student data, store the student data in a file, and display the data stored in the file will terminate properly, provided a user inputs at most ten student records.

Check for File Input/Output Errors

The program stores student records in a file and later reads the stored data. To write student records to the file, the file has been opened with the correct file opening mode. After all the data is written to the file, the file that was opened is not closed. The file name parameter used to open the file for the write operation is not declared in the program. To read the file data, the file has been opened with the wrong file opening mode. Also, the file name parameter used to open the file for the read operation is not declared.
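To show how the file-handling defects identified by these checks can be repaired, the following is a minimal corrected sketch of the write-then-read logic. The function name and the fixed-size interface are our own, for illustration; the interactive input of the original program is replaced by parameters so the logic itself can be exercised:

```cpp
#include <fstream>
#include <cstring>
using namespace std;

struct student {
    char name[20];
    int roll;
};

// Write n records to the named file, then read them back into out.
// Returns the number of records read. The file is opened with
// ios::out for writing and ios::in for reading, and it is closed
// through the same variable that opened it.
int save_and_load(const char* fname, const student* in, int n, student* out) {
    fstream inputfile;
    inputfile.open(fname, ios::out);            // correct mode for writing
    for (int i = 0; i < n; i++)
        inputfile << in[i].name << ' ' << in[i].roll << '\n';
    inputfile.close();                          // close before reopening

    inputfile.open(fname, ios::in);             // correct mode for reading
    int count = 0;
    // Bounded read loop: checks the stream state instead of eof(),
    // and never writes past the end of the array.
    while (count < n && (inputfile >> out[count].name >> out[count].roll))
        ++count;
    inputfile.close();
    return count;
}
```

Note how each review finding maps to a fix: a single, declared file-name parameter; the file closed through the variable that opened it; the correct open mode for reading; and a bounded loop that checks the extraction result rather than eof().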

In this way, the inspection team performed a detailed inspection of the program.

4. Explain the difference between a code walkthrough and an inspection.
Answer: An inspection is a formal meeting conducted by a group of peers. The group first inspects the product privately and then gets together in a meeting to discuss the problems detected by each member. Its primary purpose is to detect errors in the various stages of the SDLC. The inspection process includes the author, reviewers, and a moderator. The moderator is responsible for planning and successfully executing the inspection process. Reviewers are not directly responsible for the development of the software, but are concerned with the product. Reviewers may include designers and testers. During the meeting, the moderator explains the objective of the review, defines the roles of the different people, gives clarifications about the process, and distributes the inspection package to the reviewers. The inspection package consists of the documents to be inspected, additional documents that help in a better understanding of the product, and the checklists to be used. A walkthrough is an informal technique for analyzing the code. Its main purpose is to train the walkthrough attendees about the work product. A code walkthrough is conducted after the coding of the module is complete. The members of the development team select some test cases and simulate the execution of the code. In this process, the author describes the work product to all the members and gets feedback from them. The members also discuss the solutions to the various problems detected.

FAQ
1. What is the difference between a walkthrough process and a code inspection process?
Answer: Code inspection is a formal method of inspecting the code by a group of people with the purpose of identifying the defects in it. Walkthroughs are informal meetings conducted by the author of the program with the intent of educating the participants about the work product. They also involve discussions for identifying alternative solutions to the problems uncovered.

2. How can you determine the quality of a software product before executing it?
Answer: Software quality can be determined by using static techniques, such as code reviews and walkthroughs, before executing the code.

3. What are the various techniques used for developing test cases?
Answer: Two approaches are used for developing test cases: black box testing and white box testing. Black box testing can use the equivalence partitioning method, boundary value analysis, or cause effect graph analysis for designing test cases. White box testing involves control flow methods and data flow methods.

4. What are the various elements that are included in the inspection package for code review?
Answer: The inspection package for code review includes:
Program source code
Significant portions of the design or specification document
Checklists to be used for review
System constraints

5. What is the moderator's responsibility after the inspection meeting completes? Answer: After the inspection meeting completes, the moderator prepares a summary report of the meeting. The report lists the various errors uncovered during the meeting. The moderator also ensures

that all the issues in the report are addressed. The moderator also decides whether or not to perform a re-review of the product.

Chapter-Five
Tool Selection
A testing tool is an instrument used by a tester to perform testing. The selection of an appropriate tool is an important aspect of the testing process, as a suitable tool increases the efficiency and effectiveness of testing. Selecting an appropriate tool also eases the burden of test production and test execution, because minimum effort and time are spent on the process when the correct tool is selected. The criteria affecting the selection of an appropriate testing tool are as follows:
The objectives of testing should be accomplished successfully.
The tool should be easy to use.
The time spent in installing and learning about the tool should be minimal.
The tool should be compatible with the platform and software used for testing.
The purchase cost of the tool should be within the project budget.

The test manager plays a significant role in the identification, selection, and acquisition of testing tools. After selecting an appropriate tool, the test manager should perform the following tasks:
Identify the goals to be met and assign responsibilities for the activities required to meet these goals.
Approve a detailed tool acquisition plan that defines the resource requirements for procurement and in-house activities.
Approve the procurement of the tool and training in the use of the tool, if this is not explicitly defined in the approval of the acquisition plan.
Determine, after some period of tool usage, whether the goals related to the tool have been accomplished.

State real life examples that will emphasize the significance of selecting the appropriate tool.

The examples are as follows:
Visualize a log of wood with a six-inch nail to be driven into it. If the person performing the task uses a tack hammer, the process becomes tedious and expensive. On the other hand, if the same task is performed with a three-pound heavy-duty hammer, the nail goes smoothly into the wood with very little effort and no wastage of time. Thus, the selection of the appropriate tool is essential for effective testing.
Visualize a hole in a piece of cloth. The hole can be stitched with a simple tool such as a needle; if a sword were used, the task would be impossible. Thus, the selection of an appropriate tool is essential to complete the task on time.
The tool with which the tester performs the test should be based on the tester's individual skill level. If the tester does not possess the skills to use the tool, adequate training should be provided.

Types of Testing Tools


Automated Regression Testing Tools: The main selection parameter for an automated regression testing tool is the ability of the tool to support the application's front-end technology. Some steps involved in selecting this tool are as follows:
Selecting the test cases that are to be automated
Designing the framework of testing
Designing test scenarios
Preparing input data for the test
Identifying utility functions related to the application software

An automation suite should be developed to carry out the testing process with the automated regression testing tool. This is done as follows:
Developing utility functions related to the application software.
Recording test scripts and further programming for sturdiness.
Performing a code review and unit testing of the test scripts.
Testing the test scripts in a batch on one machine and then on multiple machines.

An example of an automated regression testing tool is the Taragana automated regression testing tool. This tool simplifies the creation of a large volume of automated acceptance tests, regression tests, or sanity tests in a short span of time for Web-enabled applications. The tool is based on XML scripts and is capable of testing password-protected and secured sites. Some of the benefits of the Taragana automated regression testing tool are as follows:
Tightly integrated with JUnit, enabling easy integration with the existing tests
Cross-platform-based tool
Extremely flexible in comparing elements of a Web page with elements on another Web page, which can be accessed from a file repository or Web server

Load Testing Tools: The main parameter in selecting a load testing tool is its ability to determine the benchmark or breakdown point of the application software. The load testing tool should be evaluated to determine whether it supports the application technology not only at the Graphical User Interface (GUI) level but also at the middle tier and database level. The load testing tool must also support the protocols of communication between the tiers. Some of the steps involved in setting up the load testing tool are as follows:
Configuring an isolated network with servers of the specified configuration
Configuring the proper application and database
Identifying the number of clients
Configuring the load testing tool

The load testing scenario should be properly designed to include the following:
Critical transactions for load testing
Properly defined workload patterns
Properly defined configuration patterns

WAPT 3.0 is a load and stress-testing tool that provides an easy-to-use and cost-effective way of testing Web sites and intranet applications with Web interfaces. This tool allows the user to test and analyze the performance characteristics and bottlenecks of a Web site under various load conditions. Some of the advantages of WAPT 3.0 are as follows:
Is designed for Microsoft Windows 95/98/ME/NT/2000/XP.
Is competitively priced and does not require expensive hardware to run.
Is easy to use and a very powerful Web testing tool.

Manual tools: These are tools that are operated manually and do not require the tester to possess an in-depth knowledge of programming. Manual tools do not execute program code and do not require executing software. Various kinds of manual tools are available that facilitate the procedure of testing. The most important ones are as follows:
Checklists: The checklist is a written format that contains a series of probing questions about the completeness and attributes of the application software. An example of a checklist is a feedback form.
Test scripts: The test script is a tool that specifies the order of actions that should be performed during a test session. The script also contains the expected results. Examples of test scripting languages are Python and Jython, where Jython is an implementation of Python written in Java and integrated with the Java platform.
Decision Tables: The decision table is a tool used for documenting the different combinations of conditions and associated results in order to derive unique test cases for validation testing. Decision tables can either be computer-based or simply drawn on paper. The decision table contains a list of decisions and the criteria on which they are based. All the possible situations for decisions should be listed, and the action to be taken for each decision should be specified. Consider the example of a traffic intersection, where the decision to proceed can be expressed as a yes or no and the criterion to proceed is the color of the light, whether red or green.
Traceability tools: Traceability tools prevent defects from cropping up in the test execution phase of the software development life cycle and help deliver almost defect-free software products to the customer. These tools ensure that the requirements are covered without any loopholes and that the changes made to the artifacts are tracked.
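The traffic intersection example can be expressed as a small decision-table-driven check. The sketch below is ours (the function name is hypothetical); each row of the table becomes one test case for validation testing:

```cpp
#include <string>
using namespace std;

// Decision table for the traffic intersection example:
//   Condition: light is green -> Action: proceed = yes
//   Condition: light is red   -> Action: proceed = no
bool proceed(const string& light) {
    return light == "green";
}
```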

The traceability matrix tool is used to build traceability matrices. It ensures coverage of the testing artifacts prepared with respect to the business requirements of the software product. The matrix is used as a cross-reference between the system being tested, the detailed system specifications, and the test cases. An ideal tool is one that provides hypertext links among system features, test cases, and modules of the software product. Some efficient ways of building testing artifacts are as follows:
Prepare a traceability matrix at every stage of the test process.
Identify the grey areas that cannot be tested and ensure that they are not present in any document.
Update the traceability matrix concurrently when the review comments are incorporated for any application software document prepared.
Link all the baseline documents together by taking any one document as the base document.
Interlink all the test ware prepared. This means that the test conditions should be mapped to the test cases, which are mapped to the test scripts.
Map all the test cases written to the corresponding test conditions that are signed off and frozen.
Prepare the application documents in a sequential manner to establish the links between the documents.

Code Coverage tools: The code coverage tool is used to quantify application testing and to identify untested portions of the code. This tool automatically instruments the code to provide detailed information about the dynamic behavior of the application software. An example of a coverage tool is the Java Test Coverage tool. This tool enables the collection and display of code coverage data on Java source code. Some of the features of the Java Test Coverage tool are as follows:
Compatible with Java 1 and Java 2
Works with stand-alone applications or servlets
Works with arbitrary subsets of the source code base
Can accumulate data from multiple test runs
Handles thousands of files at one time

Activities in Test Case Design


A test case is a detailed procedure that fully tests a feature of the software application, or an aspect of a feature, depending on the complexity of the application. It describes the various permutations and combinations of steps to be carried out to test a feature or a screen exhaustively. A test case should be developed for each type of test listed in the test process. Test cases are of the following two types:
Descriptive: These test cases describe, in general terms, the manner in which the test should be performed.
Detailed: These test cases contain step-by-step information on how to perform the test and the data that should be used to perform it.

Most organizations prefer detailed test cases because such test cases are reproducible, can be automated, and can be used to determine the pass or fail criteria for a test. A test case must include the following components:
Purpose of the test
Special hardware requirements, such as a modem
Special software requirements
Specific setup or configuration requirements
Description of how to perform the test
Expected results or success criteria of the test

Designing test cases is time-consuming. However, if this activity is done exhaustively, it enables comprehensive testing to be completed at the proper time.

Documenting Test Case Design
Documenting a test case design involves creating a simply written, procedural guide to develop software test documentation of high quality, which is both systematic and comprehensive. The documentation should contain detailed instructions and templates for the given test documents, such as:
Test plan
Test design specification
Test case specification
Test procedure
Test item transmittal report
Test record
Test log
Test incident report
Test summary report

Chapter Five Questions


1. What is black box testing? Explain.
Answer: Black box testing focuses on the functional requirements of the software. In this testing method, the structure of the program is not considered. Test cases are designed only on the basis of the requirements or specifications of the program, and the internals of the modules are not considered for the selection of test cases. Black box testing is applied at later stages in the testing process. Black box testing is used to find incorrect or missing functions, interface errors, errors in data structures, performance errors, and initialization and termination errors. The techniques used for black box testing are:
Equivalence partitioning
Boundary value analysis
Cause effect graphing techniques

2. What are the different techniques that are available to conduct black box testing?
Answer: The different techniques available to conduct black box testing are:
Equivalence partitioning
Boundary value analysis
Cause effect graphing

Equivalence class partitioning is a way of selecting test cases for black box testing. In this technique, the domain of all the inputs is divided into a set of equivalence classes. If any test in an equivalence class succeeds, then every test in that class will succeed. This means that you need to identify classes of test cases such that the success of one test case in a class implies the success of the other test cases. The following points should be remembered while designing equivalence classes:
If a range of values defines the input data values to a system, then one valid and two invalid equivalence classes are defined.
If the input data assume values from a set of discrete members of some domain, then one equivalence class for valid input values and another equivalence class for invalid input values are defined.
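For example, suppose a hypothetical module accepts an age between 18 and 60 (the function below is our own illustration, not from the text). The range yields one valid class (18 to 60) and two invalid classes (below 18 and above 60), so one representative value from each class is enough:

```cpp
// Hypothetical function under test: accepts ages in the range 18..60.
bool is_valid_age(int age) {
    return age >= 18 && age <= 60;
}

// Equivalence classes for the input range 18..60:
//   invalid: age < 18        -> representative value 10
//   valid:   18 <= age <= 60 -> representative value 35
//   invalid: age > 60        -> representative value 70
```

Three test cases, one per class, cover the whole input domain under the equivalence assumption.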

Boundary value analysis (BVA) leads to the selection of test cases at the boundaries of the different equivalence classes. It is observed that the boundary points of inputs are often not tested properly, which leads to many errors. Guidelines for BVA are:

If the input range is specified to be between a and b, test cases should be designed with the values a and b and with values just above and just below a and b.
If the input contains a number of values, test cases that use the minimum and maximum values should be designed. The values just above and just below the minimum and maximum values are also tested.
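Continuing the hypothetical age-range example with a = 18 and b = 60, the first guideline yields test values at 17, 18, 19 and 59, 60, 61:

```cpp
// Hypothetical function under test: accepts ages in the range 18..60.
bool is_valid_age(int age) {
    return age >= 18 && age <= 60;
}

// Boundary value analysis for the range [18, 60]:
// test a-1, a, a+1 and b-1, b, b+1.
```

An off-by-one error in either comparison (for example, writing age > 18) would be caught by the boundary values themselves, which is exactly the class of defect BVA targets.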

The above two guidelines are also applied to output conditions. Cause effect graphing is a technique that helps to select combinations of input conditions in a systematic manner, such that the number of test cases does not become too large. The technique starts with the identification of the causes and effects of the system. A cause specifies an input condition, and the effect is a distinct output condition. In cause effect graphing, you need to create a graph of important program objects, such as modules or collections of programming language statements, and describe the relationships between them. A series of tests is then conducted on each object of the graph so that each object and the relationships between objects are verified and errors are uncovered. In the graph, the nodes represent objects and the links represent the relationships between objects.

3. Explain the different methods available in white box testing.
Answer: The different methods available in white box testing are:
Basis path testing: Enables the test case designer to derive a logical complexity measure of a procedural design. This measure is used to define a basis set of execution paths. The test cases that exercise the basis set execute every statement in the program at least once. To derive the basis set, you need to take the following steps:
Draw a flow graph corresponding to the design or the code.
Calculate the cyclomatic complexity of the resultant flow graph.
Determine a basis set of linearly independent paths.

Prepare test cases that force the execution of each path in the basis set.
Condition testing: Verifies the logical conditions contained in a program module. The possible types of elements in a condition include a Boolean operator, a Boolean variable, a relational operator, or an arithmetic expression. This method not only detects errors in the conditions of a program, but also detects other types of errors. The conditional strategies used include branch testing and domain testing. In branch testing, each decision in the program needs to be evaluated to true and false values at least once during testing. Domain testing requires three or four tests to be derived for a relational expression. The relational expression is given in the following form:
E1 <relational operator> E2
You require three test cases for the above expression: one test case has E1 greater than E2, another has E1 equal to E2, and the third has E1 less than E2.
Data flow testing: Enables the selection of test paths of a program according to the locations and uses of variables in the program.
Loop testing: Focuses on the validity of loop constructs. The different types of loops include simple loops, concatenated loops, nested loops, and unstructured loops. Different sets of tests are applied to each type of loop.
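As an illustration of basis path testing (the function below is our own example, not from the text), a function with two decisions has a flow graph with cyclomatic complexity V(G) = 3, so the basis set contains three linearly independent paths; one test case per path executes every statement at least once:

```cpp
// Illustrative function with two decisions.
// Flow graph: 2 predicate nodes, so V(G) = 2 + 1 = 3.
// Basis paths: x < 0; x == 0; x > 0.
int classify(int x) {
    if (x < 0)      // decision 1
        return -1;
    if (x == 0)     // decision 2
        return 0;
    return 1;
}
```

The three test cases derived from the basis set (a negative value, zero, and a positive value) force the execution of each independent path.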

FAQ
1. Can a person other than the software engineer design black box tests? Answer: Black box tests are architecture independent and require no knowledge of the underlying system. Black box testing is not concerned with how the output is produced; it only checks whether the actual output matches the expected output. As a result, one need not be a software engineer to design black box tests. 2. What is a test case? How is it useful? Answer: A test case is a document that describes an input, an action, and the expected output to determine whether a program module is working correctly. Test case design helps to discover problems in the

requirements or design of an application, as it requires a complete understanding of the operations performed by the application. 3. What is test data? Answer: Test data is the data developed in support of a specific test case. Test data can either be manually generated or extracted from an existing resource, such as production data. Test data that is the result of one test execution can serve as input to a subsequent test. Another source of test data is recording user input using a capture/playback tool.
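The three parts of a test case, input, action, and expected output, can be represented in a minimal sketch; the module under test (an absolute-value function) and its test data are hypothetical illustrations:

```python
# A test case records an input, an action, and the expected output.
# The module under test below is a hypothetical example.

def absolute_value(n):
    """Module under test."""
    return -n if n < 0 else n

# Each test case: (description, input, expected output).
test_cases = [
    ("positive input", 7, 7),
    ("negative input", -7, 7),
    ("zero input", 0, 0),
]

results = []
for description, value, expected in test_cases:
    actual = absolute_value(value)  # the action: execute the module
    results.append((description, actual == expected))

# For a correctly working module, every test case passes.
assert all(passed for _, passed in results)
```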

4. What is the difference between usability testing, recovery testing, and compatibility testing? Answer: Usability testing involves testing the ease with which a user learns about a product and uses it. Recovery testing involves verifying the system's ability to recover from losses caused by varying degrees of failure. Compatibility testing involves testing whether the system is compatible with the other systems with which it has to communicate during the execution of a process.

Chapter-Six
Test Environment
The test environment of an application software is a collection of hardware, software, network communications, and procedures that work together to provide a discrete computer service. Each environment has unique features and characteristics that dictate how its components are administered.

Testing a Client/ Server Architecture


Client/server architecture is a technology in which processing is divided between client machines that request services and servers that provide them. The functioning of client/server architecture is as follows: the application server handles the processing requests of the clients, and the back-end processor handles the processing of batch transactions on a regular basis. The following figure depicts client/server architecture:

[Figure: Several clients connected to a server]
The Client/Server Architecture

The setting up of the environment for client/server architecture requires the following:

Appropriate hardware, such as monitors and CPUs, should be configured at the appropriate time and place. Software, such as an SQL database server, should be installed to enable smooth functioning of the architecture. Appropriate security testing should be carried out to protect the hardware, the software, and the data that is processed using them.

All client workstations should operate under the same set of rules. Thus, a set of standards should be developed to enable proper functioning.

Test Reporting
After testing a software application for its validity and reliability, a software tester must document and report the results. This process is known as test reporting. Developers and testers of the software application under test use the information thus obtained to fix the defects and track the status of the software application, respectively. Test reporting involves reporting the results of test execution of a software application to the relevant stakeholders. There are certain software tools that help a software tester to effectively generate a test report. These test-reporting tools provide features that make it possible to: Generate test report formats Provide statistical output of test results Track the defects found in a software application during testing Record the result of the test

Examples of test reporting tools are: Analysis tools Database tools Defect tracking tools Test report formats

Test Report Format Most test reports have a standard format for logging the results. These formats are prepared using word processing packages, such as Microsoft Word. At times, spreadsheet packages, such as Microsoft Excel and Lotus 1-2-3, are also used to create test report formats. The following is a sample report: Scope of Test This section indicates the functions that are tested. Test Results This section indicates the actual and the expected results of testing the project, with columns for the Actual Result and the Expected Result.

What Works/What Does Not Work This section lists the functions that operate as expected and those that do not operate as defined. Recommendations This section suggests the actions to be taken to remove the errors in the project.
Sample Test Report of an Individual Project
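The sample format above could also be generated programmatically. The following is a rough sketch; the function name, the test results, and the recommendation text are all hypothetical:

```python
# Sketch of a test report generator that fills in the four standard
# sections of the sample format above. All data here is hypothetical.

def generate_report(scope, results, recommendations):
    """Build a plain-text test report.

    `results` is a list of (function name, expected result, actual result).
    """
    lines = ["Scope of Test", "  " + scope, "", "Test Results"]
    for name, expected, actual in results:
        lines.append(f"  {name}: expected={expected}, actual={actual}")
    works = [name for name, e, a in results if e == a]
    broken = [name for name, e, a in results if e != a]
    lines += ["", "What Works", "  " + (", ".join(works) or "(none)"),
              "What Does Not Work", "  " + (", ".join(broken) or "(none)"),
              "", "Recommendations", "  " + recommendations]
    return "\n".join(lines)

report = generate_report(
    scope="Login and logout functions",
    results=[("login", "success", "success"),
             ("logout", "success", "error")],
    recommendations="Fix the session cleanup error in the logout function.",
)
print(report)
```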

Chapter Six Questions


1. Explain the need for GUI testing and its complexity. Answer: GUIs are increasingly used in applications because they enable users to easily use the different services provided by the applications. GUIs need to be properly tested to verify that they fulfill their objectives. There are no standard specifications based on which GUIs are designed. GUI designs are primarily guided by user psychology, which varies from application to application. It is difficult to understand user psychology and prepare appropriate test methods to verify the correctness and appropriateness of GUIs. Therefore, GUI testing is a complex process. 2. List the guidelines required for a typical tester during GUI testing. Answer: The guidelines for GUI testing can be categorized by the operations they cover: For windows: Check if the windows open properly based on the menu-based commands. Check if the window can be resized, moved, and scrolled. Check if the window properly regenerates when it is overwritten and then recalled. Check if all the functions that relate to the window are available when needed. Check if all the functions relating to the window are operational.

Check if all relevant pull-down menus, tool bars, scroll bars, dialog boxes, buttons, icons, and other controls are available and properly represented. Check if the active window is properly highlighted. Check if multiple or incorrect mouse picks within the window cause unexpected side effects. Check if audio and/or color prompts within the window appear according to the specification. Check if the window is properly closed. For pull-down menus and mouse operations: Check if the menu bar is displayed in the appropriate context. Check if the application menu bar displays system-related features. Check if the pull-down operations work properly. Check if the breakaway menus, palettes, and tool bars work properly. Check if all menu functions and pull-down sub functions are properly listed. Check if all menu functions are properly addressable by the mouse. Check if the text typeface, size, and format are correct. Check if it is possible to invoke each menu function using its alternative text-based command.

Check if the menu functions are highlighted (or grayed out) based on the context of the current operations within a window. Check if each menu function performs as advertised. Check if the names of menu functions are self-explanatory. Check if the help available for each menu item is context sensitive. Check if the mouse operations are properly recognized throughout the interactive context. Check if multiple clicks, where required, are properly recognized in context. Check if a mouse with multiple buttons is properly recognized in context.

Check if the cursor, processing indicator (e.g., an hourglass or clock), and pointer properly change as different operations are invoked. For data entry:

Check if alphanumeric data entry works properly. Check if the graphical modes of data entry (e.g., a slide bar) work properly. Check if invalid data is properly recognized. Check if data input messages are intelligible. Check if basic standard validation on each data item is performed during data entry itself.

If a correction is to be made after the data is entered completely, check whether the entire data must be entered again. Check if the mouse clicks are properly used. Check if the help buttons are available during data entry. 3. Select your own GUI based software system and test the GUI related functions by using the listed guidelines in this Chapter. Answer: To test the GUI related functions of a GUI based software system, we have selected a text editor application. The text editor application is a GUI based application that performs text-editing operations, such as file, edit, and search operations. The file operations include creating a new file, opening existing files, saving files, and exiting the application. The edit operations include undo, cut, copy, paste, delete, select all, time/date, word wrap, and set font operations. The search operations include find and find next operations. Listed below are the results of testing the text editor application for GUI related functions. Test for windows 1. A text editor screen is displayed on opening the text editor application. The menu bar contains File, Edit, Search, and Help menus for different types of text editing operations. 2. The text editor screen can be resized using the maximize and minimize buttons. 3. Each menu in the menu bar lists appropriate sub menus. 4. The text editor screen is highlighted when selected for text editing operations. 5. The close button on the text editor screen closes the text editor application. Test for pull-down menus 1. The menu bar in the text editor system displays menus to perform text-editing operations, such as file, edit, and search operations. 2. Each menu in the menu bar has a pull-down menu to display the sub menus listed in that particular menu. 3. The sub menus are properly listed for each menu. For example, the File menu contains New, Open, Save, Save As, and Exit sub menus. 4. All the menus and sub menus are addressable with a mouse. For example, the Open dialog box appears on clicking the Open sub menu of the File menu. 5. Each menu is highlighted on being pointed at by the mouse. 6. The names of the menus and sub menus are self-explanatory. For example, a user can easily make out that the New sub menu on the File menu opens a new file. 7. Every menu and sub menu in the text editor screen has a hot key to perform the related function without clicking a mouse. Test for data entry operations 1. The text editor screen facilitates entry of alphanumeric data. 2. Graphical data cannot be input in the text editor screen, and an appropriate error message is displayed specifying invalid data entry. 3. The text editor screen facilitates the removal of unwanted data by selecting the text to be deleted and clicking the Delete sub menu on the Edit menu. Similarly, repetitive text can be copied and pasted in multiple locations on the text screen using the Paste sub menu in the Edit menu.

FAQ
1. What are interim test reports?

Answer: Interim test reports describe the status of testing. They are designed so that the test team can track progress against the test plan. Interim test reports are also important for the development team, as the test reports will identify defects that should be corrected. 2. What are the long-term benefits of a test report? Answer: The main long-term benefits of developing a test report are as follows: Problems in a software application can be traced if it functions improperly at the time of production. The relevant stakeholders of a software application can scan through a test report to trace the functions that have been correctly tested and those that may still contain defects. This can assist them in making appropriate decisions. Data in the test report can be used to analyze the rework process for making changes to prevent defects from occurring in the future. This can be done by accumulating the results of many test reports to identify the components of the rework process, which are defect-prone.

3. Name some client/server testing tools. Answer: Some client/server testing tools are those from Mercury Interactive, Performance Awareness, and AutoTester Inc.

4. What is one important feature that testing tools should possess? Answer: The testing tool you select must function on all of the operating environments that run in your organization. For example, AutoTester works on Windows 3.11, Windows 95, Windows NT, Unix, and OS/2. Segue Software's QA Partner runs on Windows 95, Windows NT, Windows 3.1, OS/2, Macintosh System 7.x, and more than a dozen flavors of Unix. Great Circle from Geodesic Systems works with Windows 3.11, Windows NT, and Windows 95, and will soon be released for OS/2 Warp 3.0 and Sun Solaris x86. 5. What are the various formats used for documenting tests? Answer: The various formats used for documenting tests are: Test Plan, Test Cases, Test Procedures or Test Scripts, Test Results Documentation, Test Summary Log, Test Fault or Incident Log, Test Summary or Exit Report, Minutes from Test Exit Meeting, Test Data, and Test Schedule.

Chapter-Seven
Program Slicing Debugging Approach
The program slicing debugging approach is similar to the backtracking debugging approach. In the program slicing debugging approach, the search for an error is limited by dividing the program into slices and testing only the program slices that may cause the error. For example, if you need to test a variable in a statement, you define a program slice that includes the lines of code preceding the statement and affecting the value of the variable in the statement. Using this method, the parts of the program that are irrelevant to the variable in the statement are deleted.
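The slicing idea can be illustrated with a deliberately simplified sketch. The straight-line program, its variable names, and the assumption that each variable is assigned only once are all hypothetical simplifications:

```python
# Simplified backward-slicing sketch for a straight-line program in
# which each variable is assigned exactly once (a hypothetical case).
# Each statement is modeled as (target variable, set of variables read).

program = [
    ("a", set()),   # a = 5
    ("b", {"a"}),   # b = a + 1
    ("c", set()),   # c = 10        (irrelevant to the slice on 'd')
    ("d", {"b"}),   # d = b * 2     <- slicing criterion: variable 'd'
]

def backward_slice(statements, criterion):
    """Return indices of statements that may affect `criterion`."""
    relevant = {criterion}
    kept = []
    # Walk backwards: keep a statement if it defines a relevant
    # variable, and add the variables it reads to the relevant set.
    for index in range(len(statements) - 1, -1, -1):
        target, reads = statements[index]
        if target in relevant:
            kept.append(index)
            relevant |= reads
    return sorted(kept)

# The slice on 'd' keeps a, b, and d but drops the irrelevant c.
print(backward_slice(program, "d"))  # -> [0, 1, 3]
```

The statement defining `c` is deleted from the slice because it cannot affect the value of `d`, which is exactly the reduction the debugging approach relies on.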

Guidelines for Effective Debugging

The guidelines for effective debugging are: The debuggers must have a thorough understanding of the program design. The debuggers must focus on fixing the errors rather than fixing the symptoms of these errors.

The debuggers must be aware that error fixing may introduce new errors. As a result, regression testing must be performed after each stage of error fixing.

Solutions to Chapter Seven Questions


1. What is the difference between verification and validation? Explain in your own words. Answer: In both verification and validation, the main concern is the correctness of the product. Verification is the process used to determine whether the products of a given phase of software development fulfill the specifications established during the previous phase. Validation is the process that evaluates software at the end of software development to ensure compliance with the software requirements. For high reliability of the software, you need to perform both verification and validation. 2. Explain the unit test method with the help of your own example. Answer: In unit testing, different modules are tested against the specifications produced during design for the modules. The purpose of unit testing is to test the internal logic of the modules. The programmer of the module performs unit testing. The testing method focuses on testing the code. As a result, structural testing is used at this level. Consider the example of a system that displays the division of a student. The system is developed in many modules. Before you integrate these modules, it is necessary to test each module independently. Such tests are referred to as unit tests, as each module or unit is tested in the process. 3. Develop an integration testing strategy for any system that you have already implemented. List the problems encountered during the process. Answer: Suppose you have developed an application for the payroll system of a company. The application is developed in many modules. You first developed the individual modules that calculate the basic salary of the employee, DA, and number of leaves. After developing each module, you perform unit testing and integrate them to obtain the module that calculates an employee's salary. Using bottom up integration testing, the lower-level modules, including the basic salary, DA, and number of leaves modules, are tested.
Then, these tested modules are combined with the higher-level module that calculates an employee's salary. At any stage of testing, the lower-level modules already exist and have already been tested. Therefore, by using the bottom up testing strategy, the parts of the application are tested and errors are detected while development proceeds. It is advantageous to use this technique if major errors occur at lower-level modules. The testing becomes complex when the system is made up of a large number of subsystems. 4. What is validation test? Explain. Answer: Validation is the process of examining the software and providing evidence of the achievement of software requirements and specifications. The validation process provides documented evidence that the software meets its specifications and requirements consistently. To validate software, documented evidence is presented by examining all the phases of the software life cycle. Thus, the validation method is based on the software life cycle model and consists of the following phases: Requirements analysis process Design and implementation process Inspection and testing Installation and system acceptance test Maintenance phase.

Examining, testing, and documenting all the above phases complete the validation process.
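The student-division module from the unit testing answer above can be tested in isolation; this sketch uses Python's unittest module, with hypothetical grade boundaries:

```python
# Unit test sketch for the student-division module mentioned above.
# The grade boundaries used here are hypothetical.
import unittest

def division(percentage):
    """Module under test: map a percentage to a division."""
    if percentage >= 60:
        return "First"
    if percentage >= 50:
        return "Second"
    if percentage >= 40:
        return "Third"
    return "Fail"

class TestDivisionModule(unittest.TestCase):
    def test_first_division(self):
        self.assertEqual(division(75), "First")

    def test_boundary_values(self):
        # Unit tests exercise the internal logic, including boundaries.
        self.assertEqual(division(60), "First")
        self.assertEqual(division(59), "Second")
        self.assertEqual(division(40), "Third")

    def test_fail(self):
        self.assertEqual(division(39), "Fail")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDivisionModule)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the module is tested independently, a defect such as a misplaced boundary (`> 60` instead of `>= 60`) would be caught here, before integration with the other modules.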

FAQ
1. What are the preparatory measures for creating a test strategy? Answer: A test strategy describes the method you can use to test a software product. A test strategy needs to be developed for all levels of testing. The testing team analyzes the requirements, identifies the test strategy, and performs a review of the plan. For creating a testing strategy, the information required includes: Test tools required for testing Description of the roles of various resources to be used for testing

Testing methodology based on the known standards Functional requirements of the product System limitations

2. Which factors decide that the testing process should be concluded? Answer: The factors that decide the conclusion of the testing process are: Product release deadlines and testing deadlines are met Error rate reduces below a certain level Coverage of code and functionality reaches a specified point Test budget has been depleted

3. List the advantages and disadvantages of bottom up integration testing. Answer: Bottom up integration testing helps to test disjointed subsystems simultaneously. In this type of testing, stubs are not required; only test drivers are needed. A disadvantage of the bottom up strategy is that the testing becomes complex when the system is made up of a large number of subsystems. 4. What are the various issues covered during the documentation of a testing process? Answer: The documentation generated at the end of the testing process is called a test summary report. It includes a summary of the tests that were applied to the various subsystems. It also specifies how many tests were applied to each subsystem, the number of tests that were successful, the number that were unsuccessful, and the degree of deviation in the unsuccessful tests. 5. What is the main drawback of integration testing and how can this be remedied? Answer: The problem in integration testing is that it is difficult to locate the errors discovered during the process. This is because there is complex interaction between the system components, so when an inconsistent output is obtained, it is difficult to find the source of the errors. To locate errors more easily, an incremental approach should be used for system integration and testing. You start with a minimal set of subsystems and test this system. Then, you add components to this minimal system and test the system after each added increment. 6. Which of the two methods helps to detect errors at an early stage in the development process, the top down approach or the bottom up approach? Answer: The top down testing approach can detect errors at an early stage in the development process. 7. Why is it difficult to implement the top down testing method? Answer: It is difficult to implement the top down testing method because you need to produce stubs that simulate the lower levels of the system.
These stubs may be simplified versions of the components required, or they may request the software tester to input a value or simulate the action of the components. 8. Which testing should be performed when you make changes to an existing system? Answer: Regression testing is performed to ensure that the software works properly after changes are made to an existing system.
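A stub such as those described above can be as simple as a function that returns a fixed value. In this sketch, the payroll module names and figures are hypothetical; the higher-level salary module is tested before the real lower-level DA module exists:

```python
# Top-down testing sketch: the high-level salary module is tested
# with a stub standing in for the not-yet-written DA module.
# The payroll functions and figures here are hypothetical.

def dearness_allowance_stub(basic_salary):
    """Stub: returns a fixed value instead of the real DA calculation."""
    return 1000.0

def total_salary(basic_salary, da_component=dearness_allowance_stub):
    """Higher-level module under test."""
    return basic_salary + da_component(basic_salary)

# The high-level module can be exercised with the stub in place.
assert total_salary(20000.0) == 21000.0

# Later, the real lower-level module replaces the stub.
def dearness_allowance(basic_salary):
    return basic_salary * 0.1

assert total_salary(20000.0, dearness_allowance) == 22000.0
```

Writing and maintaining such stubs for every missing lower-level component is the extra effort that makes the top down method difficult to implement.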
