Software Testing

Introduction


Software testing is a critical element of software quality assurance and represents the ultimate process for ensuring the correctness of the product. A quality product enhances customer confidence in using the product and thereby improves business economics. In other words, a good quality product means zero defects, which is derived from a better quality testing process.

Software is an integrated set of program code, designed logically to implement a particular function or to automate a particular process. To develop a software product or project, user needs and constraints must be determined and explicitly stated. Development is broadly classified into two kinds:
1. Product development
2. Project development
Product development is done assuming a wide range of customers and their needs; this type of development involves customers from all domains and collecting requirements from many different environments. Project development is done by focusing on a particular customer's need, gathering data from that customer's environment, and bringing out a valid set of information that serves as a pillar of the development process.

Testing is a necessary stage in the software life cycle: it gives the programmer and user some sense of correctness, though never a "proof of correctness". With effective testing techniques, software is more easily debugged, less likely to "break", more "correct", and, in summary, better. Most development processes in the IT industry seem to follow a tight schedule. Often, these schedules adversely affect the testing process, resulting in step-motherly treatment meted out to it. As a result, defects accumulate in the application and are overlooked so as to meet deadlines, and the developers convince themselves that the overlooked errors can be rectified in subsequent releases.

Software Testing

The definition of testing is not well understood. People often use an incorrect definition of the word, and this is a primary cause of poor program testing. Testing a product means adding value to it by raising its quality or reliability, and raising the reliability of the product means finding and removing errors. Hence one should not test a product to show that it works; rather, one should start with the assumption that the program contains errors and then test the program to find as many of the errors as possible.

Definitions of testing:
"Testing is the process of executing a program with the intent of finding errors."
Or
"Testing is the process of evaluating a system by manual or automatic means and verifying that it satisfies specified requirements."
Or
"... the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results ..."
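The first definition can be made concrete with a small sketch. The function under test and its test cases are hypothetical; the point is that testing means executing the program and comparing actual against expected results:

```python
# Hypothetical program under test: a function specified to return
# the larger of two numbers.
def larger(a, b):
    return a if a >= b else b

# Testing means executing the program and comparing actual results
# against expected results; any mismatch is a potential error.
test_cases = [((2, 3), 3), ((5, 1), 5), ((4, 4), 4), ((-2, -7), -2)]

failures = [(inputs, expected, larger(*inputs))
            for inputs, expected in test_cases
            if larger(*inputs) != expected]

print("errors found:", failures)  # an empty list means no error was exposed
```

Note that an empty failure list never proves correctness; it only means these particular executions exposed no error.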

Why Software Testing?
Software testing helps to deliver quality software products that satisfy users' requirements, needs and expectations. If testing is done poorly, defects are found during operation, and the result is:
• high maintenance cost and user dissatisfaction
• possible mission failure
• impact on operational performance and reliability

A case study: Disney's Lion King, 1994-1995
In the fall of 1994, Disney released its first multimedia CD-ROM game for children, The Lion King Animated Storybook. This was Disney's first venture into the market, and it was highly promoted and advertised. Sales were huge; it was "the game to buy" for children that holiday season. What happened, however, was a huge debacle. On December 26, the day after Christmas, Disney's customer support phones began to ring, and ring, and ring. Soon the phone support technicians were swamped with calls from angry parents with crying children who couldn't get the software to work. Numerous stories appeared in newspapers and on TV news. The problem was later traced to software that had not been tested under all conditions.

Software Bug: A Formal Definition
Calling any and all software problems "bugs" may sound simple enough, but doing so doesn't really address the issue. To avoid circular definitions, there needs to be a definitive description of what a bug is. A software bug occurs when one or more of the following five rules is true:
1) The software doesn't do something that the product specification says it should do.
2) The software does something that the product specification says it shouldn't do.
3) The software does something that the product specification doesn't mention.
4) The software doesn't do something that the product specification doesn't mention but should.
5) The software is difficult to understand, hard to use, slow, or (in the software tester's eyes) will be viewed by the end user as just plain not right.
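Rule 1 can be illustrated with a short sketch. The function and its specification are hypothetical; the idea is to check the implementation against every case the specification names and report disagreements:

```python
# Hypothetical specification: absolute_value(x) returns x for x >= 0
# and -x for x < 0.
def absolute_value(x):
    if x > 0:
        return x
    return x  # bug: negative inputs should be negated (rule 1: the
              # software doesn't do something the spec says it should)

# Check every case the specification names and report disagreements.
spec_cases = [(5, 5), (0, 0), (-3, 3)]
bugs = [(inp, expected, absolute_value(inp))
        for inp, expected in spec_cases
        if absolute_value(inp) != expected]

print(bugs)  # the (-3, 3, -3) entry exposes the defect
```

A report like `(-3, 3, -3)` records the input, the specified result, and the actual result, which is exactly the evidence a tester files with a bug.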

What Exactly Does a Software Tester Do? (Or: the Role of the Tester)
From the above examples you have seen how nasty bugs can be, you know what the definition of a bug is, and you can see how costly bugs can be. So the main goal of a tester is: "The goal of a software tester is to find bugs." As a software tester you shouldn't be content at just finding bugs; you should think about how to find them sooner in the development process, thus making them cheaper to fix: "The goal of a software tester is to find bugs, and find them as early as possible." But finding bugs early isn't enough: "The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed."


2 Testing Principles

The main objective of testing is to find defects in requirements, design, documentation, and code as early as possible. The test process should be such that the software product delivered to the customer is defect-free. Test cases must be written for invalid and unexpected, as well as for valid and expected, input conditions. A good test case is one that has a high probability of detecting an as-yet undiscovered error. All tests should be traceable to customer requirements.

Eight Basic Principles of Testing (best testing practices to be followed during testing):
• Define the expected output or result. A necessary part of a test case is a definition of the expected output or result.
• Don't test your own programs.
• Inspect the results of each test completely.
• Include test cases for invalid or unexpected conditions.
• Test the program to see if it does what it is not supposed to do, as well as what it is supposed to do.
• Avoid disposable test cases unless the program itself is disposable.
• Do not plan tests assuming that no errors will be found.
• The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.
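Two of these principles, defining the expected output in advance and including invalid or unexpected conditions, can be sketched together. The `parse_age` function and its valid range are hypothetical:

```python
def parse_age(text):
    """Hypothetical function under test: parse an age field."""
    value = int(text)  # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Each test case defines its input AND its expected outcome in advance,
# covering valid, invalid, and unexpected conditions alike.
cases = [
    ("42", 42),           # valid, expected
    ("0", 0),             # boundary: lowest valid value
    ("150", 150),         # boundary: highest valid value
    ("-1", ValueError),   # invalid: below range
    ("abc", ValueError),  # unexpected: not a number
    ("", ValueError),     # unexpected: empty input
]

for text, expected in cases:
    try:
        actual = parse_age(text)
    except ValueError:
        actual = ValueError
    assert actual == expected, f"case {text!r}: got {actual}, expected {expected}"
print("all cases behave as specified")
```

Note that half of the cases deliberately feed the program input it is *not* supposed to accept, and the expected outcome (an error) is written down before the test runs.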

Other best practices include:
• Testing and evaluation responsibility is given to every member, so as to generate team responsibility among all.
• Conduct reviews as early and as often as possible to provide developer feedback, and get problems found and fixed as they occur.
• Develop a Master Test Plan so that resources and responsibilities are understood and assigned as early in the project as possible.
• Systematic evaluation and preliminary test design are established as a part of all system engineering and specification work.
• A risk-prioritized list of test requirements and objectives (such as requirements-based, design-based, etc.) is developed and maintained.
• Testing is used to verify that all project deliverables and components are complete, and to demonstrate and track true project progress.

3 Software Development Life Cycle (SDLC)
Let us look at the traditional software development life cycle versus the presently (most commonly) used life cycle.

[Fig A (Traditional): Requirements, Design, Development, Testing, Implementation and Maintenance in sequence. Fig B (Most commonly used): the same phases, with Testing running alongside every phase.]

In the above Fig A, the Testing phase comes after Development (coding) is complete, and before the product is launched and goes into the Maintenance phase. This model has some disadvantages: the cost of fixing errors is high because we are not able to find errors until coding is completed, and if there is an error in the Requirements phase, then all subsequent phases have to be changed. So the total cost becomes very high.

Fig B shows the recommended test process, which involves testing in every phase of the life cycle. During the Requirements phase, the emphasis is upon validation to determine that the defined requirements meet the needs of the organization. During the Design and Development phases, the emphasis is on verification to ensure that the design and programs accomplish the defined requirements. During the Test and Installation phases, the emphasis is on inspection to determine that the implemented system meets the system specification. During the Maintenance phases, the system is re-tested to determine that the changes work and that the unchanged portion continues to work. Removing errors at the Requirements phase can reduce the cost as much as errors found in the Design phase.

Requirements and Analysis Specification
The main objective of requirements analysis is to prepare a document which includes all the client requirements; the Software Requirement Specification (SRS) document is the primary output of this phase. Proper requirements and specifications are critical for having a successful project. The SRS is very useful for the developers to understand the flow of the system. You should also verify the following activities:
• Determine the verification approach.
• Determine the adequacy of the requirements.
• Generate functional test data.
• Determine consistency of the design with the requirements.

Design phase
In this phase we design the entire project in two parts:
• High-Level Design or System Design (HLD)
• Low-Level Design or Detailed Design (LLD)
In this phase the design team, the review team (testers) and the customers play a major role.

High-Level Design or System Design (HLD)
High-level design gives the overall system design in terms of functional architecture and database design. The input for this is the requirements specification document.

Low-Level Design or Detailed Design (LLD)
During the detailed design phase, the view of the application developed during the high-level design is broken down into modules and programs. Logic design is done for every program and then documented as a program specification; for every program, a unit test plan is created. The entry criterion for this phase is the HLD document, and the exit criteria are the program specifications and unit test plans (LLD).

Development Phase
This is the phase where coding actually starts, after the preparation of the HLD and LLD. The inputs for this phase are the physical database design document, the program specifications, the unit test plans, project standards, program skeletons, and utility tools. The developers know what their role is, and according to the specifications they develop the project. This stage produces the source code, executables, test data, code reviews, and the database; this output is the subject of subsequent testing and validation. We should also verify these activities:
• Determine the adequacy of the implementation.
• Generate structural and functional test data for programs.

Testing Phase
This phase is intended to find defects that can be exposed only by testing the entire system. This can be done by static testing or dynamic testing. Static testing means testing the product while it is not executing; we do it by examining it and conducting reviews. Dynamic testing is what you would normally think of as testing: we test the executing part of the project. The entry criteria for this phase are the requirements document (SRS), the functional design documents, the database design document, source data, and executables; the outputs are the system test plan and the test results. A series of different tests are done to verify that all system elements have been properly integrated and that the system performs all its functions. Note that system test planning can occur before coding is completed; indeed, it is often done in parallel with coding.
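Dynamic testing at the unit level, driven by the unit test plan created during LLD, can be sketched as follows. The `Stack` class and the behaviours its specification names are hypothetical:

```python
import unittest

# Hypothetical unit under test, as named in a program specification:
# a simple stack.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

# The unit test plan written down as executable checks: one test
# method per behaviour the specification names.
class StackTests(unittest.TestCase):
    def test_pop_returns_most_recently_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_is_an_error(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Dynamic testing: the code is executed and its actual behaviour is
# compared with the behaviour the specification defines.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```

Static testing of the same unit would instead mean reviewing the source and the program specification without running anything.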

Implementation Phase or the Acceptance Phase
This phase includes two basic tasks:
• Getting the software accepted
• Installing the software at the customer site
Acceptance consists of formal testing conducted by the customer according to the acceptance test plan prepared earlier, and analysis of the test results to determine whether the system satisfies its acceptance criteria. When the result of the analysis satisfies the acceptance criteria, the user accepts the software.

Maintenance Phase
This phase is for all modifications that are not meeting the customer requirements, or anything to be appended to the present system. This is the last phase of the software development life cycle; all types of corrections for the project or product take place in this phase. The input will be the project to be corrected, and the output will be the modified version of the project. The cost of risk will be very high in this phase.

4 Software Development Lifecycle Models
The process used to create a software product from its initial conception to its public release is known as the software development lifecycle model. There are many different methods that can be used for developing software, and no model is necessarily the best for a particular project. There are four frequently used models:
• Big-Bang Model
• Waterfall Model
• Prototype Model
• Spiral Model

Big-Bang Model
The Big-Bang model is the one in which a huge amount of matter (people or money) is put together, a lot of energy is expended (often violently), and out comes the perfect software product... or it doesn't. The beauty of this model is that it's simple. There is little planning, scheduling, or formal development process: all the effort is spent developing the software and writing the code. It's an ideal process if the product requirements aren't well understood and the final release date is flexible. It's also important to have flexible customers, too, because they won't know what they're getting until the very end.

Waterfall Model
A project using the waterfall model moves down a series of steps starting from an initial idea to a final product. At the end of each step, the project team holds a review to determine if they're ready to move to the next step. If the project isn't ready to progress, it stays at that level until it's ready. The Waterfall model is also called the Phased model because of the sequential move from one phase to another, the implication being that systems cascade from one level to the next in smooth progression. Each phase requires well-defined information, utilizes well-defined processes, and results in well-defined outputs. Resources are required to complete the process in each phase, and each phase is accomplished through the application of explicit methods, tools and techniques. It has the following seven phases of development:

[Figure: the seven waterfall phases, Requirement, Analysis, Design, Development, Testing, Implementation, and Maintenance, each cascading into the next.]

Notice three important points about this model:
• There is a large emphasis on specifying what the product will be.
• The steps are discrete; there's no overlap.
• There's no way to back up. As soon as you're on a step, you need to complete the tasks for that step and then move on.

Prototype Model
The Prototyping model, also known as the Evolutionary model, came into the SDLC because of certain failures in the first versions of application software. A failure in the first version of an application inevitably leads to the need for redoing it. To avoid such failures, the concept of prototyping is used. The basic idea of prototyping is that instead of fixing requirements before design and coding begin, a prototype is built to understand the requirements. The prototype is built using known requirements, and by viewing or using the prototype the user can actually feel how the system will work. The prototyping model has been defined as:

"A model whose stages consist of expanding increments of an operational software product, with the direction of evolution being determined by operational experience."

Prototyping Process
The following activities are carried out in the prototyping process:
• The developer and end user work together to define the specifications of the critical parts of the system.
• The developer constructs a working model of the system.
• The resulting prototype is a partial representation of the system.
• The prototype is demonstrated to the user.
• The user identifies problems and redefines the requirements.
• The designer uses the validated requirements as a basis for designing the actual or production software.

Prototyping is used in the following situations:
• When an earlier version of the system does not exist.
• When the user is unable to state his/her requirements.
• When the user's needs are not clearly definable/identifiable.
• When user interfaces are an important part of the system being developed.

Spiral Model
The traditional software process models don't deal with the risks that may be faced during project development. One of the major causes of project failure in the past has been negligence of project risks: nobody was prepared when something unforeseen happened. Barry Boehm recognized this and tried to incorporate the risk factor into a life cycle model. The result is the Spiral model, which was first presented in 1986. The new model aims at incorporating the strengths and avoiding the difficulties of the other models by shifting the management emphasis to risk evaluation and resolution.

Each phase in the spiral model is split into four sectors of major activities. These activities are as follows:
• Objective setting: This activity involves specifying the project and process objectives in terms of their functionality and performance.
• Risk analysis: This involves identifying the risks that may be faced during project development, and identifying and analyzing alternative solutions.
• Engineering: This activity involves the actual construction of the system.
• Customer evaluation: During this phase, the customer evaluates the product for any errors and modifications.

5 Verification & Validation

Verification and validation are often used interchangeably but have different definitions, and these differences are important to software testing. Verification is the process of confirming that software meets its specifications. Validation is the process of confirming that it meets the user's requirements. Verification can be conducted through reviews. Quality reviews provide visibility into the development process throughout the software development life cycle, and help teams determine whether to continue development activity at various checkpoints or milestones in the process. They are conducted to identify defects in a product early in the life cycle, rather than at the close of a phase or even later, when they are more costly to correct.

Types of Reviews
• In-process Reviews: These look at the product during a specific time period of the life cycle, such as during the design activity. They are usually limited to a segment of a project, with the goal of identifying defects as work progresses.
• Decision-point or phase-end Reviews: This type of review is held at the end of each phase and is helpful in determining whether to continue with planned activities or not.
• Post-implementation Reviews: These reviews are held after implementation is complete to audit the process based on actual results; they assess the success of the overall process after release and identify any opportunities for process improvement. Post-implementation reviews are also known as "postmortems".

Classes of Reviews
• Informal or Peer Review: Generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported.

• Semiformal or Walkthrough Review: The author of the material being reviewed facilitates this. The participants are led through the material in one of two formats: either the presentation is made without interruptions and comments are made at the end, or comments are made throughout. The purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. These reviews occur as needed through each phase of a project.
• Formal or Inspection Review: An inspection is more formalized than a walkthrough, typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document, such as a requirements specification or a test plan. The product is reviewed and defects and issues are identified, not corrected; possible solutions for uncovered defects are not discussed during the review. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality. The result of the inspection meeting should be a written report.

Three rules should be followed for all reviews:
1. The product is reviewed, not the producer.
2. Defects and issues are identified, not corrected.
3. All members of the reviewing team are responsible for the results of the review.

6 Project Management
Project management is organizing, planning and scheduling software projects. It is concerned with the activities involved in ensuring that software is delivered on schedule and in accordance with the requirements of the organization developing and procuring the software. Project management is needed because software development is always subject to budget and schedule constraints that are set by the organization developing the software. It is a continuous activity from initial concept through to system delivery.

Project management activities include:
• Project planning
• Project scheduling
• Iterative Code/Test/Release phases
• Production phase
• Post mortem

Project planning
This is the most time-consuming project management activity. Without a proper plan, the development of the project will cause errors or may lead to increased cost. The project manager has to take into consideration various aspects like scheduling, estimating manpower resources, and reviews, so that the cost of developing a solution is within the limits. The project manager also has to allow for contingency in planning. The project plan must be regularly updated as new information becomes available.

Project scheduling
This activity involves splitting the project into tasks and estimating the time and resources required to complete each task. Organize tasks concurrently to make optimal use of the workforce, and minimize task dependencies to avoid delays caused by one task waiting for another to complete.

Iterative Code/Test/Release Phases
After the planning and design phases, the client and the development team have to agree on the feature set and the timeframe in which the product will be delivered.

This includes management of iterative releases of the product, so as to let the client see fully implemented functionality early and to allow the developers to discover performance and architectural issues early in the development. Each iterative release is treated as if the product were going to production: full installation routines are used for each iterative release, and full testing and user acceptance is performed for each iterative release, as it would be done in production. During this phase, code reviews must be done weekly to ensure that the developers are delivering to specification, and all source code is put under source control. Experience shows that one should space iterations at least 2-3 months apart; if iterations are closer than that, more time will be spent on convergence and the project timeframe expands.

Deliverables
• Triage
• Weekly Status with Project Plan and Budget Analysis
• Risk Assessment
• System Documentation
• User Documentation (if needed)
• Test Signoff for each iteration
• Customer Signoff for each iteration

Production Phase
Once all iterations are complete, the final product is presented to the client for a final signoff. Since the client has been involved in all iterations, this phase should go very smoothly.

Deliverables
• Final Test Signoff
• Final Customer Signoff

Post Mortem Phase
The post mortem phase allows the team to step back and review the things that went well and the things that need improvement. Post mortem reviews cover processes that need adjustment, highlight the most effective processes, and provide action items that will improve future projects.

To conduct a post mortem review, collect the information listed below and announce the meeting at least a week in advance, so that everyone has time to reflect on the project issues they faced. Everyone has to be asked to come to the meeting with the following:
1. Items that were done well during the project
2. Items that were done poorly during the project
3. Suggestions for future improvements
During the meeting, as each person offers their input, categorize the input so that all comments are collected. This will allow one to see how many people had the same observations during the project. At the end of the observation review, a list will be available of the items that were mentioned most often; this list has to be perused, allowing the team to prioritize the importance of each item and draw a distinction of the most important items. Finally, a list of action items has to be made that will be used to improve the process, and the results published. When the next project begins, everyone on the team should review the Post Mortem Report from the prior release so as to improve the next release.

7 Quality Management
The project quality management knowledge area is comprised of the set of processes that ensure the result of a project meets the needs for which the project was executed. Processes such as quality planning, assurance, and control are included in this area. Each process has a set of inputs and a set of outputs, and each process also has a set of tools and techniques that are used to turn the inputs into outputs.

Definition of Quality:
• Quality is the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.
• Quality is defined as meeting the customer's requirements the first time and every time. This is much more than the absence of defects, which merely allows us to meet the requirements.

Some goals of quality programs include:
• Fitness for use. (Is the product or service capable of being used?)
• Fitness for purpose. (Does the product or service meet its intended purpose?)
• Customer satisfaction. (Does the product or service meet the customer's expectations?)

Quality Management Processes
Quality Planning: The process of identifying which quality standards are relevant to the project and determining how to satisfy them.
• Input includes: quality policy, scope statement, product description, standards and regulations, and other process output.
• Methods used: benefit/cost analysis, benchmarking, flowcharting, and design of experiments.
• Output includes: the Quality Management Plan, operational definitions, checklists, and input to other processes.

Quality Assurance: The process of evaluating overall project performance on a regular basis to provide confidence that the project will satisfy the relevant quality standards.
• Input includes: the Quality Management Plan, results of quality control measurements, and operational definitions.
• Methods used: quality planning tools and techniques, and quality audits.
• Output includes: quality improvement.

Quality Control: The process of monitoring specific project results to determine if they comply with relevant quality standards, and identifying ways to eliminate causes of unsatisfactory performance.
• Input includes: work results, the Quality Management Plan, operational definitions, and checklists.
• Methods used: inspection, control charts, pareto charts, statistical sampling, flowcharting, and trend analysis.
• Output includes: quality improvements, acceptance decisions, rework, completed checklists, and process adjustments.

Quality Policy

The overall quality intentions and direction of an organization with regard to quality, as formally expressed by top management.

Total Quality Management (TQM)
A common approach to implementing a quality improvement program within an organization.

Quality Concepts
• Zero Defects
• The Customer is the Next Person in the Process
• Do the Right Thing Right the First Time (DTRTRTFT)
• Continuous Improvement Process (CIP) (from the Japanese word Kaizen)

Tools of Quality Management
Problem Identification Tools:
• Pareto Chart
1. Ranks defects in order of frequency of occurrence to depict 100% of the defects. (Displayed as a histogram)
2. 80-20 rule: 80% of problems are found in 20% of the work.
3. Defects with the most frequent occurrence should be targeted for corrective action.
4. Does not account for the severity of the defects.
• Cause and Effect Diagrams (fishbone diagrams or Ishikawa diagrams)
1. Analyzes the inputs to a process to identify the causes of errors.
2. Generally consists of 8 major inputs to a quality process, to permit the characterization of each input.
• Histograms
1. Show the frequency of occurrence of items within a range of activity.
2. Can be used to organize data collected for measurements done on a product or process.
• Scatter diagrams
1. Used to determine the relationship between two or more pieces of corresponding data.
2. The data are plotted on an "X-Y" chart to determine correlation (highly positive, positive, no correlation, negative, and highly negative).
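The Pareto chart's ranking and 80-20 rule can be sketched numerically. The defect counts below are hypothetical; the code ranks categories by frequency and selects the "vital few" that together account for at least 80% of all defects:

```python
# Hypothetical defect counts by category, as gathered during testing.
defects = {
    "UI layout": 120,
    "validation": 45,
    "crash": 30,
    "performance": 15,
    "documentation": 5,
}

# Pareto analysis: rank categories by frequency, then accumulate
# until the selected categories cover 80% of all defects.
total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

vital_few, cumulative = [], 0
for category, count in ranked:
    if cumulative >= 0.8 * total:
        break
    vital_few.append(category)
    cumulative += count

print(vital_few)  # ['UI layout', 'validation', 'crash']
```

Targeting corrective action at these few categories addresses roughly 90% of the recorded defects, though, as noted above, frequency says nothing about severity.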

Problem Analysis Tools
1. Graphs
2. Check sheets (tic sheets) and check lists
3. Flowcharts

8 Risk Management
Risk management must be an integral part of any project, because everything does not always happen as planned. Project risk management contains the processes for identifying, analyzing, and responding to project risk. Each process has a set of inputs and a set of outputs, and each process also has a set of tools and techniques that are used to turn the inputs into outputs.

Risk Management Processes
Risk Management Planning: Used to decide how to approach and plan the risk management activities for a project.
• Input includes: the project charter, risk management policies, and the WBS all serve as input to this process.
• Methods used: many planning meetings will be held in order to generate the risk management plan.
• Output includes: the major output is the risk management plan. It does not include the responses to specific risks; however, it does include the methodology to be used, budgeting, timing, and other information.

Risk Identification: Determining which risks might affect the project and documenting their characteristics.

• Input includes: the risk management plan is used as input to this process.
• Methods used: documentation reviews should be performed in this process; diagramming techniques can also be used.
• Output includes: risks and risk symptoms are identified as part of this process.

There are generally two types of risks: business risks, which are risks of gain or loss, and pure risks, which represent only a risk of loss. Pure risks are also known as insurable risks.

Risk Analysis: A qualitative analysis of risks and conditions is done to prioritize their effects on project objectives.
• Input includes: there are many items used as input to this process, such as the risk management plan; the risks should already be identified as well. Use of low-precision data may lead to an analysis that is not usable.
• Methods used: several tools and techniques can be used for this process. Risks are rated against how they impact the project's objectives for cost, schedule, scope, and quality; probability and impact will have to be evaluated.
• Output includes: an overall project risk ranking is produced as a result of this process, and the risks are also prioritized. Risks rated as high or moderate are prime candidates for further analysis.

Risk Monitoring and Control: Used to monitor risks, identify new risks, execute risk reduction plans, and evaluate their effectiveness throughout the project life cycle.
• Input includes: the risk management plan, risk identification and analysis, project change requests, and scope changes.
• Methods used: audits should be used in this process to ensure that risks are still risks, as well as to discover other conditions that may arise; trends should be observed.
• Output includes: work-around plans and corrective action, as well as other items.

Risk Management Concepts
Expected Monetary Value (EMV)

• A risk quantification tool.
• EMV is the product of the risk event probability and the risk event value.
• Risk event probability: an estimate of the probability that a given risk event will occur.

Decision Trees
A decision tree is a diagram that depicts key interactions among decisions and associated chance events as understood by the decision maker. It can be used in conjunction with EMV, since risk events can occur individually or in groups, and in parallel or in sequence.

The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs, supported by some documentation as evidence of the problem. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified.

9 Configuration Management

Configuration management (CM) is the process of controlling, coordinating, and tracking the standards and procedures for managing changes in an evolving software product. This can be seen as part of a more general quality management process. Configuration management involves the development and application of procedures and standards to manage an evolving software product. When released to CM, software systems are sometimes called baselines, as they are a starting point for further development. (Configuration testing, by contrast, is the process of checking the operation of the software being tested on various types of hardware.)

Configuration management can be managed through
• Version control
• Changes made in the project
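As a rough illustration of the EMV concept described above, the calculation can be sketched in a few lines. The risk names and dollar figures below are invented for the example, not drawn from any real project:

```python
# Expected Monetary Value (EMV): risk event probability x risk event value.
# The risk register below is hypothetical, for illustration only.

def emv(probability: float, impact: float) -> float:
    """Return the expected monetary value of a single risk event."""
    return probability * impact

# (name, probability, impact in dollars); a negative impact is a
# potential loss (pure risk), a positive one a potential gain.
risks = [
    ("Key tester leaves mid-project", 0.10, -40_000),
    ("Third-party API ships late",    0.30, -15_000),
    ("Early release bonus",           0.20,  10_000),
]

total = sum(emv(p, v) for _, p, v in risks)
for name, p, v in risks:
    print(f"{name}: EMV = {emv(p, v):,.0f}")
print(f"Overall expected exposure: {total:,.0f}")
```

Summing the individual EMVs gives a single expected exposure figure that can be compared across projects or mitigation options.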

Version Control and Release Management

A version is an instance of a system which is functionally distinct in some way from other system instances; it is nothing but the updated or added features of previous versions of the software. It has to be planned as to when a new system version is to be produced, and it has to be ensured that version management procedures and tools are properly applied.

A release is the means of distributing the software outside the development team. Releases must incorporate changes forced on the system by errors discovered by users and by hardware changes, and they must also incorporate new system functionality.

Configuration Management Planning

This starts at the early phases of the project and must define the documents or document classes which are to be managed; these can include documents, data, or simulations. Documents which might be required for future system maintenance should be identified and included as managed documents. It defines
 the types of documents to be managed
 a document-naming scheme
 who takes responsibility for the CM procedures and creation of baselines
 policies for change control and version management

Changes made in the project

This is one of the most useful ways of configuring the system. All changes that were made to the previous versions of the software will have to be maintained; by making note of them, one can get back the original functionality. This is more important when the system fails or does not meet the requirements. This involves three important elements:
• Change management items
• Change request documents
• Change control board (CCB)

Change management

Software systems are subject to continual change requests: from users, from developers, and from market forces. Change management is concerned with keeping track of these changes and ensuring that they are implemented in the most cost-effective way.

Change request form
Definition of a change request form is part of the CM planning process. It records the change required, the reason why the change was suggested, and the urgency of the change (from the requestor of the change). It also records the change evaluation, impact analysis, change cost, and recommendations (from system maintenance staff).

Change control board
A group should review the changes and decide whether or not they are cost-effective from a strategic, organizational, and technical viewpoint. This group is sometimes called a change control board (CCB) and includes members from the project team.

Change tracking
A major problem in change management is tracking change status. Change tracking tools keep track of the status of each change request and automatically ensure that change requests are sent to the right people at the right time. Integrated with e-mail systems, they allow electronic change request distribution.

10 Types of Software Testing

Testing is broadly classified into static and dynamic testing.

Static Testing

Static testing refers to testing something that is not running: examining and reviewing it rather than executing it. The specification is a document, not an executing program, created using written or graphical material or a combination of both, so reviewing it is considered static testing.

High-level reviews of the specification:
• Pretend to be the customer.
• Research existing standards and guidelines.
• Review and test similar software.

Low-level reviews of the specification:
• Specification attributes checklist
• Specification terminology checklist

Dynamic Testing

The techniques used are determined by the type of testing that must be conducted:
• Structural (usually called "white box") testing
• Functional ("black box") testing

Structural Testing or White Box Testing

Structural tests verify the structure of the software itself and require complete access to the source code. This is known as "white box" testing because you see into the internal workings of the code. White-box tests make sure that the software structure itself contributes to proper and efficient program execution: complicated loop structures, common data areas, and 100,000 lines of spaghetti code with nests of ifs are evil; well-designed control structures, subroutines, and reusable modular programs are good. White-box testing's strength is also its weakness: the code needs to be examined by highly skilled technicians, which means that tools and skills are highly specialized to the

particular language and environment. Also, large or distributed system execution goes beyond one program, so a correct procedure might call another program that provides bad data. In large systems, it is the execution path as defined by the program calls, their input and output, and the structure of common files that is important. This gets into a hybrid kind of testing that is often employed in intermediate or integration stages of testing.

Functional or Black Box Testing

Functional tests examine the behavior of software as evidenced by its outputs, without reference to internal functions; hence it is also called "black box" testing. If the program consistently provides the desired features with acceptable performance, then specific source code features are irrelevant. It is a pragmatic and down-to-earth assessment of software.

Functional or black box tests better address the modern programming paradigm. As object-oriented programming, automatic code generation, and code re-use become more prevalent, analysis of the source code itself becomes less important and functional tests become more important. Black box tests also better attack the quality target: since only the people paying for an application can determine whether it meets their needs, it is an advantage to create the quality criteria from this point of view from the beginning.

Black box tests have a basis in the scientific method. Like the process of science, black box tests must have a hypothesis (specifications), a defined method or procedure (test plan), reproducible components (test data), and a standard notation to record the results. One can re-run black box tests after a change to make sure the change produced only the intended results, with no inadvertent effects.


Testing levels
There are several types of testing in a comprehensive software test process, many of which occur simultaneously:
• Unit Testing
• Integration Testing
• System Testing
• Performance / Stress Testing

• Regression Testing
• Quality Assurance Testing
• User Acceptance Testing and Installation Testing

Unit Testing

Testing each module individually is called unit testing, and it follows a white-box approach. In some organizations, a peer review panel performs the design and/or code inspections. Unit or component tests usually involve some combination of structural and functional tests by programmers in their own systems. Component tests often require building some kind of supporting framework that allows the components to execute.

Integration Testing

The individual components are combined with other components to make sure that necessary communications, links, and data sharing occur properly. It is not truly system testing, because the components are not yet implemented in the operating environment. The integration phase requires more planning and some reasonable subset of production-type data. Larger systems often require several integration steps. There are three basic integration test methods:
• all-at-once
• bottom-up
• top-down
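Of the three methods listed above, bottom-up integration relies on driver routines that feed a module its inputs and display its outputs. A minimal sketch, in which the module under test (compute_discount) and its discount rule are invented stand-ins:

```python
# Bottom-up integration: a driver exercises a lower-level module directly.
# The module under test and its business rule are invented for illustration.

def compute_discount(total: float, is_member: bool) -> float:
    """Module under test: 10% discount for members on totals over 100."""
    if is_member and total > 100:
        return round(total * 0.90, 2)
    return total

def driver():
    """Driver routine: supplies test input, calls the module, shows output."""
    cases = [(150.0, True, 135.0), (150.0, False, 150.0), (80.0, True, 80.0)]
    for total, member, expected in cases:
        actual = compute_discount(total, member)
        print(f"input={total, member} expected={expected} actual={actual}")
        assert actual == expected

driver()
```

Because the driver sits directly above the module, its output is exactly the module's output, which is why results are easy to examine in bottom-up testing.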

The all-at-once method provides a useful solution for simple integration problems involving a small program, possibly using a few previously tested modules.

Bottom-up testing involves individual testing of each module using a driver routine that calls the module and provides it with needed resources. Bottom-up testing often works well in less structured shops because there is less dependency on the availability of other resources to accomplish the test. It is a more intuitive approach to testing that also usually finds errors in critical routines earlier than the top-down method. However, in a new system many modules must be integrated to produce system-level behavior, so interface errors surface late in the process.

Top-down testing fits a prototyping environment that establishes an initial skeleton, with individual modules filled in as they are completed. The method lends


itself to more structured organizations that plan out the entire test process. Although interface errors are found earlier, errors in critical low-level modules can be found later than you would like.

System Testing

The system test phase begins once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing, especially with the top-down method.

Performance / Stress Testing

An important phase of system testing, often called load, volume, or performance testing, stress testing tries to determine the failure point of a system under extreme pressure. Stress tests are most useful when systems are being scaled up to larger environments or being implemented for the first time. Web sites, like any other large-scale system that requires multiple accesses and processing, contain vulnerable nodes that should be tested before deployment. Unfortunately, most stress testing can only simulate loads on various points of the system and cannot truly stress the entire network as the users would experience it. Fortunately, once stress and load factors have been successfully overcome, it is only necessary to stress test again if major changes take place. A drawback of performance testing is that it confirms the system can handle heavy loads, but cannot so easily determine whether the system is producing the correct information.

Regression Testing

Regression tests confirm that the implementation of changes has not adversely affected other functions. Regression testing is a type of test, as opposed to a phase in testing: regression tests apply at all phases, whenever a change is made.

Quality Assurance Testing

Some organizations maintain a quality assurance group that provides a different point of view, uses a different set of tests, and applies the tests in a different, more complete test environment.
The group might look to see that organization standards have been followed in the specification, coding, and documentation of the software. They might check that the original requirement is documented, verify that the software properly implements the required functions, and see that everything is ready for the users to take a crack at it.


User Acceptance Testing and Installation Testing

Traditionally, this is where the users "get their first crack" at the software. Unfortunately, by this time it is usually too late: if the users have not seen prototypes, been involved with the design, and understood the evolution of the system, they are inevitably going to be unhappy with the result. If one can perform every test as a user acceptance test, there is a much better chance of a successful project.

12 Types of Testing Techniques

White Box Testing Technique

White box testing examines the basic program structure and derives the test data from the program logic, ensuring that all statements and conditions have been executed at least once. White box tests verify that the software design is valid and also whether it was built according to the specified design. Different methods used are:
• Statement coverage: executes all statements at least once (each and every line)
• Decision coverage: executes each decision direction at least once
• Condition coverage: executes each and every condition in the program with all possible outcomes at least once
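The three coverage methods can be illustrated on a small invented function; the inputs below are chosen to show what each level of coverage additionally demands:

```python
# White-box coverage levels on a tiny, invented function. grant_credit
# approves a request when the amount is in range AND the customer is
# either a member or has a guarantor.

def grant_credit(amount, is_member, has_guarantor):
    if 20_000 <= amount <= 50_000 and (is_member or has_guarantor):
        return "approved"
    return "rejected"

# Statement coverage: one passing and one failing input together execute
# every line of the function.
tests_statement = [(30_000, True, False), (10_000, False, False)]

# Decision coverage: the if-decision must evaluate both True and False
# (the pair above already achieves this). Condition coverage additionally
# requires each atomic condition (range check, is_member, has_guarantor)
# to take both outcomes at least once:
tests_condition = [
    (30_000, True,  False),  # range True, member True
    (30_000, False, True),   # member False, guarantor True
    (60_000, False, False),  # range False, all conditions False
]

for args in tests_condition:
    print(args, "->", grant_credit(*args))
```

Note how condition coverage needs more cases than decision coverage: a single "approved" result can hide the fact that one of the OR-ed conditions was never exercised on its own.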

Black Box Testing Technique

The black-box test technique treats the system as a "black box," so it does not explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black box include: behavioral, functional, opaque-box, and closed-box. Black box testing traces the requirements, focusing on system externals. It validates that the software meets the requirements, irrespective of the paths of execution taken to meet each requirement. Black box testing is conducted on integrated, functional components whose design integrity has been verified through completion of traceable white box tests.

Three successful techniques for managing the amount of input data required are:
• Equivalence Partitioning
• Boundary Analysis
• Error Guessing

Equivalence Partitioning
Equivalence partitioning is the process of methodically reducing the huge (infinite) set of possible test cases into a much smaller, but still equally effective, set. An equivalence class is a subset of data that is representative of a larger class; equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. When looking for equivalence partitions, think about ways to group similar inputs, similar outputs, and similar operations of the software. These groups are the equivalence partitions. For example, a program that edits credit limits within a given range ($20,000 to $50,000) would have three equivalence classes:
• Less than $20,000 (invalid)
• Between $20,000 and $50,000 (valid)
• Greater than $50,000 (invalid)

Boundary Value Analysis
If one can safely and confidently walk along the edge of a cliff without falling off, he can almost certainly walk in the middle of a field. Likewise, if software can operate on the edge of its capabilities, it will almost certainly operate well under normal conditions. This technique

consists of developing test cases and data that focus on the input and output boundaries of a given function. In the same credit limit example, boundary analysis would test:
• Low boundary plus or minus one ($19,999 and $20,001)
• On the boundary ($20,000 and $50,000)
• Upper boundary plus or minus one ($49,999 and $50,001)

Error Guessing
This is based on the theory that test cases can be developed from the intuition and experience of the test engineer. For example, in a test where one of the inputs is the date, a test engineer may try February 29, 2000 or 9.9.99.

Incremental Testing
Incremental testing is a disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each result and combination. There are two types of incremental testing:
• Top-down: this begins testing from the top of the module hierarchy and works down to the bottom, using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.
• Bottom-up: this begins testing from the bottom of the hierarchy and works up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output.

There are procedures and constraints associated with each of these methods, although bottom-up testing is often thought to be easier to use. Drivers are often easier to create than stubs, and can serve multiple purposes. Output is also often easier to examine in bottom-up testing, as the output always comes from the module directly above the module under test.

Thread Testing
This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application, threading through the integrated components. Thread testing and incremental testing are usually utilized together: for example, units can undergo incremental testing until enough units are integrated and a single business function can be performed.
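The equivalence classes and boundary values from the credit-limit example above can be exercised with a small table-driven check. The validator itself is an assumed implementation, since the original program is not shown:

```python
# Equivalence partitioning + boundary value analysis for the credit-limit
# example ($20,000-$50,000). The validator is a hypothetical stand-in.

def credit_limit_is_valid(amount: int) -> bool:
    return 20_000 <= amount <= 50_000

# One representative value per equivalence class:
equivalence_cases = [
    (19_000, False),  # below range (invalid)
    (35_000, True),   # inside range (valid)
    (51_000, False),  # above range (invalid)
]

# Boundary values: each edge of the range, plus and minus one:
boundary_cases = [
    (19_999, False), (20_000, True), (20_001, True),
    (49_999, True),  (50_000, True), (50_001, False),
]

for amount, expected in equivalence_cases + boundary_cases:
    assert credit_limit_is_valid(amount) == expected, amount
print("all credit-limit cases pass")
```

Nine cases stand in for the infinite input space: three class representatives plus six boundary probes, which is the whole point of the two techniques.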

13 Testing Life Cycle

The testing life cycle consists of test plan preparation, test case design, test execution and test log preparation, defect tracking, and test reporting.

Test Plan Preparation

The software test plan is the primary means by which software testers communicate to the product development team what they intend to do. The purpose of the software test plan is to prescribe the scope, approach, resources, and schedule of the testing activities; to identify the items being tested, the features to be tested, the testing tasks to be performed, and the personnel responsible for each task; and to state the risks associated with the plan.

The ultimate goal of the test planning process is communicating the software test team's intent, its expectations, and its understanding of the testing that is to be performed. It is the planning that matters, not the resulting documents: the test plan is simply a by-product of the detailed planning process that is undertaken to create it. The following are the important topics, which help in the preparation of the test plan:

• High-Level Expectations
The first topics to address in the planning process are the ones that define the test team's high-level expectations. They are fundamental topics that must be agreed to by everyone on the project team, but they are often overlooked, considered "too obvious" and assumed to be understood by everyone.

• People, Places and Things
The test plan needs to identify the people working on the project, what they do, and how to contact them. The test team will likely work with all of them, and knowing who they are and how to contact them is very important. Similarly, where documents are stored, where the software can be downloaded from, where the test tools are located, and so on need to be identified.

• Inter-Group Responsibilities
Inter-group responsibilities identify tasks and deliverables that potentially affect the test effort. The test team's work is driven by many other functional groups: programmers, project managers, technical writers, and so on. If these responsibilities are not planned out, important tasks can be forgotten.

• Test Phases
To plan the test phases, the test team will look at the proposed development model and decide whether unique phases, or stages, of testing should be performed over the course of the project. The test planning process should identify each proposed test phase and make each phase known to the project team. This process often helps the entire team form and understand the overall development model.

• Test Strategy
The test strategy describes the approach that the test team will use to test the software, both overall and in each phase. Deciding on the strategy is a complex task, one that needs to be made by very experienced testers, because it can determine the success or failure of the test effort.

• Bug Reporting
Exactly what process will be used to manage the bugs needs to be planned, so that each and every bug is tracked from when it is found to when it is fixed, and never, ever forgotten.

• Metrics and Statistics
Metrics and statistics are the means by which the progress and the success of the project, and specifically the testing, are tracked. The test planning process should identify exactly what information will be gathered, what decisions will be made with it, and who will be responsible for collecting it.

• Risks and Issues

A common and very useful part of test planning is to identify potential problem or risky areas of the project: ones that could have an impact on the test effort.

Test Case Design

The test case design specification refines the test approach and identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures required to accomplish the testing, and specifies the feature pass or fail criteria. The purpose of the test design specification is to organize and describe the testing that needs to be performed on a specific feature. The following topics address this purpose and should be part of the test design specification that is created:

• Test case ID or identification
A unique identifier that can be used to reference and locate the test design specification. The specification should also reference the overall test plan and contain pointers to any other plans or specifications that it references.

• Test case description
A description of the software feature covered by the test design specification: for example, "the addition function of Calculator," "font size selection and display in WordPad," or "video card configuration testing of QuickTime."

• Test case procedure
A description of the general approach that will be used to test the features. It should expand on the approach, if any, listed in the test plan, describe the technique to be used, and explain how the results will be verified.

• Test case input or test data
The input data to be tested using the test case. The input may be in any form, and different inputs can be tried for the same test case to check whether the data entered is handled correctly.

• Expected result
A description of exactly what constitutes a pass and a fail of the tested feature: what one expects to get from the given input.
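The topics above can be gathered into a simple test-case record. All field values below are invented examples, and run_addition is a stand-in for driving the real application under test:

```python
# A minimal test-case specification record reflecting the topics above.
# Field values are invented examples, not taken from a real project.

test_case = {
    "id": "Calc_Add_001",
    "description": "The addition function of Calculator",
    "procedure": "Enter two operands, press '+', read the result display",
    "test_data": {"operand_a": 2, "operand_b": 3},
    "expected_result": 5,
}

def run_addition(data):
    # Hypothetical stand-in for exercising the real application.
    return data["operand_a"] + data["operand_b"]

actual = run_addition(test_case["test_data"])
status = "Pass" if actual == test_case["expected_result"] else "Fail"
print(test_case["id"], status)
```

Keeping the expected result inside the record is what lets execution later decide pass or fail mechanically, without the tester re-deriving the answer.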

Test Execution and Test Log Preparation

After test case design, each and every test case is executed and the actual result is obtained. The actual result is then compared with the expected result recorded at the design stage: if the actual and expected results are the same, the test is passed; otherwise it is treated as failed. Then the test log is prepared, which consists of all the data that were recorded. It records each and every test case, and whether it passed or failed, so that it will be useful at the time of revision.

Example:

Test case ID    Test case description        Test status/result
Sys_xyz_01      Checking the login window    Fail
Sys_xyz_02      Checking the main window     Pass
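The execution-and-logging step can be sketched as follows; the expected and actual values are invented to reproduce the sample results in the table above:

```python
# Building a simple test log: compare actual with expected and record
# pass/fail. The entries mirror the sample table above; the observed
# values are invented for illustration.

def log_entry(case_id, description, expected, actual):
    return {
        "id": case_id,
        "description": description,
        "result": "Pass" if actual == expected else "Fail",
    }

test_log = [
    log_entry("Sys_xyz_01", "Checking the login window",
              expected="login shown", actual="error dialog"),
    log_entry("Sys_xyz_02", "Checking the main window",
              expected="main shown", actual="main shown"),
]

for entry in test_log:
    print(entry["id"], entry["result"])
```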

14 Defect Tracking

A defect can be defined in one of two ways. From the producer's viewpoint, a defect is a deviation from specifications, whether missing, wrong, etc. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether in the requirements or not; this is known as "fit for use." It is critical that defects identified at each stage of the project life cycle be tracked to resolution. Defects are recorded for four major purposes:
• To correct the defect
• To report the status of the application
• To gather statistics used to develop defect expectations in future applications
• To improve the software development process

Most project teams utilize some type of tool to support the defect tracking process. This tool could be as simple as a white board or a table created and maintained in a word processor, or one of the more robust tools on the market today, such as Mercury's TestDirector. Tools marketed for this purpose usually come with some number of customizable fields for tracking project-specific data in addition to the basics. They also provide advanced features such as standard and ad-hoc reporting, e-mail notification to developers and/or testers when a problem is assigned to them, and graphing capabilities. At a minimum, the tool selected should support the recording and communication of significant information about a defect. For example, a defect log could include:
• Defect ID number
• Descriptive defect name and type
• Source of defect: test case or other source
• Defect severity
• Defect priority

• Defect status (e.g. open, fixed, closed, user error, etc.); more robust tools provide a status history for the defect
• Date and time tracking, for either the most recent status change or for each change in the status history
• Detailed description, including the steps necessary to reproduce the defect
• Component or program where the defect was found
• Screen prints, logs, etc. that will aid the developer in the resolution process
• Stage of origination
• Person assigned to research and/or correct the defect

Severity versus Priority

The severity of a defect should be assigned objectively by the test team, based on predefined severity descriptions. For example, a "severity one" defect may be defined as one that causes data corruption, a system crash, security violations, etc. It is recommended that severity levels be defined at the start of the project so that they are consistently assigned and understood by the team. This foresight can help test teams avoid the common disagreements with development teams about the criticality of a defect. In large projects, it may also be necessary to assign a priority to the defect, which determines the order in which defects should be fixed. The priority assigned to a defect is usually more subjective, based upon input from users regarding which defects are most important to them and therefore should be fixed first.

Some general principles
• The primary goal is to prevent defects. Wherever this is not possible or practical, the goals are to both find the defect as quickly as possible and minimize its impact.
• The defect management process, like the entire software development process, should be risk driven, i.e. strategies, priorities, and resources should be based on an assessment of the risk and the degree to which the expected impact of risk can be reduced.
• Defect measurement should be integrated into the development process and be used by the project team to improve the development process. In other words, information on defects should be captured at the source, as a natural by-product of doing the job.
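The defect-log fields listed above can be sketched as a record type; the field names and the sample defect are illustrative only, not taken from any particular tracking tool:

```python
# A minimal defect record carrying the fields listed above, with a
# status history as the more robust tools provide. Sample values are
# invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Defect:
    defect_id: int
    name: str
    source: str            # e.g. a test case ID, or "ad hoc"
    severity: int          # 1 = most severe (predefined by the test team)
    priority: int          # 1 = fix first (input from users)
    status: str = "open"   # open, fixed, closed, ...
    history: list = field(default_factory=list)

    def set_status(self, new_status: str) -> None:
        self.history.append(self.status)   # keep the status history
        self.status = new_status

d = Defect(101, "Login window crash", "Sys_xyz_01", severity=1, priority=1)
d.set_status("fixed")
d.set_status("closed")
print(d.status, d.history)
```

Separating severity (objective, set by testers) from priority (subjective, driven by users) as two distinct fields mirrors the distinction drawn above.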
closed. information on defects should be captured at the source as a • • 37 .

There should be a document. is the primary reason for gathering defect information. Defect information should be used to improve the process. Thus. • As much as possible. • • The Defect Management Process The key elements of a defect management process are as follows. in fact. This. Imperfect or flawed processes cause most defects. the capture and analysis of the information should be automated. which includes a list of tools. the process must be altered. People unrelated to the project or system should not do it. which have defect management capabilities and can be used to automate some of the defect management processes. • • • • • • Defect prevention Deliverable base-lining Defect discovery/defect naming Defect resolution Process improvement Management reporting Defect Prevention Deliverable Baseline Defect Discovery Defect Resolution Process 16 Improvement Management Reporting 15 Test Reports 38 .Software Testing natural by-product of doing the job. to prevent defects.

A final test report should be prepared at the conclusion of each test activity. This might include:
• Individual Project Test Report (e.g. for a single software system)
• Integration Test Report
• System Test Report
• Acceptance Test Report

The test reports are designed to document the results of testing as defined in the test plan; without a well-developed test plan, executed in accordance with its criteria, it is difficult to develop a meaningful test report. A test report is designed to accomplish three objectives:
• Define the scope of testing: normally a brief recap of the test plan
• Present the results of testing
• Draw conclusions and make recommendations based on those results

The test report may be a combination of electronic data and hard copy. For example, if the function test matrix is maintained electronically, there is no reason to print it, as the paper report will summarize that data, draw the appropriate conclusions, and present recommendations.

The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, assess the potential consequences and initiate appropriate actions to minimize those consequences. The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production; knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective action. The second long-term purpose is to use the data to analyze the rework process for making changes to prevent defects from occurring in the future; this is done by accumulating the results of many test reports to identify which components of the rework process are defect-prone. These defect-prone components identify tasks or steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished.

Individual Project Test Report
These reports focus on individual projects (e.g. a single software system). When different testers test individual projects, they should prepare a report on their results.

Integration Test Report

Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate them. Given this, the integration test report follows the same format as the individual project test report, except that the conditions tested are the interfaces.

System Test Report
A system test plan standard identifies the objectives of testing, what is to be tested, how it is to be tested, and when the tests should occur. The system test report should present the results of executing that test plan. If this information is maintained electronically, it need only be referenced, not included in the report.

Acceptance Test Report
There are two primary objectives for acceptance testing. The first is to ensure that the system as implemented meets the real operating needs of the user or customer; if the defined requirements are those true needs, the testing should have accomplished this objective. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people's skills and attitudes, time pressures, changing business conditions, and so forth.

Eight Interim Reports
1. Functional Testing Status
2. Functions Working Timeline
3. Expected versus Actual Defects Detected Timeline
4. Defects Detected versus Corrected Gap Timeline
5. Average Age of Detected Defects by Type
6. Defect Distribution
7. Relative Defect Distribution
8. Testing Action

Functional Testing Status Report
This report will show the percentages of the functions which have been:
• Fully tested
• Tested with open defects
• Not tested

Functions Working Timeline Report

Functions Working Timeline report
This report shows the planned timeline for having all functions working versus the current status of functions working, ideally in a line graph format.

Expected versus Actual Defects Detected report
This report provides an analysis of the number of defects actually being detected against the number of defects expected in the planning stage.

Defects Detected versus Corrected Gap report
This report, ideally a line graph, shows the number of defects uncovered versus the number of defects being corrected and accepted by the testing group. If the gap grows too large, the project may not be ready when originally planned. In the planning stage, it is beneficial to determine the acceptable number of open days by defect type.

Average Age of Detected Defects by Type report
This report shows the average time that detected defects remain outstanding, by type (severity 1, severity 2, etc.). An ideal format could be a line graph.

Defect Distribution report
This report shows the defect distribution by function or module.

Relative Defect Distribution report
This report takes the previous report (Defect Distribution) and normalizes the level of defects. For example, one application might be more in-depth than another and would probably have a higher raw level of defects; normalizing over the number of functions or lines of code shows a more accurate level of defects.

Testing Action report
This report can show many different things, including possible shortfalls in testing. Examples of data to show might be the number of defects by severity, tests that are behind schedule, numbers of tests completed, and other information that would present an accurate testing picture.
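The normalization behind the Relative Defect Distribution report can be sketched in a few lines. The module names and counts below are hypothetical:

```python
# Sketch of the Relative Defect Distribution report: normalize raw defect
# counts by module size (here, lines of code) so that larger or more
# in-depth modules are not unfairly flagged as defect-prone.

def relative_defect_distribution(defects_by_module, loc_by_module):
    """Return defects per 1000 lines of code (KLOC) for each module."""
    return {
        module: 1000.0 * defects / loc_by_module[module]
        for module, defects in defects_by_module.items()
    }

defects = {"billing": 40, "reporting": 10}
loc = {"billing": 20000, "reporting": 2000}
print(relative_defect_distribution(defects, loc))
# {'billing': 2.0, 'reporting': 5.0}
```

Note how the picture inverts: billing has four times the raw defects, but reporting is the more defect-prone module once size is taken into account.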

16 Metrics

Effective management of any process requires quantification, measurement, and modeling. Software metrics are measures used to quantify the software, the software development resource, and the software development process. Software metrics provide a quantitative basis for the development and validation of models of the software development process. Metrics can be used to improve software productivity and quality. This module introduces the most commonly used software metrics and reviews their use in constructing models of the software development process.

Definition of Software Metrics
A metric is a mathematical number that shows a relationship between two variables. It is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.

Metrics are generally classified into two types:
• Process Metric: a metric used to measure the characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system.
• Product Metric: a metric used to measure the characteristics of the documentation and code.

The metrics for the test process would include the status of test activities against the plan and the test coverage achieved so far, which indicate the effectiveness of the test process itself. An important metric is the number of defects found in internal testing compared to the defects found in customer tests.

Test Metrics
The following are the metrics collected in the testing process:

User Participation = User participation test time / Total test time

Path Tested = Number of paths tested / Total number of paths

Acceptance Criteria Tested = Acceptance criteria verified / Total acceptance criteria

Cost to Locate Defect = Test cost / Number of defects located in testing
(This metric shows the cost of locating a defect.)

Detected Production Defects = Number of defects detected in production / Application system size

Test Automation = Cost of manual test effort / Total test cost

17 Other Testing Terms

Usability Testing
Determines how well the user will be able to understand and interact with the system. It identifies areas of poor human factors design that may make the system difficult to use. Ideally this test is conducted on a system prototype, before development actually begins. If a navigational or operational prototype is not available, screen prints of all of the application's screens or windows can be used to walk the user through various business scenarios.

Conversion Testing
Specifically designed to validate the effectiveness of the conversion process. This test may be conducted jointly by developers and testers during integration testing, or at the

start of system testing, since system testing must be conducted with the converted data. Field-to-field mapping and data translation are validated, ideally using a full copy of production data in the test.

Configuration Testing
In the IT industry, a large percentage of new applications are either client/server or web-based. Configuration testing means validating that these applications will run on the various combinations of hardware and software. For instance, configuration testing for a web-based application would incorporate versions and releases of operating systems, internet browsers, modem speeds, and various off-the-shelf applications that might be integrated (e.g., an e-mail application).

Vendor Validation Testing
Verifies that the functionality of contracted or third-party software meets the organization's requirements, prior to accepting it and installing it into a production environment. This test focuses on ensuring that all requested functionality has been delivered, and it can be conducted jointly by the software vendor and the test team.

Recovery Testing
Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle. Any restoration and restart capabilities are also tested here. The test team may conduct this test during system test, or another team may be gathered specifically for this purpose.

Stress / Load Testing
Conducted to validate that the application, database, and network can handle projected volumes of users and data effectively. The test is conducted jointly by developers, testers, DBAs, and network associates after system testing. For instance, one may need to ensure that batch processing will complete within the allocated amount of time, or that on-line response times meet performance requirements. During the test, the complete system is subjected to environmental conditions that exceed normal expectations, to answer questions such as:
• How large can the database grow before performance degrades?
• At what point will more storage space be required?
• How many users can use the system simultaneously before it slows down or fails?

Performance Testing
Usually conducted in parallel with stress and load testing in order to measure performance against specified service-level objectives under various conditions.
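A service-level objective check often reduces to comparing a percentile of measured response times against a target. A minimal sketch, assuming a nearest-rank percentile and hypothetical timing data and SLO values:

```python
# Sketch of checking measured response times against a service-level
# objective: does the chosen percentile of the sample stay under the target?

import math

def percentile(samples, pct):
    """Nearest-rank percentile of a sample."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(len(ordered) * pct / 100.0))
    return ordered[rank - 1]

def meets_slo(response_times_ms, slo_ms, pct=90):
    return percentile(response_times_ms, pct) <= slo_ms

times = [120, 150, 180, 200, 210, 250, 300, 320, 400, 1200]
print(meets_slo(times, slo_ms=500))  # True: the 90th percentile here is 400 ms
print(meets_slo(times, slo_ms=300))  # False
```

Using a high percentile rather than the average keeps a few slow outliers, like the 1200 ms sample above, from being hidden by many fast responses.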

Benefits Realization Test
With the increased focus on the value of business returns obtained from investments in information technology, this type of test or analysis is becoming more critical. The Benefits Realization Test is a test or analysis conducted after an application is moved into production, in order to determine whether the application is likely to deliver the original projected benefits. The analysis is usually conducted by the business user or client group who requested the project, and results are reported back to executive management.

18 Test Standards

External Standards: familiarity with and adoption of industry test standards from external organizations.
Internal Standards: development and enforcement of the test standards that testers must meet.

IEEE
• Institute of Electrical and Electronics Engineers
• Founded in 1884
• Has an entire set of standards devoted to software
• Testers should be familiar with all the standards mentioned in IEEE

IEEE standards that a tester should be aware of:
1. 610.12-1990 IEEE Standard Glossary of Software Engineering Terminology
2. 730-1998 IEEE Standard for Software Quality Assurance Plans
3. 828-1998 IEEE Standard for Software Configuration Management Plans
4. 829-1998 IEEE Standard for Software Test Documentation
5. 830-1998 IEEE Recommended Practice for Software Requirements Specifications

6. 1008-1987 (R1993) IEEE Standard for Software Unit Testing (ANSI)
7. 1012-1998 IEEE Standard for Software Verification and Validation
8. 1012a-1998 Supplement to 1012-1998, Content Map to IEEE 12207.1
9. 1016-1998 IEEE Recommended Practice for Software Design Descriptions
10. 1028-1997 IEEE Standard for Software Reviews
11. 1044-1993 IEEE Standard Classification for Software Anomalies
12. 1045-1992 IEEE Standard for Software Productivity Metrics (ANSI)
13. 1058-1998 IEEE Standard for Software Project Management Plans
14. 1058.1-1987 IEEE Standard for Software Management
15. 1061-1998 IEEE Standard for a Software Quality Metrics Methodology

Other Standards:
• ISO - International Organization for Standardization
• SPICE - Software Process Improvement and Capability Determination
• NIST - National Institute of Standards and Technology
• DoD - Department of Defense

Internal Standards
The use of standards:
• Simplifies communication
• Promotes consistency and uniformity
• Eliminates the need to invent yet another solution to the same problem
• Provides continuity
• Presents a way of preserving proven practices
• Supplies benchmarks and a framework

19 Web Testing

Introduction
Web testing is mainly concerned with six parts:
• Usability
• Functionality
• Server side interface
• Client side compatibility
• Performance
• Security

Usability
One of the reasons the web browser is used as the front end to applications is its ease of use. Many will believe that this is the least important area to test, but while concentrating on this portion of testing it is important to verify that the application is easy to use. Users who have been on the web before will probably know how to navigate a well-built web site; new users can get lost easily. Either way, a site map and/or an ever-present navigational bar can guide the user. Even if the web site is simple, there will always be someone who needs some clarification. Additionally, the documentation needs to be verified as well, so that the instructions are correct. The following are some of the things to be checked for easy navigation through a website:

• Site map or navigational bar
Does the site have a map? Sometimes power users know exactly where they want to go and don't want to go through lengthy introductions. The site map needs to be verified for its correctness. Does each link on the map actually exist? Are there links on the site that are not represented on the map? Is the navigational bar present on every screen? Is it consistent? Does each link work on each page? Is it organized in an intuitive manner?

• Content
To a developer, functionality comes before wording. Anyone can slap together some fancy mission statement later, but while they are developing, they just need some filler to verify alignment and layout. Unfortunately, text produced like this may sneak through the cracks. One has to make sure the site looks professional. Overuse of bold text, big fonts, and blinking can turn away a customer quickly. It is important to check with the public relations department on the exact wording of the content; otherwise, the company can get into a lot of trouble, legally.

• Colors/backgrounds
Ever since the web became popular, everyone thinks they are a graphic designer. Unfortunately, some developers are more interested in their new backgrounds than in ease of use. Sites will have yellow text on a purple picture of a fractal pattern. This may seem "pretty neat", but it's not easy to use. Usually, the best idea is to use little or no background, since patterns and pictures distract the user. If there is a background, it might be a single color on the left side of the page, containing the navigational bar. It might be a good idea to have a graphic designer look over the site during User Acceptance Testing.

• Images
Whether it's a screen grab or a little icon that points the way, a picture is worth a thousand words. Sometimes, the best way to tell the user something is to simply show them. However, bandwidth is precious to the client and the server, so it needs to be conserved. Do all the images add value to each page, or do they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used for 30k less? In general, one doesn't want large pictures on the front page, since most users who abandon a load will do it on the front page. If the front page is available quickly, it will increase the chance that users stay. Finally, one has to make sure that any time a web reference is given, it is hyperlinked. Plenty of sites ask users to email them at a specific address or to download a browser from an address; if the user can't click on it, they are going to be annoyed.

• Tables

It has to be verified that tables are set up properly. Does the user constantly have to scroll right to see the price of an item? Would it be more efficient to put the price closer to the left and put minuscule details to the right? Are the columns wide enough, or does every row have to wrap around? Are certain rows excessively high because of one entry? These are some of the points to be taken care of.

• Wrap-around
Finally, it has to be verified whether wrap-around occurs properly. If the text refers to a picture on the right, make sure the picture is on the right. Make sure that widow and orphan sentences and paragraphs don't lay out in an awkward manner because of pictures.

Functionality
The functionality of the web site is why the company hired a developer and not just an artist. This is the part that interfaces with the server and actually "does stuff".

• Links
A link is the vehicle that gets the user from page to page. Two things have to be verified for each link: that it brings the user to the page it said it would, and that the pages it links to exist. It may sound a little silly, but many web sites exist with internal broken links.

• Forms
When a user submits information through a form, it needs to work properly. The submit button needs to work. If the form is for an online registration, the user should be given login information (that works) after successful completion. If the form gathers shipping information, it should be handled properly and the customer should receive their package. In order to test this, you need to verify that the server stores the information properly and that systems down the line can interpret and use that information.

• Data verification
If the system verifies user input according to business rules, then that needs to work properly. For example, a State field may be checked against a list of valid values. If this is the case,
you need to verify that the list is complete and that the program actually calls the list properly (add a bogus value to the list and make sure the system accepts it).
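The state-field check just described can be sketched in a few lines. The list of valid values below is deliberately short and hypothetical; a real application would keep the full list in one place so tests can confirm the program reads the current list:

```python
# Sketch of server-side data verification: a State field checked against a
# list of valid values, with input normalized before the check.

VALID_STATES = {"AL", "AK", "AZ", "CA", "NY", "TX"}

def is_valid_state(value):
    return value.strip().upper() in VALID_STATES

print(is_valid_state("ca"))  # True (normalized before the check)
print(is_valid_state("ZZ"))  # False

# The "bogus value" trick from the text: add a fake code to the list and
# confirm the system now accepts it, proving the check really consults
# the list rather than a hard-coded copy.
VALID_STATES.add("Q1")
print(is_valid_state("q1"))  # True
```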

• Cookies
Most users only like the kind with sugar, but developers love web cookies. If the system uses them, you need to check them. If they store login information, make sure the cookies work and make sure the information is encrypted in the cookie file. If the cookie is used for statistics, verify that totals are being counted properly, and you'll probably want to make sure those cookies are encrypted too; otherwise people can edit their cookies and skew your statistics.

• Application specific functional requirements
Most importantly, one should verify the application-specific functional requirements. Try to perform all the functions a user would: place an order, change an order, cancel an order, check the status of an order, change shipping information before an order is shipped, pay online, ad nauseam. This is why users will show up on the developer's doorstep, so one needs to make sure the site can do what is advertised. It's also a good idea to run queries on the database to make sure the transaction data is being stored properly.

Server side Interface
• Server interface
The first interface one should test is the interface between the browser and the server. Transactions should be attempted, and then the server logs viewed to verify that what is seen in the browser is actually happening on the server.

• External interfaces
Some web systems have external interfaces; a web site is not an island. The site may call external servers for additional data, verification of data, or fulfillment of orders. For example, a merchant might verify credit card transactions in real time in order to reduce fraud. Several test transactions may have to be sent using the web interface: try credit cards that are valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using a Discover card. (A simple client-side script can check the first digit before the transaction is sent: 3 for American Express, 4 for Visa, 5 for MasterCard, or 6 for Discover.) It also has to be ensured that the software can handle every possible message returned by the external server.
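The client-side card-type check mentioned above might look like the following sketch. This only routes by the first digit as the text describes; it is not a validity check, and the prefix mapping is simplified for illustration:

```python
# Sketch of the simple client-side check: the first digit of the card
# number hints at the card type (3 American Express, 4 Visa, 5 MasterCard,
# 6 Discover), so unsupported types can be rejected before the transaction
# is sent to the external server.

CARD_PREFIXES = {"3": "American Express", "4": "Visa",
                 "5": "MasterCard", "6": "Discover"}

def card_type(card_number):
    digits = card_number.replace(" ", "").replace("-", "")
    return CARD_PREFIXES.get(digits[:1], "Unknown")

def accepted_by_merchant(card_number, accepted=("Visa", "MasterCard")):
    return card_type(card_number) in accepted

print(card_type("4111 1111 1111 1111"))             # Visa
print(accepted_by_merchant("6011 0000 0000 0004"))  # False: Discover not taken
```

A test of the site would exercise both directions: supported cards pass the check, and unsupported or malformed numbers are turned away with a sensible message.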

• Error handling
One of the areas most often left untested is interface error handling. Usually we try to make sure our system can handle all our own errors, but we never plan for the other systems' errors or for the unexpected. Try leaving the site mid-transaction: what happens? Does the order complete anyway? Try losing the Internet connection from the user to the server. Try losing the connection from the server to the credit card verification server. Is there proper error handling for all these situations? Are charges still made to credit cards? If the interruption is not user initiated, does the order get stored so customer service reps can call back if the user doesn't come back to the site?

Client side Compatibility
It has to be verified that the application can work on the machines your customers will be using. If the product is going to the web for the world to use, every operating system, browser, video setting, and modem speed has to be tried in various combinations.

• Browsers
Does the site work with Netscape? Internet Explorer? Linux? Some HTML commands or scripts only work for certain browsers. Make sure there are alternate tags for images, in case someone is using a text browser. If SSL security is used, it has to be checked that browsers 3.0 and higher are supported, and it has to be verified that there is a message for those using older browsers.

• Operating systems
Does the site work for both Mac and IBM compatibles? Some fonts are not available on both systems, so make sure that secondary fonts are selected. Make sure that the site doesn't use plug-ins only available for one OS.

• Video settings
Does the layout still look good at 640x480 or 800x600? Are fonts too small to read? Are they too big? Does all the text and graphic alignment still work?

• Modem/connection speeds

Does it take 10 minutes to load a page with a 28.8 modem, when the site was only tested over a high-speed connection? Users will expect long download times when they are grabbing documents or demos, but not on the front page. It has to be ensured that the images aren't too large, and that marketing doesn't put 50k of font size -6 keywords for search engines. If everyone has a high-speed connection, load times need not be checked; but it has to be kept in mind that some people may dial in from home.

• Printers
Users like to print. The idea behind the web was to save paper and reduce printing, but most people would rather read on paper than on the screen, so you need to verify that the pages print properly. Sometimes images and text align on the screen differently than on the printed page. It has to be verified that order confirmation screens can be printed properly.

• Combinations
Different combinations have to be tried. Maybe IBM with Netscape works, but not with Linux. Maybe 800x600 looks good on the Mac but not on the IBM. If the web site will be used internally, it might make testing a little easier: with internal applications, the development team can make disclaimers about system requirements and only support those system setups. If the company has an official web browser choice, then it has to be verified that the site works for that browser. But, ideally, the site should work on all machines, so as not to limit growth and changes in the future.

Performance Testing
It needs to be verified that the system can handle a large number of users at the same time, a large amount of data from each user, and a long period of continuous use. Accessibility is extremely important to users: if they get a "busy signal," they hang up and call the competition. Not only must the system be checked so the customers can gain access, but many times hackers will attempt to gain access to a system by overloading it. For the sake of security, the system needs to know what to do when it's overloaded, not simply blow up.
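What a load-test tool automates is, at its core, many simulated users hitting the system at once. A minimal sketch of that idea, where the "request" is a stand-in function rather than a real HTTP call (a real test would target the site under test):

```python
# Sketch of concurrent-user simulation: spawn one thread per simulated
# user, run them at the same time, and count how many succeeded.

import threading

def simulated_request(results, index):
    # Stand-in for one user's transaction against the system under test.
    results[index] = "ok"

def run_concurrent_users(count):
    results = [None] * count
    threads = [
        threading.Thread(target=simulated_request, args=(results, i))
        for i in range(count)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results.count("ok")

print(run_concurrent_users(100))  # 100: all simulated users succeeded
```

As the text notes, doing this by hand would mean coordinating a crowd of real people; a tool makes rerunning the same load a one-click affair.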

• Concurrent users at the same time
If the site just put up the results of a national lottery, it had better be able to handle millions of users right after the winning numbers are posted. Imagine coordinating 100 people to hit the site at the same time; now try 100,000 people. An automated test tool will probably be required to implement these types of tests, since they are difficult to do manually. A load test tool can simulate many concurrent users accessing the site at the same time. Once the tool is set up, running another test is just a click away, and the tool will pay for itself the second time it is used.

• Large amount of data from each user
Most customers may only order 1-5 books from the new online bookstore, but what if a university bookstore decides to order 5000 copies of Intro to Psychology? Or what if one user wants to send a gift to a large number of friends for Christmas (separate mailing addresses for each, of course)? Can the system handle large amounts of data from a single user?

• Long period of continuous use
If the site is intended to take orders for a specific occasion, it had better be able to run without downtime through the whole period leading up to that occasion. If the site offers web-based email, it had better be able to run for months or even years without downtime.

Security
Even if credit card payments are not accepted, security is very important. The web site may be the only exposure some customers have to a company, and if that exposure is a hacked page, the customers won't feel safe doing business with the company over the internet.

• Directory setup
The most elementary step of web security is proper setup of directories. Each directory should have an index.html or main.html page so that a directory listing doesn't appear.

• SSL (Secure Sockets Layer)
Many sites use SSL for secure transactions. While entering an SSL site, there will be a browser notification, and the HTTP in the location field of the browser will change to HTTPS. If the development group uses SSL, it has to be ensured that there is an alternate page for browsers with versions less than 3.0, since SSL is not compatible with those browsers.

Sufficient warnings while entering and leaving the secured site are to be provided.

• Logins
In order to validate users, several sites require customers to log in. This makes it easier for the customer, since they don't have to re-enter personal information every time. It has to be verified that the system does not allow invalid usernames/passwords and that it does allow valid logins. Is there a maximum number of failed logins allowed before the server locks out the current user? Is the lockout based on IP? What happens after the maximum number of failed login attempts? What are the rules for password selection? These need to be checked. Also, it needs to be checked whether there is a time-out limit, and what happens if the user tries a transaction after the timeout.

• Log files
Behind the scenes, it needs to be verified that server logs are working properly. Does the log track every transaction? Does it track unsuccessful login attempts? Does it only track stolen credit card usage? What does it store for each transaction: IP address, user name?

• Scripting languages
Scripting languages are a constant source of security holes, and the details are different for each language. Some allow access to the root directory. Others only allow access to the mail server, but a resourceful hacker could mail the server's username and password files to themselves. Find out what scripting languages are being used and research the loopholes. It might also be a good idea to subscribe to a security newsgroup that discusses the language being tested.

Conclusion
Whether an Internet, intranet, or extranet application is being tested, testing for the web can be more challenging than testing non-web applications. Users have high expectations for web page quality, just as much as for functionality. In many cases, the page is up for public relations, so the impression must be perfect.

20 Testing Terms

Application: A single software product that may or may not fully support a business function.

Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.

Baseline: A quantitative measure of the current level of performance.

Benchmarking: Comparing your company's products, services, or processes against best practices, or competitive practices, to help define superior performance of a product, service, or support process.

Benefits Realization Test: A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case.

Black-box Testing: A test technique that focuses on testing the functionality of the program, component, or application against its specifications without knowledge of how the system is constructed; usually data or business process driven.

Boundary Value Analysis: A data selection technique in which test data is chosen from the "boundaries" of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum boundary values, the maximum value plus or minus one, and the minimum value plus or minus one.
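For a numeric input range, the boundary values described in the definition above can be generated mechanically. A sketch, using an arbitrary salary range as the example:

```python
# Sketch of Boundary Value Analysis data selection: for a numeric range,
# the interesting test values cluster at the edges of the domain.

def boundary_values(minimum, maximum):
    """Minimum and maximum boundary values, each plus or minus one."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

print(boundary_values(10000, 15000))
# [9999, 10000, 10001, 14999, 15000, 15001]
```

The two values just outside the range should be rejected by the program under test, and the four on or just inside the edges should be accepted; off-by-one defects tend to show up exactly here.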

Bug: A catchall term for all software defects or errors.

Certification: Acceptance of software by an authorized agent after the software has been validated by the agent, or after its validity has been demonstrated to the agent.

Checkpoint: A formal review of key project deliverables. One checkpoint is defined for each key project deliverable, and verification and validation must be done for each of these deliverables that is produced.

Check sheet: A form used to record data as it is gathered.

Condition Coverage: A white-box testing technique that measures the number or percentage of decision outcomes covered by the test cases designed. 100% condition coverage would indicate that every possible outcome of each decision had been executed at least once during testing.

Configuration Testing: Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.

Conversion Testing: Validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.

Cost of Quality (COQ): Money spent above and beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect-free) product. The cost of quality includes prevention, appraisal, and correction or repair costs.

Decision Coverage: A white-box testing technique that measures the number or percentage of decision directions executed by the test cases designed. 100% decision coverage would indicate that all decision directions had been executed at least once during testing.
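The coverage percentage in the definition above is just a ratio of exercised decision directions to all possible ones. A sketch, with hypothetical decision names:

```python
# Sketch of measuring decision coverage: the percentage of decision
# directions (true/false outcomes) exercised by a test run.

def decision_coverage(executed, all_directions):
    """Both arguments are sets of (decision, outcome) pairs."""
    return 100.0 * len(executed & all_directions) / len(all_directions)

all_directions = {("d1", True), ("d1", False), ("d2", True), ("d2", False)}
executed = {("d1", True), ("d1", False), ("d2", True)}
print(decision_coverage(executed, all_directions))  # 75.0: d2's false branch untested
```

In practice a coverage tool collects the `executed` set by instrumenting the program; the sketch only shows the arithmetic behind the reported figure.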

Decision/Condition Coverage: A white-box testing technique that executes possible combinations of condition outcomes in each decision.

Defect: Operationally, it is useful to work with two definitions of a defect: (1) from the producer's viewpoint, a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product; (2) from the customer's viewpoint, anything that causes customer dissatisfaction, whether in the statement of requirements or not.

Defect Tracking Tools: Tools for documenting defects as they are found during testing and for tracking their status through to resolution.

Desk Checking: The most traditional means for analyzing a system or a program. The developer of a system or program conducts desk checking. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This tool can also be used on artifacts created during analysis and design.

Driver: Code that sets up an environment and calls a module for test.

Entrance Criteria: Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.

Equivalence Partitioning: A test technique that utilizes a subset of data that is representative of a larger class. This is done in place of undertaking exhaustive testing of each value of the larger class of data. For example, a business rule that indicates that a program should edit salaries within a given range ($10,000 - $15,000) might have three equivalence classes to test: less than $10,000 (invalid), between $10,000 and $15,000 (valid), and greater than $15,000 (invalid).
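The salary example above can be made concrete: one representative value per class stands in for exhaustive testing of every possible salary.

```python
# Sketch of the three equivalence classes from the salary example:
# below the range (invalid), within it (valid), above it (invalid).

def salary_class(salary, low=10000, high=15000):
    if salary < low:
        return "invalid - below range"
    if salary > high:
        return "invalid - above range"
    return "valid"

# One representative per class exercises the same logic as thousands
# of individual values would.
for representative in (9000, 12500, 20000):
    print(representative, salary_class(representative))
```

Equivalence partitioning and boundary value analysis complement each other: the partitions pick one value per class, while boundary analysis adds the edge values where the classes meet.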

Error or Defect: (1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. (2) Human action that results in software containing a fault (e.g., omission or misinterpretation of user requirements in a software specification, incorrect translation, or omission of a requirement in the design specification).

Error Guessing: A data selection technique for picking values that seem likely to cause defects. This technique is based upon the theory that test cases and test data can be developed from the intuition and experience of the tester.

Exhaustive Testing: Executing the program through all possible combinations of values for program variables. Alternatively, each logical path through the program can be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.

Exit Criteria: Standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process.

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure.

Inspection: A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.

Integration Testing: This test begins after two or more programs or application components have been successfully unit tested. It is the first level of testing which formally integrates a set of programs that communicate among themselves via messages or files (a client and its server(s), a string of batch programs, or a set of on-line modules within a dialog or conversation). The development team conducts it to validate the technical quality or design of the application.

Life Cycle Testing: The process of verifying the consistency, completeness, and correctness of software at each stage of the development lifecycle.

Performance Test: Validates that both the on-line response times and batch run times meet the defined performance requirements.

Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. From a customer's perspective, quality means "fit for use".

Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.

Quality Control (QC): The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.

Recovery Test: Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is possible.

Regression Testing: Regression testing is the process of retesting software to detect errors that may have been caused by program changes. The technique requires the use of a set of test cases that have been developed to test all of the software's functional capabilities.

Stress Testing: This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations: for example, high transaction volume, large database size, or restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that there are no performance problems.

Structural Testing: A testing method in which the test data are derived solely from the program structure.

Stub: Special code segments that, when invoked by a code segment under test, simulate the behavior of designed and specified modules not yet constructed.

System Test: During this event, the entire system is tested to verify that all functional, information, structural and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfies management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the Database Management System, the operations environment, and any communications systems.
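The "Stub" entry above can be illustrated with a short sketch. The `PaymentGatewayStub` and `checkout` names are illustrative assumptions, not from the text; the point is only that a stub returns canned responses so a caller can be tested before the real module exists.

```python
# A stub stands in for a module that is not yet constructed (or is too
# costly to invoke), so the calling code can be tested in isolation.
class PaymentGatewayStub:
    """Simulates the behavior of the real, not-yet-built gateway."""
    def charge(self, amount):
        # Canned response: always approve with a fixed transaction id.
        return {"status": "approved", "txn_id": "STUB-0001"}

# Code under test: it only needs *some* object with a charge() method.
def checkout(cart_total, gateway):
    result = gateway.charge(cart_total)
    return result["status"] == "approved"

# The unit test runs against the stub instead of a live service.
assert checkout(49.99, PaymentGatewayStub()) is True
print("checkout logic verified against the stub")
```

A driver is the mirror image of this: instead of simulating a callee, it simulates a caller for a module that has no caller yet.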

Test Case: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. Test cases document the input, execution conditions, and expected results, and are broken down into one or more detailed test scripts and test data conditions for execution.

Test Case Specification: An individual test condition which, executed as part of a larger test, contributes to the test's objectives; it specifies the execution conditions of a given test item.

Test Data Set: Set of input elements used in the testing process.

Test Design Specification: A document that specifies the details of the test approach for a software feature or a combination of features and identifies the associated tests.

Test Item: A software item that is an object of testing.

Test Log: A chronological record of relevant details about the execution of tests.

Test Plan: A document describing the intended scope, approach, resources, and schedule of testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each task, and any risks requiring contingency planning.

Test Procedure Specification: A document specifying a sequence of actions for the execution of a test.
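The particulars a test case document should carry, as listed above, can be sketched as a record type. The field names and sample values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # The particulars named in the glossary entry above.
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list = field(default_factory=list)
    expected_result: str = ""

# A hypothetical test case built on the salary business-rule example.
tc = TestCase(
    identifier="TC-042",
    name="Salary upper boundary rejected",
    objective="Verify salaries above $15,000 are rejected",
    setup="Payroll screen open, test user logged in",
    input_data={"salary": 15_001},
    steps=["Enter salary", "Press Save"],
    expected_result="Validation error displayed",
)
assert tc.identifier == "TC-042"
print(tc.name)
```

Keeping test cases as structured records rather than free text makes it easy to break them down into detailed test scripts and data conditions, as the entry describes.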

Test Scripts: A tool that specifies an order of actions that should be performed during a test session; the script also contains expected results. Test scripts may be manually prepared using paper forms, or may be automated using capture/playback tools or other kinds of automated scripting tools.

Test Summary Report: A document that describes testing activities and results and evaluates the corresponding test items.

Testing: Examination by manual or automated means of the behaviour of a program by executing the program on sample data sets to verify that it satisfies specified requirements or to identify differences between expected and actual results.

Usability Test: The purpose of this event is to review the application user interface and other human factors of the application with the people who will be using the application. This is to ensure that the design (layout and sequence of screens, windows, menus, reports, etc.) enables the business functions to be executed as easily and intuitively as possible. This review includes assuring that the user interface adheres to documented User Interface standards, and it should be conducted early in the design stage of development. Ideally, an application prototype is used to walk the client group through various business scenarios, although paper copies of screens, menus, and reports can be used.

User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that the system meets the needs of the organization and the end user/customer. It validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the RIGHT system was built.

Validation: Determination of the correctness of the final program or software produced from a development project with respect to the user needs and requirements. Validation is usually accomplished by verifying each stage of the software development life cycle.

Verification: 1) The process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. 2) The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.

Walkthrough: A manual analysis technique in which the module author describes the module's structure and logic to an audience of colleagues. A walkthrough will usually use a formal set of standards or criteria as the basis of the review; its techniques focus on error detection, not correction.

White-box Testing: A testing technique that assumes that the path of the logic in a program unit or component is known. White-box testing usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.

Technical Questions

1. What is Software Testing? The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.

2. What is the Purpose of Testing? • To uncover hidden errors • To achieve the maximum usability of the system • To demonstrate expected performance of the system.
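The branch-by-branch idea in the white-box entry above can be sketched concretely. The `classify` function is a made-up unit with two decision points; each test below forces a different branch so that every branch of the logic executes at least once.

```python
# A small unit with two decision points -> three branch outcomes to cover.
def classify(n):
    if n < 0:
        return "negative"
    if n % 2 == 0:
        return "even"
    return "odd"

# White-box tests chosen from the code's structure, one per branch:
assert classify(-3) == "negative"  # first condition true
assert classify(4) == "even"       # first false, second true
assert classify(7) == "odd"        # both conditions false
print("every branch exercised")
```

Black-box tests of the same function would instead be derived from its specification, without looking at the `if` structure at all.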

3. What is the Outcome of Testing? A stable application, performing its task as expected.

4. What types of testing do testers perform? Black box testing and White box testing are the basic types of testing testers perform. Apart from these, they also perform a lot of tests like Ad-Hoc testing, Cookie Testing, CET (Customer Experience Test), Client-Server Testing, Compatibility Testing, Configuration Testing and Conformance Testing.

5. What is the need for testing? The primary need is to verify that the requirements are satisfied by the functionality, and also to answer two questions: • Whether the system is doing what it is supposed to do? • Whether the system is not doing what it is not supposed to do?

6. What is a Baseline document? Can you name any two? A baseline document is one from which the tester builds an understanding of the application before actual testing starts, e.g. the Functional Specification and the Business Requirement Document.

7. What are the entry criteria for Functionality and Performance testing? Functional testing: Functional Specification / BRS (CRS) / User Manual. Performance testing: an integrated application that is stable for testing.

8. What are the entry criteria for Automation testing? The application should be stable, and a clear design and flow of the application is needed.

9. Why do you go for White box testing when Black box testing is available? A benchmark that certifies commercial (business) aspects as well as functional (technical) aspects is the objective of black box testing. Loops, structures, arrays, conditions, files, etc. are very micro level, but they are the basement (foundation) of any application; white box testing takes up these things and tests them.

10. What are the Qualities of a Tester? • Should be a perfectionist • Should be tactful and diplomatic • Should be innovative and creative • Should be relentless • Should possess negative thinking with good judgment skills

• Should possess the attitude to break the system

11. Tell the names of some testing types which you have learnt or experienced. Any 5 or 6 types which are related to the company's profile are good to mention in the interview: • Ad-Hoc testing • Cookie Testing • CET (Customer Experience Test) • Depth Test • Event-Driven Testing • Performance Testing • Recovery Testing • Sanity Test • Security Testing • Smoke Testing • Web Testing

12. What exactly is the Heuristic checklist approach for unit testing? It is a method in which the most appropriate of several solutions, found by alternative methods, is selected at successive stages of testing. The checklist prepared to proceed this way is called a heuristic checklist.

13. After completing testing, what would you deliver to the client? The test deliverables, namely: • Test Plan • Test Data • Test Design Documents (Conditions/Cases) • Defect Reports • Test Closure Documents • Test Metrics

14. What is a Test Bed? The elements which support the testing activity before actual testing starts, such as test data and data guidelines, are collectively called a test bed.

15. What is a Data Guideline? Data Guidelines are used to specify the data required to populate the test bed and prepare test scripts. They include all the data parameters that are required to test the conditions derived from the requirement/specification. The document which supports preparing the test data is called a Data Guideline.

16. Why do you go for a Test Bed?

When a test condition is executed, its result must be compared to the expected test result, and test data is needed for this; here comes the role of the test bed, where the test data is made ready.

17. What is the difference between quality and testing? "Quality is giving more cushion for the user to use the system with all its expected characteristics"; it is usually described as a journey towards excellence. "Testing is an activity done to achieve quality."

18. What kind of document do you need for Functional testing? The functional specification is the ultimate document, as it expresses all the functionalities of the application; other documents, like the user manual and the BRS, are also needed for functional testing. A gap analysis document will add value in understanding the expected and the existing system.

19. Why do we prepare the test condition, test case & test script (before starting testing)? These are the test design documents which are used to execute the actual testing, and they follow a clear process which can be reviewed easily. Without them, execution of testing is impossible; this execution is ultimately going to find the bugs to be fixed, so we have to prepare these documents.

20. Is it not a waste of time preparing the test conditions, test cases & test scripts? No document prepared in any process is a waste of time, least of all the test design documents, which play a vital role in test execution; without them proper testing cannot be done.

21. How do you go about testing a Web Application? In approaching a web application test, the first attack on the application should be on its performance behavior, as that is very important for a web application, and then on the transfer of data between the web server and the front end server, the security server and the back end server.

22. Can Automation testing replace manual testing? If so, how? Automated testing can never replace manual testing, as these tools follow the GIGO (garbage in, garbage out) principle of computer tools and lack creativity and innovative thinking. But it does speed up the process, and it is better suited for regression testing of a manually tested application and for performance testing.

23. Can the System testing be done at any stage?

No. The system as a whole can be tested only if all modules are integrated and all modules work correctly. System testing should be done before UAT (User Acceptance Testing) and after Unit Testing.

24. Why is it impossible to test a program completely? With any software other than the smallest and simplest program, there are too many inputs, too many outputs, and too many path combinations to fully test. Also, software specifications can be subjective and be interpreted in different ways.

25. What is Mutation testing & when can it be done? Mutation testing is a powerful fault-based testing technique for unit-level testing. Since it is a fault-based technique, it is aimed at testing for, and uncovering, some specific kinds of faults, namely simple syntactic changes to a program. Mutation testing injects faults into code to determine optimal test inputs. It is based on two assumptions: the competent programmer hypothesis and the coupling effect. The competent programmer hypothesis assumes that competent programmers tend to write nearly "correct" programs. The coupling effect states that a set of test data that can uncover all simple faults in a program is also capable of detecting more complex faults.

Test Automation:

26. What is the use of automated testing tools in any job? The automation testing tools are used for Regression and Performance testing.

27. What automated testing tools are you familiar with? WinRunner and LoadRunner.

28. Describe some problems with automated testing tools. Several problems are encountered while working with test automation tools, such as: a. Tool limitations for object detection. b. Tool configuration/deployment in various environments. c. Tool bugs with respect to exception handling. d. Tool precision/default skeleton script issues, like window synchronization issues. e. Abnormal polymorphism in tool behavior: sometimes it works and sometimes it does not, for the same application / same script / same environment.

29. How is test automation planned? Planning is the most important task in test automation. A test automation plan should cover the following task items:

a. Test automation scope definition. b. Tool selection: type of test automation expected (Regression / Performance etc.). c. Tool evaluation: tool availability / tool license availability / tool license limitations. d. Reference document requirements as prerequisites for test automation. e. Resource requirements vs. availability study. f. Time availability vs. time estimation calculations and definitions. g. Tool cost estimation vs. project cost estimation statistics for testing. h. Test automation process definitions, including the standards to be followed while performing test automation. i. Automation risk analysis, and planning to overcome the defined risks if they emerge in the automation process. j. Production requirements analysis results, considered with respect to factors like load/performance, expected functionality, scalability etc.

30. Can test automation improve test effectiveness? Yes, test automation definitely plays a vital role in improving test effectiveness, in various ways: a. Precise time calculations. b. Object / object-properties level UI verifications. c. Reduction in slippage caused by human errors. d. Virtual load/users usage in load/performance testing, where it is not possible to physically deploy so many resources and still get such accurate results. e. And many more.

31. What is data-driven automation? Data-driven automation is the most important part of test automation, where the requirement is to execute the same test cases for different sets of test input data, so that the test can be executed for pre-defined iterations with a different set of test input data in each iteration.

32. What are the main attributes of test automation? Here are some of the attributes of test automation that can be measured.

Maintainability
• Definition: The effort needed to update the test automation suites for each new release.
• Possible measurements: e.g. the average work effort in hours to update a test suite.

Reliability
• Definition: The accuracy and repeatability of your test automation.
• Possible measurements: Number of times a test failed due to defects in the tests or in the test scripts.

Flexibility
• Definition: The ease of working with all the different kinds of automation testware.
• Possible measurements: The time and effort needed to identify, locate, restore, combine and execute the different test automation testware.

Efficiency
• Definition: The total cost related to the effort needed for the automation.
• Possible measurements: Monitoring over time the total cost of automated testing, i.e. resources, material, etc.

Portability
• Definition: The ability of the automated tests to run on different environments.
• Possible measurements: The effort and time needed to set up and run test automation in a new environment.

Robustness
• Definition: The effectiveness of automation on an unstable or rapidly changing system.
• Possible measurements: Number of tests failed due to unexpected events.

Usability
• Definition: The extent to which automation can be used by different types of users (developers, non-technical people, and other users etc.).
• Possible measurements: The time needed to train users to become confident and productive with test automation.

33. Does automation replace manual testing? We cannot actually replace manual testing 100% using automation, but it can replace almost 90% of the manual test effort if the automation is done efficiently.

34. How is a tool for test automation chosen? Below are the factors to be considered while choosing a test automation tool:

e. Tool Cost d. Platform Support from the Tool. c.g. Tool Usage Comparisons with other similar available tools in market. b. Tools Limitations Analysis. How one will evaluate the tool for test automation? Whenever a Tool has to be evaluated one need to go through few important verifications / validations of the tool like. a. g. Tool Cost Vs Project Testing Budget Estimation. a. Tool Configuration & Deployment Requirements. Tool Type with its Features Vs Our Requirements Analysis. S/W & Platform Support of Tool Vs Application test Scope for these attributes. Application Designed Protocol.Software Testing a. d. Protocols / Technologies Support. d. What could go wrong with test automation? While using Test Automation there are various factors that can affect the testing process like. H/W. Virtual Load / Users Generation for load testing which is not worth doing manually as it needs lots of resources and also it might not give that precise results which can be achieved using a Automation Tool. c. g. Saves Resources (Human / H/w / S/W resources) c. f. 36. Automation Tool’s abnormal behavior like Scalability Variations due to memory violations might be considered as Applications memory violation in heavy load tests. e.g. Tool’s Limitations might result in Application Defects. Object Properties Level Verifications can be done which is difficult manually. Tool’s Compatibility with our Application Architecture and Development Technologies. Tool License Limitations / Availability Vs Test Requirements. Regression Testing Purposes. Protocol Support by Tool Vs. f. c. b. b. Regression Testing / Functional Testing / Performance-Load Testing) b. Test Automation Saves Major Testing Time.(Tools Scalability) 35. Test Type Expected. Reduction in Verification Slippages cased due to human errors. Java-CORBA required JDK to be present in System) causes Application to show up Bugs which are just due to the JDK installation in System which I had experienced 70 . h. For Data Driven Testing. 
(E. 37. f. a. Tools Limitations Vs Application Test Requirements e. What are main benefits of test automation? The main benefits of Test Automation are. Environment Settings Required for Tool (e.

38. Describe common problems of test automation. In test automation we come across several problems, out of which a few are highlighted below: a. Automation script maintenance, which becomes tough if the product goes through frequent changes. b. The automation tool's limitations in recognizing objects. c. The automation tool's abnormal behavior due to its scalability issues. d. The automation tool's third-party integration limitations. e. Due to the tool's defects, we might assume an issue is an application defect and report it as an application bug. f. Environmental settings and APIs/add-ins required by the tool to make it compatible with specialized environments like JAVA-CORBA create JAVA environmental issues for the application (e.g. WinRunner 7.05 Java-support environmental variables cause the application under test to malfunction).

39. What testing activities may one want to automate? Anything which is repeated should be automated if possible. Thus the following testing activities can be automated: test case preparation; tests like cursor, regression, functional & load/performance testing; test report generation; test status/results notifications; bug tracking & bug reporting (via a bug tracking system).

40. How are the testing activities described? The basic testing activities are as follows: a. Test planning (prerequisite: get adequate documents of the project to test). b. Test cases (prerequisite: get adequate documents of the project to test). c. Cursor test (a very basic test to make sure that all screens come up and the application is ready to test or to automate). d. Manual testing. e. Test automation (provided the product has reached enough stability to be automated). f. Bug tracking & bug reporting. g. Analysis of the test and test report creation. h. If the bug-fixing cycle repeats, then steps c-h repeat.
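One of the repeated activities Q39 marks as automatable, test report generation, can be sketched in a few lines. The result records and report layout are illustrative assumptions, not a prescribed format.

```python
# Sketch: automating test report generation from raw execution results.
results = [
    {"case": "TC-001", "status": "pass"},
    {"case": "TC-002", "status": "fail"},
    {"case": "TC-003", "status": "pass"},
]

def summarize(results):
    """Build a plain-text summary: one line per case, then totals."""
    passed = sum(r["status"] == "pass" for r in results)
    failed = len(results) - passed
    lines = [f"{r['case']}: {r['status']}" for r in results]
    lines.append(f"total={len(results)} passed={passed} failed={failed}")
    return "\n".join(lines)

report = summarize(results)
assert "total=3 passed=2 failed=1" in report
print(report)
```

Because the same summary is regenerated after every run, this is exactly the kind of repeated task the answer says should be automated rather than compiled by hand.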

41. Can the activities of test case design be automated? Yes. Test Director is one such tool, which has the feature of test case design and execution.

42. What are the types of scripting techniques for test automation? A scripting technique is how automated test scripts are structured for maximum benefit and minimum impact from software changes. Scripting issues include the scripting approach (linear, shared, data-driven and programmed), script scope etc. The major techniques used are: a. Data-driven scripting. b. Parent-child scripting.

43. What are the principles of good testing scripts for automation? a. Coding standards should be followed for scripting. b. The script should be readable, and appropriate comments should be written for each line/section of the script. c. The script header should contain the script developer's name, the date the script was updated, a brief script description, the script's environmental requirements, the script's prerequisites from the application side, the script contents, scripted environmental details, script pre-processing etc. d. Scripts should be modular, which makes script updating and debugging easier. e. Scripts should be environment and data independent as much as possible, which can be achieved using parameterization. f. Centralized development of application-specific / generic compiled modules and libraries increases the reusability factor of the scripts. g. Repeated tasks should be kept in functions while scripting, to avoid repeated code. h. The script should be generalized; techniques to generalize scripts reduce complexity, make the script easy to debug, and minimize the impact of software changes on the test scripts.

44. What tools are available for support of testing during the software development life cycle? Test Director for test management, Bugzilla for bug tracking and notification, etc. are tools for the support of testing.

45. What are the limitations of automating software testing?

To mention a few limitations of automating software testing: a. Every tool has its own limitations with respect to protocol support, technologies supported, object recognition, platforms supported etc., due to which not 100% of the application can be automated, because there is always something limited in the tool which we have to overcome with R&D. b. Automation needs a lot of time in the initial stage of the automation effort. c. The tool's memory utilization is also an important factor; it can block the application's memory resources and create problems for the application in a few cases, like Java applications.
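The scripting principles from Q43, header documentation, modular reusable steps, and environment/data independence via parameterization, can be sketched. The scripts of that era were WinRunner TSL, so this Python sketch is only an analogy, and the URL, environment variable and credentials are assumptions.

```python
# Script header (principle c): developer, updated date, description,
# environmental requirements, prerequisites.
# Developer: <name>   Updated: <date>
# Description: reusable login step, parameterized for environment & data.

import os

# Environment independence (principle e): the target host comes from
# configuration, never hard-coded into the script body.
BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8000")

# Modular, reusable step kept in a function (principles d and g),
# so the same code serves every script that needs a login.
def login_step(username, password):
    """Build the request a driver tool would replay for the login step."""
    return {"url": BASE_URL + "/login",
            "payload": {"user": username, "pass": password}}

# Data independence: credentials arrive as parameters, so one script
# serves every data row and every environment.
req = login_step("alice", "secret1")
assert req["url"].endswith("/login")
print("login step built for", req["url"])
```

Changing the test environment then means changing one variable, and changing test data means supplying different parameters; the script body itself never needs editing, which is the maintainability payoff Q43 is driving at.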
