
Vocabulary

Every profession has its own vocabulary. To learn a profession, the first and crucial step is to master its
vocabulary. The entire knowledge of a profession is compressed and kept in its vocabulary.
Take our own software testing profession: while communicating with our colleagues, we frequently use
terms like 'regression testing' and 'system testing'. Now imagine communicating the same to a person who is
not in our profession or who doesn't understand our testing vocabulary; we would need to explain each
and every term in detail, and communication becomes difficult and painful. To speak the language of testing, you
need to learn its vocabulary.
Find below a large collection of testing vocabulary:
Affinity Diagram: A group process that takes large amounts of language data, such as ideas developed by
brainstorming, and divides it into categories.
Audit: This inspection/assessment activity verifies compliance with plans, policies and procedures and
ensures that resources are conserved.
Baseline: A quantitative measure of the current level of performance.
Benchmarking: Comparing your company's products, services or processes against best practices or
competitive practices, to help define superior performance of a product, service or support processes.
Black-box Testing: A test technique that focuses on testing the functionality of the program component or
application against its specifications without knowledge of how the system is constructed.
Boundary value analysis: A data selection technique in which test data is chosen from the "boundaries"
of the input or output domain classes, data structures and procedure parameters. Choices often include the
actual minimum and maximum boundary values, the maximum value plus or minus one and the minimum
value plus or minus one.
Branch Testing: A test method that requires that each possible branch on each decision be executed at
least once.
Brainstorming: A group process for generating creative and diverse ideas.
Bug: A catchall term for all software defects or errors.
Certification testing: Acceptance of software by an authorized agent after the software has been validated
by the agent or after its validity has been demonstrated to the agent.
Checkpoint (or verification point): The expected behaviour of the application, which must be compared with
the actual behaviour after a certain action has been performed on the application.
Client: The customer who pays for the product and receives the benefit from its use.
Condition Coverage: A white-box testing technique that measures the number or percentage of
decision outcomes covered by the test cases designed. 100% condition coverage would indicate
that every possible outcome of each decision had been executed at least once during testing.
Configuration Management Tools: Tools that are used to keep track of changes made to systems and all
related artifacts. These are also known as version control tools.
Configuration testing: Testing of an application on all supported hardware and software platforms. This
may include various combinations of hardware types, configuration settings and software versions.
Completeness: A product is said to be complete if it has met all requirements.
Consistency: Adherence to a given set of rules.
Correctness: The extent to which software is free from design and coding defects. It is also the extent to
which software meets the specified requirements and user objectives.
Cost of Quality: Money spent above and beyond expected production costs to ensure that the product the
customer receives is a quality product. The cost of quality includes prevention, appraisal, and correction or
repair costs.
Conversion Testing: Validates the effectiveness of data conversion processes, including field-to-field
mapping and data translation.
Customer: The individual or organization, internal or external to the producing organization, that receives
the product.
Cyclomatic complexity: The number of decision statements plus one.
Debugging: The process of analyzing and correcting syntactic, logic and other errors identified during
testing.
Decision Coverage: A white-box testing technique that measures the number or percentage of
decision directions executed by the test cases designed. 100% decision coverage would indicate that all
decision directions had been executed at least once during testing. Alternatively, each logical path through
the program can be tested.
Decision Table: A tool for documenting the unique combinations of conditions and associated results in
order to derive unique test cases for validation testing.
Defect Tracking Tools: Tools for documenting defects as they are found during testing and for tracking
their status through to resolution.
Desk Check: A verification technique conducted by the authors of the artifact to verify the completeness
of their own work. This technique does not involve anyone else.
Dynamic Analysis: Analysis performed by executing the program code. Dynamic analysis executes or
simulates a development phase product and it detects errors by analyzing the response of the product to
sets of input data.
Entrance Criteria: Required conditions and standards for work product quality that must be present or
met for entry into the next stage of the software development process.
Equivalence Partitioning: A test technique that utilizes a subset of data that is representative of a larger
class. This is done in place of undertaking exhaustive testing of each value of the larger class of data.
Error or defect: 1. A discrepancy between a computed, observed or measured value or condition and the
true, specified or theoretically correct value or condition. 2. Human action that results in software containing
a fault (e.g., omission or misinterpretation of user requirements in a software specification, incorrect
translation or omission of a requirement in the design specification).
Error Guessing: A test data selection technique for picking values that seem likely to cause defects. This
technique is based upon the theory that test cases and test data can be developed based on the intuition and
experience of the tester.
Exhaustive Testing: Executing the program through all possible combination of values for program
variables.
Exit criteria: Standards for work product quality, which block the promotion of incomplete or defective
work products to subsequent stages of the software development process.
Flowchart: Pictorial representation of data flow and computer logic. It is frequently easier to understand
and assess the structure and logic of an application system by developing a flowchart than by attempting to
understand narrative descriptions or verbal explanations. The flowcharts for systems are normally
developed manually, while flowcharts of programs can be generated automatically by flowcharting tools.
Force Field Analysis: A group technique used to identify both driving and restraining forces that
influence a current situation.
Formal Analysis: Technique that uses rigorous mathematical techniques to analyze the algorithms of a
solution for numerical properties, efficiency, and correctness.
Functional Testing: Testing that ensures all functional requirements are met without regard to the final
program structure.
Histogram: A graphical description of individually measured values in a data set that is organized
according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the
distribution of individual values in a data set along with information regarding the average and variation.
Inspection: A formal assessment of a work product conducted by one or more qualified independent
reviewers to detect defects, violations of development standards, and other problems. Inspections involve
authors only when specific questions concerning deliverables exist. An inspection identifies defects, but
does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.
Integration Testing: This test begins after two or more programs or application components have been
successfully unit tested. It is conducted by the development team to validate the interaction and the
communication/flow of information between the individual components that will be integrated.
Life Cycle Testing: The process of verifying the consistency, completeness, and correctness of software at
each stage of the development life cycle.
Pass/Fail Criteria: Decision rules used to determine whether a software item or feature passes or fails a
test.
Path Testing: A test method satisfying the coverage criteria that each logical path through the program be
tested. Often, paths through the program are grouped into a finite set of classes and one path from each
class is tested.
Performance Test: Validates that both the online response time and batch run times meet the defined
performance requirements.
Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired
attributes).
Population Analysis: Analyzes production data to identify, independent from the specifications, the types
and frequency of data that the system will have to process/produce. This verifies that the specs can handle
types and frequency of actual data and can be used to create validation tests.
Procedure: The step-by-step method followed to ensure that standards are met.
Process:
1. The work effort that produces a product. This includes efforts of people and equipment guided by
policies, standards, and procedures.
2. A statement of purpose and an essential set of practices (activities) that address that purpose.
Proof of Correctness: The use of mathematical logic techniques to show that a relationship between
program variables assumed true at program entry implies that another relationship between program
variables holds at program exit.
Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if
it meets or conforms to the statement of requirements that defines the product. This statement is usually
shortened to: quality means meets requirements. From a customer’s perspective, quality means, “fit for
use.”
Quality Assurance (QA): Deals with 'prevention' of defects in the product being developed. It is
associated with a process. The set of support activities (including facilitation, training, measurement, and
analysis) needed to provide adequate confidence that processes are established and continuously improved
to produce products that meet specifications and are fit for use.
Quality Control (QC): Its focus is defect detection and removal. Testing is a quality control activity.
Quality Improvement: To change a production process so that the rate at which defective products
(defects) are produced is reduced. Some process changes may require the product to be changed.
Recovery Test: Evaluates the contingency features built into the application for handling
interruptions and for returning to specific points in the application processing cycle, including checkpoints,
backups, restores, and restarts. This test also assures that disaster recovery is possible.
Regression Testing: Testing of a previously verified program or application following program
modification for extension or correction to ensure no new defects have been introduced.
Risk Matrix: Shows the controls within application systems used to reduce the identified risk, and in what
segment of the application those risks exist. One dimension of the matrix is the risk, the second dimension
is the segment of the application system, and within the matrix at the intersections are the controls. For
example, if a risk is “incorrect input” and the systems segment is “data entry,” then the intersection within
the matrix would show the controls designed to reduce the risk of incorrect input during the data entry
segment of the application system.
Scatter Plot Diagram: A graph designed to show whether there is a relationship between two changing
variables.
Standards: The measure used to evaluate products and identify non-conformance. The basis upon which
adherence to policies is measured.
Statement of Requirements: The exhaustive list of requirements that define a product.
Statement Testing: A test method that executes each statement in a program at least once during program
testing.
Static Analysis: Analysis of a program that is performed without executing the program. It may be applied
to the requirements, design, or code.
Stress Testing: This test subjects a system, or components of a system, to varying environmental
conditions that defy normal expectations. For example, high transaction volume, large database size or
restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that
there are no performance problems.
Structural Testing: A testing method in which the test data is derived solely from the program structure.
Stub: Special code segments that, when invoked by a code segment under test, simulate the behaviour
of designed and specified modules not yet constructed.
System Test: During this event, the entire system is tested to verify that all functional, information,
structural and quality requirements have been met.
Test Case: Test cases document the input, expected results, and execution conditions of a given test item.
Test Plan: A document describing the intended scope, approach, resources, and schedule of testing
activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each
task, and any risks requiring contingency planning.
Test Scripts: A tool that specifies an order of actions that should be performed during a test session. The
script also contains expected results. Test scripts may be manually prepared using paper forms, or may be
automated using capture/playback tools or other kinds of automated scripting tools.
Test Suite Manager: A tool that allows testers to organize test scripts by function or other grouping.
Unit Test: Testing individual programs, modules, or components to demonstrate that the work package
executes per specification and to validate the design and technical quality of the application. The focus is on
ensuring that the detailed logic within the component is accurate and reliable according to pre-determined
specifications. Testing stubs or drivers may be used to simulate the behaviour of interfacing modules.
Usability Test: The purpose of this event is to review the application user interface and other human
factors of the application with the people who will be using the application. This is to ensure that the
design (layout and sequence, etc.) enables the business functions to be executed as easily and intuitively as
possible. This review includes assuring that the user interface adheres to documented User Interface
standards, and should be conducted early in the design stage of development. Ideally, an application
prototype is used to walk the client group through various business scenarios, although paper copies of
screens, windows, menus, and reports can be used.
User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that the system meets the
needs of the organization and the end user/customer. It validates that the system will work as intended by
the user in the real world, and is based on real world business scenarios, not system requirements.
Essentially, this test validates that the right system was built.
Validation: Determination of the correctness of the final program or software produced from a
development project with respect to the user needs and requirements.
Verification: 1. The process of determining whether the products of a given phase of the software
development cycle fulfil the requirements established during the previous phase.
2. The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting
whether items, processes, services, or documents conform to specified requirements.
Walkthroughs: During a walkthrough, the producer of a product “walks through” or paraphrases the
product’s content, while a team of other individuals follows along. The team’s job is to ask questions and
raise issues about the product that may lead to defect identification.
White-box Testing: A testing technique that assumes that the path of the logic in a program unit or
component is known. White-box testing usually consists of testing paths, branch by branch, to produce
predictable results. This technique is usually used during tests executed by the development team, such as
Unit or Component testing.
Types of Testing
Black box testing - not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests are based on
coverage of code statements, branches, paths, conditions.
Unit testing - A unit is the smallest compilable component. A unit typically is the work of one programmer.
This unit is tested in isolation with the help of stubs or drivers. Typically done by the programmer and not
by testers.
Incremental integration testing - continuous testing of an application as new functionality is added;
requires that various aspects of an application's functionality be independent enough to work separately
before all parts of the program are completed, or that test drivers be developed as needed; done by
programmers or by testers.

Integration testing - testing of combined parts of an application to determine if they function together
correctly. The 'parts' can be code modules, individual applications, client and server applications on a
network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing - black-box testing aimed at validating the functional requirements of an application; this
type of testing should be done by testers.
System testing - black-box type testing that is based on overall requirements specifications; covers all
combined parts of a system.
End-to-end testing - similar to system testing but involves testing of the application in an environment that
mimics real-world use, such as interacting with a database, using network communications, or interacting
with other hardware, applications, or systems if appropriate. Even the transactions performed mimic the
end user's usage of the application.
Sanity testing - typically an initial testing effort to determine if a new software version is performing well
enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5
minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane'
enough condition to warrant further testing in its current state.
Smoke testing - The general definition (related to hardware) of smoke testing is:
Smoke testing is a safe, harmless procedure of blowing smoke into parts of the sewer and drain lines to
detect sources of unwanted leaks and sources of sewer odours.
In relation to software, smoke testing is non-exhaustive software testing, ascertaining that
the most crucial functions of a program work, but not bothering with finer details.
Static testing - Test activities that are performed without running the software are called static testing.
Static testing includes code inspections, walkthroughs, and desk checks.
Dynamic testing - test activities that involve running the software are called dynamic testing.
Regression testing - Testing of a previously verified program or application following program
modification for extension or correction to ensure no new defects have been introduced. Automated testing
tools can be especially useful for this type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by
end-users/customers over some limited period of time.
Load testing - Load testing is a test whose objective is to determine the maximum sustainable load the
system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain
without running out of resources or having transactions suffer excessive (application-specific) delay.
Stress testing - Stress testing is subjecting a system to an unreasonable load while denying it the resources
(e.g., RAM, disc, mips, interrupts) needed to process that load. The idea is to stress a system to the
breaking point in order to find bugs that will make that break potentially harmful. The system is not
expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner
(e.g., not corrupting or losing data). The load (incoming transaction stream) in stress testing is often
deliberately distorted to force the system into resource depletion.
Performance testing - Validates that both the online response time and batch run times meet the defined
performance requirements.
Usability testing - testing for 'user-friendliness'. Clearly, this is subjective, and will depend on the targeted
end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can
be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or external access,
wilful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular hardware/software/ operating
system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not based on formal test
plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it.
Monkey testing - Monkey testing is testing that runs with no specific test in mind. The monkey in this
case is the producer of any input data (whether that be file data or input device data): keep pressing some
keys randomly and check whether the software fails or not.
User acceptance testing - determining if software is satisfactory to an end-user or customer.
Comparison testing - comparing software weaknesses and strengths to competing products.
Alpha testing - testing of an application when development is nearing completion, minor design changes
may still be made as a result of such testing. Typically done by users within the development team.
Beta testing - testing when development and testing are essentially completed and final bugs and problems
need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the
'bugs' are detected. Proper implementation requires large computational resources.
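As an illustration of the idea (using a hypothetical function rather than any particular mutation tool), a deliberately seeded 'bug' should be caught by an adequate set of test data:

def is_overdrawn(balance):            # original version
    return balance < 0

def is_overdrawn_mutant(balance):     # mutant: '<' deliberately changed to '<='
    return balance <= 0

def run_test_data(fn):
    # The same test data is run against the original and the mutant.
    return fn(-1) is True and fn(100) is False and fn(0) is False

print(run_test_data(is_overdrawn))         # True: the original passes
print(run_test_data(is_overdrawn_mutant))  # False: the mutant is detected ('killed') by the check on 0

If the test data did not include the boundary value 0, the mutant would survive, indicating that the test data is not thorough enough.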
Cross browser testing - the application is tested with different browsers, for usability and compatibility
testing.
Concurrent testing - Multi-user testing geared towards determining the effects of accessing the same
application code, module or database records. Identifies and measures the level of locking, deadlocking
and use of single-threaded code and locking semaphores etc.
Negative testing - Testing the application for fail conditions; negative testing is testing the application with
improper inputs, for example entering special characters for a phone number.
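A minimal sketch of a negative test, assuming a hypothetical phone number validator:

import re

def is_valid_phone(value):
    # Hypothetical rule: accept exactly 10 digits, reject everything else.
    return bool(re.fullmatch(r"\d{10}", value))

assert is_valid_phone("9876543210")                      # positive test: proper input accepted
for bad_input in ["98765@#210", "abcdefghij", "12345", ""]:
    assert not is_valid_phone(bad_input), bad_input      # negative tests: improper inputs rejected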

Testing Techniques

Testing techniques can be used to design efficient test cases effectively. These techniques can be grouped
into black-box and white-box techniques. Some of the techniques are given below.
Black-Box Testing techniques: When creating black-box test cases, the input data used is critical. Three
successful techniques for managing the amount of input data required include:
Equivalence Partitioning: An equivalence class is a subset of data that is representative of a larger class.
Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive
testing of each value of the larger class. For example, a program which edits credit limits within a given
range (1,000 - 1,500) would have three equivalence classes:
< 1,000 (invalid)
Between 1,000 and 1,500 (valid)
> 1,500 (invalid)
Boundary Value Analysis: A technique that consists of developing test cases and data that focus on the
input and output boundaries of a given function. In the same credit limit example, boundary analysis would
test:
Low boundary +/- one (999 and 1,001)
On the boundary (1,000 and 1,500)
Upper boundary +/- one (1,499 and 1,501)
Error Guessing: Test cases can be developed based upon the intuition and experience of the tester. For
example, where one of the inputs is a date, a tester may try February 29, 2000.
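Pulling the techniques above together, here is a minimal sketch, assuming a hypothetical credit_limit_is_valid function that accepts limits from 1,000 to 1,500 inclusive:

def credit_limit_is_valid(limit):
    return 1000 <= limit <= 1500

# Equivalence partitioning: one representative value from each class.
assert not credit_limit_is_valid(500)     # class: < 1,000 (invalid)
assert credit_limit_is_valid(1200)        # class: 1,000 - 1,500 (valid)
assert not credit_limit_is_valid(2000)    # class: > 1,500 (invalid)

# Boundary value analysis: values on and around each boundary.
for value, expected in [(999, False), (1000, True), (1001, True),
                        (1499, True), (1500, True), (1501, False)]:
    assert credit_limit_is_valid(value) == expected, value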
White-Box Testing techniques
White-box testing assumes that the path of logic in a unit or program is known. White-box testing consists
of testing paths, branch by branch, to produce predictable results. The following are white-box testing
techniques:
Statement Coverage: Execute all statements at least once.
Decision Coverage: Execute each decision direction at least once.
Condition Coverage: Execute each decision with all possible outcomes at least once.
Decision/Condition Coverage: Execute all possible combinations of condition outcomes in each
decision. Treat all iterations as two-way conditions exercising the loop zero times and one time.
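To make these coverage levels concrete, here is a small sketch built around a hypothetical function with one compound decision:

def shipping_fee(weight, express):
    fee = 5
    if weight > 10 and express:    # one decision made up of two conditions
        fee += 20
    return fee

assert shipping_fee(15, True) == 25    # statement coverage: every statement runs when the decision is True
assert shipping_fee(5, False) == 5     # decision coverage: the decision has now been both True and False
assert shipping_fee(15, False) == 5    # condition coverage: weight > 10 is True while express is False
assert shipping_fee(5, True) == 5      # condition coverage: weight > 10 is False while express is True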

Testing Tools

Test Management Tools


These tools are used to manage the entire testing process. Most of the tools support the following activities:
Requirements gathering
Test planning
Test case development
Test execution and scheduling
Analyzing test execution results
Defect reporting and tracking
Generation of test reports
The following are some of the prominent tools
Mercury TestDirector:
Silk Central Test Manager:
Defect tracking Tools
These tools are used to record bugs or defects uncovered during testing and track them until they get
completely fixed.
One of the free tool available on web
Bugzilla:
Automation Tools
These tools record the actions performed on the application being tested, in a language they understand.
Wherever we want to compare the actual behaviour of the application with the expected behaviour, we
insert a verification point. The tool generates a script with the recorded actions and inserted verification
points. To repeat the test case, all we need to do is play back (run) the script and, at the end of its run, check
the result file. A tool-agnostic sketch follows the tool list below.
Some of the prominent tools available are
WinRunner:
Silk test:
Rational Robot:
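The following sketch (plain Python with stand-in helper functions, not any real tool's API) shows the shape of a played-back script: recorded actions, a verification point, and a result file inspected after the run:

def launch_app(name):
    # Stand-in for the real application so the sketch is self-contained.
    return {"name": name, "title": "Order Entry"}

def get_page_title(app):
    return app["title"]

def run_script(results_file="result.txt"):
    app = launch_app("OrdersApp")                  # recorded action
    expected = "Order Entry"                       # verification point: expected behaviour
    actual = get_page_title(app)                   # actual behaviour during playback
    status = "PASS" if actual == expected else "FAIL"
    with open(results_file, "a") as f:             # result file checked at the end of the run
        f.write(f"Verify title: expected={expected} actual={actual} -> {status}\n")

run_script()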
Load testing/Performance testing tools
These tools can be used to identify the bottlenecks or areas of code which are severely hampering the
performance of the application. They can also be used to measure the maximum load which the
application can withstand before its performance starts to degrade.
Some of the prominent load testing tools
Load Runner:
Silk Performer:
OpenSta:
Code coverage tools
These tools can be very useful to measure the coverage of the test cases and to identify the gaps. The
tool identifies the code that has not been run even once (and hence not tested) while running the test cases. You
may have to sit with the developers to understand the code. After analysis, the test cases should be updated
with new ones to cover the missing code. It is not cost effective to aim for 100% code coverage unless it is
a critical application; otherwise, 70-80% is considered to be good coverage.
The following are some of the prominent tools
Rational purecoverage:
Clover:
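As a concrete open-source illustration (an assumption for this example, not one of the tools listed above), the coverage.py package can be driven from Python to show which lines were never executed:

import coverage

cov = coverage.Coverage()
cov.start()

def classify(n):
    if n < 0:
        return "negative"      # this branch is never executed by the test below
    return "non-negative"

classify(5)                    # the only "test" we run

cov.stop()
cov.save()
cov.report(show_missing=True)  # prints coverage and lists the lines that were not run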
What is Quality?
A quality product is defined as one that meets product requirements. But quality can only be
seen through the customer's eyes. Therefore, the most important definition of quality is meeting customer
needs: understanding customer requirements and expectations, and exceeding those expectations. Only when
the customer is satisfied by using the product is it a quality product.
What is the difference between meeting product requirements and meeting customer needs? Aren't
customer needs translated into product requirements?
Not always. Although our aim is to accurately capture customer needs as requirements and build a
product that satisfies those needs, we sometimes fail to do so for the following reasons:
- Customers fail to accurately communicate their exact needs
- Captured requirements can be misinterpreted
Can't we define a quality product as the one that contains no bugs/defects?
Quality is much more than the absence of defects/bugs. Consider this: even though a product may have zero
defects, if its usability is poor, i.e. it is difficult to learn and operate the product, then it is not a quality
product.
If the product has some defects, can it be still called a quality product?
It depends on the nature of those bugs. In some cases, even though a product has bugs, it can
still be called a quality product. Unless the product is very critical, aiming for zero defects is not always cost
effective. We should aim for 100% defect 'detection', but given budget, time and resource constraints,
we may still release the product with some unfixed or open bugs. If the open bugs cause no loss to the
customer, then it can still be called a quality product.
Is quality only tester’s responsibility?
No. Quality is everybody's responsibility, including the customer's. We testers identify the deviations
and report them; that is it. There are many factors that impact quality, such as maintainability,
reusability, flexibility and portability, which testers can't validate. Testers can only validate the correctness,
reliability, usability and interoperability of a product and report the deviations.
When is the right time to catch a bug?
As soon as possible. The cost of fixing a bug keeps increasing exponentially as product
development progresses. For example, the cost of fixing a design bug identified in system testing is much
more than fixing it had it been identified during the design phase itself, because now you not only have to
rectify the design but also the code, the corresponding documents, and the code that depends on it.
Are there any other quality control practices apart from testing?
Yes. Inspections, design and code walkthroughs, reviews etc.
What are software quality factors?
Software quality factors are attributes of the software that, if they are wanted and not present, pose a
risk to the success of the software. There are 11 main factors and their definitions are given below. The
priority and importance of these attributes keep changing from product to product. For example, if the product
being developed needs to be changed quite frequently, then the flexibility and reusability of the product need
to be given priority. The following are the quality factors:
Correctness: Extent to which a program satisfies its requirements.
Reliability: Extent to which a program can be expected to perform its intended function with required
precision.
Efficiency: The amount of computing resources and code required by a program to perform a function.
Integrity: Extent to which access to software or data by unauthorized persons can be controlled.
Usability: Effort required to learn, operate, prepare input for, and interpret output of a program.
Maintainability: Effort required to locate and fix an error in an operational program.
Testability: Effort required to test a program to ensure that it performs its intended function.
Flexibility: Effort required to modify an operational program.
Portability: Effort required to transfer software from one configuration to another.
Reusability: Extent to which a program can be used in other applications – related to the packaging and
scope of the functions that programs perform.
Interoperability: Effort required to couple one system with another
How to reduce the amount spent to ensure and build quality?
Or
How to reduce the cost of quality?
The cost of quality includes the total amount spent on preventing errors and on identifying and correcting errors.
To reduce this cost, try to build a product that has few or no defects even before it goes
to the testing phase; to achieve this, you should spend more money and effort on trying to prevent errors
from getting into the product. You must concentrate greatly on building efficient and effective processes and
keep continuously improving them by identifying their weaknesses. You may not reap great benefits
immediately, but over the long run you can make significant savings by reducing the cost of quality.
How to reduce the cost of fixing a bug?
Catch it as early as possible. As the development process progresses, the cost of fixing a bug keeps
increasing exponentially. Practice life cycle testing.
Life Cycle of Testing or V Testing
In the traditional waterfall model, testing comes at the very end of the development process. No testing is done
during the requirements gathering, design and development phases. Defects identified during this
disconnected testing phase are very costly to fix, which is this model's biggest disadvantage.
Life cycle testing or V testing aims at catching the defects as early as possible and thus reduces the cost of
fixing them. It achieves this by continuously testing the system during all phases of the development
process rather than just limiting testing to the last phase.
The life cycle testing can be best accomplished by the formation of a separate test team.
When the project starts, both the system development process and the system test process begin. The team
that is developing the system begins the systems development process and the team that is conducting the
system test begins planning the system test process. Both teams start at the same point using the same
information. The systems development team defines and documents the requirements for developmental
purposes. The test team will likewise use those same requirements, but for testing the system. At
appropriate points during the developmental process, the test team will test the developmental process in
an attempt to uncover defects.
The following is the software testing process, which follows life cycle testing.
Requirements Gathering phase: Verify whether the requirements captured are true user needs. Verify that
the requirements captured are complete, unambiguous, accurate and not conflicting with each other.
Design phase: Verify whether the design achieves the objectives of the requirements as well as the design
being effective and efficient.
Verification Techniques: Design walkthroughs, Design Inspections
Coding phase: Verify that the design is correctly translated to code. Verify that the coding follows the company's
standards and policies.
Verification Techniques: Code walkthroughs, code inspections
Validation Techniques: Unit testing and integration testing
System testing phase: Execute test cases. Log bugs and track them to closure.
User Acceptance phase: Users validate the applicability and usability of the software in performing their
day-to-day operations.
Maintenance phase: After the software is implemented, any changes to the software must be thoroughly
tested and care should be taken not to introduce regression issues.

Testing FAQ

What is the difference between QA and testing?


QA stands for "Quality Assurance", deals with 'prevention' of defects in the product being
developed. It is associated with process and process improvement activities. TESTING means "quality
control". Its focus is defect detection and removal. QUALITY CONTROL measures the quality of a
product. QUALITY ASSURANCE measures the quality of processes used to create a quality product.

What is black box/white box testing?


Black-box and white-box are test design methods.
Black-box test design treats the system as a "black box" (you can't see what is inside the box), so you
design test cases which pour input in at one end of the box and expect a certain specific output from the other
end of the box. To run these test cases you don't need to know how the input is transformed into output inside
the box. Black-box testing is also called behavioural, functional, opaque-box or closed-box testing.
White-box test design treats the system as a transparent box, which allows one to peek inside the "box", so
you see how an input transforms into output. So you design test cases to test the internal logic, paths or
branches of the box. White-box is also known as structural, glass-box, clear-box or translucent-box test
design.

What are unit, component and integration testing?


Unit: The smallest compilable component. A unit typically is the work of one programmer. As
defined, it does not include any called sub-components or communicating components in general.
Unit Testing: In unit testing called components (or communicating components) are replaced with stubs,
simulators, or trusted components. Calling components are replaced with drivers or trusted super-
components. The unit is tested in isolation.
Component: A unit is a component. The integration of one or more components is a component.
Note: The reason for "one or more" as contrasted to "Two or more" is to allow for components that call
themselves recursively.
Component testing: the same as unit testing except that all stubs and simulators are replaced with the real
thing.
Integration Testing: This test begins after two or more programs or application components have been
successfully unit tested. It is conducted by the development team to validate the interaction and the
communication/flow of information between the individual components that will be integrated.
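A minimal sketch of testing a unit in isolation, assuming a hypothetical compute_total function whose called component (a tax service) is replaced with a stub:

def compute_total(amount, tax_service):
    return amount + tax_service(amount)

def stub_tax_service(amount):
    # Stub simulating the specified behaviour of a component that is not yet built or trusted.
    return amount * 0.10

def test_compute_total_with_stub():
    assert compute_total(100, stub_tax_service) == 110.0

test_compute_total_with_stub()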

What is the difference between load and stress testing?


Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g.,
RAM, disc, mips, interrupts) needed to process that load. The idea is to stress a system to the breaking
point in order to find bugs that will make that break potentially harmful. The system is not expected to
process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not
corrupting or losing data). The load (incoming transaction stream) in stress testing is often deliberately
distorted so as to force the system into resource depletion.
Load testing is a test whose objective is to determine the maximum sustainable load the system can handle.
Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of
resources or having transactions suffer excessive (application-specific) delay.
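A minimal load-test sketch, assuming a hypothetical handle_request operation standing in for the system under test; the load is stepped up while the elapsed time is observed:

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    time.sleep(0.01)      # stand-in for the real work done per transaction
    return True

def measure(load):
    start = time.time()
    with ThreadPoolExecutor(max_workers=load) as pool:
        results = list(pool.map(lambda _: handle_request(), range(load)))
    return time.time() - start, all(results)

for load in (1, 10, 50, 100):                  # vary the load from a minimum upwards
    elapsed, ok = measure(load)
    print(f"load={load:4d} elapsed={elapsed:.3f}s all_ok={ok}")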

Why does software have bugs?


Miscommunication or no communication - as to specifics of what an application should or should not
do (the application's requirements).
Software complexity - the complexity of current software applications can be difficult to comprehend for
anyone without experience in modern-day software development. Multi-tiered applications, client-server
and distributed applications, data communications, enormous relational databases, and sheer size of
applications have all contributed to the exponential growth in software/system complexity.
Programming errors - programmers, like anyone else, can make mistakes.
Changing requirements (whether documented or undocumented) - the end-user may not understand the
effects of changes, or may understand and request them anyway - redesign, rescheduling of engineers,
effects on other projects, work already completed that may have to be redone or thrown out, hardware
requirements that may be affected, etc. If there are many minor changes or any major changes, known and
unknown dependencies among parts of the project are likely to interact and cause problems, and the
complexity of coordinating changes may result in errors. Enthusiasm of engineering staff may be affected.
In some fast-changing business environments, continuously modified requirements may be a fact. In this
case, management must understand the resulting risks, and QA and test engineers must adapt and plan for
continuous extensive testing to keep the inevitable bugs from running out of control.
Time pressures - scheduling of software projects is difficult at best, often requiring a lot of guesswork.
When deadlines loom and the crunch comes, mistakes will be made.
Poorly documented code - it's tough to maintain and modify code that is badly written or poorly
documented; the result is bugs. In many organizations management provides no incentive for programmers
to document their code or write clear, understandable, maintainable code. In fact, it's usually the opposite:
they get points mostly for quickly turning out code, and there's job security if nobody else can understand
it ('if it was hard to write, it should be hard to read').
Software development tools - visual tools, class libraries, compilers, scripting tools, etc. often introduce
their own bugs or are poorly documented, resulting in added bugs.

What is difference between verification and validation?


Verification is the process of determining whether the products of a given phase of the software
development cycle fulfill the requirements established during the previous phase. This involves reviewing,
inspecting, checking, auditing, or otherwise establishing and documenting whether items, processes,
services, or documents conform to specified requirements.
Validation is the determination of the correctness of the final program or software produced from a
development project with respect to the user needs and requirements. This involves actual testing of the
product.

What are 5 common problems in the software development process?


Poor requirements - if requirements are unclear, incomplete, too general, and not testable, there will
be problems.
Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
Inadequate testing - no one will know whether or not the program is any good until the customer complains
or systems crash.
Featuritis - requests to pile on new features after development is underway; extremely common.
Miscommunication - if developers don't know what's needed or customers have erroneous expectations,
problems are guaranteed.

What is difference between test plan and use case?


Test plan: It contains an introduction to the client company, scope, an overview of the application, test
strategy, schedule, roles and responsibilities, deliverables and milestones.
Use case: It is nothing but user action and system response. It contains the flows: typical flow, alternate
flow and exceptional flow. Apart from these, it also has a precondition and a postcondition. A use case
describes how an end user uses specific functionality in the application.

What is difference between smoke testing and sanity testing?


The general definition (related to hardware) of smoke testing is: smoke testing is a safe, harmless
procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and
sources of sewer odors. In relation to software, smoke testing is non-exhaustive software
testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.
Sanity testing is a brief test of the major functional features of a software application to determine if it is
basically operational (or sane).

Differentiate between Static and Dynamic testing?


Test activities that are performed without running the software are called static testing. Static testing
includes code inspections, walkthroughs, and desk checks. In contrast, test activities that involve running
the software are called dynamic testing.
Static: Document review, inspections, reviews.
Dynamic: Build testing/testing code/testing application.

What is the difference between Requirements & Specifications?


Requirements: statements by the customer of what the system has to achieve.
Specifications: implementable requirements.
What is the difference between statement coverage, path coverage and branch coverage?
Statement coverage measures the number of lines executed.
Branch coverage measures the number of executed branches. A branch is an outcome of a decision, so an if
statement, for example, has two branches (True and False).
Path coverage usually means coverage with respect to the set of entry/exit paths.
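A small worked example with a hypothetical function containing two sequential decisions makes the difference visible:

def label(a, b):
    result = ""
    if a > 0:          # decision 1: two branches
        result += "A"
    if b > 0:          # decision 2: two branches
        result += "B"
    return result

# Statement coverage: label(1, 1) alone executes every line.
# Branch coverage: label(1, 1) and label(0, 0) together cover all four branches.
# Path coverage: all 2 x 2 = 4 entry/exit paths require four calls.
assert [label(1, 1), label(1, 0), label(0, 1), label(0, 0)] == ["AB", "A", "B", ""]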

What is cross browser testing?


Cross browser testing - the application is tested with different browsers, for usability and compatibility
testing.

What is difference between Waterfall model and V model?


The waterfall model is a software development model (a process for the creation of software) in which
development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements
analysis, design, implementation, testing (validation), integration, and maintenance. To follow the
waterfall model, one proceeds from one phase to the next in a purely sequential manner. The model
maintains that one should move to a phase only when its preceding phase is completed and perfected.
Phases of development in the waterfall model are thus discrete, and there is no jumping back and forth or
overlap between them. In the waterfall model, the tester's role comes into play only in the testing phase.
V Model or Life cycle testing involves continuous testing of the system during the developmental process.
At predetermined points, the results of the development process are inspected to determine the correctness
of the implementation. These inspections identify defects at the earliest possible point.
When the project starts both the system development process and system test process begins. The team that
is developing the system begins the systems development process and the team that is conducting the
system test begins planning the system test process. Both teams start at the same point using the same
information.

What is the difference between Alpha testing and Beta testing?


Typically, software goes through two stages of testing before it is considered finished. The first stage,
called alpha testing, is often performed only by users within the organization developing the software. The
second stage, called beta testing, generally involves a limited number of external users.

What is a baseline document? Can you name any two?


A baseline document is a document which covers all the details and has gone through a "walkthrough".
Once a document is baselined, it cannot be changed unless a change request has been approved.
For instance, we have the requirements document baselined, then the high-level design document baselined and
so on.

Briefly explain what the software testing life cycle is.


The software testing life cycle contains the following components:
1. Requirements
2. Test plan preparation
3. Test case preparation
4. Test case execution
5. Bug analysis
6. Bug report
7. Bug tracking and closure

What is the difference between System and End-to-End testing?


System testing - black-box type testing that is based on overall requirements specifications; covers all
combined parts of a system.
End-to-end testing - similar to system testing but involves testing of the application in an environment that
mimics real-world use, such as interacting with a database, using network communications, or interacting
with other hardware, applications, or systems if appropriate. Even the transactions performed mimic the
end user's usage of the application.

What is incremental integration testing?


Continuous testing of an application as new functionality is added; requires that various aspects of an
application's functionality be independent enough to work separately before all parts of the program are
completed, or that test drivers or test stubs be developed as needed; done by programmers or by testers.

What is installation testing and how is it performed?


Installation testing is often the most under-tested area in testing. This type of testing is performed to
ensure that all installed features and options function properly. It is also performed to verify that all
necessary components of the application are, indeed, installed.
Installation testing should take care of the following points: -
1. Check whether, while installing, the product checks for the dependent software / patches, say Service Pack 3.
2. The product should check for the version of the same product on the target machine; say, the previous
version should not be installed over the newer version.
3. The installer should give a default installation path, say “C:\programs\.”
4. The installer should allow the user to install at a location other than the default installation path.
5. Check if the product can be installed “over the network”.
6. Installation should start automatically when the CD is inserted.
7. The installer should give the Remove / Repair options.
8. When uninstalling, check that all the registry keys, files, DLLs, shortcuts and ActiveX components are
removed from the system.
9. Try to install the software without administrative privileges (login as guest).
10. Try installing on different operating systems.
11. Try installing on a system having a non-compliant configuration, such as less memory / RAM / HDD.

What is Compliance Testing? What is its Significance?


Performed to check whether the system is developed in accordance with the standards, procedures and
policies followed by the company, like completeness of documentation, etc.

What is bebugging testing and incremental testing?


Bebugging: Releasing the build with some known (deliberately seeded) bugs is called bebugging.
Incremental testing: Level-by-level testing is called incremental testing.

What are the software models?


A software model is a process for the creation of software. The following are a few software models:
1) V-model
2) spiral model
3) waterfall model
4) prototype model

What is Concurrent Testing? In addition, how will you perform it?


Multi-user testing geared towards determining the effects of accessing the same application code,
module or database records. Identifies and measures the level of locking, deadlocking and use of single-
threaded code and locking semaphores etc.

What is the difference between adhoc testing, monkey testing and exploratory testing?
Adhoc testing: This kind of testing does not have any process, test cases or test scenarios defined or
preplanned. It involves simultaneous test design and test execution.
Monkey testing: Monkey testing is testing that runs with no specific test in mind. The monkey in this
case is the producer of any input data (whether that be file data or input device data): keep pressing some
keys randomly and check whether the software fails or not.
Exploratory testing is simultaneous learning, test design and test execution. It is a type of adhoc testing, but
here the tester does not have much idea about the application; he explores the system in an attempt to learn
the application and simultaneously test it.
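A minimal monkey-testing sketch, assuming a hypothetical handle_input function as the feature under test; random input is fed in and the only check is that the software does not fail:

import random
import string

def handle_input(text):
    return text.strip().upper()      # stand-in for the feature under test

random.seed(0)
for _ in range(1000):
    keys = "".join(random.choice(string.printable) for _ in range(20))
    try:
        handle_input(keys)           # "press" 20 random keys
    except Exception as exc:
        print("Failure on random input:", repr(keys), exc)
        break
else:
    print("No failures observed for 1000 random inputs")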

What is Negative Testing?


Testing the application for fail conditions; negative testing is testing the application with improper inputs,
for example entering special characters for a phone number.

What is Testing Techniques?


Black Box and White Box are testing types and not testing techniques.
Testing techniques are as follows:-
The most popular Black box testing techniques are:-
Equivalence Partitioning.
Boundary Value Analysis.
Cause-Effect Graphing.
Error Guessing.
The White-Box testing techniques are -
Statement coverage
Decision coverage
Condition coverage
Decision-condition coverage
Multiple condition coverage
Basis Path Testing
Loop testing
Data flow testing

What is the difference between bug priority & bug severity?


Priority means how urgently the bug needs to be fixed.
Severity means how badly it harms the system.
Priority tells you how important the bug is; severity tells you how bad the bug is.
Severity is constant, whereas priority might change according to the schedule.

What is defect density?


Defect density = Total number of defects/LOC (lines of code)
Defect density = Total number of defects/Size of the project
Size of Project can be Function points, feature points, use cases, KLOC etc
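A worked example with assumed numbers: if system testing of a 40 KLOC application uncovers 200 defects, the defect density is 200 / 40 = 5 defects per KLOC; if the same project is sized at 160 function points, the density is 200 / 160 = 1.25 defects per function point.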

What is the difference between testing and debugging?


Testing: Locating or Identifying Bugs
Debugging: Fixing the identified Bugs

What are CMM and CMMI?


CMM stands for Capability Maturity Model, developed by the Software Engineering Institute (SEI).
Before we delve into it, let's understand what a software process is.
A Software Process can be defined as set of activities, methods, practices and transformations that people
employ to develop and maintain software and the associated products.
The underlying premise of software process management is that the quality of a software product is largely
determined by the quality of the process used to develop and maintain it.
Continuous process improvement is based on many small, evolutionary steps. CMM organizes these steps
into five maturity levels. Each maturity level comprises a set of process goals that, when satisfied, stabilize
an important component of the software process. Organizing the goals into different levels helps the
organization prioritize its improvement actions. The five maturity levels are as follows.
1. Initial - The Software Process is characterized as adhoc and occasionally even chaotic. Few processes
are defined and success depends on individual effort and heroics.
2. Repeatable - Basic project management processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place to repeat earlier successes on projects with
similar applications.
3. Defined - The software process for both management and engineering activities is documented,
standardized, and integrated into a standard software process for the organization. All projects use an
approved, tailored version of the organization's standard software process for developing and maintaining
software.
4. Managed - Detailed measures of the software process and product quality are collected. Both the
software process and products are quantitatively understood and controlled.
5. Optimizing - Continuous process improvement is enabled by quantitative feedback from the process
and from piloting innovative ideas and technologies.

CMMI:
In CMM (aka SW-CMM), the entire emphasis is on software practices. But software is
becoming such a large factor in the systems being built today that it is virtually impossible to
logically separate the two disciplines. SEI redirected its effort toward the integration of system and
software practices, and thus was born CMMI, which stands for Capability Maturity Model Integration.
QTP QUESTIONS

1. What are the Features & Benefits of Quick Test Pro (QTP 8.0)? - Operates stand-alone, or
integrated into Mercury Business Process Testing and Mercury Quality Center. Introduces next-generation
zero-configuration Keyword Driven testing technology in Quick Test Professional 8.0,
allowing for fast test creation, easier maintenance, and more powerful data-driving capability.
Identifies objects with Unique Smart Object Recognition, even if they change from build to build,
enabling reliable unattended script execution. Collapses test documentation and test creation to a
single step with Auto-documentation technology. Enables thorough validation of applications
through a full complement of checkpoints.
2. How to handle exceptions using the Recovery Scenario Manager in QTP? - There are 4 trigger
events during which a recovery scenario should be activated: a pop-up window appears in an
opened application during the test run; a property of an object changes its state or value; a step in
the test does not run successfully; an open application fails during the test run. These triggers are
considered exceptions. You can instruct QTP to recover from unexpected events or errors that
occur in your testing environment during a test run. The Recovery Scenario Manager provides a wizard
that guides you through defining a recovery scenario. A recovery scenario has three steps: 1.
Triggered events 2. Recovery steps 3. Post-recovery test run
3. What is the use of Text output value in QTP? - Output values enable you to view the values that the
application takes during run time. When parameterized, the values change for each iteration. Thus,
by creating output values, we can capture the values that the application takes for each run and
output them to the data table.
4. How to use the Object Spy in QTP 8.0? - There are two ways to invoke the Object Spy in QTP:
1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: click the Object Spy button. In the Object Spy dialog,
click the button showing the hand symbol. The pointer changes into a hand symbol, and we point it
at the object whose state we want to spy. If the object is not visible, or its window is minimized,
hold the Ctrl button, activate the required window, and then release the Ctrl button.
5. How is run-time data (parameterization) handled in QTP? - You can enter test data
into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate
data sets and create multiple test iterations, without programming, to expand test case coverage.
Data can be typed in or imported from databases, spreadsheets, or text files.
6. What is Keyword View and Expert View in QTP? - With Quick Test's Keyword Driven approach, test
automation experts have full access to the underlying test and object properties, via an integrated
scripting and debugging environment that is round-trip synchronized with the Keyword View.
Advanced testers can view and edit their tests in the Expert View, which reveals the underlying
industry-standard VBScript that Quick Test Professional automatically generates. Any changes
made in the Expert View are automatically synchronized with the Keyword View.
7. Explain about the Test Fusion Report of QTP? - Once a tester has run a test, a Test Fusion
report displays all aspects of the test run: a high-level results overview, an expandable Tree View
of the test specifying exactly where application failures occurred, the test data used, application
screen shots for every step that highlight any discrepancies, and detailed explanations of each
checkpoint pass and failure. By combining Test Fusion reports with Quick Test Professional, you
can share reports across an entire QA and development team.
8. Which environments does QTP support? - Quick Test Professional supports functional testing of
all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle,
PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services.
9. What is QTP? - Quick Test is a graphical interface record-playback automation tool. It is able to
work with any web, Java or Windows client application. Quick Test enables you to test standard
web objects and ActiveX controls. In addition to these environments, Quick Test Professional also
enables you to test Java applets and applications and multimedia objects in applications, as well as
standard Windows applications, Visual Basic 6 applications and .NET framework applications.
10. Explain the QTP testing process? - The Quick Test testing process consists of the following main phases:
11. Create your test plan - Prior to automating there should be a detailed description of the test
including the exact steps to follow, data to be input, and all items to be verified by the test. The
verification information should include both data validations and existence or state verifications of
objects in the application.
12. Recording a session on your application - As you navigate through your application, Quick Test
graphically displays each step you perform in the form of a collapsible icon-based test tree. A step
is any user action that causes or makes a change in your site, such as clicking a link or image, or
entering data in a form.
13. Enhancing your test - Inserting checkpoints into your test lets you search for a specific value of a
page, object or text string, which helps you identify whether or not your application is functioning
correctly. NOTE: Checkpoints can be added to a test as you record it or after the fact via the Active
Screen. It is much easier and faster to add the checkpoints during the recording process.
Broadening the scope of your test by replacing fixed values with parameters lets you check how
your application performs the same operations with multiple sets of data. Adding logic and
conditional statements to your test enables you to add sophisticated checks to your test.
14. Debugging your test - If changes were made to the script, you need to debug it to check that it
operates smoothly and without interruption.
15. Running your test on a new version of your application - You run a test to check the behavior of
your application. While running, Quick Test connects to your application and performs each step in
your test.
16. Analyzing the test results - You examine the test results to pinpoint defects in your application.
17. Reporting defects - As you encounter failures in the application when analyzing test results, you
will create defect reports in Defect Reporting Tool.
18. Explain the QTP Tool interface. - It contains the following key elements: the Title bar, displaying the
name of the currently open test; the Menu bar, displaying menus of Quick Test commands; the File
toolbar, containing buttons to assist you in managing tests; the Test toolbar, containing buttons used
while creating and maintaining tests; the Debug toolbar, containing buttons used while debugging tests
(Note: the Debug toolbar is not displayed when you open Quick Test for the first time; you can
display it by choosing View > Toolbars > Debug); the Action toolbar, containing buttons and a list of
actions, enabling you to view the details of an individual action or the entire test flow (Note: the
Action toolbar is not displayed when you open Quick Test for the first time; you can display it by
choosing View > Toolbars > Action, and if you insert a reusable or external action in a test, it is
displayed automatically); the Test pane, containing two tabs to view your test, the Tree View and the
Expert View; the Test Details pane, containing the Active Screen; the Data Table, containing two tabs,
Global and Action, to assist you in parameterizing your test; the Debug Viewer pane, containing three
tabs to assist you in debugging your test (Watch Expressions, Variables, and Command; the Debug
Viewer pane can be opened only when a test run pauses at a breakpoint); and the Status bar, displaying
the status of the test.
19. How does QTP recognize Objects in AUT? - Quick Test stores the definitions for application
objects in a file called the Object Repository. As you record your test, Quick Test will add an entry
for each item you interact with. Each Object Repository entry will be identified by a logical name
(determined automatically by Quick Test), and will contain a set of properties (type, name, etc) that
uniquely identify each object. Each line in the Quick Test script will contain a reference to the
object that you interacted with, a call to the appropriate method (set, click, check) and any
parameters for that method (such as the value for a call to the set method). The references to objects
in the script will all be identified by the logical name, rather than any physical, descriptive
properties.
20. What are the types of Object Repositories in QTP? - Quick Test has two types of object
repositories for storing object information: shared object repositories and action object repositories.
You can choose which type of object repository you want to use as the default type for new tests,
and you can change the default as necessary for each new test. The object repository per-action
mode is the default setting. In this mode, Quick Test automatically creates an object repository file
for each action in your test so that you can create and run tests without creating, choosing, or
modifying object repository files. However, if you do modify values in an action object repository,
your changes do not have any effect on other actions. Therefore, if the same test object exists in
more than one action and you modify an object’s property values in one action, you may need to
make the same change in every action (and any test) containing the object.
21. Explain the checkpoints in QTP? - A checkpoint verifies that expected information is displayed
in the application while the test is running. You can add eight types of checkpoints to your test for
standard web objects using QTP. A page checkpoint checks the characteristics of a web page. A
text checkpoint checks that a text string is displayed in the appropriate place in the application. An
object checkpoint (standard) checks the values of an object in the application. An image
checkpoint checks the values of an image in the application. A table checkpoint checks
information within a table in the application. An accessibility checkpoint checks the web page for
Section 508 compliance. An XML checkpoint checks the contents of individual XML data files or
XML documents that are part of your web application. A database checkpoint checks the contents
of databases accessed by your web site.
22. In how many ways we can add check points to an application using QTP? - We can add
checkpoints while recording the application or we can add after recording is completed using
Active screen (Note: To perform the second one The Active screen must be enabled while
recording).
23. How does QTP identify objects in the application? - QTP identifies the object in the application
by Logical Name and Class.
24. What is Parameterizing Tests? - When you test your application, you may want to check how it
performs the same operations with multiple sets of data. For example, suppose you want to check
how your application responds to ten separate sets of data. You could record ten separate tests, each
with its own set of data. Alternatively, you can create a parameterized test that runs ten times: each
time the test runs, it uses a different set of data.
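As an illustration only, a parameterized step as it might appear in the Expert View (VBScript); the object names and the "UserName" column are hypothetical, and each iteration reads the next row of the Global data sheet:
' Hypothetical objects and data sheet column, shown only to illustrate Data Table parameterization.
Browser("Login").Page("Login").WebEdit("userName").Set DataTable("UserName", dtGlobalSheet)
Browser("Login").Page("Login").WebButton("Sign In").Click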
25. What is test object model in QTP? - The test object model is a large set of object types or classes
that Quick Test uses to represent the objects in your application. Each test object class has a list of
properties that can uniquely identify objects of that class and a set of relevant methods that Quick
Test can record for it. A test object is an object that Quick Test creates in the test or component to
represent the actual object in your application. Quick Test stores information about the object that
will help it identify and check the object during the run session.
26. What is Object Spy in QTP? - Using the Object Spy, you can view the properties of any object in
an open application. You use the Object Spy pointer to point to an object. The Object Spy displays
the selected object’s hierarchy tree and its properties and values in the Properties tab of the Object
Spy dialog box.
27. What is the Diff between Image check-point and Bit map Check point? - Image checkpoints
enable you to check the properties of a Web image. You can check an area of a Web page or
application as a bitmap. While creating a test or component, you specify the area you want to check
by selecting an object. You can check an entire object or any area within an object. Quick Test
captures the specified object as a bitmap, and inserts a checkpoint in the test or component. You
can also choose to save only the selected area of the object with your test or component in order to
save disk space. For example, suppose you have a Web site that can display a map of a city the
user specifies. The map has control keys for zooming. You can record the new map that is
displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint,
you can check that the map zooms in correctly. You can create bitmap checkpoints for all
supported testing environments (as long as the appropriate add-ins are loaded). Note: The results of
bitmap checkpoints may be affected by factors such as operating system, screen resolution, and
color settings.
28. How many ways we can parameterize data in QTP? - There are four types of parameters: Test,
action or component parameters enable you to use values passed from your test or component, or
values from other actions in your test. Data Table parameters enable you to create a data-driven test
(or action) that runs several times using the data you supply. In each repetition, or iteration, Quick
Test uses a different value from the Data Table. Environment variable parameters enable you to use
variable values from other sources during the run session. These may be values you supply, or
values that Quick Test generates for you based on conditions and options you choose. Random
number parameters enable you to insert random numbers as values in your test or component. For
example, to check how your application handles small and large ticket orders, you can have Quick
Test generate a random number and insert it in a number of tickets edit field.
29. How do you do batch testing in WR, and is it possible to do in QTP? If so, explain. - Batch testing
in WR is nothing but running the whole test set by selecting Run Test set from the Execution Grid.
The same is possible with QTP also. If our test cases are automated then by selecting Run Test set
all the test scripts can be executed. In this process, the Scripts get executed one by one by keeping
all the remaining scripts in Waiting mode.
30. If I give some thousand tests to execute in 2 days, what do you do? - Ad hoc testing is done. It
covers at least the basic functionalities to verify that the system is working fine.
31. What does it mean when a checkpoint is in red color? What do you do? - A red color indicates
failure. Here we analyze the cause of the failure: whether it is a script issue, an environment issue or
an application issue.
32. What is Object Spy in QTP? - Using the Object Spy, you can view the properties of any object in
an open application. You use the Object Spy pointer to point to an object. The Object Spy displays
the selected object’s hierarchy tree and its properties and values in the Properties tab of the Object
Spy dialog box.
33. What is the file extension of the code file & object repository file in QTP? - The code file extension
is .vbs and the object repository file extension is .tsr.
34. Explain the concept of object repository & how QTP recognizes objects? - Object Repository:
displays a tree of all objects in the current component or in the current action or entire test
(depending on the object repository mode you selected). We can view or modify the test object
description of any test object in the repository, or add new objects to the repository. Quick Test
learns the default property values and determines in which test object class the object fits. If that is
not enough to identify the object uniquely, it adds assistive properties one by one to the description
until it has compiled a unique description. If no assistive properties are available, it adds a special
ordinal identifier, such as the object's location on the page or in the source code.
35. What are the properties you would use for identifying a browser & page when using
descriptive programming? - Apart from the title property, "name" is another property we can use.
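For illustration only, a descriptive-programming sketch in VBScript that identifies the browser and page by their title property and an edit box by its name property; the title and values below are made up, not taken from any particular application:
' Illustrative values only; "title:=" and "name:=" pairs replace Object Repository entries.
Browser("title:=My Orders").Page("title:=My Orders").WebEdit("name:=orderNo").Set "1234"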
36. Give me an example where you have used a COM interface in your QTP project? - A COM
interface typically appears when a front end has to talk to a back end. For example, if you are using
Oracle as the back end and VB (or any other language) as the front end, then for better compatibility
we go through an interface, and COM is one such interface. CreateObject creates a handle to an
instance of the specified object so that the program can use the methods of that object. It is used for
implementing Automation (as defined by Microsoft).
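A minimal VBScript sketch of CreateObject, assuming for illustration that an ADO connection is used to reach the back end; the DSN name is hypothetical:
Dim conn
Set conn = CreateObject("ADODB.Connection")   ' handle to a COM automation object
conn.Open "DSN=FlightDB"                      ' hypothetical DSN for the back-end database
' ... conn.Execute could now run queries against the back end ...
conn.Close
Set conn = Nothing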
37. Explain in brief about the QTP Automation Object Model. - Essentially all configuration and
run functionality provided via the Quick Test interface is in some way represented in the Quick
Test automation object model via objects, methods, and properties. Although a one-to-one
comparison cannot always be made, most dialog boxes in Quick Test have a corresponding
automation object, most options in dialog boxes can be set and/or retrieved using the corresponding
object property, and most menu commands and other operations have corresponding automation
methods. You can use the objects, methods, and properties exposed by the Quick Test automation
object model, along with standard programming elements such as loops and conditional statements
to design your program.
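As a rough sketch of the idea, an external VBScript file can drive Quick Test through the automation object model; the test path below is hypothetical and only a small subset of the model is shown:
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")   ' the QuickTest automation object
qtApp.Launch                                        ' start Quick Test
qtApp.Visible = True
qtApp.Open "C:\Tests\LoginTest"                     ' open an existing test (hypothetical path)
qtApp.Test.Run                                      ' run it with the current run settings
qtApp.Quit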

LOAD RUNNER QUESTIONS

38. What is load testing? - Load testing is to test whether the application works fine with loads that
result from a large number of simultaneous users and transactions, and to determine whether it can
handle peak usage periods.
39. What is Performance testing? - Timing for both read and update transactions should be gathered
to determine whether system functions are being performed in an acceptable timeframe. This
should be done standalone and then in a multi user environment to determine the effect of multiple
transactions on the timing of a single transaction.
40. Did you use LoadRunner? What version? - Yes, version 7.2.
41. Explain the Load testing process? -
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios
we develop will accomplish load-testing objectives. Step 2: Creating Vusers. Here, we create
Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole,
and tasks measured as transactions. Step 3: Creating the scenario. A scenario describes the events
that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during
the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as
well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load
generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may
create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner
automatically builds a scenario for us. Step 4: Running the scenario. We emulate load on the
server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the
scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual
Vusers. Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner
online runtime, transaction, system resource, Web resource, Web server resource, Web application
server resource, database server resource, network delay, streaming media resource, firewall server
resource, ERP server resource, and Java performance monitors. Step 6: Analyzing test results.
During scenario execution, LoadRunner records the performance of the application under different
loads. We use Load Runner’s graphs and reports to analyze the application’s performance.
42. When do you do load and performance testing? - We perform load testing once we are done
with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-
user testing focuses primarily on the functionality and user interface of a system component, application
testing focuses on the performance and reliability of the entire system. For example, a typical application-
testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to
questions such as: what is the response time of the system, does it crash, does it work with different
software applications and platforms, can it hold so many hundreds and thousands of users, etc. This
is when we do load and performance testing.
43. What are the components of LoadRunner? - The components of LoadRunner are The Virtual
User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring,
LoadRunner Books Online.
44. What Component of LoadRunner would you use to record a Script? - The Virtual User
Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for
a variety of application types and communication protocols.
45. What Component of LoadRunner would you use to play Back the script in multi user mode?
- The Controller component is used to playback the script in multi-user mode. This is done during a
scenario run where a vuser script is executed by a number of vusers in a group.
46. What is a rendezvous point? - You insert rendezvous points into Vuser scripts to emulate heavy
user load on the server. Rendezvous points instruct Vusers to wait during test execution for
multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task.
For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing
100 Vusers to deposit cash into their accounts at the same time.
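As a rough C sketch of a Vuser Action section using a rendezvous point; the rendezvous name, transaction name and URL are made-up examples, not from any particular project:
Action()
{
    lr_rendezvous("deposit_cash");                 /* wait here until the required Vusers arrive */
    lr_start_transaction("deposit");
    web_url("deposit",
            "URL=http://bankserver/deposit?amount=100",   /* illustrative URL */
            LAST);
    lr_end_transaction("deposit", LR_AUTO);
    return 0;
}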
47. What is a scenario? - A scenario defines the events that occur during each testing session. For
example, a scenario defines and controls the number of users to emulate, the actions to be
performed, and the machines on which the virtual users run their emulations.
48. Explain the recording mode for web Vuser script? - We use VuGen to develop a Vuser script by
recording a user performing typical business processes on a client application. VuGen creates the
script by recording the activity between the client and the server. For example, in web-based
applications, VuGen monitors the client end of the application and traces all the requests sent to, and
received from, the server. We use VuGen to: monitor the communication between the
application and the server; Generate the required function calls; and Insert the generated function
calls into a Vuser script.
49. Why do you create parameters? - Parameters are like script variables. They are used to vary input
to the server and to emulate real users. Different sets of data are sent to the server each time the
script is run. Parameters help better simulate the usage model for more accurate testing from the Controller: one
script can emulate many different users on the system.
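For illustration, assuming a parameter named UserName has been defined in VuGen (for example, backed by a data file), a step might reference it as below; the URL and parameter name are hypothetical:
Action()
{
    /* {UserName} is substituted with a new value according to the parameter settings */
    web_url("login",
            "URL=http://server/login?user={UserName}",
            LAST);
    lr_output_message("Logged in as %s", lr_eval_string("{UserName}"));
    return 0;
}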
50. What is correlation? Explain the difference between automatic correlation and manual
correlation? - Correlation is used to obtain data which are unique for each run of the script and
which are generated by nested queries. Correlation provides the value to avoid errors arising out of
duplicate values and also optimizing the code (to avoid nested queries). Automatic correlation is
where we set some rules for correlation. It can be application server specific. Here values are
replaced by data which are created by these rules. In manual correlation, the value we want to
correlate is scanned, and the Create Correlation option is used to correlate it.
51. How do you find out where correlation is required? Give few examples from your projects? -
Two ways: First we can scan for correlations, and see the list of values which can be correlated.
From this we can pick a value to be correlated. Secondly, we can record two scripts and compare
them. We can look through the difference file for the values which need to be correlated. In my
project, there was a unique id developed for each customer, it was nothing but Insurance Number, it
was generated automatically and it was sequential and this value was unique. I had to correlate this
value, in order to avoid errors while running my script. I did using scan for correlation.
52. Where do you set automatic correlation options? - Automatic correlation from web point of
view can be set in recording options and correlation tab. Here we can enable correlation for the
entire script and choose either issue online messages or offline actions, where we can define rules
for that correlation. Automatic correlation for database can be done using show output window and
scan for correlation and picking the correlate query tab and choose which query value we want to
correlate. If we know the specific value to be correlated, we just create a correlation for the value
and specify how the value is to be created.
53. What is the function to capture dynamic values in a web Vuser script? - The web_reg_save_param
function saves dynamic data information to a parameter.
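A rough C sketch of its use: the registration must come before the request whose response contains the dynamic value; the boundaries and URLs below are illustrative only:
Action()
{
    /* capture the text between the left and right boundaries into {SessionId} */
    web_reg_save_param("SessionId", "LB=sessionId=", "RB=&", LAST);
    web_url("home", "URL=http://server/home", LAST);

    /* the captured value can now be reused like any other parameter */
    web_url("account", "URL=http://server/account?sessionId={SessionId}", LAST);
    return 0;
}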
54. When do you disable log in Virtual User Generator, When do you choose standard and
extended logs? - Once we debug our script and verify that it is functional, we can enable logging
for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log
Option: When you select Standard log, it creates a standard log of functions and messages sent
during script execution to use for debugging. Disable this option for large load testing scenarios.
When you copy a script to a scenario, logging is automatically disabled Extended Log Option:
Select extended log to create an extended log, including warnings and other messages. Disable this
option for large load testing scenarios. When you copy a script to a scenario, logging is
automatically disabled. We can specify which additional information should be added to the
extended log using the Extended log options.
55. How do you debug a LoadRunner script? - VuGen contains two options to help debug Vuser
scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog
box allow us to determine the extent of the trace to be performed during scenario execution. The
debug information is written to the Output window. We can manually set the message class within
your script using the lr_set_debug_message function. This is useful if we want to receive debug
information about a small section of the script only.
56. How do you write user defined functions in LR? Give me few functions you wrote in your
previous project? - Before we create the User Defined functions we need to create the external
library (DLL) with the function. We add this library to the VuGen bin directory. Once the library is
added, the user-defined function can be assigned as a parameter. The function should have the following
format: __declspec(dllexport) char* <function name>(char*, char*). Examples of user-defined
functions used in my earlier project are GetVersion, GetCurrentTime and GetPlatform.
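A rough sketch under the assumptions above; the DLL name, function name and return value are illustrative only:
/* --- in the external DLL source (built separately) --- */
__declspec(dllexport) char *GetVersion(char *arg1, char *arg2)
{
    return "1.0.0";    /* illustrative value */
}

/* --- in the Vuser script --- */
Action()
{
    lr_load_dll("myfuncs.dll");    /* load the library so its exported functions can be called */
    lr_output_message("Version: %s", GetVersion("", ""));
    return 0;
}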
57. What are the changes you can make in run-time settings? - The Run Time Settings that we
make are: a) Pacing - it has the iteration count. b) Log - under this we have Disable Logging, Standard
Log and Extended Log. c) Think Time - here we have two options, Ignore think time and
Replay think time. d) General - under the General tab we can set the Vusers to run as a process or as
multithreading, and whether each step is a transaction.
58. Where do you set Iteration for Vuser testing? - We set Iterations in the Run Time Settings of the
VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.
59. How do you perform functional testing under load? - Functionality under load can be tested by
running several Vusers concurrently. By increasing the amount of Vusers, we can determine how
much load the server can sustain.
60. What is Ramp up? How do you set this? - This option is used to gradually increase the amount of
Vusers/load on the server. An initial value is set and a value to wait between intervals can be
specified. To set Ramp Up, go to ‘Scenario Scheduling Options’
61. What is the advantage of running the Vuser as thread? - VuGen provides the facility to use
multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process,
the same driver program is loaded into memory for each Vuser, thus taking up a large amount of
memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run
as a thread, only one instance of the driver program is loaded into memory for the given number of
Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more
Vusers to be run per generator.
62. If you want to stop the execution of your script on error, how do you do that? - The lr_abort
function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions
section, execute the vuser_end section and end the execution. This function is useful when you
need to manually abort a script execution as a result of a specific error condition. When you end a
script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we
have to first uncheck the “Continue on error” option in Run-Time Settings.
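A minimal C sketch, with the run-time settings handled as described above; the step name, URL and messages are illustrative:
Action()
{
    int status;

    status = web_url("login", "URL=http://server/login", LAST);
    if (status != LR_PASS)
    {
        lr_error_message("Login failed - aborting this Vuser");
        lr_abort();    /* skip the rest of Actions, run vuser_end, mark the Vuser "Stopped" */
    }
    return 0;
}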
63. What is the relation between Response Time and Throughput? - The Throughput graph shows
the amount of data in bytes that the Vusers received from the server in a second. When we compare
this with the transaction response time, we will notice that as throughput decreased, the response
time also decreased. Similarly, the peak throughput and highest response time would occur
approximately at the same time.
64. Explain the Configuration of your systems? - The configuration of our systems refers to that of
the client machines on which we run the Vusers. The configuration of any client machine includes
its hardware settings, memory, operating system, software applications, development tools, etc.
This system component configuration should match with the overall system configuration that
would include the network infrastructure, the web server, the database server, and any other
components that go with this larger system so as to achieve the load testing objectives.
65. How do you identify the performance bottlenecks? - Performance Bottlenecks can be detected
by using monitors. These monitors might be application server monitors, web server monitors,
database server monitors and network monitors. They help in finding out the troubled area in our
scenario which causes increased response time. The measurements made are usually performance
response time, throughput, hits/sec, network delay graphs, etc.
66. If web server, database and Network are all fine where could be the problem? - The problem
could be in the system itself or in the application server or in the code written for the application.
67. How did you find web server related issues? - Using Web resource monitors we can find the
performance of web servers. Using these monitors we can analyze throughput on the web server,
number of hits per second that occurred during scenario, the number of http responses per second,
the number of downloaded pages per second.
68. How did you find database related issues? - By running the Database monitor and with the help of the
Data Resource Graph we can find database-related issues. For example, you can specify the resource you
want to measure before running the Controller, and then you can see database-related issues.
69. Explain all the web recording options?
70. What is the difference between Overlay graph and Correlate graph? - Overlay Graph: it
overlays the contents of two graphs that share a common x-axis. The left Y-axis of the merged graph
shows the current graph's values and the right Y-axis shows the values of the graph that was
merged. Correlate Graph: it plots the Y-axes of two graphs against each other. The active graph's Y-
axis becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes
the merged graph's Y-axis.
71. How did you plan the Load? What are the Criteria? - Load test is planned to decide the number
of users, what kind of machines we are going to use and from where they are run. It is based on 2
important documents, Task Distribution Diagram and Transaction profile. Task Distribution
Diagram gives us the information on number of users for a particular transaction and the time of the
load. The peak usage and off-peak usage are decided from this diagram. The Transaction profile gives us the
information about the transactions name and their priority levels with regard to the scenario we are
deciding.
72. What does vuser_init action contain? - Vuser_init action contains procedures to login to a server.
73. What does vuser_end action contain? - Vuser_end section contains log off procedures.
74. What is think time? How do you change the threshold? - Think time is the time that a real user
waits between actions. Example: When a user receives data from a server, the user may wait
several seconds to review the data before responding. This delay is known as the think
time. Changing the Threshold: Threshold level is the level below which the recorded think time
will be ignored. The default value is five (5) seconds. We can change the think time threshold in the
Recording options of the Vugen.
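For illustration (C), a recorded pause between two steps is replayed, ignored or scaled according to the Run-Time Settings; the step names and URLs below are made up:
Action()
{
    web_url("search", "URL=http://server/search", LAST);
    lr_think_time(8);    /* the recorded user paused about 8 seconds before the next action */
    web_url("results", "URL=http://server/results", LAST);
    return 0;
}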
75. What is the difference between standard log and extended log? - The standard log sends a
subset of functions and messages sent during script execution to a log. The subset depends on the
Vuser type. The extended log sends detailed script execution messages to the output log. This is
mainly used during debugging when we want information about: Parameter substitution. Data
returned by the server. Advanced trace.
76. Explain the following functions: - lr_debug_message - The lr_debug_message function sends a
debug message to the output log when the specified message class is set. lr_output_message - The
lr_output_message function sends notifications to the Controller Output window and the Vuser log
file. lr_error_message - The lr_error_message function sends an error message to the LoadRunner
Output window. lrd_stmt - The lrd_stmt function associates a character string (usually a SQL
statement) with a cursor. This function sets a SQL statement to be processed. lrd_fetch - The
lrd_fetch function fetches the next row from the result set.
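A short illustrative sketch (C) showing some of these logging functions side by side; the message texts are made up:
Action()
{
    lr_output_message("Step completed");                   /* Controller Output window and Vuser log */
    lr_debug_message(LR_MSG_CLASS_EXTENDED_LOG,
                     "Server returned the expected page"); /* emitted only when the extended log class is active */
    lr_error_message("Unexpected response from server");   /* flagged as an error in the Output window */
    return 0;
}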
77. Throughput - If the throughput scales upward as time progresses and the number of Vusers
increase, this indicates that the bandwidth is sufficient. If the graph were to remain relatively
flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is
constraining the volume of data delivered.
78. Types of Goals in Goal-Oriented Scenario - Load Runner provides you with five different types
of goals in a goal oriented scenario:
1. The number of concurrent Vusers
2. The number of hits per second
3. The number of transactions per second
4. The number of pages per minute
5. The transaction response time that you want your scenario to achieve
79. Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated with the response time graph
you can see that as the number of Vusers increases, the average response time of the check itinerary
transaction very gradually increases. In other words, the average response time steadily increases as
the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We
say that the test broke the server. That is the mean time before failure (MTBF). The response
time clearly began to degrade when there were more than 56 Vusers running simultaneously.

WIN RUNNER QUESTIONS

1. Have you used WinRunner in your project? - Yes, I have been using WinRunner for creating
automated scripts for GUI, functional and regression testing of the AUT.
2. Explain WinRunner testing process? - WinRunner testing process involves six main stages
o Create GUI Map File so that WinRunner can recognize the GUI objects in the application
being tested
o Create test scripts by recording, programming, or a combination of both. While recording
tests, insert checkpoints where you want to check the response of the application being
tested.
o Debug Test: run tests in Debug mode to make sure they run smoothly
o Run Tests: run tests in Verify mode to test your application.
o View Results: determines the success or failure of the tests.
o Report Defects: If a test run fails due to a defect in the application being tested, you can
report information about the defect directly from the Test Results window.
3. What is contained in the GUI map? - WinRunner stores information it learns about a window or
object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads
an object’s description in the GUI map and then looks for an object with the same properties in the
application being tested. Each of these objects in the GUI Map file will be having a logical name
and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI
Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a
GUI Map file for each test created.
4. How does WinRunner recognize objects on the application? - WinRunner uses the GUI Map
file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to
locate objects. It reads an object’s description in the GUI map and then looks for an object with the
same properties in the application being tested.
5. Have you created test scripts and what is contained in the test scripts? - Yes I have created test
scripts. It contains the statement in Mercury Interactive’s Test Script Language (TSL). These
statements appear as a test script in a test window. You can then enhance your recorded test script,
either by typing in additional TSL functions and programming elements or by using WinRunner’s
visual programming tool, the Function Generator.
6. How does WinRunner evaluate test results? - Following each test run, WinRunner displays the
results in a report. The report details all the major events that occurred during the run, such as
checkpoints, error messages, system messages, or user messages. If mismatches are detected at
checkpoints during the test run, you can view the expected results and the actual results from the
Test Results window.
7. Have you performed debugging of the scripts? - Yes, I have performed debugging of scripts. We
can debug the script by executing the script in the debug mode. We can also debug script using the
Step, Step Into, Step out functionalities provided by the WinRunner.
8. How do you run your test scripts? - We run tests in Verify mode to test the application. Each
time WinRunner encounters a checkpoint in the test script, it compares the current data of the
application being tested to the expected data captured earlier. If any mismatches are found,
WinRunner captures them as actual results.
9. How do you analyze results and report the defects? - Following each test run, WinRunner
displays the results in a report. The report details all the major events that occurred during the run,
such as checkpoints, error messages, system messages, or user messages. If mismatches are
detected at checkpoints during the test run, you can view the expected results and the actual results
from the Test Results window. If a test run fails due to a defect in the application being tested, you
can report information about the defect directly from the Test Results window. This information is
sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.
10. What is the use of Test Director software? - TestDirector is Mercury Interactive’s software test
management tool. It helps quality assurance personnel plan and organize the testing process. With
TestDirector you can create a database of manual and automated tests, build test cycles, run tests,
and report and track defects. You can also create reports and graphs to help review the progress of
planning tests, running tests, and tracking defects before a software release.
11. Have you integrated your automated scripts from TestDirector? - When you work with
WinRunner, you can choose to save your tests directly to your TestDirector database or while
creating a test case in TestDirector we can specify whether the script is automated or manual.
And if it is automated script then TestDirector will build a skeleton for the script that can be later
modified into one which could be used to test the AUT.
12. What are the different modes of recording? - There are two types of recording in WinRunner.
Context Sensitive recording records the operations you perform on your application by
identifying Graphical User Interface (GUI) objects. Analog recording records keyboard input,
mouse clicks, and the precise x- and y-coordinates travelled by the mouse pointer across the screen.
13. What is the purpose of loading WinRunner Add-Ins? - Add-Ins are used in WinRunner to load
functions specific to the particular add-in to the memory. While creating a script only those
functions in the add-in selected will be listed in the function generator and while executing the
script only those functions in the loaded add-in will be executed else WinRunner will give an error
message saying it does not recognize the function.
14. What are the reasons that WinRunner fails to identify an object on the GUI? - WinRunner
fails to identify an object in a GUI due to various reasons. The object is not a standard windows
object. If the browser used is not compatible with the WinRunner version, GUI Map Editor will not
be able to learn any of the objects displayed in the browser window.
15. What is meant by the logical name of the object? - An object’s logical name is determined by its
class. In most cases, the logical name is the label that appears on an object.
16. If the object does not have a name then what will be the logical name? - If the object does not
have a name then the logical name could be the attached text.
17. What is the different between GUI map and GUI map files? - The GUI map is actually the sum
of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI
Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner
automatically creates a GUI Map file for each test created.

GUI Map file is a file which contains the windows and the objects learned by the WinRunner with
its logical name and their physical description.

18. How do you view the contents of the GUI map? - GUI Map editor displays the content of a GUI
Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor
displays the various GUI Map files created and the windows and objects learned in to them with
their logical name and physical description.
19. When you create the GUI map, do you record all the objects or only specific objects? - If we are
learning a window, then WinRunner automatically learns all the objects in the window; otherwise
we identify only those objects in the window that need to be learned, since we will be working with
only those objects while creating scripts.
20. What is the difference between QA and testing?

Testing: The process of executing a system with the intent of finding defects including test planning
prior to the execution of the test cases.

Quality Control: A set of activities designed to evaluate a developed working product.

Quality Assurance: A set of activities designed to ensure that the development and/or maintenance
process is adequate to ensure a system will meet its objectives.
The key difference to remember is that QA is interested in the process whereas testing and quality
control are interested in the product. Having a testing component in your development process
demonstrates a higher degree of quality (as in QA).

21. What is the Differences between WinRunner Versions 7.6 to 8.0 or 8.2?

WinRunner 8.0 adds support for Netscape 7.x, Windows 2003 and PowerBuilder, more printing options,
and an advanced function viewer window. WinRunner 8.2 adds support for SAP and multimedia
applications; it is the only version that supports SAP applications.

22. What are the methods to load GUI map? What happens when GUI map is loaded?
Using the GUI_load command, the GUI map can be loaded.
Syntax: GUI_load(<file_name>);
While loading the GUI map file, all the information about objects and windows, with their physical
descriptions and logical names, is loaded into memory. Whenever WinRunner executes the
script, the objects are identified using the information already loaded in memory.
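A rough TSL sketch; the map file path and the window and object names are illustrative only:
# load the GUI map, then operate on objects through their logical names
GUI_load("C:\\gui_maps\\flight_app.gui");
set_window("Login", 10);               # wait up to 10 seconds for the Login window
edit_set("Agent Name:", "mercury");
button_press("OK");
GUI_unload("C:\\gui_maps\\flight_app.gui");   # optionally release the map when finished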

How to record a data driven test script using data driver wizard?
You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-
driven test. For example, your test script may include recorded operations, checkpoints, and other
statements that do not need to be repeated for multiple sets of data. You need to parameterize only the
portion of your test script that you want to run in a loop with multiple sets of data.
To create a data-driven test:

• If you want to turn only part of your test script into a data-driven test, first select those lines in the
test script.
• Choose Tools > DataDriver Wizard.
• If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the
test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test,
click Next.
• The Use a new or existing Excel table box displays the name of the Excel file that WinRunner
creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a
different name for the data table, or use the browse button to locate the path of an existing data table.
By default, the data table is stored in the test folder.
• In the Assign a name to the variable box, enter a variable name with which to refer to the data table,
or accept the default name, “table.”
• At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of
the table variable. Throughout the script, only the table variable name is used. This makes it easy for
you to assign a different data table to the script at a later time without making changes throughout the
script.
• Choose from among the following options:

-> Add statements to create a data-driven test: Automatically adds statements to run your test in a loop:
sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a
ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data
table; and adds ddt_open and ddt_close statements to your test script to open and close the data table,
which are necessary in order to iterate over rows in the table. Note that you can also add these
statements to your test script manually (a sketch of the generated loop appears after this procedure).
-> If you do not choose this option, you will receive a warning that your data-driven test must contain a
loop and statements to open and close your data table.
-> Import data from a database: Imports data from a database. This option adds ddt_update_from_db, and
ddt_save statements to your test script after the ddt_open statement.
-> Note that in order to import data from a database, either Microsoft Query or Data Junction must be
installed on your machine. You can install Microsoft Query from the custom installation of Microsoft
Office. Note that Data Junction is not automatically included in your WinRunner package. To purchase
Data Junction, contact your Mercury Interactive representative. For detailed information on working with
Data Junction, refer to the documentation in the Data Junction package.
-> Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with
parameters, using the ddt_val function, and in the data table, adds columns with variable values for the
parameters. Line by line: Opens a wizard screen for each line of the selected test script, which enables you
to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table
or use an existing column when parameterizing data.
-> Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The
first argument of the function is the name of the column in the data table. The replaced data is inserted into
the table.

• The Test script line to parameterize box displays the line of the test script to parameterize. The
highlighted value can be replaced by a parameter. The Argument to be replaced box displays the
argument (value) that you can replace with a parameter. You can use the arrows to select a different
argument to replace.

Choose whether and how to replace the selected data:

1. Do not replace this data: Does not parameterize this data.


2. An existing column: If parameters already exist in the data table for this test, select an existing
parameter from the list.
3. A new column: Creates a new column for this parameter in the data table for this test. Adds the
selected data to this column of the data table. The default name for the new parameter is the logical
name of the object in the selected TSL statement above. Accept this name or assign a new name.

• The final screen of the wizard opens.


• If you want the data table, to open after you close the wizard, select Show data table now.
• To perform the tasks specified in previous screens and close the wizard, click Finish.
• To close the wizard without making any changes to the test script, click Cancel.
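For reference, the loop the wizard generates is roughly of the following shape (an illustrative TSL sketch; the table name, column name and object names are made up):
table = "default.xls";                              # data table created by the wizard
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open data table.");
ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                          # move to the next data row
    set_window("Login", 10);
    edit_set("Agent Name:", ddt_val(table, "AgentName"));   # value from the "AgentName" column
    button_press("OK");
}
ddt_close(table);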

What is the purpose of set_window command?


Set_Window command sets the focus to the specified window. We use this command to set the
focus to the required window before executing tests on a particular window.
Syntax: set_window (, time);
The logical name is the logical name of the window and time is the time the execution has to wait till it
gets the given window into focus.
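A short TSL sketch; the window and object names below are illustrative examples only:
set_window("Flight Reservation", 10);   # wait up to 10 seconds and give the window focus
menu_select_item("File;Open Order...");
set_window("Open Order", 5);
button_set("Order No.", ON);
edit_set("Edit", "4");
button_press("OK");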

What different actions are performed by the Find and Show buttons?

To find a particular object from the GUI Map file in the application, select the object in the GUI
Map Editor and click the Show button; this makes the selected object blink in the application.

To find a particular object in a GUI Map file, click the Find button; this lets you select the object in
the application. When the object is selected, if it has been learned into the GUI Map file, it will be
highlighted (focused) in the GUI Map file.

How do you identify which files are loaded in the GUI map?
The GUI Map Editor has a drop down “GUI File” displaying all the GUI Map files loaded into the
memory.

How do you modify the logical name or the physical description of the objects in GUI map?
You can modify the logical name or the physical description of an object in a GUI map file using
the GUI Map Editor.

How does WinRunner handle varying window labels?


We can handle varying window labels using regular expressions. WinRunner uses two “hidden”
properties in order to use regular expression in an object’s physical description. These properties are
regexp_label and regexp_MSW_class.

• The regexp_label property is used for windows only. It operates “behind the scenes” to insert a
regular expression into a window’s label description.
• The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is
obligatory for all types of windows and for the object class object.
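For illustration only (a hedged sketch; the window label is hypothetical and the exact GUI map notation may vary slightly between versions), a physical description that matches any window whose label begins with "Flight Reservation" might look like:

{
class: window,
regexp_label: "Flight Reservation.*"
}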

How do you copy and move objects between different GUI map files?
We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps
to be followed are:

• Choose Tools > GUI Map Editor to open the GUI Map Editor.
• Choose View > GUI Files.
• Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files
simultaneously.
• View a different GUI map file on each side of the dialog box by clicking the file names in the GUI
File lists.
• In one file, select the objects you want to copy or move. Use the Shift key and/or Control key to
select multiple objects. To select all objects in a GUI map file, choose Edit > Select All.
• Click Copy or Move.
• To restore the GUI Map Editor to its original size, click Collapse

How do you suppress a regular expression?


We can suppress the regular expression of a window by replacing the regexp_label property with
label property.

How do you filter the objects in the GUI map?


GUI Map Editor has a Filter option. This provides for filtering with 3 different types of options.

• Logical name displays only objects with the specified logical name.
• Physical description displays only objects matching the specified physical description. Use any
substring belonging to the physical description.
• Class displays only objects of the specified class, such as all the push buttons.

What is the purpose of GUI map configuration?


GUI Map configuration is used to map a custom object to a standard object.

How do you make the configuration and mappings permanent?


The mapping and the configuration you set are valid only for the current WinRunner session. To
make the mapping and the configuration permanent, you must add configuration statements to your startup
test script.
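A hedged sketch of what such a startup test might contain (the file path and the custom class name are hypothetical; set_class_map is assumed to be the statement generated by the GUI Map Configuration dialog when a custom class is mapped to a standard class):

# load the application's GUI map so it is available to every test in the session
GUI_load ("c:\\qa\\gui_maps\\flights.gui");

# map the custom class "AcmeButton" to the standard push_button class
set_class_map ("AcmeButton", "push_button");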

What is the purpose of GUI Spy?


Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the
Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy
dialog box. You can choose to view all the properties of an object, or only the selected set of properties
that WinRunner learns.

What is the purpose of location indicator and index indicator in GUI map configuration?
In cases where the obligatory and optional properties do not uniquely identify an object,
WinRunner uses a selector to differentiate between them. Two types of selectors are available:

• A location selector uses the spatial position of objects.

The location selector uses the spatial order of objects within the window, from the top left to the bottom
right corners, to differentiate among objects with the same description.

• An index selector uses a unique number to identify the object in a window.

The index selector uses numbers assigned at the time of creation of objects to identify the object in a
window. Use this selector if the location of objects with the same description may change within a
window.
How do you find out which is the start up file in WinRunner?
The test script name in the Startup Test box in the Environment tab in the General Options dialog
box is the start up file in WinRunner.
What are the virtual objects and how do you learn them?

• Applications may contain bitmaps that look and behave like GUI objects. WinRunner records
operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual
object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record
and run tests.
• Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the
coordinates of that object, and assign it a logical name.

What are the two modes of recording?


There are 2 modes of recording in WinRunner

• Context Sensitive recording records the operations you perform on your application by
identifying Graphical User Interface (GUI) objects.
• Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates
traveled by the mouse pointer across the screen.

What is a checkpoint and what are different types of checkpoints?


Checkpoints allow you to compare the current behavior of the application being tested to its
behavior in an earlier version. You can add four types of checkpoints to your test scripts:
GUI checkpoints verify information about GUI objects. For example, you can check that a button is
enabled or see which item is selected in a list.

Bitmap checkpoints take a “snapshot” of a window or area of your application and compare this to an
image captured in an earlier version.

Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.

Database checkpoints check the contents and the number of rows and columns of a result set, which is
based on a query you create on your database.

What are data driven tests?


When you test your application, you may want to check how it performs the same operations with
multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop
runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must
link the data to the test script which it drives. This is called parameterizing your test. The data is stored in
a data table. You can perform these operations manually, or you can use the DataDriver Wizard to
parameterize your test and store the data in a data table.
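A minimal hedged sketch of such a data-driven loop using the ddt_* functions (the table name, window and field names are illustrative):

table = "default.xls";                      # data table associated with the test
ddt_open (table, DDT_MODE_READ);            # open the table for reading
ddt_get_row_count (table, row_count);       # number of data rows
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                 # make row i the active row
    set_window ("Login", 10);
    edit_set ("Agent Name:", ddt_val (table, "Name"));   # parameterized value
    button_press ("OK");
}
ddt_close (table);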

What are the synchronization points?


Synchronization points enable you to solve anticipated timing problems between the test and your
application. For example, if you create a test that opens a database application, you can add a
synchronization point that causes the test to wait until the database records are loaded on the screen.

For analog testing, you can also use a synchronization point to ensure that WinRunner repositions a
window at a specific location. When you run a test, the mouse cursor travels along exact coordinates.
Repositioning the window enables the mouse pointer to make contact with the correct elements in the
window.
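For example, a Context Sensitive synchronization point that waits for an object property can be hand coded with obj_wait_info (the window, object and property names are illustrative):

set_window ("Flight Reservation", 10);
# wait up to 30 seconds for the Insert Order button to become enabled
obj_wait_info ("Insert Order", "enabled", 1, 30);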
What is parameterizing?
In order for WinRunner to use data to drive the test, you must link the data to the test script, which it
drives. This is called parameterizing your test. The data is stored in a data table.

How do you maintain the document information of the test scripts?


Before creating a test, you can document information about the test in the General and Description
tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality
tested, a detailed description of the test, and a reference to the relevant functional specifications document.

What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?
You can check a single property of a GUI object. For example, you can check whether a button is
enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property
value, use the Check Property dialog box to add one of the following functions to the test script:

• button_check_info
• scroll_check_info
• edit_check_info
• static_check_info
• list_check_info
• win_check_info
• obj_check_info

Syntax:
button_check_info (button, property, property_value);
edit_check_info (edit, property, property_value);
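For instance (a hedged sketch; the object names and property values are illustrative), checking that a push button is enabled and that an edit field holds an expected value could generate:

button_check_info ("Insert Order", "enabled", 1);
edit_check_info ("Order No:", "value", "37");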

What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?

• You can create a GUI checkpoint to check a single object in the application being tested. Either you
can check the object with its default properties or you can specify which properties to check.

Creating a GUI Checkpoint using the Default Checks

• You can create a GUI checkpoint that performs a default check on the property recommended by
WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check
verifies that the push button is enabled.
• To create a GUI checkpoint using default checks:
• Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for
Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK
GUI FOR OBJECT/WINDOW soft key in order to avoid extraneous mouse movements. Note that you
can press the CHECK GUI FOR OBJECT/WINDOW soft key in Context Sensitive mode as well. The
WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window
opens on the screen.
• Click an object.
• WinRunner captures the current value of the property of the GUI object being checked and stores it
in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is
inserted in the test script as an obj_check_gui statement.

Syntax: win_check_gui (window, checklist, expected_results_file, time);


Creating a GUI Checkpoint by Specifying which Properties to Check

• You can specify which properties to check for an object. For example, if you create a checkpoint
that checks a push button, you can choose to verify that it is in focus, instead of enabled.

To create a GUI checkpoint by specifying which properties to check:

• Choose Create > GUI Checkpoint > For Object/Window, or click the GUI Checkpoint for
Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK
GUI FOR OBJECT/WINDOW soft key in order to avoid extraneous mouse movements. Note that you
can press the CHECK GUI FOR OBJECT/WINDOW soft key in Context Sensitive mode as well. The
WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window
opens on the screen.
• Double-click the object or window. The Check GUI dialog box opens.
• Click an object name in the Objects pane. The Properties pane lists all the properties for the
selected object.
• Select the properties you want to check.
• To edit the expected value of a property, first select it. Next, either click the Edit Expected Value
button, or double-click the value in the Expected Value column to edit it.
• To add a check in which you specify arguments, first select the property for which you want to
specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments
column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify
arguments for a check on this property. (You do not need to specify arguments if a default argument is
specified.) When checking standard objects, you only specify arguments for certain properties of edit
and static text objects. You also specify arguments for checks on certain properties of nonstandard
objects.
• To change the viewing options for the properties of an object, use the Show Properties buttons.
• Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores
it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is
inserted in the test script as an obj_check_gui or a win_check_gui statement.

Syntax: win_check_gui ( window, checklist, expected_results_file, time );


obj_check_gui ( object, checklist, expected_results_file, time );

What information is contained in the checklist file and in which file expected results are stored?
The checklist file contains information about the objects and the properties of the objects we are
verifying.

The gui*.chk file, stored in the test’s exp folder, contains the expected results.

What do you verify with the database checkpoint default and what command it generates, explain
syntax?

By adding runtime database record checkpoints you can compare the information in your
application during a test run with the corresponding record in your database. By adding standard database
checkpoints to your test scripts, you can check the contents of databases in different versions of your
application.

When you create database checkpoints, you define a query on your database, and your database
checkpoint checks the values contained in the result set. The result set is set of values retrieved from the
results of the query.
You can create runtime database record checkpoints in order to compare the values displayed in
your application during the test run with the corresponding values in the database. If the comparison does
not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a
successful runtime database record checkpoint as one where one or more matching records were found,
where exactly one matching record was found, or where no matching records were found.

You can create standard database checkpoints to compare the current values of the properties of the
result set during the test run to the expected values captured during recording or otherwise set before the
test run. If the expected results and the current results do not match, the database checkpoint fails. Standard
database checkpoints are useful when the expected results can be established before the test run.

Syntax: db_check (<checklist_file>, <expected_results_file>);

• You can add a runtime database record checkpoint to your test in order to compare
information that appears in your application during a test run with the current value(s) in the
corresponding record(s) in your database. You add runtime database record checkpoints by
running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts
the appropriate db_record_check statement into your script.

Syntax:
db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );

Checklist Filename: A file created by WinRunner and saved in the test's checklist folder. The file contains
information about the data to be captured during the test run and its corresponding field in the database.
The file is created based on the information entered in the Runtime Record Verification wizard.
Success Conditions: contains one of the following values:

1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.

Record Number: An out parameter returning the number of records in the database.
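A hedged example of the statement the wizard inserts (the checklist file name is hypothetical; the success condition is one of the three constants listed above):

db_record_check ("flight_record.cvr", DVR_ONE_MATCH, rec_num);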

How do you edit the expected value of an object?


We can modify the expected value of an object by executing the script in Update mode. We
can also manually edit the gui*.chk file (in the test’s exp folder), which contains the expected values.

How do you modify the expected results of a GUI checkpoint?


We can modify the expected results of a GUI checkpoint by running the script containing the
checkpoint in Update mode.

How do you create ODBC query?


We can create an ODBC query using the database checkpoint wizard. It provides an option to create
an SQL file that uses an ODBC DSN to connect to the database. The SQL file contains the connection
string and the SQL statement.

How do you record a data driven test?


We can create a data-driven test using data from a flat file, a data table or a database.
• Flat file: we store the data to be used, in the required format, in the file. We access the file
using the file manipulation commands, read data from the file and assign the data to
variables (a sketch follows this list).
• Data table: an Excel file. We can store test data in these files and manipulate them.
We use the ‘ddt_*’ functions to manipulate data in the data table.
• Database: we store test data in a database and access it using the ‘db_*’ functions.
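A minimal flat-file sketch using the TSL file manipulation functions (the file path, window and field names are illustrative):

file = "c:\\qa\\data\\agents.txt";
file_open (file, FO_MODE_READ);              # open the data file for reading
while (file_getline (file, line) == 0)       # read one line per iteration
{
    set_window ("Login", 10);
    edit_set ("Agent Name:", line);          # drive the test with the data read
    button_press ("OK");
}
file_close (file);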

How do you convert a database file to a text file?


You can use Data Junction to create a conversion file which converts a database to a target text file.

How do you parameterize database check points?


When you create a standard database checkpoint using ODBC (Microsoft Query), you can add
parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a
database checkpoint with a query in which the SQL statement defining your query changes.

How do you create parameterize SQL commands?


A parameterized query is a query in which at least one of the fields of the WHERE clause is
parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the
following SQL statement is based on a query on the database in the sample Flight Reservation application:

SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights


WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)

SELECT defines the columns to include in the query.


FROM specifies the path of the database.
WHERE (optional) specifies the conditions, or filters to use in the query.
Departure is the parameter that represents the departure point of a flight.
Day_Of_Week is the parameter that represents the day of the week of a flight.

• When creating a database checkpoint, you insert a db_check statement into your test script.
When you parameterize the SQL statement in your checkpoint, the db_check function has a
fourth, optional, argument: the parameter_array argument. A statement similar to the
following is inserted into your test script:

db_check ("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);

The parameter_array argument will contain the values to substitute for the parameters in the
parameterized checkpoint.
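A hedged sketch of how the parameter array might be populated before the checkpoint runs (the values are illustrative and correspond to the two ? placeholders in the query above):

dbvf1_params[1] = "Denver";     # value substituted for Flights.Departure = ?
dbvf1_params[2] = "Monday";     # value substituted for Flights.Day_Of_Week = ?
db_check ("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);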

Explain the following WinRunner commands.

• db_connect - to connect to a database

db_connect(<session_name>, <connection_string>);

• db_execute_query - to execute a query

db_execute_query ( session_name, SQL, record_number );


[record_number is the out value]
• db_get_field_value - returns the value of a single field in the specified row_index and
column in the session_name database session.

db_get_field_value ( session_name, row_index, column );

• db_get_headers - returns the number of column headers in a query and the content of the
column headers, concatenated and delimited by tabs.

db_get_headers ( session_name, header_count, header_content );

• db_get_row - returns the content of the row, concatenated and delimited by tabs.

db_get_row ( session_name, row_index, row_content );

• db_write_records - writes the record set into a text file delimited by tabs.

db_write_records ( session_name, output_file [ , headers [ , record_limit ] ] );

• db_get_last_error - returns the last error message of the last ODBC or Data Junction
operation in the session_name database session.

db_get_last_error ( session_name, error );

• db_disconnect - disconnects from the database and ends the database session.

db_disconnect ( session_name );

• db_dj_convert - runs the djs_file Data Junction export file. When you run this file, the Data
Junction Engine converts data from one spoke (source) to another (target). The optional
parameters enable you to override the settings in the Data Junction export file.

db_dj_convert ( djs_file [ , output_file [ , headers [ , record_limit ] ] ] );
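Putting a few of these functions together, a hedged end-to-end sketch (the session name, DSN, query, and output file are hypothetical):

rc = db_connect ("query1", "DSN=Flight32");                        # open a database session
db_execute_query ("query1", "SELECT * FROM Orders", rec_count);    # rec_count is an out parameter
db_write_records ("query1", "c:\\qa\\out\\orders.txt", TRUE, NO_LIMIT);  # dump the result set
db_disconnect ("query1");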

Explain Get Text checkpoint from object/window with syntax?


We use obj_get_text (<logical_name>, <out_text>) function to get the text from an object.
We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

Explain Get Text checkpoint from screen area with syntax?


We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
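For example (the window name, object name and coordinates are illustrative):

set_window ("Flight Reservation", 10);
obj_get_text ("Order No:", order_text);                             # text from an object
win_get_text ("Flight Reservation", area_text, 10, 10, 200, 60);    # text from a screen area
report_msg ("Captured order number: " & order_text);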

Explain Get Text checkpoint from selection (web only) with syntax?
Returns a text string from an object.

web_obj_get_text (object, table_row, table_column, out_text [, text_before, text_after, index]);

• object: The logical name of the object.


• table_row: If the object is a table, it specifies the location of the row within a table. The
string is preceded by the # character.
• table_column: If the object is a table, it specifies the location of the column within a table.
The string is preceded by the # character.
• out_text: The output variable that stores the text string.
• text_before: Defines the start of the search area for a particular text string.
• text_after: Defines the end of the search area for a particular text string.
• Index: The occurrence number to locate (the default is 1).
What is Six Sigma?
Six Sigma stands for Six Standard Deviations from mean. Initially defined as a metric for measuring
defects and improving quality, a methodology to reduce defect levels below 3.4 Defects per one Million
Opportunities.
Six Sigma incorporates the basic principles and techniques used in Business, Statistics, and Engineering.
These three form the core elements of Six Sigma. Six Sigma improves the process performance, decreases
variation and maintains consistent quality of the process output. This leads to defect reduction and
improvement in profits, product quality and customer satisfaction.
Six Sigma experts (Green Belts and Black Belts) evaluate a business process and determine ways to
improve upon the existing process.

What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no
preparation is usually required.

What is an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a
moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such
as a requirements spec or a test plan, and the purpose is to find problems and see what is missing, not to fix
anything. Attendees should prepare for this type of meeting by reading through the document; most problems
will be found during this preparation. The result of the inspection meeting should be a written report.
Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost effective
methods of ensuring quality.

What are five common problems in the software development process?

• Poor requirements - if requirements are unclear, incomplete, too general, and not testable, there will
be problems.
• Unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
• Inadequate testing - no one will know whether or not the program is any good until the customer
complains or systems crash.
• Featuritis - requests to pile on new features after development is underway; extremely common.
• Miscommunication - if developers do not know what is needed or customers have erroneous
expectations, problems are guaranteed.

What are five common solutions to software development problems?

• Solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are
agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type
environments, continuous close coordination with customers/end-users is necessary.
• Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing,
changes, and documentation; personnel should be able to complete the project without burning out.
• Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for
testing and bug fixing. 'Early' testing ideally includes unit testing by developers and built-in testing
and diagnostic capabilities.
• Stick to initial requirements as much as possible - be prepared to defend against excessive changes
and additions once development has begun, and be prepared to explain consequences. If changes
are necessary, they should be adequately reflected in related schedule changes. If possible, work
closely with customers/end-users to manage expectations. This will provide them a higher comfort
level with their requirements decisions and minimize excessive changes later on.
• Communication - require walkthroughs and inspections when appropriate; make extensive use of
group communication tools - groupware, wikis, bug-tracking tools and change management tools,
intranet capabilities, etc.; ensure that information/documentation is available and up-to-date -
preferably electronic, not paper; promote teamwork and cooperation; use prototypes and/or
continuous communication with end-users if possible to clarify expectations.

What is 'good code'?


'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations
have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about
what's best, or what is too many or too few rules. There are also various theories and metrics, such as
McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle
productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for
problems and enforce standards.

What is 'good design'?


'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good
internal design is indicated by software code whose overall structure is clear, understandable, easily
modifiable, and maintainable; is robust with sufficient error handling and status logging capability; and
works correctly when implemented. Good functional design is indicated by an application whose
functionality can be traced back to customer and end-user requirements.

Software Quality Assurance

SQA is the planned and systematic set of activities that ensure that software process and products conform
to requirements, standards, and procedures. "Processes" include all activities involved in designing, coding,
testing and maintaining; "products" include software, associated data, documentation, and all supporting
and reporting paperwork.

The role of SQA is to give management the assurance that the officially established process is actually
being implemented. It ensures that:

1. An appropriate development methodology is in place.
2. The projects use standards and procedures in their work.
3. Reviews and audits are conducted.
4. Documentation is produced to support maintenance and enhancement.
5. Software Configuration Management is set up to control change.
6. Testing is performed and passed.

Goals of SQA
Software development is a complex process full of risk. There are technical risks, such as the
software not performing as intended or being too hard to operate, modify, and/or maintain; there are
programmatic risks, such as the project overrunning cost or schedule. The goals of SQA are to
reduce these risks by:

• Appropriately monitoring the software and the development process.


• Ensuring full compliance with standards and procedures for software and process.
• Ensuring that inadequacies in product, process, or standards are brought to management's attention
so that they can be fixed.

Responsibilities of SQA

To achieve its goals, SQA is responsible for:

1. Review all development and quality plans for completeness.
2. Participate as inspection moderators in design and code inspections.
3. Review all test plans for adherence to standards.
4. Review samples of all test results to determine adherence to plans.
5. Periodically audit SCM performance to determine adherence to standards.
6. Participate in all project phase reviews and write down any nonconformance.

Creating SQA Plan


An effective SQA program requires forward planning and follow-through.
The SQA plan specifies its goals, the tasks to be performed, and the standards and
procedures against which the development work is to be measured.

The IEEE standard for SQA plan preparation contains the following:

1. Purpose
2. Reference Documents
3. Management
4. Documentation
5. Standards, Practices, and Conventions
6. Reviews and Audits
7. Software Configuration Management
8. Problem Reporting and Corrective Action
9. Tools, Techniques, and Methodologies
10. Code Control
11. Media Control
12. Supplier Control
13. Records Collection, Maintenance, and Retention

Documentation
The documentation section should describe the documentation to be produced and how it is
to be reviewed. These include:

1. Software Requirements Specification, which specifies each software function, performance
parameter, interface, or other attribute with sufficient precision to permit its verification.
2. Software Design Description, which describes the major components, databases, and internal
interfaces.
3. Software Verification and Validation Plan, which describes the methods used to verify that the
requirements are implemented in the design, that the design is implemented in the code, and that
the code meets the requirements.
4. Software Verification and Validation Report, which is used to report on the SQA verification and
validation activities.
5. User Documentation, which is required for installation, operation, and maintenance of the software.
6. Other, includes software development plan, the software configuration management plan, the
standards and procedures manual, together with the planned review methods.

Standards and Procedures


The standards are the criteria to which software products are compared. Procedures are the
criteria to which development and control processes are compared. They are closely related: the
standards define what should be done, while procedures define how the work is actually to be
done, by whom, when, and what is done with the results.

The minimum requirement for standards includes:

1. Documentation Standards specify form and content for planning, control, and product
documentation and provide consistency throughout a project.
2. Design Standards specify the form and content of the design product. They provide rules and
methods for translating the software requirements into the software design and for representing it in
the design documentation.
3. Code Standards specify the language in which the code is to be written and define any restrictions
on use of language features. They define legal language structure, style conventions, rules for data
structures, and internal code documentation.

SQA Activities
SQA activities include product evaluation and process monitoring, which ensure the product
and the process used in development are correctly carried out and standards are followed. SQA
audit, another SQA activity, is a key technique used to perform product evaluation and process
monitoring.

Product evaluation ensures that standards are followed. It assures that clear and achievable
standards exist and evaluates the compliance of the software product with those standards.

Process monitoring ensures that the appropriate steps to carry out the process are being followed. SQA
monitors processes by comparing the actual steps performed with the established procedures.

Audit is a fundamental SQA technique. It looks at a process or a product in depth, comparing it with
established standards and procedures.

Mercury WinRunner - Features and Benefits

Increase power and flexibility of tests without any programming: The Function Generator presents a
quick and error-free way to design tests and enhance scripts without any programming knowledge. Testers
can simply point at a GUI object, and Mercury WinRunner® will examine it, determine its class and
suggest an appropriate function to be used.

Use multiple verification types to ensure sound functionality: Mercury WinRunner provides
checkpoints for text, GUI, bitmaps, URL links and the database, allowing testers to compare expected and
actual outcomes and identify potential problems with numerous GUI objects and their functionality.

Verify data integrity in your back-end database: Built-in Database Verification confirms values stored
in the database and ensures transaction accuracy, as well as the data integrity of records that have been
updated, deleted, and added.

View, store, and verify at a glance every attribute of tested objects: Mercury WinRunner's GUI Spy
automatically identifies, records, and displays the properties of standard GUI objects, ActiveX controls, as
well as Java objects and methods. This ensures that every object in the user interface is recognized by the
script and can be tested.

Maintain tests and build reusable scripts: The GUI map provides a centralized object repository,
allowing testers to verify and modify any tested object. These changes are then automatically propagated to
all appropriate scripts, eliminating the need to build new scripts each time the application is modified.

Test multiple environments with a single tool: Mercury WinRunner supports more than 30
environments, including Web, Java, Visual Basic, etc. In addition, it provides targeted solutions for such
leading ERP/CRM applications as SAP, Siebel, PeopleSoft, and a number of others.

Simplify creation of test scripts: Mercury WinRunner's Data Driver Wizard greatly simplifies the process
of preparing test data and scripts. This allows for optimal use of QA resources and results in more
thorough testing.
Automatically identify discrepancies in data: Mercury WinRunner examines and compares expected
and actual results using multiple verifications for text, GUI, bitmaps, URLs, and databases. This ensures
stable functionality and execution of business transactions when the application is released into production.

Validate applications across browsers: Mercury WinRunner enables you to use the same test to validate
applications in Internet Explorer, Netscape, and AOL. This saves testing time and reduces the number of
scripts that must be developed and maintained.

Automatically recover tested applications from a crash: Unexpected events, errors, and application
crashes during a test run can disrupt the testing process and distort results. Mercury WinRunner's Recovery
Manager enables unattended recovery and provides a wizard that guides the process of defining a recovery
scenario.

Leverage investments in other testing products: Mercury WinRunner fully integrates with our other
testing solutions, including Mercury LoadRunner® for load testing and Mercury TestDirector™ for global
test management. Moreover, organizations can reuse Mercury WinRunner test scripts with Mercury
QuickTest Professional™.

Full integration with Mercury Business Process Testing: With Mercury WinRunner 8.2's compatibility
with Mercury Business Process Testing™, you have the ability to create business process components as
well as convert existing Mercury WinRunner scripts to components. With Mercury Business Process
Testing, subject matter experts and automation engineers collaborate to increase effectiveness.

GUI MAP EDITOR

The GUI Map Editor can be used to move and copy objects between GUI map files. The steps are:
• Go to Tools > GUI Map Editor.
• Go to View > GUI Files.
• Click Expand in the GUI Map Editor. The dialog box expands and two GUI map files are
displayed.
• A different GUI map file can be viewed on each side of the dialog box by clicking the file names
listed in the GUI File lists.
• In one file, select the objects to be copied or moved; to select multiple objects use the Shift and/or
Control keys, or use Edit > Select All.
• Click Copy or Move.
• Use the Collapse option to restore the GUI Map Editor to its original size.

QUICK TEST PROFESSIONAL


Quick Test Professional (QTP) is an automated functional Graphical User Interface (GUI) testing tool
created by the HP subsidiary Mercury Interactive that allows the automation of user actions on a web or
client based computer application. It is primarily used for functional regression test automation. QTP uses
a scripting language built on top of Visual Basic Script to specify the test procedure, and to manipulate the
objects and controls of the application under test.

Overview
Quick Test Professional (QTP) is a UI Automation framework designed mainly for Windows
application and web applications. It works by identifying the objects in the application UI or a web
page and performing the desired operations on them (like mouse clicks or keyboard events); it can
also be used to capture object properties like name, handler ID etc.

Some of the key features that can be used in the automation are:

Plug-ins
Plug-ins are used to make recording better suited to a specific application; for example, we use the web
plug-in to automate test cases for web applications. QTP has default plug-ins for
ActiveX controls, web applications and VB objects. Plug-ins for other technologies, such as
Microsoft .NET objects, are also available.

Record and Playback


Initial development of automated tests with QTP is usually done by record-and-playback. A
user's actions are captured via Microsoft's Component Object Model (COM). These are recorded
as VB script-like commands into "Actions", which are similar to functions. All manipulated objects
that QTP recognizes are also added to the object repository. Once recorded one gets editable QTP
scripts.

After clicking on the playback button all these actions will be played back. During playback, QTP attempts
to identify and manipulate the same objects, again using COM.

Active Screen
QTP captures the screen of the application and saves it along with the script. The Active
Screen section highlights the objects on the captured screen as the user navigates through the script, so
the user knows which object he/she will be performing the action upon. The Active Screen is also helpful
in creating checkpoints.

Checkpoints
Checkpoints are an important feature in QTP, used for verification of the tests. One can
check each event in the automation and can add a checkpoint to check whether a particular object,
text or bitmap is present in the automation run. Regular expressions can also be used in
checkpoints.

In short, we can say "checkpoints are used to check the behaviour of the application being tested".

Recovery
Recovery is a concept like exception handling in a programming language, which can be
applied when an unexpected failure occurs. For instance if an application crash occurs and a
message dialog appears, QTP can be instructed to open the application and continue with the rest of
the test cases from there.

Output Value
This feature is used to output values to the data table when a particular event occurs. For
instance, if one wants to store the default value of a text box before the start of the test, one can capture it
as an output value and store it in the data table.

Data Table
The data table is a Microsoft Excel workbook that can be easily accessed from within QTP.
It contains one worksheet for each Action, including the hidden global action at the top of the
action tree (see Keyword View). It is primarily used to implement keyword-driven testing. There
are two types of Data Tables available in QTP: the Global data sheet and the Local data sheet.

Virtual Objects
As this tool is designed to be used across applications, some objects may not be recognized
properly. So it is designed with a virtual object concept which can be used to overcome this
drawback. If an object is not recognized, one can define it as a virtual object and
build a virtual object repository. The next time the object is encountered, it will be recognized as a
virtual object from your repository and playback will be possible.

Transactions
This feature can be used to calculate the time required to run a particular test or particular
steps in the tests. Transactions are most often used in automated software performance testing, so
they are infrequently used in QTP.

Results
QTP generates the result file for the test cases at the end of the test in the form of an XML tree. It
gives complete details of PASS and FAIL counts, along with the traceability of failures with
appropriate messages. It is easy to verify the results from these files.

User Interface
QuickTest provides two main views of a script: Keyword View and Expert View. They are selected
from tabs at the bottom of the QTP window.

Keyword View
Keyword View is QTP's default test procedure interface. It displays a test procedure as a
tree of Actions and functions. The tree contains columns listing the Action or function name, any
parameters, and comments. This mode is most useful for the beginners in operating with the tool.
This view allows the user to select the objects either from the application or from the Object
Repository and the user can select the methods to be performed on those objects. The script is
automatically generated. User can also set checkpoints from keyword view.

How many types of recording modes in QTP?


QuickTest's Normal recording mode records the objects in your application and the operations
performed on them. This mode is the default and takes full advantage of QuickTest's test object model,
recognizing the objects in your application regardless of their location on the screen.
When working with specific types of objects or operations, however, you may want to choose from the
following, alternative recording modes:
Analog Recording: this mode records the exact mouse and keyboard operations you perform in relation
to the screen or the application window. This mode is useful for operations that you cannot record at the
object level, such as drawing a picture or recording a signature. The steps recorded using analog mode are
saved in a separate data file; QuickTest adds to your test a Run Analog File statement that calls the recorded
analog file. This file is stored with the action in which these analog steps are created.
Note: You cannot edit analog recording steps from within QuickTest.
Low-Level Recording:—enables you to record on any object in your application, whether or not
QuickTest recognizes the specific object or the specific operation. This mode records at the object level
and records all run-time objects as Window or Win Object test objects. Use low-level recording for
recording in an environment or on an object not recognized by QuickTest. You can also use low-level
recording if the exact coordinates of the object are important for your test or component.
Note: Steps recorded using low-level mode may not run correctly on all objects.

What are the important differences between QTP and WinRunner?


QTP: can compare both static and dynamic images.

WinRunner: can conduct testing on static images only.

QTP: three types of recording are possible:

1) Normal (General) recording 2) Analog recording 3) Low-level recording

WinRunner: two types of recording are possible:

1) Context Sensitive recording 2) Analog recording

QTP supports technologies such as SAP applications, Macromedia applications, PeopleSoft, etc. that
are not supported by WinRunner. QTP supports VBScript.
In WinRunner there are two types of TSL test scripts, i.e.
1. Main test
2. Compiled module
But in QTP there is only one type of test script, i.e. the main test. It does not support compiled modules.
QTP supports .NET applications and full-fledged Java applications; these are not supported by WinRunner.

QTP has nine checkpoints whereas WinRunner has four checkpoints.
We can create reusable actions in QTP but not in WinRunner.
How to retrieve the property of an object in QTP?
Using GetROProperty ("property") we can retrieve the run-time value of whatever object property we
want; the Object Spy can also be used to view object properties.

GetTOProperty ("property") returns the property value stored in the test object, which QTP uses at run time to identify the object.

When to insert transactions in QTP?


We use transaction points in QTP to find out the execution time of a script, or part of a script, by using the
start transaction and end transaction points.
Note: Transactions can be defined only for tests.
Components cannot include transactions.
Services.StartTransaction "ReserveSeat"

Services.EndTransaction "ReserveSeat"

What is the extension of 'Log file' in QTP?

Extension of log file in QTP is .res

SILKTEST AND WINRUNNER


FEATURE DESCRIPTIONS
Startup Initialization and Configuration
SilkTest derives its initial startup configuration settings from its partner.ini file. This is not
restrictive, however, because SilkTest can be reconfigured at any point in the session, either by changing any
setting in the Options menu or by loading an Option Set.
An Option Set file (*.opt) permits customized configuration settings to be established for each test
project. The project specific Option Set is then loaded [either interactively, or under program control]
prior to the execution of the project’s test cases.
The Options menu or an Option Set can also be used to load an include file (*.inc) containing the
project’s GUI Declarations [discussed in section 2.6 on page 5], along with any number of other include
files containing library functions, methods, and variables shared by all testcases.
WinRunner derives its initial startup configuration from a wrun.ini file of settings.
During startup the user is normally polled [this can be disabled] for the type of addins they want to use
during the session [refer to section 2.3 on page 3 for more information about addins].
The default wrun.ini file is used when starting WinRunner directly, while project specific
initializations can be established by creating desktop shortcuts which reference a project specific
wrun.ini file. The use of customized wrun.ini files is important because once WinRunner is started
with a selected set of addins you must terminate WinRunner and restart it to use a different set of addins.
The startup implementation supports the notion of a startup test which can be executed during WinRunner
initialization. This allows project-specific compiled modules [memory resident libraries] and GUI Maps
[discussed in section 2.6 on page 5] to be loaded. The functions and variables contained in these modules
can then be used by all tests that are run during that WinRunner session.
Both tools allow most of the configuration setup established in these files to be over-ridden with runtime
code in library functions or the test scripts.

Test Termination
SilkTest tests terminate on exceptions, which are not explicitly trapped in the testcase. For example if
a window fails to appear during the setup phase of testing [i.e. the phase driving the application to a
verification point], a test would terminate on the first object or window timeout exception that is thrown
after the errant window fails to appear.
WinRunner tests run to termination [in unattended Batch mode] unless an explicit action is taken to
terminate the test early. Therefore, tests that ignore this termination model will continue running for long
periods of time after a fatal error is encountered. For example if a window fails to appear during the setup
phase of testing, subsequent context sensitive statements [i.e. clicking on a button, performing a menu pick,
etc.] will fail—but this failure occurs after a multi-second object/window “is not present” timeout expires
for each missing window and object. [When executing tests in non-Batch mode, that is in Debug, Verify, or
Update modes, WinRunner normally presents an interactive dialog box when implicit errors such as
missing objects and windows are encountered].

Addins and Extensions


Out of the box, under Windows, both tools can interrogate and work with objects and windows created
with the standard Microsoft Foundation Class (MFC) library. Objects and windows created using a non-
MFC technology [or non-standard class naming conventions] are treated as custom objects. Dealing with
truly custom objects is discussed further in section 2.8 on page 6.
But objects and windows created for web applications [i.e. applications which run in a browser], Java
applications, Visual Basic applications, and PowerBuilder applications are dealt with in a special manner:
SilkTest enables support for these technologies using optional extensions. Selected extensions are
enabled/disabled in the current Option Set [or the configuration established by the default partner.ini
option settings].
WinRunner enables support for these technologies using optional addins. Selected addins are
enabled/disabled using the Addin Manager either at WinRunner startup or by editing the appropriate
wrun.ini file prior to startup.
Note that (1) some combinations of addins [WinRunner] and extensions [SilkTest] are mutually exclusive,
(2) some of these addins/extensions may no longer be supported in the newest releases of the tool, (3)
some of these addins/extensions may only support the last one or two releases of the technology [for
example version 5 and 6 of Visual Basic] and (4) some of these addins and extensions may have to be
purchased at an additional cost.

Visual Recorders
SilkTest provides visual recorders and wizards for the following activities:
• Creating a test frame with GUI declarations for a full application, and adding/deleting selected objects
and windows in an existing GUI declarations frame file.
• Capturing user actions with the application into a test case, using either context sensitive [object
relative] or analog [X:Y screen coordinate relative] recording techniques.
• Inspecting identifiers, locations and physical tags of windows and objects.
• Checking window and object bitmaps [or parts thereof].
• Creating a verification statement [validating one or more object properties].
WinRunner provides visual recorders and wizards for the following activities:
• Creating an entire GUI Map for a full application, and adding/deleting selected objects and windows in
an existing GUI Map. It is also possible to implicitly create GUI Map entries by capturing user actions
[using the recorder described next].
• Capturing user actions with the application into a test case, using either context sensitive [object
relative] or analog [X:Y screen coordinate relative] recording techniques.
• Inspecting logical names, locations and physical descriptions of windows and objects.
• Checking window and object bitmaps [or parts thereof].
• Creating a GUI checkpoint [validating one or more object properties].
• Creating a database checkpoint [validating information in a database].
• Creating a database query [extracting information from a database].
• Locating at runtime a missing object referenced in a testcase [and then adding that object to the GUI
Map].
• Teaching WinRunner to recognize a virtual object [a bitmap graphic with functionality].
• Creating Data Tables [used to drive a test from data stored in an Excel-like spreadsheet].
• Checking text on a non-text object [using a built-in character recognition capability].
• Creating a synchronization point in a testcase.
• Defining an exception handler.
Some of these recorders and wizards do not work completely for either tool against all applications, under
all conditions. For example neither tool’s recorder to create a full GUI Map [WinRunner] or test frame
[SilkTest] works against large applications, or any web application.
Evaluate the recorders and wizards of interest carefully against your applications if these utilities are
important to your automated testing efforts.

Object Hierarchy
SilkTest supports a true object-oriented hierarchy of parent-child-grandchild-etc. relationships between
windows and objects within windows. In this model an object such as a menu is the child of its enclosing
window and a parent to its menu item objects.
WinRunner, with some rare exceptions [often nested tables on web pages], has a flat object hierarchy
where child objects exist in parent windows. Note that web page frames are treated as windows, and not
child objects of the enclosing window on web pages that are constructed using frames.

Object Recognition
Both of these tools use a lookup table mechanism to isolate the variable name used to reference an object
in a test script from the description used by the operating system to access that object at runtime:
SilkTest normally places an application’s GUI declarations in a test frame file. There is generally one
GUI declaration for each window and each object in a window. A GUI declaration consists of an object
identifier—the variable used in a test script—and its class and object tag definition used by the operating
system to access that object at runtime.
SilkTest provides the following capabilities to define an object tag: (1) a string, which can include
wildcards; (2) an array reference which resolves to a string which can include wildcards; (3) a function or
method call that returns a string, which can include wildcards, (4) an object class and class relative index
number; and (5) multiple tags [multi-tags] each optionally conditioned with (6) an OS/GUI/browser
specifier [a qualification label].
WinRunner normally places an application’s logical name/physical descriptor definitions in a GUI
Map file. There is generally one logical name/physical descriptor definition for each window and each
object in a window. The logical name is used to reference the object in a test script, while the physical
descriptor is used by the operating system to access that object at runtime.
WinRunner provides the following capabilities to define a physical descriptor: (1) a variable number of
comma delimited strings which can include wildcards, where each string identifies one property of the
object. [While there is only a single method of defining a physical descriptor, this definition can include a
wide range and variable number of obligatory, optional, and selector properties on an object by object
basis].
The notion behind this lookup table mechanism is to permit changes to an object tag [SilkTest] or a
physical descriptor [WinRunner] definition without the need to change the associated identifier [SilkTest]
or logical name [WinRunner] used in the testcase. In general the object tag [SilkTest] or physical
descriptor [WinRunner] resolve to one or more property definitions which uniquely identify the object in
the context of its enclosing parent window at runtime.
It is also possible with both tools to dynamically construct and use object tags [SilkTest] or physical
descriptors [WinRunner] at runtime to reference objects in test scripts.

Object Verification
Both tools provide a variety of built-in library functions permitting a user to hand code simple verification
of a single object property [i.e. is/is not focused, is/is not enabled, has/does not have an expected text
value, etc.]. Complex multiple properties in a single object and multiple object verifications are supported
using visual recorders:
SilkTest provides a Verify Window recorder which allows any combination of objects and object
properties in the currently displayed window to be selected and captured. Using this tool results in the
creation, within the testcase, of a VerifyProperties() method call against the captured window.
WinRunner provides several GUI Checkpoint recorders to validate (1) a single object property, (2)
multiple properties in a single object, and (3) multiple properties of multiple objects in a window. The
single property recorder places a verification statement directly in the test code while the multiple property
recorders create unique checklist [*.ckl] files in the /chklists subdirectory [which describe the
objects and properties to capture], as well as an associated expected results [*.chk] file in the /exp
subdirectory [which contains the expected value of each object and property defined in the checklist file].
Both tools offer advanced features to define new verification properties [using mapping techniques and/or
built-in functions and/or external DLL functions] and/or to customize how existing properties are captured
for standard objects.

Custom Objects
To deal with a custom object [i.e. an object that does not map to standard class] both tools support the use
of class mapping [i.e. mapping a custom class to a standard class with like functionality], along with a
variety of X:Y pixel coordinate clicking techniques [some screen absolute, some object relative] to deal
with bitmap objects, as well as the ability to use external DLL functions to assist in object identification
and verification. Beyond these shared capabilities each tool has the following unique custom object
capabilities:
SilkTest has a feature to overlay a logical grid of X rows by Y columns on a graphic that has evenly
spaced “hot spots”[this grid definition is then used to define logical GUI declarations for each hot spot].
These X:Y row/column coordinates are resolution independent [i.e. the logical reference says “give me 4th
column thing in the 2nd row”, where that grid expands or contracts depending on screen resolution].
WinRunner has a built-in text recognition engine which works with most standard fonts. This
capability can often be used to extract visible text from custom objects, position the cursor over a custom
object, etc. This engine can also be taught a non-standard font type which it does not understand out of the
box.

Internationalization (Language Localization)


SilkTest supports the single-byte IBM extended ASCII character set, and its Technical Support has also
indicated, “That Segue has no commitment for unicode”. The user guide chapter titled “Supporting
Internationalized Applications” shows a straightforward technique for supporting X number of [single-byte
IBM extended ASCII character set] languages in a single test frame of GUI declarations.
WinRunner provides no documentation on how to use the product to test language localized
applications. Technical Support has indicated that (1) “WinRunner supports multi-byte character sets for
language localized testing…”, (2) “there is currently no commitment for the unicode character set…”, and
(3) “it is possible to convert a US English GUI Map to another language using a [user developed] phrase
dictionary and various gui_* built-in functions”.
Investigate this area very carefully if language localized testing is important to your testing effort.

Database Interfaces
Both tools provide a variety of built-in functions to perform Structured Query Language (SQL) queries to
control, alter, and extract information from any database which supports the Open Database Connectivity
(ODBC) interface.

Database Verification
Both tools provide a variety of built-in functions to make SQL queries to extract information from an
ODBC compliant database and save it to a variable [or if you wish, an external file].
Verification at this level is done with hand coding.
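A hedged TSL sketch of such hand-coded verification follows; the DSN, query, and the row/column addressing used by db_get_field_value are illustrative assumptions and should be verified against the WinRunner documentation:

    db_connect ("query1", "DSN=OrdersDB");           # open an ODBC session (hypothetical DSN)
    db_execute_query ("query1", "SELECT status FROM orders WHERE id = 42", rec_count);
    if (rec_count != 1)
        report_msg ("Unexpected row count: " & rec_count);
    order_status = db_get_field_value ("query1", "#0", "#0");   # first row, first column
    if (order_status != "SHIPPED")
        report_msg ("Order 42 has status " & order_status & ", expected SHIPPED");
    db_disconnect ("query1");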
WinRunner also provides a visual recorder to create a Database Checkpoint used to validate the selected
contents of an ODBC compliant database within a testcase. This recorder creates side files similar to GUI
Checkpoints and has a built-in interface to (1) the Microsoft Query application [which can be installed as
part of Microsoft Office], and (2) the Data Junction application [which may have to be purchased at an
additional cost], to assist in constructing and saving complex database queries.

Both tools offer support for testing non-graphical controls through the advanced use of custom DLLs
[developed by the user], or the Extension Kit [SilkTest, which may have to be purchased at an additional
cost] and the Mercury API Function Library [WinRunner].
Data Driven Testing
Both tools support the notion of data-driven tests, but implement this feature differently:
SilkTest’s implementation is built around an array of user-defined records. A record is a data structure
defining an arbitrary number of data items that are populated with values when the array is initialized
[statically or at runtime]. Non-programmers can think of an array of records as a memory resident
spreadsheet of X number of rows that contain Y number of columns, where each row/column intersection
contains a data item.
The test code, as well as the array itself, must be hand coded. It is also possible to populate the array each
time the test is run by extracting the array’s data values from an ODBC compliant database, using a series
of built-in SQL function calls. The test then iterates through the array such that each iteration of the test
uses the data items from the next record in the array to drive the test or provide expected data values.
WinRunner’s implementation is built around an Excel compatible spreadsheet file of X number of
rows that contain Y number of columns where each row/column intersection contains a data item. This
spreadsheet is referred to as a Data Table.
The test code, as well as the Data Table itself, can be created with hand coding or the use of the
DataDriver visual recorder. It is also possible to populate a Data Table file each time the test is run by
extracting the table’s data values from an ODBC compliant database using a WinRunner wizard interfaced
to the Microsoft Query application. The test then iterates through the Data Table such that each iteration
of the test uses the data items from the next row in the table to drive the test or provide expected data
values.
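The generated TSL typically follows a pattern along these lines (the table, window, field, and column names below are hypothetical):

    table = "default.xls";                               # the Excel-compatible Data Table
    rc = ddt_open (table, DDT_MODE_READ);
    if (rc != E_OK && rc != E_FILE_OPEN)
        pause ("Cannot open the Data Table.");
    ddt_get_row_count (table, row_count);
    for (i = 1; i <= row_count; i++)
    {
        ddt_set_row (table, i);                          # make row i the active row
        set_window ("Login", 10);
        edit_set ("User Name", ddt_val (table, "user"));      # drive input from the table
        edit_set ("Password", ddt_val (table, "password"));
        button_press ("OK");
    }
    ddt_close (table);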
Both tools also support the capability to pass data values into a testcase for a more modest approach to
data-driving a test.

Restoring an Application’s Initial State


SilkTest provides a built-in recovery system, which restores the application to a stable state, referred to
as the basestate, when the test or application under test fails ungracefully. The default basestate is defined
to be: (1) the application under test is running; (2) the application is not minimized; and (3) only the
application’s main window is open and active. There are many built-in functions and features that allow
the test engineer to modify, extend, and customize the recovery system to meet the needs of each
application under test.
WinRunner does not provide a built-in recovery system. You need to code routines to return
the application under test to its basestate—and dismiss all orphaned dialogs—when a test
fails ungracefully.
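A common workaround is a small hand-coded routine, usually kept in a compiled module or a startup test, that each testcase calls before it begins. The sketch below is one possible shape for such a routine in TSL; the application path, window, and dialog names are hypothetical, and the win_exists/invoke_application usage is recalled from memory and should be verified:

    function return_to_basestate ()
    {
        # dismiss an orphaned error dialog left behind by a failed test
        if (win_exists ("Error") == E_OK)
        {
            set_window ("Error", 1);
            button_press ("OK");
        }
        # (re)launch the application under test if its main window is gone
        if (win_exists ("Order Entry") != E_OK)
            invoke_application ("C:\\apps\\orders.exe", "", "C:\\apps", SW_SHOW);
        set_window ("Order Entry", 10);    # leave only the main window active
    }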

Scripting Language
Both tools provide proprietary, interpreted scripting languages. Each language provides the usual flow
control constructs, arithmetic and logical operators, and a variety of built-in library functions to perform
such activities as string manipulation, [limited] regular expression support, standard input and output, etc.
But that is where the similarity ends:
SilkTest provides a strongly typed, object-oriented programming language called 4Test. Variables and
constants may be one of 19 built-in data types, along with a user defined record data type. 4Test supports
single- and multi-dimensional dynamic arrays and lists, which can be initialized statically or dynamically.
Exception handling is built into the language [via the do… except statement].
WinRunner provides a non-typed, C-like procedural programming language called TSL. Variables and
constants are either numbers or strings [conversion between these two types occurs dynamically,
depending on usage]. There is no user defined data type such as a record or a data structure. TSL supports
sparsely populated associative single- and [pseudo] multidimensional arrays, which can be initialized
statically or dynamically—element access is always done using string references—foobar[“1”] and
foobar[1] access the same element [as the second access is implicitly converted to an associative
string index reference]. Exception handling is not built into the language.
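A tiny TSL illustration of that associative behavior (for demonstration only):

    foobar[1] = "first";                   # the numeric index is stored as the string "1"
    foobar["1"] = "second";                # same element -- the value above is overwritten
    report_msg ("foobar[1] now holds: " & foobar[1]);    # reports "second"
    foobar["x,y"] = 7;                     # any string can serve as an index in the sparse array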
The only way to truly understand and appreciate the differences between these two programming
environments is to use and experiment with both of them.

Exception Handling
Both tools provide the ability to handle unexpected events and errors, referred to as exceptions, but
implement this feature differently:
SilkTest’s exception handling is built into the 4Test language—using the do… except construct
you can handle the exception locally, instead of [implicitly] using SilkTest’s default built-in exception
handler [which terminates the currently running test and logs an error]. If an exception is raised within the
do block of the statement, control is immediately passed to the except block of code. A variety of
built-in functions [LogError(), LogWarning(), ExceptNum(), ExceptLog(), etc.] and
4Test statements [raise, reraise, etc.] aid in the processing of trapped exceptions within the except
block of code.
WinRunner’s exception handling is built around (1) defining an exception based on the type of
exception (Popup, TSL, or object) and relevant characteristics about the exception (most often its error
number); (2) writing an exception handler, and (3) enabling and disabling that exception handler at
appropriate point(s) in the code. These tasks can be achieved by hand coding or through the use of the
Exception Handling visual recorder.
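The sketch below illustrates those three steps in TSL for a hypothetical "Connection Lost" popup. The handler name and dialog are invented, and the parameter order of define_popup_exception and the handler's signature are recalled from memory, so treat this purely as an outline of the approach:

    # (2) the handler: dismiss the unexpected dialog and log the event
    function dismiss_connection_lost (in win)
    {
        set_window (win, 1);
        button_press ("OK");
        report_msg ("Handled unexpected dialog: " & win);
    }

    # (1) define the popup exception and name its handler
    define_popup_exception ("conn_lost", "dismiss_connection_lost", "Connection Lost");

    # (3) enable it only around the code that may trigger the popup
    exception_on ("conn_lost");
    # ... test steps that may raise the dialog ...
    exception_off ("conn_lost");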

Test Results Analysis


SilkTest’s test results file revolves around the test run. For example, if you run 3 testcases [via a test
suite or SilkOrganizer] all of the information for that test run will be stored in a single results file. There is
a viewer to analyze the results of the last test run or of any of the X runs prior to it. Errors captured in the results
file contain a full stack trace to the failing line of code, which can be brought up in the editor by double-
clicking on any line in that stack trace.
WinRunner’s test results file revolves around each testcase. For example, if you run 3 testcases [by
hand or via a batch test or TestDirector] 3 test results files are created, each in a subdirectory under its
associated testcase. There is a viewer to analyze the results of a test’s last run or, if the results have not been
deleted, a previous run. Double clicking on events in the log often expands that entry’s information,
sometimes bringing up specialized viewers [for example when that event is some type of checkpoint or
some type of failure].

Managing the Testing Process


SilkTest has a built-in facility, SilkOrganizer, for creating a testplan and then linking the testplan to
testcases. SilkOrganizer can also be used to track the automation process and control the execution of
selected groups of testcases. One or more user defined attributes [such as “Test Category”, “Author”,
“Module”, etc.] are assigned to each testcase and then later used in testplan queries to select a group of
tests for execution. There is also a modest capability to add manual test placeholders in the testplan, and
then manually add pass/fail status to the results of a full test run. SilkTest also supports a test suite, which
is a file containing calls to one or more test scripts or other test suites.
WinRunner integrates with a separate program called TestDirector [at a substantial additional cost]
for visually creating a test project and then linking WinRunner testcases into that project. TestDirector is a
database repository based application that provides a variety of tools to analyze and manipulate the various
database tables and test results stored in the repository for the project. A bug reporting and tracking tool is
included with TestDirector as well [and this bug tracking tool supports a browser based client].
Using a visual recorder, testcases are added to one or more test sets [such as “Test Category”, “Author”,
“Module”, etc.] for execution as a group. There is a robust capability for authoring manual test cases [i.e.
describing each test step and its expected results], interactively executing each manual test, and then
saving the pass/fail status for each test step in the repository. TestDirector also allows test sets to be
scheduled for execution at a time and date in the future, as well as executing tests remotely on other
machines [this last capability requires the Enterprise version of TestDirector].
TestDirector is also capable of interfacing with and executing LoadRunner test scripts as well as other 3rd
party test scripts [but this latter capability requires custom programming via TestDirector APIs].
Additionally, TestDirector provides APIs to allow WinRunner as well as other 3rd party test tools [and
programming environments] to interface with a TestDirector database.

External Files
When the tool’s source code files are managed with a code control tool such as PVCS or Visual
SourceSafe, it is useful to understand what external side files are created:
SilkTest implicitly creates *.*o bytecode-like executables after interpreting the source code
contained in testcases and include files [but it is unlikely that most people will want to place these files
under source code control]. No side files are created in the course of using its recorders. SilkTest does,
though, create explicit *.bmp files for storing the expected and actual captured bitmap images when
performing a bitmap verification.
WinRunner implicitly creates many side files using a variety of extensions [*.eve, *.hdr, *.asc,
*.ckl, *.chk, and a few others] in a variety of implicitly created subdirectories [/db, /exp,
/chklist, /resX] under the testcase in the course of using its visual recorders as well as storing
pass/fail results at runtime.

Debugging
Both tools support a visual debugger with the typical capabilities of breakpoints, single step, run to, step
into, step out of, etc.
