Contents
1. Test strategy document.......................................................................................................................3
2. The whole QA process:........................................................................................................................6
3. Metrics Used In Testing...........................................................................................................................7
Software Quality Metrics...................................................................................................................11
Defect Removal Efficiency.................................................................................................................12
4. Various types of testing:........................................................................................................................18
Performance vs. load vs. stress testing.................................................................................................24
More on performance vs. load testing..................................................................................................29
5. Test automation - Advantages...............................................................................................................33
Best Practices in Automated Testing...........................................................................................33
Automated Testing Advantages, Disadvantages and Guidelines.......................................35
6. Software Quality Management..............................................................................................................37
7. What is Defect Tracking?.......................................................................................................................40
8. QA Plan - various sections explained.....................................................................................................41
9. Effective Software Testing.....................................................................................................................42
10. FAQS:...................................................................................................................................................46
11. Software lifecycle models:...................................................................................................................53
The Phases of the V-model....................................................................................................................54
Verification Phases.......................................................................................................54
Requirements analysis.............................................................................................54
System Design..........................................................................................................55
Architecture Design..................................................................................................55
Module Design.........................................................................................................55
Validation Phases.........................................................................................................55
Unit Testing..............................................................................................................55
Integration Testing...................................................................................................56
System Testing.........................................................................................................56
User Acceptance Testing..........................................................................................56
12. Product testing & ISO certification......................................................................................................59
13. Motivating testers...............................................................................................................................60


How To Recruit, Motivate, and Energize Superior Test Engineer......................................................60
14. Certifications and exams for Software testing.....................................................................................64
15. CMMI level 5 for testing projects........................................................................................65
16. Automation tools (QTP, WinRunner, LoadRunner), test management tools.........................................66
17. Test estimation....................................................................................................................................67
Is it correct to accept the change in functionality after creating Test cases and just before
start of a new release?.....................................................................................................71
18. Test case management tools...............................................................................................................72
19. Code coverage & tools.........................................................................................................................75
20. Defect tracking tools - etrack, bugtrq, (perforce - source code control system).................................77
21. QA best practices.................................................................................................................................78
22. QA challenges......................................................................................................................................78
23. Entry and Exit criteria for testing.........................................................................................................80
23. QA in web technologies (Website testing)...........................................................................................82
WebSite (Web Application) Testing Dr. Edward Miller eValid HOME.....................82
ABSTRACT......................................................................................................................................82
BACKGROUND..............................................................................................................................82
DEFINING WEBSITE QUALITY & RELIABILITY..................................................................83
WEBSITE ARCHITECTURAL FACTORS..................................................................................83
WEBSITE TEST AUTOMATION REQUIREMENTS...............................................................85
WEBSITE DYNAMIC VALIDATION..........................................................................................86
TESTING SYSTEM CHARACTERISTICS................................................................................87
24. QA exam questions..............................................................................................................................90
25. Testing dot net applications................................................................................................................90
26. Initiatives & Best practices...................................................................................................................90
27. Initiatives & Best practices...................................................................................................................90
Answers to sample CSTE exam questions..................................................................................................94
28. CSAT parameters.................................................................................................................................97
29. Defect management in Quality Centre................................................................................................98
30. Test Management Office (TMO)..........................................................................................................98
31. RASCI matrix........................................................................................................................................99
31. Program Management.........................................................................................................................99

1. Test strategy document


A test strategy document is like a blueprint: it helps you visualize the test schedule and focus on the key questions that must be answered to meet it.
Test strategy document is prepared during the requirements specification phase.

Test strategy typically has the following aspects:

a. Definition of test/business/project objectives.


b. Formulation of overall testing approach.
c. Determination of testing environment.
d. Determination of test automation requirements.
e. Formulation of metric plan.
f. Risk identification, mitigation and contingency plan.
g. Identification and preparation of specific testing templates
h. Definition of performance criteria.
i. Definition of acceptance criteria.
Test approach:

a. How many people will test the application?


b. What phases of testing will be performed?
c. What types of testing will be performed?
d. Where will each testing phase take place (onsite/offshore)?
e. Will testing be manual, automated, or both?

You need to justify your answers, for example why you need 5 people for testing and why the work
cannot be completed with 2.

Also define the team structure.

Test environment:

Test machines, software, hardware, disks, tools, etc.

Test automation requirements:

Plan for automation; a separate automation strategy document can be added.

Metric plan:

Metric plan typically includes test-related metrics such as

 Defect density
 Residual defect density
 Code coverage
 No. of test cases executed per day
 No. of test cases passed
 No. of test cases failed.

Risk Management:

A risk is an obstacle that would prevent one from reaching a defined goal(s), leading to adverse business
impacts.
Risk management is the identification of risks and the preparation of mitigation (avoidance) and contingency
(resolution) plans.

A good mitigation plan reduces or eliminates the likelihood that a risk will occur.

The simplest way to identify risks is to ask the team "In what ways can this project fail?"; all the relevant
answers can be treated as risks to the project.

Contingency plan and risk identification methods:

Eg :

Risk: schedule slippage

Cause of risk: Lack of resources or resource becomes unavailable.

Mitigation plan: Recruitment of additional resources.

Contingency plan: Have backup resources.

Templates:

Test case document template

Defect report template

Wipro recommends the use of the standard test templates available in veloci-Q; some project teams use
the templates given by the customer.

Test plan and test case design

The mapping of test cases to requirements is known as a traceability matrix.

Example :

Req id : 001

Requirement : On successful authentication user must be allowed to view the pages.

Test cases : TC01, TC02, TC03, TC04.

Traceability Matrix :
The traceability matrix defines the mapping between customer requirements and the test cases prepared by
the test engineers. It is also called a requirements traceability matrix or requirements validation matrix. The
testing team uses it to verify how far the prepared test cases cover the requirements of the functionality to
be tested.
Sample traceability matrix

Requirement identifiers (columns): UC 1.1, UC 1.2, UC 1.3, UC 2.1, UC 2.2, UC 2.3.1, UC 2.3.2, UC 2.3.3,
UC 2.4, UC 3.1, UC 3.2, TECH 1.1, TECH 1.2, TECH 1.3

Test cases per requirement: 3, 2, 3, 1, 1, 1, 1, 1, 1, 2, 3, 1, 1, 1 (321 test cases in total; 77 requirements
tested implicitly)

Each test case row carries an "x" under every requirement that the test case verifies; the number beside
each test case is the count of requirements it covers:

Test case   Reqs tested
1.1.1       1
1.1.2       2
1.1.3       2
1.1.4       1
1.1.5       2
1.1.6       1
1.1.7       1
1.2.1       2
1.2.2       2
1.2.3       2
1.3.1       1
1.3.2       1
1.3.3       1
1.3.4       1
1.3.5       1
etc.
5.6.2       1
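
As a rough illustration of how such a mapping can be maintained and checked in a script, the following
Python sketch builds a requirement-to-test-case mapping and reports any requirements that are not yet
covered. The requirement and test case IDs are invented for illustration.

```python
# Minimal traceability-matrix sketch (requirement and test case IDs are invented).
requirements = ["REQ-001", "REQ-002", "REQ-003"]
test_cases = {                      # test case -> requirements it verifies
    "TC01": ["REQ-001"],
    "TC02": ["REQ-001", "REQ-002"],
    "TC03": ["REQ-002"],
    "TC04": ["REQ-001"],
}

# Invert the mapping: requirement -> test cases covering it.
coverage = {req: [] for req in requirements}
for tc, reqs in test_cases.items():
    for req in reqs:
        coverage.setdefault(req, []).append(tc)

for req in requirements:
    tcs = coverage[req]
    print(f"{req}: {', '.join(tcs) if tcs else 'NOT COVERED'}")
```

Running the sketch immediately shows which requirements still lack test cases (here the hypothetical
REQ-003).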

Scheduling: Run plan/Execution plan

Suspension criteria and Resumption criteria :

Suspension criteria specify the circumstances under which testing can be suspended, for example a
blocking defect in a core workflow, an unstable build, or an unavailable test environment.

Test entry criteria:

For eg : entry criteria to start system testing is – unit/integration testing must be completed, system
test build should be stable.

Test exit criteria: no open high-severity bugs; all test results and reports are available.

Test completion:

Decide based on the trend. For example, if you have completed 50% of testing and the defect rate is still
very high, with high-severity defects occurring, you can ask for a schedule extension to accommodate
retesting of the high-severity defects.

On the other hand, if the team has completed 95% of testing, the defect rate is very low and all remaining
defects are minor, you can declare test completion, but mention that 10 test cases were not executed.

Error estimation:

Error estimation helps you answer the question "Have I found enough defects/bugs?".

One of the most popular methods of error estimation is "error seeding". This involves intentionally
seeding known errors into the code before testing.

Estimated total no. of unseeded errors = No. of unseeded errors found during testing X (total number of
seeded errors / No. of seeded errors detected during testing)

The number of unseeded errors still remaining in the code is this estimated total minus the unseeded
errors already found.
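
A minimal Python sketch of this arithmetic, with invented counts, is shown below; it assumes seeded and
real errors are equally likely to be found.

```python
# Error-seeding estimate (all counts are invented for illustration).
seeded_total = 20        # errors deliberately seeded into the code
seeded_found = 16        # seeded errors detected during testing
unseeded_found = 40      # real (unseeded) errors detected during testing

# Estimated total number of real errors, assuming equal detection rates.
estimated_total = unseeded_found * (seeded_total / seeded_found)

# Estimated real errors still remaining in the code.
estimated_remaining = estimated_total - unseeded_found

print(f"Estimated real errors in total:  {estimated_total:.0f}")
print(f"Estimated real errors remaining: {estimated_remaining:.0f}")
```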

Estimation of error distribution :

                          Coding and logical errors    Design errors
Unit test                 65                           0
Integration/Module test   30                           60
System test               3                            35
Total                     98                           95
2. The whole QA process:


The answer to this question gives the interviewer a clear picture of your expertise in QA, and he may not
need to ask any further questions and waste his time. I will try to answer the question in the
following way.

QA is a crucial role in any project, as the QA engineer is involved right from the first phase of the project.
Suppose we have received a project proposal from a client.

1. In the first stage (requirements stage) we collect the requirements from the customer/client and
review them to check achievability. High-level QA personnel (SQA Analyst or SQA Manager) are
involved here.

2. Once the SRS/FRS is prepared, it is reviewed in parallel by a QA Manager to check whether we are in a
position to handle the execution with the given schedule and resources. The QA Analyst starts preparing
system test cases here.

3. Once the SRS is converted into design documents (HLD and LLD), the QA Analyst/Architect reviews
them to check that the design is optimal and follows the standards. The QA person starts preparing
integration-level test cases here.

4. Once the design is put into implementation, the QA person is involved in review and unit-level testing.

5. Once all the modules/units are coded, development starts integrating them into a single unit. The QA
person tests here using the integration test cases already prepared in step 3.

6. Once the integration is done, QA uses the system-level test cases to test the system behaviour. At the
same time, the requirements traceability matrix is prepared to check that all the given requirements have
been transformed into test cases.

7. Once the complete system is deployed on the offshore test server, the QA person conducts different
types of testing to check the functionality of the application, e.g. pre-deployment testing. As depicted
above, the QA Analyst is involved in almost all stages of the project development cycle. If you are applying
for a managerial level you can stress the first three points more; if you are applying for a junior level you
can exclude the first two points. All the best -- Vijay Sarvepalli

 Tools used in QA planning


 Tools used in QA execution
 QA books read
 QA norms & certifications (CMMI, ISO etc)
 QA challenges
 QA projects Executed and significant achievements
3. Metrics Used In Testing


Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC = thousands of lines
of code; FP = function points)

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of
acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity = Test Planning Productivity = No of Test cases designed / Actual Effort for Design
and Documentation

Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
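
As a quick, hedged illustration of how a few of these formulas combine, the Python sketch below computes
some of the listed metrics from made-up project figures (every input value is a placeholder, not real
project data).

```python
# Illustrative computation of a few test metrics (all inputs are placeholders).
kloc_tested = 45.0            # size of code exercised by testing, in KLOC
total_kloc = 50.0             # total system size, in KLOC
test_cases = 900
defects_found = 180           # defects detected by the test team
cost_of_testing = 60_000.0
total_project_cost = 400_000.0
test_cycles = 6
testing_effort_days = 120.0

test_coverage = kloc_tested / total_kloc
tests_per_kloc = test_cases / total_kloc
defects_per_size = defects_found / total_kloc
test_cost_pct = cost_of_testing / total_project_cost * 100
cost_to_locate_defect = cost_of_testing / defects_found
test_execution_productivity = test_cycles / testing_effort_days

print(f"Test coverage:               {test_coverage:.0%}")
print(f"Tests per KLOC:              {tests_per_kloc:.1f}")
print(f"Defects per KLOC:            {defects_per_size:.1f}")
print(f"Test cost:                   {test_cost_pct:.1f}% of total cost")
print(f"Cost to locate a defect:     {cost_to_locate_defect:.2f} per defect")
print(f"Test execution productivity: {test_execution_productivity:.3f} cycles per day")
```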

Software Quality Metrics

We best manage what we can measure. Measurement enables the Organization to improve the software
process; assist in planning, tracking and controlling the software project and assess the quality of the
software thus produced. It is the measure of such specific attributes of the process, project and product
that are used to compute the software metrics. Metrics are analyzed and they provide a dashboard to the
management on the overall health of the process, project and product. Generally, the validation of the
metrics is a continuous process spanning multiple projects. The kind of metrics employed generally
10

account for whether the quality requirements have been achieved or are likely to be achieved during the
software development process. As a quality assurance process, a metric is needed to be revalidated
every time it is used. Two leading firms namely, IBM and Hewlett-Packard have placed a great deal of
importance on software quality. The IBM measures the user satisfaction and software acceptability in
eight dimensions which are capability or functionality, usability, performance, reliability, ability to be
installed, maintainability, documentation, and availability. For the Software Quality Metrics the Hewlett-
Packard normally follows the five Juran quality parameters namely the functionality,the usability, the
reliability, the performance and the serviceability. In general, for most software quality assurance systems
the common software metrics that are checked for improvement are the Source lines of code, cyclical
complexity of the code, Function point analysis, bugs per line of code, code coverage, number of classes
and interfaces, cohesion and coupling between the modules etc.

Common software metrics include:

 Bugs per line of code


 Code coverage
 Cohesion
 Coupling
 Cyclomatic complexity
 Function point analysis
 Number of classes and interfaces
 Number of lines of customer requirements
 Order of growth
 Source lines of code
 Robert Cecil Martin’s software package metrics

Software quality metrics focus on the process, project and product. By analyzing the metrics, the
organization can take corrective action to fix those areas in the process, project or product which are the
cause of the software defects.

The de-facto definition of software quality consists of two major attributes: intrinsic product quality and
user acceptability. A software quality metric encapsulates these two attributes, addressing the mean time
to failure and the defect density within the software components, and finally assessing user requirements
and acceptability of the software. The intrinsic quality of a software product is generally measured by the
number of functional defects in the software, often referred to as bugs, or by testing the software at run
time for inherent vulnerability to determine the software "crash" scenarios. In operational terms, the two
metrics are often described as defect density (rate) and mean time to failure (MTTF).

Although there are many measures of software quality, correctness, maintainability, integrity and usability
provide useful insight.

Correctness

A program must operate correctly. Correctness is the degree to which the software performs the required
functions accurately. One of the most common measures is defects per KLOC (KLOC means thousands
(kilo) of lines of code). KLOC is a way of measuring the size of a computer program by counting the
number of lines of source code it has.

Maintainability

Maintainability is the ease with which a program can be corrected if an error occurs. Since there is no direct
way of measuring this, an indirect measure is used. MTTC (mean time to change) is one such measure: it
measures, when an error is found, how much time it takes to analyze the change, design the modification,
implement it and test it.

Integrity

This measures the system's ability to withstand attacks on its security. In order to measure integrity, two
additional parameters, threat and security, need to be defined. Threat – the probability that an attack of a
certain type will occur over a period of time. Security – the probability that an attack of a certain type will
be repelled over a period of time. Integrity = Summation [(1 - threat) X (1 - security)]

Usability

How usable is your software application ? This important characteristic of your application is measured in
terms of the following characteristics:

 Physical / intellectual skill required to learn the system


 Time required to become moderately efficient in the system
 The net increase in productivity by use of the new system
 Subjective assessment (usually in the form of a questionnaire on the new system)

Defect Removal Efficiency

Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if the DRE
is low during analysis and design, it means you should spend time improving the way you conduct formal
technical reviews.

DRE = E / ( E + D )

Where E = No. of Errors found before delivery of the software and D = No. of Errors found after delivery of
the software.

The ideal value of DRE is 1, which means no defects were found after delivery. If you score low on DRE,
you need to re-examine your existing process. In essence, DRE is an indicator of the filtering ability of
quality control and quality assurance activities. It encourages the team to find as many defects as possible
before they are passed to the next activity or stage. Some of the metrics are listed here:

Test Coverage = Number of units (KLOC/FP) tested / total size of the system
Number of tests per unit size = Number of test cases per KLOC/FP
Defects per size = Defects detected / system size
Cost to locate defect = Cost of testing / the number of defects located
Defects detected in testing = Defects detected in testing / total system defects
Defects detected in production = Defects detected in production / system size
Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of
acceptance defects found after delivery) * 100
System complaints = Number of third party complaints / number of transactions processed
Effort Productivity = Test Planning Productivity = No. of test cases designed / Actual effort for design and
documentation
Test Execution Productivity = No. of test cycles executed / Actual effort for testing
Test efficiency = Number of tests required / number of system errors
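
A minimal sketch of the DRE and quality-of-testing calculations, using invented defect counts:

```python
# DRE and Quality of Testing (illustrative counts only).
defects_before_delivery = 190   # E: defects found by reviews and testing before release
defects_after_delivery = 10     # D: defects reported after release / during acceptance

dre = defects_before_delivery / (defects_before_delivery + defects_after_delivery)
quality_of_testing_pct = dre * 100   # same ratio expressed as a percentage

print(f"DRE = {dre:.2f}")   # 1.0 would mean no defects escaped to the field
print(f"Quality of testing = {quality_of_testing_pct:.0f}%")
```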

Measures and their metrics:

1. Customer satisfaction index
   - Number of system enhancement requests per year
   - Number of maintenance fix requests per year
   - User friendliness: call volume to customer service hotline
   - User friendliness: training time per new user
   - Number of product recalls or fix releases (software vendors)
   - Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
   - Normalized per function point (or per LOC)
   - At product delivery (first 3 months or first year of operation)
   - Ongoing (per year of operation)
   - By level of severity
   - By category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line
     help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
   - Turnaround time for defect fixes, by level of severity
   - Time for minor vs. major enhancements; actual vs. planned elapsed time (by customers) in the first
     year after product delivery

7. Complexity of delivered product
   - McCabe's cyclomatic complexity counts across the system
   - Halstead's measure
   - Card's design complexity measures
   - Predicted defects and maintenance costs, based on complexity measures

8. Test coverage
   - Breadth of functional coverage
   - Percentage of paths, branches or conditions that were actually tested
   - Percentage by criticality level: perceived level of risk of paths
   - The ratio of the number of detected faults to the number of predicted faults
   - Total lines of code exercised by the test suite / total lines of code * 100 (tools: GCOV, javacoverage)

9. Cost of defects
   - Business losses per defect that occurs during operation
   - Business interruption costs; costs of work-arounds
   - Lost sales and lost goodwill
   - Litigation costs resulting from defects
   - Annual maintenance cost (per function point)
   - Annual operating cost (per function point)
   - Measurable damage to your boss's career

10. Costs of quality activities
   - Costs of reviews, inspections and preventive measures
   - Costs of test planning and preparation
   - Costs of test execution, defect tracking, version and change control
   - Costs of diagnostics, debugging and fixing
   - Costs of tools and tool support
   - Costs of test case library maintenance
   - Costs of testing & QA education associated with the product
   - Costs of monitoring and oversight by the QA organization (if separate from the development and test
     organizations)

11. Re-work
   - Re-work effort (hours, as a percentage of the original coding hours)
   - Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
   - Re-worked software components (as a percentage of the total delivered components)

12. Reliability
   - Availability (percentage of time a system is available, versus the time the system is needed to be
     available)
   - Mean time between failure (MTBF)
   - Mean time to repair (MTTR)
   - Reliability ratio (MTBF / MTTR)
   - Number of product recalls or fix releases
   - Number of production re-runs as a ratio of production runs

FIGURE 3.1 The Relationship Between LOC and Defect Density

Cyclomatic Complexity

About the same time Halstead founded software science, McCabe proposed a topological or graph-
theory measure of cyclomatic complexity as a measure of the number of linearly independent paths
that make up a computer program. To compute the cyclomatic complexity of a program that has been
graphed or flow-charted, the formula used is

M = e - n + 2p

in which e is the number of edges, n is the number of nodes and p is the number of connected components
of the program's flow graph (p = 1 for a single routine).

More simply, it turns out that M is equal to the number of binary decisions in the program plus 1. An n-
way case statement would be counted as n - 1 binary decisions. The advantage of this measure is that
it is additive for program components or modules. Usage recommends that no single module have a
value of M greater than 10. However, because on average every fifth or sixth program instruction
executed is a branch, M strongly correlates with program size or LOC. As with the other early quality
measures that focus on programs per se or even their modules, these mask the true source of
architectural complexity—interconnections between modules. Later researchers have proposed
structure metrics to compensate for this deficiency by quantifying program module interactions. For
example, fan-in and fan-out metrics, which are analogous to the number of inputs to and outputs from
hardware circuit modules, are an attempt to fill this gap. Similar metrics include number of subroutine
calls and/or macro inclusions per module, and number of design changes to a module, among others.
Kan reports extensive experimental testing of these metrics and also reports that, other than module
length, the most important predictors of defect rates are number of design changes and complexity
level, however it is computed.
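
As a rough illustration of M = e - n + 2p, the Python sketch below computes the complexity of a small,
hand-built control-flow graph (the graph itself is invented and stands in for a routine with one if/else and
one loop).

```python
# Cyclomatic complexity M = e - n + 2p for a small, hand-built control-flow graph.
nodes = ["entry", "if", "then", "else", "loop", "exit"]
edges = [
    ("entry", "if"),
    ("if", "then"), ("if", "else"),    # first binary decision
    ("then", "loop"), ("else", "loop"),
    ("loop", "loop"),                  # loop back-edge (second binary decision)
    ("loop", "exit"),
]
p = 1  # connected components: a single routine

m = len(edges) - len(nodes) + 2 * p
print(f"Cyclomatic complexity M = {m}")   # 3 = two binary decisions + 1
```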

Function Point Metrics

Quality metrics based either directly or indirectly on counting lines of code in a program or its modules
are unsatisfactory. These metrics are merely surrogate indicators of the number of opportunities to
make an error, but from the perspective of the program as coded. More recently the function point has
been proposed as a meaningful cluster of measurable code from the user's rather than the
programmer's perspective. Function points can also be surrogates for error opportunity, but they can
be more. They represent the user's needs and anticipated or a priori application of the program rather
than just the programmer's a posteriori completion of it. A very large program may have millions of
LOC, but an application with 1,000 function points would be a very large application or system indeed.
A function may be defined as a collection of executable statements that performs a task, together with
declarations of formal parameters and local variables manipulated by those statements. A typical function
point metric, developed by Albrecht at IBM, is a weighted sum of five components that characterize an
application (in the standard Albrecht/IFPUG formulation):

 Number of external inputs (e.g. transaction types) x 4
 Number of external outputs (e.g. report types) x 5
 Number of external inquiries (e.g. on-line queries) x 4
 Number of internal logical files x 10
 Number of external interface files x 7

These represent the average weighting factors wij, which may vary with program size and complexity.
xij is the number of each component type in the application.

The function count FC is the double sum:

FC = Σ (i = 1..5) Σ (j = 1..3) wij × xij

where i runs over the five component types and j over the three complexity levels (simple, average,
complex).

The second step employs a scale of 0 to 5 to assess the impact of 14 general system
characteristics in terms of their likely effect on the application:

 Data communications
 Distributed functions
 Performance
 Heavily used configuration
 Transaction rate
 Online data entry
 End-user efficiency
 Online update
 Complex processing
 Reusability
 Installation ease
 Operational ease
 Multiple sites
 Facilitation of change

The scores for these characteristics ci are then summed based on the following formula to find a value
adjustment factor (VAF):

VAF = 0.65 + 0.01 × Σ ci   (summing ci over the 14 characteristics)

Finally, the number of function points is obtained by multiplying the function count by the value
adjustment factor:

FP = FC × VAF
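
Putting the pieces together, here is a hedged Python sketch of the simplified calculation. The component
counts, the per-type average weights and the characteristic scores are all invented for illustration; a real
count would follow the IFPUG rules.

```python
# Simplified function point calculation (all inputs are illustrative).
components = {                      # component type: (count, average weight)
    "external inputs":          (20, 4),
    "external outputs":         (15, 5),
    "external inquiries":       (10, 4),
    "internal logical files":   (8, 10),
    "external interface files": (4, 7),
}

# Function count: weighted sum over the component types.
fc = sum(count * weight for count, weight in components.values())

# Scores (0-5) for the 14 general system characteristics, made up here.
characteristic_scores = [3, 2, 4, 3, 3, 4, 2, 3, 1, 2, 3, 3, 2, 4]
vaf = 0.65 + 0.01 * sum(characteristic_scores)   # value adjustment factor

fp = fc * vaf
print(f"FC = {fc}, VAF = {vaf:.2f}, FP = {fp:.1f}")
```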
This is actually a highly simplified version of a commonly used method that is documented in the
International Function Point User's Group Standard (IFPUG, 1999).

Although function point extrinsic counting metrics and methods are considered more robust than
intrinsic LOC counting methods, they have the appearance of being somewhat subjective and
experimental in nature. As used over time by organizations that develop very large software systems
(having 1,000 or more function points), they show an amazingly high degree of repeatability and
utility. This is probably because they enforce a disciplined learning process on a software development
organization as much as any scientific credibility they may possess

Availability and Customer Satisfaction Metrics

To the end user of an application, the only measures of quality are in the performance, reliability, and
stability of the application or system in everyday use. This is "where the rubber meets the road," as
users often say. Developer quality metrics and their assessment are often referred to as "where the
rubber meets the sky." This article is dedicated to the proposition that we can arrive at a priori user-
defined metrics that can be used to guide and assess development at all stages, from functional
specification through installation and use. These metrics also can meet the road a posteriori to guide
modification and enhancement of the software to meet the user's changing needs. Caution is advised
here, because software problems are not, for the most part, valid defects, but rather are due to
individual user and organizational learning curves. The latter class of problem places an enormous
burden on user support during the early days of a new release. The catch here is that neither alpha
testing (initial testing of a new release by the developer) nor beta testing (initial testing of a new
release by advanced or experienced users) of a new release with current users identifies these
problems. The purpose of a new release is to add functionality and performance to attract new users,
who initially are bound to be disappointed, perhaps unfairly, with the software's quality. The DFTS
approach we advocate in this article is intended to handle both valid and perceived software problems.

Typically, customer satisfaction is measured on a five-point scale:

 Very satisfied
 Satisfied

 Neutral

 Dissatisfied

 Very dissatisfied

Results are obtained for a number of specific dimensions through customer surveys. For example, IBM
uses the CUPRIMDA categories—capability, usability, performance, reliability, installability,
maintainability, documentation, and availability. Hewlett-Packard uses FURPS categories—
functionality, usability, reliability, performance, and serviceability. In addition to calculating
percentages for various satisfaction or dissatisfaction categories, some vendors use the net
satisfaction index (NSI) to enable comparisons across product lines. The NSI has the following
weighting factors:

 Completely satisfied = 100%


 Satisfied = 75%

 Neutral = 50%

 Dissatisfied = 25%

 Completely dissatisfied = 0%

NSI then ranges from 0% (all customers are completely dissatisfied) to 100% (all customers are
completely satisfied). Although it is widely used, the NSI tends to obscure difficulties with certain
problem products. In this case the developer is better served by a histogram showing satisfaction rates
for each product individually.
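
A small Python sketch of the NSI weighting, applied to an invented distribution of survey responses:

```python
# Net satisfaction index (NSI) over an invented distribution of survey responses.
weights = {
    "completely satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely dissatisfied": 0.00,
}
responses = {                       # number of customers in each category (made up)
    "completely satisfied": 120,
    "satisfied": 300,
    "neutral": 60,
    "dissatisfied": 15,
    "completely dissatisfied": 5,
}

total = sum(responses.values())
nsi = sum(weights[k] * n for k, n in responses.items()) / total
print(f"NSI = {nsi:.1%}")   # 0% = all completely dissatisfied, 100% = all completely satisfied
```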

4. Various types of testing:

ACCEPTANCE TESTING

Testing to verify a product meets customer specified requirements. A customer usually


does this type of testing on a product that is developed externally.

BLACK BOX TESTING

Testing without knowledge of the internal workings of the item being tested. Tests are
usually functional.

COMPATIBILITY TESTING

Testing to ensure compatibility of an application or Web site with different browsers,


OSs, and hardware platforms. Compatibility testing can be performed manually or can be
driven by an automated functional or regression test suite.

CONFORMANCE TESTING

Verifying implementation conformance to industry standards. Producing tests for the


behavior of an implementation to be sure it provides the portability, interoperability,
and/or compatibility a standard defines.

FUNCTIONAL TESTING

Validating an application or Web site conforms to its specifications and correctly


performs all its required functions. This entails a series of tests which perform a feature
by feature validation of behavior, using a wide range of normal and erroneous input
data. This can involve testing of the product's user interface, APIs, database
management, security, installation, networking, etc. Functional testing can be performed on an
automated or manual basis using black box or white box methodologies.

INTEGRATION TESTING

Testing in which modules are combined and tested as a group. Modules are typically
code modules, individual applications, client and server applications on a network, etc.
Integration Testing follows unit testing and precedes system testing.

LOAD TESTING

Load testing is a generic term covering Performance Testing and Stress Testing.

PERFORMANCE TESTING

Performance testing can be applied to understand your application or WWW site's


scalability, or to benchmark the performance in an environment of third party products
such as servers and middleware for potential purchase. This sort of testing is particularly
useful to identify performance bottlenecks in high use applications. Performance testing


generally involves an automated test suite as this allows easy simulation of a variety of
normal, peak, and exceptional load conditions.

REGRESSION TESTING

Similar in scope to a functional test, a regression test allows a consistent, repeatable


validation of each new release of a product or Web site. Such testing ensures reported
product defects have been corrected for each new release and that no new quality
problems were introduced in the maintenance process. Though regression testing can be
performed manually an automated test suite is often used to reduce the time and
resources needed to perform the required testing.

SMOKE TESTING

A quick-and-dirty test that the major functions of a piece of software work without
bothering with finer details. Originated in the hardware testing practice of turning on a
new piece of hardware for the first time and considering it a success if it does not catch
on fire.

STRESS TESTING

Testing conducted to evaluate a system or component at or beyond the limits of its


specified requirements to determine the load under which it fails and how. A graceful
degradation under load leading to non-catastrophic failure is the desired result. Often
Stress Testing is performed using the same process as Performance Testing but
employing a very high level of simulated load.

SYSTEM TESTING

Testing conducted on a complete, integrated system to evaluate the system's


compliance with its specified requirements. System testing falls within the scope of black
box testing, and as such, should require no knowledge of the inner design of the code or
logic.

UNIT TESTING

Functional and reliability testing in an Engineering environment. Producing tests for the
behavior of components of a product to ensure their correct behavior prior to system
integration.

WHITE BOX TESTING

Testing based on an analysis of internal workings and structure of a piece of software.


Includes techniques such as Branch Testing and Path Testing. Also known as Structural
Testing and Glass Box Testing.

 Various types of testing :


o Special focus on :
 Regression testing
 Functionality testing
 Performance testing
 Load & Stress testing

 Regression testing is any type of software testing that seeks to uncover software errors
by partially retesting a modified program. The intent of regression testing is to assure that
a bug has been successfully fixed based on the error that was found, while
providing a general assurance that no other errors were introduced in the process of fixing
the original problem. Regression is commonly used to efficiently test bug fixes by
systematically selecting the appropriate minimum test suite needed to adequately cover
the affected software code/requirements change. Common methods of regression testing
include rerunning previously run tests and checking whether previously fixed faults have
re-emerged.

 "One of the main reasons for regression testing is that it's often extremely difficult for a
programmer to figure out how a change in one part of the software will echo in other
parts of the software."

Regression Testing – What to Test?

Since Regression Testing tends to verify the software application after a change has been made
everything that may be impacted by the change should be tested during Regression Testing.
Generally the following areas are covered during Regression Testing:

- Any functionality that was addressed by the change


- Original Functionality of the system

- Performance of the System after the change was introduced

System Testing: Why? What? & How?

Introduction:

‘Unit testing’ focuses on testing each unit of the code.

‘Integration testing’ focuses on testing the integration of “units of code” or components.


Each level of testing builds on the previous level.

‘System Testing’ is the next level of testing. It focuses on testing the system as a whole.

This article attempts to take a close look at the System Testing Process and analyze:
Why System Testing is done? What are the necessary steps to perform System Testing? How to
make it successful?
How does System Testing fit into the Software


Development Life Cycle?

In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the individual
components are working OK. The ‘Integration testing’ focuses on successful integration of all
the individual pieces of software (components or units of code).

Once the components are integrated, the system as a whole needs to be rigorously tested to
ensure that it meets the Quality Standards.

Thus the System testing builds on the previous levels of testing namely unit testing and
Integration Testing.

Usually a dedicated testing team is responsible for doing ‘System Testing’.

Why System Testing is important?

System Testing is a crucial step in Quality Management Process.

........- In the Software Development Life cycle System Testing is the first level where
...........the System is tested as a whole
........- The System is tested to verify if it meets the functional and technical
...........requirements
........- The application/System is tested in an environment that closely resembles the
...........production environment where the application will be finally deployed
........- The System Testing enables us to test, verify and validate both the Business
...........requirements as well as the Application Architecture

Prerequisites for System Testing:

The prerequisites for System Testing are:


........- All the components should have been successfully Unit Tested
........- All the components should have been successfully integrated and Integration
..........Testing should be completed
........- An Environment closely resembling the production environment should be
...........created.

When necessary, several iterations of System Testing are done in multiple environments.
Steps needed to do System Testing:

The following steps are important to perform System Testing:


........Step 1: Create a System Test Plan
........Step 2: Create Test Cases
........Step 3: Carefully build the data used as input for System Testing
........Step 4: If applicable, create scripts to
..................- build the environment and
..................- automate execution of the test cases
........Step 5: Execute the test cases
........Step 6: Fix the bugs, if any, and re-test the code
........Step 7: Repeat the test cycle as necessary

What is a ‘System Test Plan’?

As you may have read in the other articles in the testing series, this document typically describes
the following:
.........- The Testing Goals
.........- The key areas to be focused on while testing
.........- The Testing Deliverables
.........- How the tests will be carried out
.........- The list of things to be Tested
.........- Roles and Responsibilities
.........- Prerequisites to begin Testing
.........- Test Environment
.........- Assumptions
.........- What to do after a test is successfully carried out
.........- What to do if test fails
.........- Glossary

How to write a System Test Case?

A Test Case describes exactly how the test should be carried out.

The System test cases help us verify and validate the system.
The System Test Cases are written such that:
........- They cover all the use cases and scenarios
........- The Test cases validate the technical Requirements and Specifications
........- The Test cases verify if the application/System meet the Business & Functional
...........Requirements specified
........- The Test cases may also verify if the System meets the performance standards

Since a dedicated test team may execute the test cases, it is necessary that the System Test Cases be
written in detail. Detailed test cases help the test executors do the testing as specified without any ambiguity.

The format of the System Test Cases may be like all other Test cases as illustrated below:
 Test Case ID
 Test Case Description:

o What to Test?

o How to Test?

 Input Data

 Expected Result

 Actual Result

Sample Test Case Format:

Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail
...          | ...           | ...          | ...        | ...             | ...           | ...

Additionally the following information may also be captured:


........a) Test Suite Name
........b) Tested By
........c) Date
........d) Test Iteration (The Test Cases may be executed one or more times)
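
As a rough illustration, the fields above could be captured in a small script or test-management utility like
this (the class and the sample values are invented, not any particular tool's format):

```python
# Minimal representation of a system test case record (sample values are invented).
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemTestCase:
    test_case_id: str
    what_to_test: str
    how_to_test: str
    input_data: str
    expected_result: str
    actual_result: Optional[str] = None
    passed: Optional[bool] = None
    test_suite_name: str = ""
    tested_by: str = ""
    date: str = ""
    iteration: int = 1

tc = SystemTestCase(
    test_case_id="TC01",
    what_to_test="Login with valid credentials",
    how_to_test="Submit the login form with a registered user",
    input_data="user=demo, password=demo123",
    expected_result="User lands on the home page",
)
tc.actual_result = "User lands on the home page"
tc.passed = tc.actual_result == tc.expected_result
print(tc.test_case_id, "PASS" if tc.passed else "FAIL")
```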

Working towards Effective Systems Testing:

There are various factors that affect success of System Testing:

1) Test Coverage: System Testing will be effective only to the extent of the coverage of Test
Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered by the test
cases are sufficient. The Test cases should “cover” all scenarios, use cases, Business
Requirements, Technical Requirements, and Performance Requirements. The test cases should
enable us to verify and validate that the system/application meets the project goals and
specifications.

2) Defect Tracking: The defects found during the process of testing should be tracked.
Subsequent iterations of test cases verify if the defects have been fixed.
3) Test Execution: The Test cases should be executed in the manner specified. Failure to do so
results in improper Test Results.

4) Build Process Automation: A Lot of errors occur due to an improper build. ‘Build’ is a
compilation of the various components that make the application deployed in the appropriate
environment. The Test results will not be accurate if the application is not ‘built’ correctly or if
the environment is not set up as specified. Automating this process may help reduce manual
errors.

5) Test Automation: Automating the Test process could help us in many ways:

a. The test can be repeated with fewer errors of omission or oversight

b. Some scenarios can be simulated if the tests are automated for instance
simulating a large number of users or simulating increasingly large amounts
of input/output data

6) Documentation: Proper Documentation helps keep track of Tests executed. It also helps
create a knowledge base for current and future projects. Appropriate metrics/Statistics can be
captured to validate or verify the efficiency of the technical design /architecture.

Performance: What is our peak processing capability (CPU/DB/Memory within tolerance and
steady).

Load: When does our peak processing capability begin to digress (CPU/DB/Memory begins
to run out).

Stress: When does our processing capability digress below our expectations.
(CPU/DB/Memory gone...)

Management, developers and marketing usually understand this whole area simply as "performance
testing".

Performance vs. load vs. stress testing

Here's a good interview question for a tester: how do you define performance/load/stress testing?

Many times people use these terms interchangeably, but they have in fact quite different meanings.

This post is a quick review of these concepts, based on my own experience, but also using definitions

from testing literature -- in particular: "Testing computer software" by Kaner et al, "Software testing

techniques" by Loveland et al, and "Testing applications on the Web" by Nguyen et al.

Update July 7th, 2005


From the referrer logs I see that this post comes up fairly often in Google searches. I'm updating it

with a link to a later post I wrote called 'More on performance vs. load testing'.

Performance testing

The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a

baseline for future regression testing. To conduct performance testing is to engage in a carefully

controlled process of measurement and analysis. Ideally, the software under test is already stable

enough so that this process can proceed smoothly.

A clearly defined set of expectations is essential for meaningful performance testing. If you don't know

where you want to go in terms of the performance of the system, then it matters little which direction

you take (remember Alice and the Cheshire Cat?). For example, for a Web application, you need to

know at least two things:

 expected load in terms of concurrent users or HTTP connections

 acceptable response time

Once you know where you want to be, you can start on your way there by constantly increasing the

load on the system while looking for bottlenecks. To take again the example of a Web application,

these bottlenecks can exist at multiple levels, and to pinpoint them you can use a variety of tools:

 at the application level, developers can use profilers to spot inefficiencies in their code (for

example poor search algorithms)

 at the database level, developers and DBAs can use database-specific profilers and query

optimizers

 at the operating system level, system engineers can use utilities such as top, vmstat, iostat

(on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources such as CPU,

memory, swap, disk I/O; specialized kernel monitoring software can also be used

 at the network level, network engineers can use packet sniffers such as tcpdump, network

protocol analyzers such as ethereal, and various utilities such as netstat, MRTG, ntop, mii-tool

From a testing point of view, the activities described above all take a white-box approach, where the

system is inspected and monitored "from the inside out" and from a variety of angles. Measurements
are taken and analyzed, and as a result, tuning is done.

However, testers also take a black-box approach in running the load tests against the system under

test. For a Web application, testers will use tools that simulate concurrent users/HTTP connections and

measure response times. Some lightweight open source tools I've used in the past for this purpose are

ab, siege, httperf. A more heavyweight tool I haven't used yet is OpenSTA. I also haven't used The

Grinder yet, but it is high on my TODO list.

ab: the Apache HTTP server benchmarking tool.
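
For readers who want to script a quick-and-dirty load run themselves, the hedged Python sketch below
fires a fixed number of concurrent HTTP requests and reports response times. The URL, concurrency and
request count are placeholders; a serious load test would still use one of the dedicated tools above.

```python
# Quick-and-dirty concurrent load sketch (URL and load figures are placeholders).
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # point at the system under test
CONCURRENCY = 20
TOTAL_REQUESTS = 200

def timed_request(_):
    start = time.time()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    durations = sorted(pool.map(timed_request, range(TOTAL_REQUESTS)))

avg = sum(durations) / len(durations)
p95 = durations[int(0.95 * len(durations)) - 1]
print(f"avg={avg*1000:.0f} ms  p95={p95*1000:.0f} ms  max={durations[-1]*1000:.0f} ms")
```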

When the results of the load test indicate that performance of the system does not meet its expected

goals, it is time for tuning, starting with the application and the database. You want to make sure your

code runs as efficiently as possible and your database is optimized on a given OS/hardware

configuration. TDD practitioners will find very useful in this context a framework such as Mike Clark's

jUnitPerf, which enhances existing unit test code with load test and timed test functionality. Once a

particular function or method has been profiled and tuned, developers can then wrap its unit tests in

jUnitPerf and ensure that it meets performance requirements of load and timing. Mike Clark calls this

"continuous performance testing". I should also mention that I've done an initial port of jUnitPerf to

Python -- I called it pyUnitPerf.
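
In the same spirit (though using neither the jUnitPerf nor the pyUnitPerf API), a timed assertion can be
bolted onto an ordinary unit test; the function under test and the 50 ms budget below are arbitrary
examples.

```python
# A timed unit test in the spirit of jUnitPerf/pyUnitPerf (not their actual APIs).
import time
import unittest

def search(items, target):           # trivial stand-in for the function under test
    return target in items

class TimedSearchTest(unittest.TestCase):
    def test_search_meets_time_budget(self):
        data = list(range(100_000))
        start = time.perf_counter()
        self.assertTrue(search(data, 99_999))
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 0.05, "search exceeded its 50 ms budget")

if __name__ == "__main__":
    unittest.main()
```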

If, after tuning the application and the database, the system still doesn't meet its expected goals in

terms of performance, a wide array of tuning procedures is available at all the levels discussed

before. Here are some examples of things you can do to enhance the performance of a Web

application outside of the application code per se:

 Use Web cache mechanisms, such as the one provided by Squid

 Publish highly-requested Web pages statically, so that they don't hit the database

 Scale the Web server farm horizontally via load balancing

 Scale the database servers horizontally and split them into read/write servers and read-only

servers, then load balance the read-only servers

 Scale the Web and database servers vertically, by adding more hardware resources (CPU,

RAM, disks)

 Increase the available network bandwidth


Performance tuning can sometimes be more art than science, due to the sheer complexity of the

systems involved in a modern Web application. Care must be taken to modify one variable at a time

and redo the measurements, otherwise multiple changes can have subtle interactions that are hard to

qualify and repeat.

In a standard test environment such as a test lab, it will not always be possible to replicate the

production server configuration. In such cases, a staging environment is used which is a subset of the

production environment. The expected performance of the system needs to be scaled down

accordingly.

The cycle "run load test->measure performance->tune system" is repeated until the system under

test achieves the expected levels of performance. At this point, testers have a baseline for how the

system behaves under normal conditions. This baseline can then be used in regression tests to gauge

how well a new version of the software performs.

Another common goal of performance testing is to establish benchmark numbers for the system under

test. There are many industry-standard benchmarks such as the ones published by TPC, and many

hardware/software vendors will fine-tune their systems in such ways as to obtain a high ranking in the

TPC top tens. It is common knowledge that one needs to be wary of any performance claims that do

not include a detailed specification of all the hardware and software configurations that were used in

that particular test.

Load testing

We have already seen load testing as part of the process of performance testing and tuning. In that

context, it meant constantly increasing the load on the system via automated tools. For a Web

application, the load is defined in terms of concurrent users or HTTP connections.

In the testing literature, the term "load testing" is usually defined as the process of exercising the

system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called

volume testing, or longevity/endurance testing.

Examples of volume testing:

 testing a word processor by editing a very large document

 testing a printer by sending it a very large job


 testing a mail server with thousands of users mailboxes

 a specific case of volume testing is zero-volume testing, where the system is fed empty tasks

Examples of longevity/endurance testing:

 testing a client-server application by running the client in a loop against the server over an

extended period of time

Goals of load testing:

 expose bugs that do not surface in cursory testing, such as memory management bugs,

memory leaks, buffer overflows, etc.

 ensure that the application meets the performance baseline established during performance

testing. This is done by running regression tests against the application at a specified maximum

load.

Although performance testing and load testing can seem similar, their goals are different. On one

hand, performance testing uses load testing techniques and tools for measurement and benchmarking

purposes and uses various load levels. On the other hand, load testing operates at a predefined load

level, usually the highest load that the system can accept while still functioning properly. Note that

load testing does not aim to break the system by overwhelming it, but instead tries to keep the

system constantly humming like a well-oiled machine.

In the context of load testing, I want to emphasize the extreme importance of having large datasets
available for testing. In my experience, many important bugs simply do not surface unless you deal

with very large entities such as thousands of users in repositories such as LDAP/NIS/Active Directory,

thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies

on file systems, etc. Testers obviously need automated tools to generate these large data sets, but

fortunately any good scripting language worth its salt will do the job.
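
As one hedged example of such scripting, the sketch below writes a large, throwaway CSV of test users
that could be fed into a directory or mail server provisioning tool; the field layout and sizes are invented
for illustration.

```python
# Generate a large throwaway dataset of test users (field layout is invented).
import csv
import random

NUM_USERS = 100_000   # scale up or down as needed

with open("test_users.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["uid", "full_name", "email", "quota_mb"])
    for i in range(NUM_USERS):
        uid = f"user{i:06d}"
        writer.writerow([uid, f"Test User {i}", f"{uid}@example.test",
                         random.choice([256, 512, 1024])])

print(f"wrote {NUM_USERS} users to test_users.csv")
```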

Stress testing

Stress testing tries to break the system under test by overwhelming its resources or by taking

resources away from it (in which case it is sometimes called negative testing). The main purpose

behind this madness is to make sure that the system fails and recovers gracefully -- this quality is

known as recoverability.
Where performance testing demands a controlled environment and repeatable measurements, stress

testing joyfully induces chaos and unpredictability. To take again the example of a Web application,

here are some ways in which stress can be applied to the system:

 double the baseline number for concurrent users/HTTP connections

 randomly shut down and restart ports on the network switches/routers that connect the

servers (via SNMP commands for example)

 take the database offline, then restart it

 rebuild a RAID array while the system is running

 run processes that consume resources (CPU, memory, disk, network) on the Web and

database servers

I'm sure devious testers can enhance this list with their favorite ways of breaking systems. However,

stress testing does not break the system purely for the pleasure of breaking it, but instead it allows

testers to observe how the system reacts to failure. Does it save its state or does it crash suddenly?

Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last

good state? Does it print out meaningful error messages to the user, or does it merely display

incomprehensible hex codes? Is the security of the system compromised because of unexpected

failures? And the list goes on.

Conclusion

I am aware that I only scratched the surface in terms of issues, tools and techniques that deserve to

be mentioned in the context of performance, load and stress testing. I personally find the topic of

performance testing and tuning particularly rich and interesting, and I intend to post more articles on

this subject in the future.

More on performance vs. load testing

I recently got some comments/questions related to my previous blog entry on performance vs. load

vs. stress testing. Many people are still confused as to exactly what the difference is between

performance and load testing. I've been thinking more about it and I'd like to propose the following

question as a litmus test to distinguish between these two types of testing: are you actively profiling

your application code and/or monitoring the server(s) running your application? If the answer is yes,

then you're engaged in performance testing. If the answer is no, then what you're doing is load

testing.

Another way to look at it is to see whether you're doing more of a white-box type testing as opposed

to black-box testing. In the white-box approach, testers, developers, system administrators and DBAs

work together in order to instrument the application code and the database queries (via specialized

profilers for example), and the hardware/operating system of the server(s) running the application

and the database (via monitoring tools such as vmstat, iostat, top or Windows PerfMon). All these

activities belong to performance testing.

The black box approach is to run client load tools against the application in order to measure its

responsiveness. Such tools range from lightweight, command-line driven tools such as httperf,

openload, siege, Apache Flood, to more heavy duty tools such as OpenSTA, The Grinder, JMeter. This

type of testing doesn't look at the internal behavior of the application, nor does it monitor the

hardware/OS resources on the server(s) hosting the application. If this sounds like the type of testing

you're doing, then I call it load testing.

In practice though the 2 terms are often used interchangeably, and I am as guilty as anyone else of

doing this, since I called one of my recent blog entries "HTTP performance testing with httperf,

autobench and openload" instead of calling it more precisely "HTTP load testing". I didn't have access

to the application code or the servers hosting the applications I tested, so I wasn't really doing

performance testing, only load testing.

I think part of the confusion is that no matter how you look at these two types of testing, they have

one common element: the load testing part. Even when you're profiling the application and monitoring

the servers (hence doing performance testing), you still need to run load tools against the application,

so from that perspective you're doing load testing.

As far as I'm concerned, these definitions don't have much value in and of themselves. What matters

most is to have a well-established procedure for tuning the application and the servers so that you can

meet your users' or your business customers' requirements. This procedure will use elements of all the

types of testing mentioned here and in my previous entry: load, performance and stress testing.

Here's one example of such a procedure. Let's say you're developing a Web application with a

database back-end that needs to support 100 concurrent users, with a response time of less than 3

seconds. How would you go about testing your application in order to make sure these requirements

are met?

1. Start with 1 Web/Application server connected to 1 Database server. If you can, put both servers

behind a firewall, and if you're thinking about doing load balancing down the road, put the Web server

behind the load balancer. This way you'll have one of each of the devices that you'll use in a real

production environment.

2. Run a load test against the Web server, starting with 10 concurrent users, each user sending a total

of 1000 requests to the server. Step up the number of users in increments of 10, until you reach 100

users.
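As an illustration of this step, here is a minimal Python sketch of such a stepped load test. The staging URL is hypothetical and the user/request counts simply mirror the numbers above; a dedicated load tool (httperf, OpenSTA, The Grinder, JMeter) is a better fit for serious work, but the sketch shows the shape of the test.

```python
#!/usr/bin/env python3
"""Minimal sketch of the stepped load test described in step 2.

TARGET_URL is a hypothetical staging address; users ramp from 10 to 100
in steps of 10, each sending 1000 sequential requests, as above.
"""

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://staging.example.com/"   # hypothetical staging URL
REQUESTS_PER_USER = 1000


def one_user(n_requests):
    """Simulate one user sending n_requests sequential GETs."""
    timings = []
    for _ in range(n_requests):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings


def run_step(users):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_user, [REQUESTS_PER_USER] * users))
    elapsed = time.perf_counter() - start
    all_times = [t for user_times in results for t in user_times]
    avg = sum(all_times) / len(all_times)
    rate = len(all_times) / elapsed
    print(f"{users:3d} users: avg response {avg:.3f}s, {rate:.1f} req/s")


if __name__ == "__main__":
    for users in range(10, 101, 10):   # 10, 20, ... 100 concurrent users
        run_step(users)
```

The reply rate printed at each step is the number to watch for the leveling-off point discussed in step 4.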

3. While you're blasting the Web server, profile your application and your database to see if there are

any hot spots in your code/SQL queries/stored procedures that you need to optimize. I realize I'm

glossing over important details here, but this step is obviously highly dependent on your particular

application.

Also monitor both servers (Web/App and Database) via command line utilities mentioned before (top,

vmstat, iostat, netstat, Windows PerfMon). These utilities will let you know what's going on with the

servers in terms of hardware resources. Also monitor the firewall and the load balancer (many times

you can do this via SNMP) -- but these devices are not likely to be a bottleneck at this level, since

they usually can deal with thousands of connections before they hit a limit, assuming they're

hardware-based and not software-based.

This is one of the most important steps in the whole procedure. It's not easy to make sense of the

output of these monitoring tools; you need somebody who has a lot of experience in system/network

architecture and administration. On Sun/Solaris platforms, there is a tool called the SE Performance

Toolkit that tries to alleviate this task via built-in heuristics that kick in when certain thresholds are

reached and tell you exactly what resource is being taxed.
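As a small illustration of the monitoring half of this step, the following Python sketch timestamps vmstat output during a load run so it can later be correlated with the load tool's timeline. It assumes a Unix-like host with vmstat on the PATH; the 5-second interval and the log file name are arbitrary choices.

```python
#!/usr/bin/env python3
"""Minimal sketch: capture timestamped vmstat samples during a load run."""

import subprocess
import sys
import time

INTERVAL = 5                      # seconds between samples (arbitrary)
LOGFILE = "vmstat_during_load.log"


def main():
    # "vmstat 5" prints one line of CPU/memory/IO counters every 5 seconds
    # until interrupted; each line is prefixed with a timestamp so it can
    # be matched against the load tool's own timeline afterwards.
    proc = subprocess.Popen(
        ["vmstat", str(INTERVAL)], stdout=subprocess.PIPE, text=True
    )
    try:
        with open(LOGFILE, "w") as log:
            for line in proc.stdout:
                stamp = time.strftime("%Y-%m-%d %H:%M:%S")
                log.write(f"{stamp} {line}")
                log.flush()
    except KeyboardInterrupt:
        proc.terminate()
        sys.exit(0)


if __name__ == "__main__":
    main()
```

The same wrapper works for iostat or sar; on Windows, PerfMon can log counters to a file directly.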

4. Let's say your Web server's reply rate starts to level off around 50 users. Now you have a

repeatable condition that you know causes problems. All the profiling and monitoring you've done in

step 3 should have already given you a good idea about hot spots in your application, about SQL

queries that are not optimized properly, about resource status at the hardware/OS level.

At this point, the developers need to take back the profiling measurements and tune the code and the

database queries. The system administrators can also increase server performance simply by throwing

more hardware at the servers -- especially more RAM at the Web/App server in my experience, the

more so if it's Java-based.

5. Let's say the application/database code, as well as the hardware/OS environment have been tuned

to the best of everybody's abilities. You re-run the load test from step 2 and now you're at 75

concurrent users before performance starts to degrade.

At this point, there's not much you can do with the existing setup. It's time to think about scaling the

system horizontally, by adding other Web servers in a load-balanced Web server farm, or adding other

database servers. Or maybe do content caching, for example with Apache mod_cache. Or maybe

add an external caching server such as Squid.

One very important product of this whole procedure is that you now have a baseline number for your

application for this given "staging" hardware environment. You can use the staging setup for nightly

performance testing runs that will tell you whether changes in your application/database code caused

an increase or a decrease in performance.
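A minimal sketch of such a nightly baseline check is shown below. The baseline file format, the 10% tolerance and the way the nightly average response time is obtained are all assumptions for illustration; feed it the number produced by whatever load tool you actually run.

```python
#!/usr/bin/env python3
"""Minimal sketch of a nightly check against the staging baseline."""

import json
import sys

BASELINE_FILE = "performance_baseline.json"   # e.g. {"avg_response_s": 1.8}
TOLERANCE = 0.10                              # allow 10% degradation


def check(nightly_avg_response_s):
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["avg_response_s"]

    limit = baseline * (1 + TOLERANCE)
    if nightly_avg_response_s > limit:
        print(f"REGRESSION: {nightly_avg_response_s:.2f}s > "
              f"allowed {limit:.2f}s (baseline {baseline:.2f}s)")
        return 1
    print(f"OK: {nightly_avg_response_s:.2f}s within "
          f"{TOLERANCE:.0%} of baseline {baseline:.2f}s")
    return 0


if __name__ == "__main__":
    # The nightly job passes in the average response time it measured,
    # e.g.  python baseline_check.py 2.1
    sys.exit(check(float(sys.argv[1])))
```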

6. Repeat above steps in a "real" production environment before you actually launch your application.

All this discussion assumed you want to get performance/benchmarking numbers for your application.

If you want to actually discover bugs and to see if your application fails and recovers gracefully, you

need to do stress testing. Blast your Web server with double the number of users for example. Unplug

network cables randomly (or shut down/restart switch ports via SNMP). Take out a disk from a RAID

array. That kind of thing.

The conclusion? At the end of the day, it doesn't really matter what you call your testing, as long as

you help your team deliver what it promised in terms of application functionality and performance.

Performance testing in particular is more art than science, and many times the only way to make

progress in optimizing and tuning the application and its environment is by trial-and-error and

perseverance. Having lots of excellent open source tools also helps a lot.

5. Test automation - Advantages


Best Practices in Automated Testing
This article covers the Case for Automated Testing, Why Automate the Testing Process, Using Testing Effectively, Reducing Testing Costs, Replicating Testing Across Different Platforms, Greater Application Coverage, Results Reporting, Understanding the Testing Process, Typical Testing Steps, Identifying Tests Requiring Automation, Task Automation and Test Set-Up, and Who Should Be Testing.

The Case for Automated Testing

Today, rigorous application testing is a critical part of virtually all software development projects. As
more organizations develop mission-critical systems to support their business activities, the need for testing methods that support business objectives is greatly increased. It is necessary to ensure that these systems are reliable, built according to specification and able to support business processes.
Many internal and external factors are forcing organizations to ensure a high level of software quality and
reliability.

Why Automate the Testing Process?

In the past, most software tests were performed using manual methods. This required a large staff of
test personnel to perform expensive and time-consuming manual test procedures. Owing to the size
and complexity of today’s advanced software applications, manual testing is no longer a viable option
for most testing situations.

Using Testing Effectively

By definition, testing is a repetitive activity. The methods that are employed to carry out testing
(manual or automated) remain repetitious throughout the development life cycle. Automation of
testing processes allows machines to complete the tedious, repetitive work while human personnel
perform other tasks. Automation eliminates the required “think time” or “read time” necessary for the
manual interpretation of when or where to click the mouse. An automated test executes the next
operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual. Automated tests also perform load/stress testing very effectively.

Reducing Testing Costs

The cost of performing manual testing is prohibitive when compared to automated methods. The
reason is that computers can execute instructions many times faster and with fewer errors than
individuals. Many automated testing tools can replicate the activity of a large number of users (and
their associated transactions) using a single computer. Therefore, load/stress testing using automated
methods requires only a fraction of the computer hardware that would be necessary to complete a
manual test.

Replicating testing across different platforms

Automation allows the testing organization to perform consistent and repeatable tests. When
applications need to be deployed across different hardware or software platforms, standard or
benchmark tests can be created and repeated on target platforms to ensure that new platforms
operate consistently.

Greater Application Coverage

The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software.

Results Reporting

Full-featured automated testing systems also produce convenient test reporting and analysis. These reports provide a
standardized measure of test status and results, thus allowing more accurate interpretation of testing outcomes.
Manual methods require the user to self-document test procedures and test results.

Understanding the Testing Process

The introduction of automated testing into the business environment involves far more than buying and installing an
automated testing tool.

Typical Testing Steps: Most software testing projects can be divided into general steps

Test Planning: This step determines which tests to run and when to run them.
Test Design: This step determines how the tests should be built and what level of quality is required.
Test Environment Preparation: Technical environment is established during this step.
Test Construction: At this step, test scripts are generated and test cases are developed.
Test Execution: This step is where the test scripts are executed according to the test plans.
Test evaluation: After the test is executed, the test results are compared to the expected results and evaluations can
be made about the quality of an application.

Identifying Tests Requiring Automation

Most, but not all, types of tests can be automated. Certain types of tests, such as user comprehension tests, tests that run only once and tests that require constant human intervention, are usually not worth the investment required to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.

High path frequency – Automated testing can be used to verify the performance of application paths that are used
with a high degree of frequency when the software is running in full production. Examples include: creating
customer records.

Critical Business Processes – Mission-critical processes are prime candidates for automated testing. Examples
include: financial month-end closings, production planning, sales order entry and other core activities. Any
application with a high degree of risk associated with a failure is a good candidate for test automation.

Repetitive Testing – If a testing procedure can be reused many times, it is also a prime candidate for automation

Applications with a Long Life Span – The longer an application is planned to be in production, the greater the benefits from automation.

Task Automation and Test Set-Up

In performing software testing, there are many tasks that need to be performed before or after the actual test. For
example, if a test needs to be executed to create sales orders against current inventory, goods need to be in
inventory. The tasks associated with placing items in inventory can be automated so that the test can run repeatedly.
Additionally, highly repetitive tasks not associated with testing can be automated utilizing the same approach.

Who Should Be Testing?

There is no clear consensus in the testing community about which group within an organization should be
responsible for performing the testing function. It depends on the situation prevailing in the organization.

Automated Testing Advantages, Disadvantages and Guidelines

This article starts with a brief Introduction to Automated Testing, the Different Methods in Automated Testing, the Benefits of Automated Testing and the guidelines that automated testers must follow to get the benefits of automation.

Advantages of Automated Testing

Introduction:

"Automated Testing" is automating the manual testing process currently in use. This requires that a formalized
"manual testing process", currently exists in the company or organization.
Automation is the use of strategies, tools and artifacts that augment or reduce the need of manual or human
involvement or interaction in unskilled, repetitive or redundant tasks.

Minimally, such a process includes:

 Detailed test cases, including predictable "expected results", which have been developed from Business Functional
Specifications and Design documentation
 A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases
are able to be repeated each time there are modifications made to the application.

The following types of testing can be automated


 Functional - testing that operations perform as expected.
 Regression - testing that the behavior of the system has not changed.

 Exception or Negative - forcing error conditions in the system.

 Stress - determining the absolute capacities of the application and operational infrastructure.

 Performance - providing assurance that the performance of the system will be adequate for both batch runs and online
transactions in relation to business projections and requirements.

 Load - determining the points at which the capacity and performance of the system become degraded to the situation
that hardware or software upgrades would be required.

Benefits of Automated Testing

Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error

Repeatable: You can test how the software reacts under repeated execution of the same operations.
Programmable: You can program sophisticated tests that bring out hidden information from the application.
Comprehensive: You can build a suite of tests that covers every feature in your application.

Reusable: You can reuse tests on different versions of an application, even if the user interface changes.

Better Quality Software: Because you can run more tests in less time with fewer resources

Fast: Automated Tools run tests significantly faster than human users.

Cost Reduction: The number of resources needed for regression testing is reduced.

These benefits can only be realized by choosing the right tools for the job and targeting the right areas of the organization in which to deploy them. The right areas where automation fits must be chosen.

The following areas must be automated first

1. Highly redundant tasks or scenarios


2. Repetitive tasks that are boring or tend to cause human error
3. Well-developed and well-understood use cases or scenarios
4. Relatively stable areas of the application, rather than volatile ones

Automated testers must follow the following guidelines to get the benefits of
automation:

• Concise: As simple as possible and no simpler.

• Self-Checking: Test reports its own results; needs no human interpretation.



• Repeatable: Test can be run many times in a row without human intervention.

• Robust: Test produces same result now and forever. Tests are not affected by changes in the external environment.

• Sufficient: Tests verify all the requirements of the software being tested.

• Necessary: Everything in each test contributes to the specification of desired behavior.

• Clear: Every statement is easy to understand.

• Efficient: Tests run in a reasonable amount of time.

• Specific: Each test failure points to a specific piece of broken functionality; unit test failures provide "defect
triangulation".

• Independent: Each test can be run by itself or in a suite with an arbitrary set of other tests in any order.

• Maintainable: Tests should be easy to understand and modify and extend.

• Traceable: To and from the code it tests and to and from the requirements.
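To make a few of these guidelines concrete, here is a minimal Python unittest sketch. The apply_discount() function is a hypothetical unit under test invented for the example; the point is that each test sets up its own data, asserts its own expected result and relies on no external state, so it is self-checking, repeatable and independent.

```python
import unittest


def apply_discount(price, rate):
    """Trivial stand-in for the real unit under test (hypothetical)."""
    return price * (1.0 - rate)


class DiscountTests(unittest.TestCase):
    def setUp(self):
        self.price = 100.0          # fixed input -> repeatable result

    def test_ten_percent_discount(self):
        # Self-checking: the expected result is asserted, not eyeballed.
        self.assertAlmostEqual(apply_discount(self.price, 0.10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        # Independent: no reliance on the other test or on external data.
        self.assertAlmostEqual(apply_discount(self.price, 0.0), 100.0)


if __name__ == "__main__":
    unittest.main()
```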

Disadvantages of Automation Testing

Though automation testing has many advantages, it has its own disadvantages too. Some of the disadvantages are:
• Proficiency is required to write the automation test scripts.
• Debugging the test scripts is a major issue; an error in a test script can sometimes have serious consequences.
• Test maintenance is costly in the case of record-and-playback methods. Even when a minor change occurs in the GUI, the test script has to be re-recorded or replaced by a new test script.
• Maintenance of test data files is difficult if the test script tests many screens.

Some of the above disadvantages often erode the benefit gained from the automated scripts. Though automation testing has its pros and cons, it is adopted widely all over the world.

6. Software Quality Management


This article gives an overview of Software Quality Management and various processes that are a part of
Software Quality Management. Software Quality is a highly overused term and it may mean different
things to different people. You will learn What is Software Quality Management?, What does it take to
Manage Software Quality?, Quality Planning, Quality Assurance, Quality Control, Importance of
Documentation and What is Defect Tracking?

The ISO 8402 definition of quality:

“Totality of characteristics of an entity that bears on its ability to satisfy stated and implied needs.”
This means that the Software product delivered should be as per the requirements defined. We now
examine a few more terms used in association with Software Quality.
Quality Planning:
In the Planning Process we determine the standards that are relevant for the Software Product, the

Organization and the means to achieve them.


Quality Assurance:
Once the standards are defined and we start building the product, it is very important to have processes that evaluate the project performance and aim to assure that the Quality standards are being followed and that the final product will be in compliance.
Quality Control:
Once the software components are built, the results are monitored to determine if they comply with the standards. The data collected helps in measuring performance trends and, as needed, in identifying defective pieces of code.

What is Software Quality Management?

Software Quality Management, simply stated, comprises the processes that ensure that the Software Project will reach its goals. In other words, the Software Project will meet the client's expectations.

The key processes of Software Quality Management fall into the following three categories:

1) Quality Planning
2) Quality Assurance
3) Quality Control

What does it take to Manage Software Quality?

Software Quality Management comprises the Quality Planning, Quality Assurance and Quality Control processes. We shall now take a closer look at each of them.

1) Quality Planning

Quality Planning is the most important step in Software Quality Management. Proper planning ensures
that the remaining Quality processes make sense and achieve the desired results. The starting point for the
Planning process is the standards followed by the Organization. This is expressed in the Quality Policy
and Documentation defining the Organization-wide standards. Sometimes additional industry standards
relevant to the Software Project may be referred to as needed. Using these as inputs the Standards for the
specific project are decided. The scope of the effort is also clearly defined. The inputs for Planning are summarized as follows:

a. Company’s Quality Policy


b. Organization Standards
c. Relevant Industry Standards
d. Regulations
e. Scope of Work
f. Project Requirements

Using these as Inputs the Quality Planning process creates a plan to ensure that standards agreed upon are
met. Hence the outputs of the Quality Planning process are:

a. Standards defined for the Project


b. Quality Plan

To create these outputs, namely the Quality Plan, various tools and techniques are used. These tools and techniques are large topics in their own right, and quality experts dedicate years of research to them. We briefly introduce these tools and techniques in this article.

a. Benchmarking: The proposed product standards can be decided using the existing performance
benchmarks of similar products that already exist in the market.

b. Design of Experiments: Using statistics we determine what factors influence the Quality or features of
the end product

c. Cost of Quality: This includes all the costs needed to achieve the required Quality levels. It includes
prevention costs, appraisal costs and failure costs.

d. Other tools: There are various other tools used in the Planning process such as Cause and Effect
Diagrams, System Flow Charts, Cost Benefit Analysis, etc.

All these help us to create a Quality Management Plan for the project.

2) Quality Assurance

The Input to the Quality Assurance Processes is the Quality Plan created during Planning.
Quality Audits and various other techniques are used to evaluate the performance of the project. This
helps us to ensure that the Project is following the Quality Management Plan. The tools and techniques
used in the Planning Process such as Design of Experiments, Cause and Effect Diagrams may also be
used here, as required.

3) Quality Control

Following are the inputs to the Quality Control Process:

- Quality Management Plan.


- Quality Standards defined for the Project
- Actual Observations and Measurements of the Work done or in Progress

The Quality Control Processes use various tools to study the Work done. If the Work done is found
unsatisfactory it may be sent back to the development team for fixes. Changes to the Development
process may be done if necessary.

If the work done meets the standards defined then the work done is accepted and released to the clients.

Importance of Documentation:

In all the Quality Management Processes special emphasis is put on documentation. Many software shops
fail to document the project at various levels. Consider a scenario where the Requirements of the
Software Project are not sufficiently documented. In this case it is quite possible that the client has a set of expectations and the tester may not know about them. Hence the testing team would not be able to test the
software developed for these expectations or requirements. This may lead to poor “Software Quality” as
the product does not meet the expectations.

Similarly consider a scenario where the development team does not document the installation
instructions. If a different person or a team is responsible for future installations they may end up making
mistakes during installation, thereby failing to deliver as promised.

Once again consider a scenario where a tester fails to document the test results after executing the test
cases. This may lead to confusion later. If there were an error, we would not be sure at what stage the error was introduced in the software: at the component level, when integrating it with another component, or due to the environment on a particular server, etc. Hence documentation is the key for future analysis and
all Quality Management efforts.

Steps:
In a typical Software Development Life Cycle the following steps are necessary for Quality Management:

1) Document the Requirements


2) Define and Document Quality Standards
3) Define and Document the Scope of Work
4) Document the Software Created and dependencies
5) Define and Document the Quality Management Plan
6) Define and Document the Test Strategy
7) Create and Document the Test Cases
8) Execute Test Cases and (log) Document the Results
9) Fix Defects and document the fixes
10) Quality Assurance audits the Documents and Test Logs

Various Software Tools have been developed for Quality Management. These Tools can help us track
Requirements and map Test Cases to the Requirements. They also help in Defect Tracking.

7. What is Defect Tracking?


This is very important to ensure the Quality of the end Product. As test cases are executed at various levels, any defects found in the Software being tested are logged and data is collected. The development team fixes these defects and documents how they were fixed. The testing team verifies whether each defect was really fixed and closes it. This information is very useful. Proper tracking ensures that all Defects were fixed. The information also helps us in future projects.

The Capability Maturity Model defines various levels of Organization based on the processes that they
follow.
Level 0
The following is true for “Level 0” Organizations: there are no processes, no tracking mechanisms and no plans. It is left to the developer or any person responsible for Quality to ensure that the product meets expectations.

Level 1 – Performed Informally


The following is true for “Level 1” Organizations: in such organizations, the teams typically work extra hard to achieve results. There are no tracking mechanisms or defined standards. The work gets done but is informal and not well documented.

Level 2 – Planned and Tracked


The following is true for “Level 2” Organizations -

There are processes within a team and the team can repeat them or follow the processes for all projects
that it handles.

However the process is not standardized throughout the Organization. All the teams within the
organization do not follow the same standard.

Level 3 – Well-Defined
In “Level 3” Organizations the processes are well defined and followed throughout the organization.

Level 4 – Quantitatively Controlled


In “Level 4” Organizations –

- The processes are well defined and followed throughout the organization
- The Goals are defined and the actual output is measured
- Metrics are collected and future performance can be predicted

Level 5 – Continuously Improving


“Level 5” Organizations have well-defined processes, which are measured, and the organization has a good understanding of how IT projects affect organizational goals.
The Organization is able to continuously improve its processes based on this understanding.

Severity and priority of defects

Severity Levels can be defined as follow:


S1 - Urgent/Showstopper. For example, a system crash or an error message forcing the user to close the window. The tester's ability to operate the system is totally (system down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.
S2 - Medium/Workaround. The incident affects an area of functionality, but there is a workaround that negates the impact on the business process, so the tester can go on with testing. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customers, or is intermittent.
S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way. These are incidents that are cosmetic in nature and have no or very low impact on business processes.

8. QA Plan - various sections explained


<Add details from NSW QA Plan>

9. Effective Software Testing


In this tutorial you will learn about Effective Software Testing: how we measure the ‘Effectiveness’ of Software Testing, Steps to Effective Software Testing, Coverage, and Test Planning and Process.

A 1994 study in the US revealed that only about “9% of software projects were successful”. A large number of projects, upon completion, do not have all the promised features, or they do not meet all the requirements that were defined when the project was kicked off.

It is an understatement to say that an increasing number of businesses depend on software for their day-to-day operations. Billions of dollars change hands every day with the help of commercial software. Many lives depend on the reliability of software, for example software running critical medical systems, controlling power plants, flying airplanes and so on.

Whether you are part of a team that is building a bookkeeping application or software that runs a power plant, you cannot afford to have less than reliable software. Unreliable software can severely hurt businesses and endanger lives, depending on the criticality of the application. Even the simplest application, if poorly written, can degrade the performance of your environment, such as the servers and the network, causing an unwanted mess.

To ensure software application reliability and project success, Software Testing plays a very crucial role. Everything can and should be tested:

 Test if all defined requirements are met


 Test the performance of the application

 Test each component

 Test the components integrated with each other

 Test the application end to end

 Test the application in various environments

 Test all the application paths

 Test all the scenarios and then test some more

What is Effective Software Testing?

How do we measure ‘Effectiveness’ of Software Testing?


The effectiveness of Testing can be measured if the goal and purpose of the testing effort is clearly
defined. Some of the typical Testing goals are:

 Testing in each phase of the Development cycle to ensure that the “bugs”(defects) are eliminated at the
earliest
 Testing to ensure no “bugs” creep through in the final product

 Testing to ensure the reliability of the software

 Above all testing to ensure that the user expectations are met

The effectiveness of testing can be measured with the degree of success in achieving the above goals.

Steps to Effective Software Testing:

Several factors influence the effectiveness of Software Testing Effort, which ultimately determine the
success of the Project.

A) Coverage:

The testing process and the test cases should cover

 All the scenarios that can occur when using the software application
 Each business requirement that was defined for the project

 Specific levels of testing should cover every line of code written for the application

There are various levels of testing which focus on different aspects of the software application. The often-quoted V model best explains this.

The various levels of testing in the V model are:

 Unit Testing
 Integration Testing

 System Testing

 User Acceptance Testing

The goal of each testing level is slightly different thereby ensuring the overall project reliability.

Each Level of testing should provide adequate test coverage.


Unit testing should ensure each and every line of code is tested
Integration Testing should ensure the components can be integrated and all the interfaces of each
component are working correctly
System Testing should cover all the “paths”/scenarios possible when using the system

The system testing is done in an environment that is similar to the production environment i.e. the
environment where the product will be finally deployed.

There are various types of System Testing possible which test the various aspects of the software
application.

B) Test Planning and Process:

To ensure effective testing, proper test planning is important.


An effective testing process will comprise the following steps:

 Test Strategy and Planning


 Review Test Strategy to ensure it is aligned with the Project Goals

 Design/Write Test Cases

 Review Test Cases to ensure proper Test Coverage

 Execute Test Cases

 Capture Test Results

 Track Defects

 Capture Relevant Metrics

 Analyze

Having followed the above steps for the various levels of testing, the product is rolled out.

It is not uncommon to see various “bugs”/Defects even after the product is released to production. An
effective Testing Strategy and Process helps to minimize or eliminate these defects. The extent to which it
eliminates these post-production defects (Design Defects/Coding Defects/etc) is a good measure of the
effectiveness of the Testing Strategy and Process.
As the saying goes - 'the proof of the pudding is in the eating'

Methodologies:

 VelociQ:
 LEAN: Lean is a philosophy that shortens the timeline between customer order and shipment by eliminating waste. By eliminating waste, quality is improved; production time and cost are reduced. Lean in the software industry is quite similar to Agile software development.

Tools used in LEAN :


Visual controls
Value stream mapping
CR productivity and turnaround time improvement exercise taken up during NORTEL
OME project :

o CR scrubs
o CR standup meetings
o CR effort analysis
o Value stream mapping
o Review request and testing in parallel
o CR fixing guidelines document circulated
o Module specific debugging tips documented and trained
o Database maintained with CRs fixed as of now

 QA links:

http://ramya-moorthy.blogspot.com/2007/07/tips-for-developing-effective.html

http://www.tutorialspoint.com/perl/perl_oo_perl.htm

http://www.bjnet.edu.cn/tech/book/perl/ch19.htm

10. FAQS:
1. What are the performance requirements of the system/server application?

In terms of response time, concurrent users, concurrent sessions, simultaneous users, operations etc

Workload: This refers to the user load a web application is subjected to under real-time user access or during the performance test, and the way the users are distributed between the various transaction flows. Normally, the web server log files of a site can be analyzed to learn about its workload (if the site is already in production). For web sites which are yet to go live for the first time, a workload model needs to be developed based on discussions with business analysts, application experts, etc. It is very important to know the workload of the application before conducting the performance test. Conducting a performance test without proper analysis of the workload might lead to misleading results.
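As an illustration, here is a minimal Python sketch that derives a rough workload mix from a production web server access log. It assumes an Apache/nginx common or combined log format; the log file name and the grouping by URL path are illustrative choices only.

```python
#!/usr/bin/env python3
"""Minimal sketch: rough workload mix from an access log (common/combined
log format assumed). Counts requests per URL path."""

from collections import Counter

LOGFILE = "access.log"            # hypothetical path to the production log


def main():
    paths = Counter()
    total = 0
    with open(LOGFILE) as f:
        for line in f:
            parts = line.split('"')
            if len(parts) < 2:
                continue                # skip malformed lines
            fields = parts[1].split()   # e.g. 'GET /checkout HTTP/1.1'
            if len(fields) < 2:
                continue
            paths[fields[1]] += 1
            total += 1

    if not total:
        print("No requests found")
        return
    print(f"{total} requests analyzed; top transaction flows:")
    for path, count in paths.most_common(10):
        print(f"  {path:<40} {count:>8}  ({count / total:.1%})")


if __name__ == "__main__":
    main()
```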

Baseline test: measure the performance for one user and then compare it with the performance for a larger number of users.

performance testing :

- what data to collect, what to look for, how to analyze

- which command does what?

- in case of website testing

vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.

netstat - Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships

lsof - list of open files

sar - system activity reporter.

What version of Solaris or Linux was used? (Red Hat Linux, kernel version)

When did you use it last?

In PCNL:

For each patch release, load & stress testing is performed to simulate a large number of clients and identify crashes, connection losses, etc.

What is the capacity it can handle?

How did you arrive at that number?

What else do you monitor?

OpenSTA -> what is this?

Used for performance benchmarking of websites

A distributed software testing architecture based on CORBA. OpenSTA is designed to be used by


Performance Testing Consultants or other technically proficient individuals. Using OpenSTA a user can
generate realistic heavy loads simulating the activity of hundreds to thousands of virtual users. This
capability is fully realized through OpenSTA's distributed testing architecture.

OpenSTA graphs both virtual user response times and resource utilization information from all Web
Servers, Application Servers, Database Servers and Operating Platforms under test, so that precise
performance measurements can be gathered during load tests and analysis on these measurements can
be performed.

OpenSTA is Open Source software licensed under the GNU General Public License.

Difference between process and thread:

Creating a new process requires new resources and a new address space, whereas a thread is created within the address space of its process, which not only saves space and resources but also makes threads easier to create and delete; many threads can exist within one process.
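A small Python illustration of this difference is sketched below: a change made by a thread is visible afterwards because the thread shares the process's address space, while a change made in a child process is not, because the child works on its own copy of the data.

```python
#!/usr/bin/env python3
"""Pure-stdlib sketch: threads share an address space, processes do not."""

import multiprocessing
import threading

shared = []          # lives in the parent process's address space


def append_item():
    shared.append("added by worker")


if __name__ == "__main__":
    # A thread runs inside the same address space, so its change is visible.
    t = threading.Thread(target=append_item)
    t.start()
    t.join()
    print("after thread:  ", shared)      # ['added by worker']

    # A child process works on its own copy of the data, so the parent's
    # list is unchanged when the child finishes.
    shared.clear()
    p = multiprocessing.Process(target=append_item)
    p.start()
    p.join()
    print("after process: ", shared)      # []
```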

JMeter is an Apache Jakarta project that can be used as a load testing tool for analyzing and
measuring the performance of a variety of services, with a focus on web applications.

JMeter can be used as a unit test tool for JDBC database connections, FTP, LDAP, Webservices,
JMS, HTTP and generic TCP connections. JMeter can also be configured as a monitor, although
this is typically considered an ad-hoc solution in lieu of advanced monitoring solutions.

JMeter supports variable parameterization, assertions (response validation), per thread cookies,
configuration Variables and a variety of reports.

JMeter architecture is based on plugins. Most of its "out of the box" features are implemented
with plugins. Off-site developers can easily extend JMeter with custom plugins.

 ISO, CMM and Six Sigma—the best of the industry's standards to enable quality—come together
in Wipro's integrated Quality System veloci-Q.
 Since veloci-Q is built on the sound foundations of these quality models, it means your projects
can run at reduced risk; improved productivity; fewer defects; on time, on budget delivery; and
with better visibility.
Few important links in veloci-Q are mentioned below:

 http://channelw.wipro.com/velociq/qs/policy/index.htm
 http://channelw.wipro.com/velociq/qs/procedures/index.htm
 http://channelw.wipro.com/velociq/qs/lcm/index.htm

 E-Cube – Enabling Excellence in Execution – is an initiative that the IS and SEPG groups have
embarked on to enable excellence in the way projects are executed in Wipro.
 E-Cube was launched with the objective of building an Enterprise Project Management
application tool that will provide the organization with:
a) Core project management functionality, e.g., managing resources, schedules, risks,
estimation, cost tracking, etc.

b) A more simplified and user-friendly quality-assurance framework than what iPAT


provides today

Agile software development is a conceptual framework for undertaking software engineering projects .

Some of the principles behind the Agile Manifesto are:

 Customer satisfaction by rapid, continuous delivery of useful software


 Working software is delivered frequently (weeks rather than months)
 Even late changes in requirements are welcomed.
 Close, daily, cooperation between business people and developers
 Face-to-face conversation is the best form of communication.

 Continuous attention to technical excellence and good design.


 Simplicity
 Regular adaptation to changing circumstances

Some of the Key lean manufacturing principles include:

 Perfect first-time quality


 Waste minimization
 Continuous improvement
 Flexibility

KAIZEN is one of the several tools available to an organization to eliminate wastes

 Process Improvement Proposal (PIP) is open to all employees in Wipro Technologies. It could be
any process related issue from a 'nice to have feature' to an interesting article with respect to
veloci-Q
 PIP is more specific to the Quality System.
 PIPs can be raised with respect to Process Models, Procedures, Project Forms, Guidelines,
Coding Standards.

SQA

 Ensures that all the projects in the vertical follow the processes defined in the Wipro Quality
System.
 Conducts training on the quality system.
 Facilitates sharing of knowledge and best practices among all the projects within the division.
 Plan, conduct and organize QIC meeting.
 SQA should respond to all the queries with respect to quality and process.

PDB

 Project Data Bank (PDB) in veloci-Q is a rich repository of quantified project metrics, measurements and learnings collected from closed projects across the organization
 In order to integrate knowledge collected from various projects, PDB has been merged with
KNet offering more powerful search and GUI.

PDB comprises of the following:

a) Historical Project Database (HPD)

b) WILL

c) WISE

MRM

 Management Review meeting (MRM) are organized in the organizational level or vertical level.
 The main purpose of conducting MRM is to review the quality system of the organization and
implement the changes if required.
 In an organizational level the MRM is conducted once every year and in the vertical level it is
organized once every six months.
 The MRM is conducted by Management representatives who are responsible for
implementation of the quality system in the organization.
 Management Representatives are appointed by the vice-chairman and president.

QIC

 Quality Improvement Councils (QIC) are organized in the vertical level and group level.
 The main purpose of conducting QIC is to review the quality system of the organization and
implement the changes if required.
 QIC are conducted every month in the vertical/group level.
 The QIC is conducted by Management representatives who are responsible for implementation
of the quality system in the organization.
 Both MRM and QIC are very essential in reviewing the quality system, norms and processes
followed by the organization.

Work Breakdown Structure (WBS)

Work Breakdown Structure is a bottom-up estimation technique. Activities carried out in the project are decomposed into the smallest tasks that are technically feasible at the time of estimation.
Process Improvement Proposal (PIP)

Any proposals for changes to the QS are raised through PIPs. This is also referred to as a Procedure Improvement Proposal. A PIP should be raised for implementing a piloted process into the QS.
Look Ahead Meeting (LAM)

LAMs are planned meetings and should be conducted at the beginning of a phase (e.g. start of plan,
design, Implementation and Testing phase) to identify the possible Defect Prevention activities in the
subsequent phase(s) that are about to start.

For resume:

Participated in proposal writings.

Good knowledge in Wipro’s Integrated Quality System – Veloci-Q.



Experience in development, maintenance, porting and testing projects.

Conducted Look Ahead Meetings (LAM). Participated in QIC meetings (Quality Improvements Council).

RE: In general, how do you see automation fitting into the overall process of testing?

--------------------------------------------------------------------------------

Automation can come into the picture only when the application has become stable. In the case of maintenance projects, where we work on continuous improvements and regression testing is demanded often, running the developed automation scripts will save our time.

-------------------------

FAQs

1. If the actual result doesn't match the expected result, what should we do?

2. What is the importance of requirements traceability in a product testing?

3. When is the best time for system testing?

4. What is a use case? What is the difference between test cases and use cases?

5. What is the difference between the test case and a test script

6. Describe the basic elements you put in a defect report.

7. How do you test if you have minimal or no documentation about the product?

8. How do you decide when you have tested enough?

9. How do you determine what to test?

10. In general, how do you see automation fitting into the overall process of testing?

11. How do you deal with environments that are hostile to quality change efforts?

12. Describe to me the Software Development Life Cycle as you would define it?

13. Describe to me when you would consider employing a failure mode and defect analysis?

14. What is the role of QA in a company that produces software?

15. How do you scope, organize, and execute a test project?

16. How can you test the white page

17. What is the role of QA in a project development?

18. How have you used white box and black box techniques in your application?

19. 1) What are the demerits of WinRunner? 2) After we write the test data, what are the principles to follow when testing an application?

20. What is the job of Quality Assurance Engineer? Difference between the Testing & Quality Assurance
job.

http://www.exforsys.com/tutorials/testing/software-quality-management.html

Thanks for the good explanation.

I would like to add one more thing: take the example of testing a chair. If the chair is designed for a weight of 100 kg and my weight is 70 kg, then that testing is called normal testing. If my weight is 100 kg, then that testing is called load testing. If my weight is 120 kg, then that testing is called stress testing.

During both of the above tests, we collect system resource usage (CPU utilization and similar information) using PerfMon on Windows and vmstat/mpstat/iostat/netstat/lsof/ps/sar on Unix/Linux. Counters such as %CPU, context switches, memory, IO wait/queue length and the number of established/time-wait sockets help to identify the product's bottlenecks and defects.

Run vmstat to gather memory utilization information

Run vmstat, iostat, sar, or topas to gather cpu utilization information

Run netpmon, netstat and/or nfsstat to gather network utilization information

Run iostat or topas to gather disk I/O utilization information

http://agiletesting.blogspot.com/2005/02/performance-vs-load-vs-stress-testing.html

Gokul -- for the largest repository of open source projects, go to sourceforge.net. Search for your favorite OS or
programming language, and take it from there.

Another repository of open source projects is hosted at Google Code at http://code.google.com/

Come up with baselines for your applications (load testing), then increase the load to see the behavior, which is
called stress testing.

In computer science, a memory leak is a particular type of unintentional memory consumption by a computer
program where the program fails to release memory when no longer needed. This condition is normally the result of
a bug in a program that prevents it from freeing up memory that it no longer needs.

This term has the potential to be confusing, since memory is not physically lost from the computer. Rather, memory
is allocated to a program, and that program subsequently loses the ability to access it due to program logic flaws.

A memory leak has symptoms similar to a number of other problems (see below) and generally can only be
diagnosed by a programmer with access to the program source code; however, many people refer to any unwanted
increase in memory usage as a memory leak, even if this is not strictly accurate.

There are several tools to detect memory leaks, such as PerfMon.
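The sketch below illustrates, in Python, the kind of logic flaw described above: the memory is still reachable by the program, but nothing ever releases it. The cache name and the request loop are hypothetical.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a leak caused by program logic: entries are added
to a cache on every request and never evicted."""

_cache = {}     # grows without bound -- a leak by way of program logic


def handle_request(session_id, payload):
    # Bug: results are cached per session but never removed, so a
    # long-running process accumulates memory for every session ever seen.
    _cache[session_id] = payload


if __name__ == "__main__":
    for i in range(100_000):
        handle_request(f"session-{i}", b"x" * 1024)   # roughly 100 MB retained
    print(f"cache holds {len(_cache)} entries that will never be freed")
```

During load or longevity testing this shows up as steadily climbing memory usage in vmstat or PerfMon, even though the request rate stays constant.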

What is project risk and what is product risk:

• Example for Project Risk


– Failure to meet a delivery date
• Example for Product Risk
– Undetected defect that could lead to a system failure

11. Software lifecycle models:


The V Model is one of the SDLC methodologies.

In this methodology, Development and Testing take place at the same time, with the same kind of information in both teams' hands.

Typical V shows Development Phases on the Left hand side and Testing Phases on the Right
hand side.

The Development Team follows the Do-Procedure to achieve the goals of the company

and

the Testing Team follows the Check-Procedure to verify them.

'V' Model
SRS/BRS -> User Acceptance Testing
Analysis/Design -> System Testing
HLD -> Integration Testing
LLD -> Unit Testing
Coding (at the bottom of the V)

The advantages of V model are:


1) Time saving - Testing activities start as soon as the customer gives the requirements (in parallel with the development activities), i.e. test design starts when the requirements are given by the customer while the developers start coding, and when the coding is complete, testing can start immediately without losing time on test design.
2) Cost saving - The V model saves cost because bugs are detected early, during development and unit testing.

Now coming to the next part of the question: not all companies implement the V model. It depends on the customer's requirements, e.g. if the customer is more interested in covering all risks, then the company may use the Spiral model.

There are few disadvantages of V model:


1) Expensive

2) For big projects it is a repeated process

Operational issues, such as the need to deploy bigger teams, early procurement, etc.

The V-model is a software development process which can be presumed to be the extension of
the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards
after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships
between each phase of the development life cycle and its associated phase of testing.

The V-model deploys a well-structured method in which each phase can be implemented by the
detailed documentation of the previous phase. Testing activities like test designing start at the
beginning of the project well before coding and therefore saves a huge amount of the project
time.

The Phases of the V-model

The V-model consists of a number of phases. The Verification Phases are on the Left hand side
of the V, the Coding Phase is at the bottom of the V and the Validation Phases are on the Right
hand side of the V .

Verification Phases

Requirements analysis

In the Requirements analysis phase, the requirements of the proposed system are collected by
analyzing the needs of the user(s). This phase is concerned about establishing what the ideal
system has to perform. However it does not determine how the software will be designed or
built. Usually, the users are interviewed and a document called the user requirements document
is generated.

The user requirements document will typically describe the system’s functional, physical,
interface, performance, data, security requirements etc as expected by the user. It is one which
the business analysts use to communicate their understanding of the system back to the users.
The users carefully review this document as this document would serve as the guideline for the
system designers in the system design phase. The user acceptance tests are designed in this
phase. See also Functional requirements.

System Design

Systems design is the phase where system engineers analyze and understand the business of the
proposed system by studying the user requirements document. They figure out possibilities and
techniques by which the user requirements can be implemented. If any of the requirements are
not feasible, the user is informed of the issue. A resolution is found and the user requirement
document is edited accordingly.

The software specification document which serves as a blueprint for the development phase is
generated. This document contains the general system organization, menu structures, data
structures etc. It may also hold example business scenarios, sample windows, reports for the
better understanding. Other technical documentation like entity diagrams, data dictionary will
also be produced in this phase. The documents for system testing are prepared in this phase.

Architecture Design

The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all of the requirements. The high-level design document typically consists of the list of modules, brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details etc. The
integration testing design is carried out in this phase.

Module Design

The module design phase can also be referred to as low-level design. The designed system is
broken up into smaller units or modules and each of them is explained so that the programmer
can start coding directly. The low level design document or program specifications will contain a
detailed functional logic of the module, in pseudocode:

 database tables, with all elements, including their type and size
 all interface details with complete API references

 all dependency issues

 error message listings

 complete input and outputs for a module.

The unit test design is developed in this stage.

Validation Phases

Unit Testing

In the V-model of software development, unit testing implies the first stage of dynamic testing
process. According to software development expert Barry Boehm, a fault discovered and
corrected in the unit testing phase is more than a hundred times cheaper than if it is done after
delivery to the customer.

It involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white
box. It is done using the Unit test design prepared during the module design phase. This may be
carried out by software developers.

Integration Testing

In integration testing the separate modules will be tested together to expose faults in the
interfaces and in the interaction between integrated components. Testing is usually black box as
the code is not directly checked for errors.

System Testing

System testing will compare the system specifications against the actual system. The system test
design is derived from the system design documents and is used in this phase. Sometimes system
testing is automated using testing tools. Once all the modules are integrated several errors may
arise. Testing done at this stage is called system testing.

[edit] User Acceptance Testing



Acceptance testing is the phase of testing used to determine whether a system satisfies the
requirements specified in the requirements analysis phase. The acceptance test design is derived
from the requirements document. The acceptance test phase is the phase used by the customer to
determine whether to accept the system or not.

Purpose of acceptance testing:
- To determine whether the system satisfies its acceptance criteria.
- To enable the customer to determine whether to accept the system.
- To test the software in the "real world" by the intended audience.
- To verify the system, or changes to it, against the original needs.

Procedures for conducting acceptance testing:

Define the acceptance criteria:
- Functionality requirements.
- Performance requirements.
- Interface quality requirements.
- Overall software quality requirements.

Develop an acceptance plan:
- Project description.
- User responsibilities.
- Acceptance description.

Execute the acceptance test plan.

2I process model

The iterative and incremental ("2I") process model.

Service process model

This process model can be chosen for projects like reengineering of certain components, analysis
of problems and providing consultation, test runs, test case development, etc.

It is recommended to choose this model if each request is smaller than 3 phases of any other
process model and each request is less than 3 person-months of maximum effort.

Maintenance process model

Testing process model

This process model will help projects which handle one-time testing, projects with a repetitive QA
cycle, and test certification projects.

Its phases: test requirements, test design, test execution, release and acceptance.



Agile process model :

Agile project management (training notes)

Agile is a time-based model (not scope-based).

Each iteration is ideally 1 to 4 weeks.

Various agile practices, like Scrum and XP, are combined to form the agile project management model.

Agile helps in:

- Significantly reducing the time to market.
- Delivering working software at the earliest.
- Achieving quick ROI.
- Bringing maintenance cost down, since testing starts very early in the lifecycle.

Design scalability can be a concern (an often-debated point).

Each iteration is meant to produce a shippable product (including design, coding, unit testing, etc.). Since all
phases are covered in the first iteration itself, it helps in unearthing issues and technical challenges
which can be addressed immediately. In a typical waterfall model this happens only by the end of
the life cycle.

XP & Scrum are two popular agile methodologies

Scrum doesn't have any acronym, carried from Rugby game

Requirements are called user stories in agile(like use cases)

Every iteration is called a sprint.

Daily scrum (should not exceed 15 minutes):

- What did I do yesterday?
- What will I do today?
- Any showstoppers or issues?
- New tasks added.



XP - Extreme Programming

Whatever is good for the project, take it to extreme levels. Reviews are good, so do reviews
every day, for every piece of code (and similarly for other good practices).

Iteration 0 - setup, high-level architecture, ramp-up, etc.

Scrum of scrums - used in case of large teams.

Scrum master - the person who drives the scrum.

If required, have a technical scrum.

Test first, code next: test-driven development (TDD).

Unit testing tools: JUnit, NUnit, CppTest, etc.

Pair programming: driver and navigator roles.
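
A rough sketch of the test-first idea, using plain assertions rather than any particular framework (leapYear() is a hypothetical function used only for illustration): the test is written before the implementation, fails until the function is written, and then passes.

// Step 1: write the test first -- it fails (or does not link) until leapYear() exists.
#include <cassert>

bool leapYear(int year);   // declared by the test; implemented afterwards

int main() {
    assert(leapYear(2000) == true);    // divisible by 400
    assert(leapYear(1900) == false);   // divisible by 100 but not by 400
    assert(leapYear(2024) == true);    // divisible by 4
    assert(leapYear(2023) == false);   // not divisible by 4
    return 0;
}

// Step 2: write just enough code to make the test pass.
bool leapYear(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}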

Testing in agile process model :

12. Product testing & ISO certification

Eg : Windows Vista logo certification, Windows 7 logo certification.

Works with Windows certification.

Benefits include automatic updates, revenue, and use of the logo (beneficial for Independent Software Vendors).

To qualify for the Works with Windows Vista logo, an application must meet the
following requirements:

1. The application installs without errors on any PC running on Windows Vista


2. The application works as well on Windows Vista as it does on Windows XP SP2,
that is, all of the features are available to the end user. Minor functional
differences such as rendering changes that do not prevent users from using
features are acceptable.
3. The application works equally well in both of these scenarios:
a. Windows Vista was installed on a bare machine (“clean install”)
b. Windows Vista was installed as an upgrade from Windows XP SP2. It may be
necessary to reinstall the application after Windows Vista was installed.
4. The ISV will provide the same or equivalent product support and warranties for
users who install the application on a PC running Windows Vista as they
provide for users running Windows XP SP2.

13. Motivating testers


Eric found an interesting idea for rewarding good testers: holding a bug contest. He decided to
award a 'Mercury' cap to the tester who could log the most bugs in a given week.

A small tweak I would suggest to make this technique more effective is to award the tester who
finds the best-quality bug, perhaps called the "Bug of the week". This way, quality bugs become the
main focus of the testers, rather than chasing quantity. Obviously, the small UI bugs should not be
ignored either.

How To Recruit, Motivate, and Energize Superior Test Engineer



Jeff Feldstein - How To Recruit, Motivate, and Energize Superior Test Engineers

http://video.google.com/videoplay?docid ... 1835446274

ABSTRACT

The expectations today are for increasingly high-quality software, requiring more
sophisticated automation in testing. Test and QA teams must work more closely with
development to ensure that this sophisticated automation is possible. This has led to software
engineers applying creativity, talent and expertise not just to application development, but to
testing as well. This transition from manual to scripting to highly engineered test automation
changes the way we recruit, hire, motivate and retain great test engineering talent.

The speaker uses examples of how his team at Cisco changed the way it tests over the past six
years. In this class, he'll review eight points for why test is a better place for software
developers than software development, and he'll show how and when to express these points
to hire, motivate and retain top talent. You'll see how to inspire greater innovation and
creativity in your testing processes, and how to manage and inspire test and development
teams that are spread across different locations. You'll also learn the place of manual testing in
the new environment.

So how can we build skilled testers on any project?

You can improve a tester's performance by assigning him/her to a single project. The tester will
then gain detailed knowledge of the project domain, can concentrate well on that project, and can
do R&D work during the early development phase of the project.

This builds not only his/her functional testing knowledge but also the project domain knowledge.

Company can use following methods to Improve the Testers performance:


1) Assign one tester to one project for a long duration, or for the entire project. Doing this builds
the tester's domain knowledge; he/she can write better test cases, cover most of the test scenarios,
and eventually find problems faster.

2) Most testers can do functional testing and boundary value analysis, but they may not know how to
measure test coverage, how to test a complete application, or how to perform load testing. The
company can provide training to its employees in those areas.

3) Involve them in all the project meetings, discussions, project design so that they can
understand the project well and can write the test cases well.

4) Encourage them to do extra activities beyond the regular testing activities. Such activities can
include inter-team talks on their project experience and exploratory talks on project topics.

Most important is to give them the freedom to think outside the box, so that they can take better
decisions on testing activities like test planning, test execution and test coverage.


Test engineers are just as capable as software engineers; they should be able to design and execute
equally complicated test cases.

Automation percentage - around 60%; around the world it is very minor.

Automation test framework

A configuration file, the list of tests, and the expected results.
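
A minimal sketch of such a data-driven framework (the table of tests is hard-coded here for brevity; in practice it would be read from the configuration file, and squareOf() is a stand-in for whatever function or system is under test):

#include <iostream>
#include <string>
#include <vector>

// Stand-in for the functionality under test.
int squareOf(int x) { return x * x; }

// One row of the "list of tests": name, input, expected result.
struct TestCase {
    std::string name;
    int input;
    int expected;
};

int main() {
    // In a real framework these rows would come from the configuration file.
    std::vector<TestCase> tests = {
        {"square_of_zero", 0, 0},
        {"square_of_positive", 3, 9},
        {"square_of_negative", -4, 16},
    };

    int failures = 0;
    for (const TestCase& t : tests) {
        int actual = squareOf(t.input);
        bool passed = (actual == t.expected);
        std::cout << t.name << ": " << (passed ? "PASS" : "FAIL") << "\n";
        if (!passed) ++failures;
    }
    std::cout << failures << " test(s) failed\n";
    return failures == 0 ? 0 : 1;
}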

The dev team reviews the test plan; the test team needs to interact closely with the developers.

When requirements are received, the test team needs to check:

 Are the requirements clear?

 Are they testable?

 Do they make sense when you look at them overall?

It is better to have a separate test engineering organization:

o Testers do write complex software to test a complex system.

o The test team can propose and experiment with various test frameworks etc., whereas the
development team has to adhere to the requirements and implement them as per the marketing
team.

o Team size: 3 devs to 1 tester, hence more visibility.

o Testing is more creative (model-based testing, automatic test generation).

o Testers get to break the code (everybody likes that).

o Customer interaction and business knowledge (the dev team will not have this exposure).

o Test team members can perform better when addressing customer problems at the site, as they
have the big picture, whereas a dev team member will have knowledge of only a part of it.

 A developer has a narrow view, whereas a test engineer has a bird's-eye view, or the big
picture.

 Stop calling it QA (quality assurance - ISO standards, TL 9000 standards, standard procedures and
policies, entry and exit criteria); quality should be part of the dev & test teams.

14. Certifications and exams for Software testing

15. CMMI level5 for testing projects

CMMI Levels

CMMI is a process improvement approach developed by the Software Engineering Institute in Pittsburgh to develop and
refine an organization's processes. CMMI is used as a benchmark for assessing different organizations for equivalent
comparison. CMMI describes the maturity of the company based upon the projects the company is handling and the
related clients.

CMMI Level 1 - Initial: Processes are ad hoc. The organization does not provide a stable
environment. Success depends upon individuals. Projects frequently exceed budget and schedule.
Organizations over-commit, abandon processes in times of crisis, and do not have repeatable successes.

CMMI Level 2 - Repeatable: Successes are repeatable. Existing practices are retained during times
of stress. Some basic project management is used to track cost and schedule. Processes may not
repeat for all projects. Minimum process discipline is in place to repeat earlier successes with
similar applications. There is still a risk of exceeding cost and time.

CMMI Level 3 - Defined: Standard processes are established and improved over time. An effective
project management system is implemented.

CMMI Level 4 - Quantitatively Managed: Precise measurements are used. Quantitative quality goals
are set. Processes are controlled using statistical and other quantitative techniques.

CMMI Level 5 - Optimizing: Processes are improved through innovative technological improvements.

16. Automation tools (QTP, WinRunner, LoadRunner), test management tools

17. Test estimation
1. Developers may not give the application for testing on time due to their own problems.

2. Time taken to test one application depends on number of bugs we get in the application.

3. Sometimes resolved bugs get reopened due to side effects.

Am I thinking in the correct way? If not, please suggest a good approach to planning. It would help
a lot at my workplace.

-----

I understand you simply use the average of the estimated hours.

Have you ever used PERT (Program Evaluation and Review Technique)? Considering what you
mentioned above, I believe it would be fairly simple to convert to PERT.

You'd just need to add an estimate of the most likely duration. Let's say 400 hours in this case.

With Estimated time (E), Optimistic (O), Most likely (M) and Pessimistic (P): E = (O + 4M + P) / 6,
which gives E = (350 + 4×400 + 500) / 6 ≈ 408.
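
The same PERT calculation as a small piece of code, using the figures from the example above:

#include <iostream>

// PERT three-point estimate: E = (O + 4M + P) / 6
double pertEstimate(double optimistic, double mostLikely, double pessimistic) {
    return (optimistic + 4.0 * mostLikely + pessimistic) / 6.0;
}

int main() {
    // Figures from the example: O = 350 h, M = 400 h, P = 500 h
    double e = pertEstimate(350.0, 400.0, 500.0);
    std::cout << "PERT estimate: " << e << " hours\n";   // prints approximately 408.33
    return 0;
}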

---

You are correct and unless we can somehow learn how to predict the future, we will probably
keep on struggling with those problems.

What can we do?



1. Give gross estimates for your work - "assuming everything is OK, this will take 2 weeks to
test/check".

This gives you a starting estimate: no fewer than X days are needed.

2. Continuing from 1, consider known risk factors and add time accordingly. For example, if this is a
new feature, tests don't exist yet and the development code is new... I smell trouble; the feature will
probably be pretty buggy at first.

3. Provide enough testing space between builds and reduce that as the product matures.

---

Assumptions, Assumptions, Assumptions - they are key to estimates! I'd suggest that you do
your estimation keeping factors like reality, feasibility and achievability in mind and ensure they
are based on your assumptions. If the assumptions differ from what the case is in actuality, one
gets an opportunity to revise them.

Needless to say, assumptions also need to be realistic. Things like delayed builds will always
happen, hence I'd recommend keeping some contingency for these kinds of deviations.

I normally do my test estimates based on a Work Breakdown Structure, as it is easy to build,
explain and maintain. Hope this helps.

----

I also recommend a few things that might help lower the percentage of error in the estimation:

1 - Have a template with the basic test cases (TCs) and your time to create and execute them.

- Have a minimum base estimate; I understand that the major problem is the complexity of each
requirement, module or task.

2 - Also create indicators, where you have a record of all projects and their testing lifecycle, for
example:

- Estimated time vs. spent time

- Registered bugs in each project: percentage / number of bugs

- Failed or passed TCs for each project: percentage of TCs passed/failed

- Time spent validating/closing bugs: average time to report or close a bug

- Time spent on each cycle run: average time per cycle run

- Time spent creating TCs: average time to create a TC

This helps you make decisions for future estimates, because you have the averages
(bugs / execution time / creation time) for each cycle run.
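
As a rough sketch (with made-up record fields and numbers) of how such indicators can be derived from per-project records:

#include <iostream>
#include <vector>

// Hypothetical per-project record used to build the indicators above.
struct ProjectRecord {
    double estimatedHours;
    double spentHours;
    int testCasesRun;
    int testCasesFailed;
    int bugsLogged;
};

int main() {
    std::vector<ProjectRecord> history = {
        {400, 460, 900, 120, 85},
        {250, 240, 500,  40, 30},
    };

    double estimated = 0, spent = 0;
    int run = 0, failed = 0, bugs = 0;
    for (const ProjectRecord& p : history) {
        estimated += p.estimatedHours;
        spent     += p.spentHours;
        run       += p.testCasesRun;
        failed    += p.testCasesFailed;
        bugs      += p.bugsLogged;
    }

    std::cout << "Estimated vs spent ratio: " << spent / estimated << "\n";
    std::cout << "TC failure rate: " << 100.0 * failed / run << " %\n";
    std::cout << "Bugs per spent hour: " << static_cast<double>(bugs) / spent << "\n";
    return 0;
}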

Real-time example from a banking application:

Test design estimations

No of Test Cases Distribution 9200


No. Of Simple Test cases 20% 1840
No. Of Medium Test Cases 50% 4600
No. Of Complex Test Cases 30% 2760
No. Of Test cases/day - Simple 12
No. Of Test cases/day - Medium 10
No. Of Test cases/day - Complex 6
Review Effort 15%
Test Data Requirement Preparation 10%
Test Management 10%

Test execution efforts estimation:

No. Of Simple Test cases 20%


No. Of Medium Test Cases 50%
No. Of Complex Test Cases 30%
No. Of Test cases/day - Simple 16
No. Of Test cases/day - Medium 12
No. Of Test cases/day - Complex 8
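
A small sketch of how the figures in the two tables above translate into raw effort in person-days (overheads such as review, test data preparation and test management would then be added as per the percentages listed):

#include <iostream>

// Effort in person-days = number of test cases / productivity (test cases per day).
double effortDays(double cases, double casesPerDay) {
    return cases / casesPerDay;
}

int main() {
    const double total = 9200;
    const double simpleCases  = 0.20 * total;
    const double mediumCases  = 0.50 * total;
    const double complexCases = 0.30 * total;

    // Productivity figures taken from the tables above.
    double designDays = effortDays(simpleCases, 12) + effortDays(mediumCases, 10) + effortDays(complexCases, 6);
    double execDays   = effortDays(simpleCases, 16) + effortDays(mediumCases, 12) + effortDays(complexCases, 8);

    std::cout << "Test design effort:    " << designDays << " person-days\n";
    std::cout << "Test execution effort: " << execDays   << " person-days\n";
    return 0;
}

With these numbers the raw design effort works out to roughly 1073 person-days and the raw execution effort to roughly 843 person-days, before the overhead percentages are applied.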

Activities involved

 Test case design


 Test case review

 Test data requirements

 Infrastructure issues

 Test case execution

 Defect logging/meetings

 Defect retesting

 Test management

 Test audit efforts

 Contingency effort

Note: Discussions between the testing team and the development team are very critical and consume
a good amount of time, so this has to be considered in the estimation.

Other factors:

Estimation basis:

Scheduled period for testing (design and execution) - Nigeria: 97
Scheduled period for testing (design and execution) - Namibia: 71
Total no. of test cases: 9200
Working hours per day: 8
Review effort: 15%
Test data requirement preparation: 10%
Test management: 10%
Retesting effort: 20%
Logging defects: 5%
Defect meetings: 5%
Test audit effort: 5%
No. of working days per week: 5

Is it correct to accept a change in functionality after creating test cases and just before the start of
a new release?

Before the start of a new release we got a set of requirements; the testing team prepared the test cases
based on the requirements and got sign-off for them, but suddenly the customer requested a change to
the functionality. Should the testing team agree to the new change given at the last minute, or tell the
customer this change can be taken up only in the next release?

At the end of the day, the quality of the product is what is important. As part of the test team, we
should understand what kind of changes in functionality are going in: are they trivial or major? If
required, what percentage of test cases needs to be rewritten? What is the amount of test case
design, review and management effort involved? How will the testing dates be impacted? All these
details need to be articulated as part of an impact analysis.

The data needs to be presented to the client, then they can take a decision.

18. Test case management tools

Click the Releases button on the sidebar. The Releases module enables you
to define releases and cycles for managing the testing process.
➤ Click the Requirements button on the sidebar. The Requirements module
enables you to specify your testing requirements. This includes defining
what you are testing, defining requirement topics and items, and analyzing
the requirements.
➤ Click the Test Plan button on the sidebar. The Test Plan module enables you
to develop a test plan based on your testing requirements. This includes
dividing your plan into categories, developing tests, automating tests where
beneficial, and analyzing the plan.
➤ Click the Test Lab button on the sidebar. The Test Lab module enables you
to run tests on your application and analyze the results.
➤ Click the Defects button on the sidebar. The Defects module enables you to
add defects, determine repair priorities, repair open defects, and analyze the
data.

Quality Center has 4 different tabs, as described below:

• Requirements: Add scope and then requirements.

• Define Scope -Use requirements documents to determine testing scope—test goals and
objectives.

• Create Requirements- Build a requirements tree to define overall testing requirements

• Detail Requirements- Describe each requirement, assign a priority level, and add
attachments if necessary.

• Analyze Requirements- Generate reports and graphs to assist in analyzing testing


requirements.

• Test Plan: Add Test cases for each requirement and attach test scripts to the same.

• Define Test Subjects -Divide application into modules or functions to be tested.

• Define Test cases- Add a basic definition of each test to the test plan tree.

• Create Requirement Coverage - Link each test with a testing requirement.



• Design Test Steps - Develop manual tests by adding steps to the tests in test plan tree.
Test steps describe the test operations, the points to check, and the expected outcome
of each test. Decide which tests to automate.

• Automate Tests - For tests that need to be automated, create test scripts with a Mercury
Interactive testing tool.

• Analyze Test cases - Review your tests to determine their suitability to your testing
goals

• Test Lab: Execute the test plans and schedule their execution.

• Create Test Sets -Determine which tests to include in each test set.

• Schedule Runs - Schedule automated tests for execution.

• Run Tests - Execute the tests in your test set automatically or manually

• Analyze Test Results - View the results of your test runs in order to determine whether
a defect has been detected in your application.

• Defects: Log, monitor, track, update defects.

• Stores information about defects

• Tasks

• Create/Edit Defects

• Link defects to each other

• Add defects - Report new defects detected .

• Review New Defects - Review new defects and determine which ones should be fixed

• Repair Open Defects - Developers correct defects assigned to them.

• Test New Build - Test a new build of application. Continue this process until defects are
repaired.

• Analyze Defect Data - Generate reports to assist in analyzing the progress of defect
repairs, and to help determine when to release the application

Quality Center (formerly TestDirector) is a web-based test management tool by Mercury (now HP).

Quality Center helps to organize and manage all phases of the application testing process like specifying
testing requirements, planning test cases, executing tests, and tracking defects.

 Quality Assurance managers use the testing scope to determine the overall testing requirements
for the application under test. They define requirement topics and assign them to the QA testers
in the test team. Each QA tester uses Quality Center to record the requirement topics for which
they are responsible.

Each requirement can be like -> security, performance, a module, usability, scalability, language support.

 Once Requirements are created in requirements tree, use the requirements as basis for defining
the tests in your test plan tree and running tests in a test set.

The user can view the list of defects associated to a requirement.

You can also create test plans, and add tests to test plan tree.

Also you can associate a defect with a test. Associate defect with a requirement.

Test lab is used for execution.

You can schedule a test run.

When test engineers find any defect in the application, they will submit the defect into quality centre.
The Defect data can be accessed by the QA and support teams.

 We can use Word as well as Excel to export or import test cases into Quality Center, with the help
of the following Quality Center add-ins:

Word Add-in

Excel Add-in

Advantages of Quality Centre:

 Gain real-time visibility into requirements coverage and associated defects to paint a clear picture of
business risk
 Manage the release process and make more informed release decisions with real-time KPIs and
reports

 Measure progress and effectiveness of quality activities

 Collaborate in the software quality lifecycle with a single global platform

 Manage manual and automated testing assets centrally

 Facilitate standardized testing and quality processes that boost productivity through workflows and
alerts

 Lower costs by using QA testing tools to capture critical defects before they reach production

 Quality Center (QC) is Mercury Interactive Web-based Global test management tool,
which brings communication, organization, documentation and structure into every
testing project..

 QC is very well suited to a multi-user environment, with very short turnaround times, enabling a lot
more reuse of test information over time.

 QC is an integrated enterprise application, for organizing and managing the entire testing
process .

 QC helps to maintain a repository of test cases that cover all the aspects of applications,
with each test case designed to fulfill a specific requirement of the application

 QC supports both manual and automated tests.

 It provides for efficient method of scheduling and executing test sets, collecting test
results and analyzing data.

 It features a sophisticated system of tracking defects from initial detection till resolution.

 QC automates the test management process, making it more efficient and cost effective.

 Testers can test the application, report defects and developers can then fix them and ask
for retesting.

19. Code coverage & tools


Code coverage analysis is the process of:

 Finding areas of a program not exercised by a set of test cases,


 Creating additional test cases to increase coverage, and

 Determining a quantitative measure of code coverage, which is an


indirect measure of quality.

Code coverage is a measure used in software testing. It describes the degree to which the source
code of a program has been tested. It is a form of testing that inspects the code directly and is
therefore a form of white box testing[1]. Currently, the use of code coverage is extended to the
field of digital hardware, the contemporary design methodology of which relies on Hardware
description languages (HDLs).

There are a number of coverage criteria, the main ones being:[3]

 Function coverage - Has each function (or subroutine) in the program been called?

 Statement coverage - Has each node in the program been executed?

 Branch coverage - Has every edge in the program been executed?

 Decision coverage (also known as branch coverage) - Has each control structure (such as an IF
statement) evaluated both to true and false?

 Condition coverage (or predicate coverage) - Has each boolean sub-expression evaluated both
to true and false? This does not necessarily imply decision coverage.

 Condition/decision coverage - Both decision and condition coverage should be satisfied.

For example, consider the following C++ function:

int foo(int a, int b)
{
    int c = b;
    if ((a > 5) && (b > 0)) {
        c = a;
    }
    return a * c;
}

Assume this function is a part of some bigger program and this program was run with some test
suite. If during this execution function 'foo' was called at least once, then function coverage for
this function is satisfied.

Statement coverage for this function will be satisfied if it was called e.g. as 'foo(7,1)'.

The tests with 'foo(7,1)' and 'foo(7,0)' calls will satisfy decision coverage.

Condition coverage can be satisfied with tests, which do 'foo(7,1)', 'foo(7,0)' and 'foo(4,0)'.

In languages like Pascal, where standard boolean operations are not short-circuit, condition
coverage does not necessarily imply decision coverage. For example, consider the following
fragment of code:

if a and b then

Condition coverage can be satisfied by two tests:

 a=true, b=false
 a=false, b=true

However, this set of tests doesn't satisfy decision coverage.
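
For contrast, in C and C++ the && operator is short-circuit, so the right-hand condition may never be evaluated at all; a coverage tool has to account for whether each condition was actually exercised. A small sketch:

#include <iostream>

bool bWasEvaluated = false;

// Right-hand condition with a visible side effect, so we can tell whether it ran.
bool b() {
    bWasEvaluated = true;
    return true;
}

int main() {
    bool a = false;
    if (a && b()) {                       // && short-circuits: b() is never called when a is false
        std::cout << "both true\n";
    }
    std::cout << std::boolalpha
              << "b evaluated? " << bWasEvaluated << "\n";   // prints: b evaluated? false
    return 0;
}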



To measure code coverage, the source code needs to be instrumented and a build has to be
prepared. Then the set of test cases is executed on the instrumented build to get the code coverage
details. E.g., gcov is a code coverage tool for C/C++ code: compile with gcc --coverage, run the test
cases, then run gcov on the source files to get per-line execution counts.

Another tool: BullseyeCoverage.

20. Defect tracking tools - etrack, bugtrq (Perforce - source code control system)

All are client-specific bug tracking tools, including ClearQuality.

ClearQuality is part of Clarify Inc.'s Service Management System. While ClearSupport provides high-volume
call tracking, ClearQuality provides defect tracking. The information it keeps includes priority, severity,
module and description. It allows related information to be attached by the user. In addition to Motif on
UNIX platforms, ClearQuality's client may be run from PCs and Macintosh machines. A supplier WWW
site is available at http://www.clarify.com/.

21. QA best practices


Testing is a process of facilitating development team/ project team in improving the quality of
software before it is released to the customer for use. Some key essential steps are always there
that need to be followed by the Testers during Software Testing to streamline the process. The
most important checkpoints for testers during software testing, in my opinion, would be:

(1)- Complete document of customer and business requirements specifying all the requirements
for the development of the product.

(2)- Latest completed build and URL of the application to hit for Testing.

(3)- Software requirements for installing the application on PCs for testing or for database
connectivity.

(4)-Training/Demo of the project by development team to Testing Team so as to understand the


flow/functionality of the software.

(5)- Scope of testing should be made clear by the development team/Head.

(6)- If the build comes for retesting, then it should be accompanied by the revised document
which includes the updated changes incorporated in the software.

(7)- Clarity regarding which member of development team should be contacted in case of any
clarification required during the testing phase regarding the functionality of the module or if
testers encounter a showstopper in the Software.

(8)- After release of the bug list to the development team, clarity on how much time they will require
for fixing the bugs.

22. QA challenges

Difference of opinion between Dev and QA teams. Need to sort with due diligence.

Unavailability of loads (versions) in expected time. QA schedules will get impacted. Need to work on
contingency plan and execute accordingly.

The dev team needs to provide release notes for each load they release, explaining the changes from the
previous load, known issues/limitations, etc., which will help in better planning of the testing.

1) Testers focusing on finding easy bugs:


If the organization rewards testers based on the number of bugs (a very bad approach to judging tester
performance), then some testers concentrate only on finding easy bugs that don't require deep
understanding and testing. Hard or subtle bugs remain unnoticed with such a testing approach.

-> Provide continuous education to testers on the importance of testing and on how to find critical defects.

2) To cope with attrition:


Increasing salaries and benefits are making many employees leave the company at very short career
intervals, and management faces hard problems coping with the attrition rate. The challenges: new
testers require project training from the beginning, complex projects are difficult to understand, and
the shipping date may slip!

3) Understanding the requirements & functionality:

Sometimes testers are responsible for communicating with customers to understand the
requirements. What if a tester fails to understand the requirements? Will he be able to test the
application properly? Definitely not! Testers require good listening and understanding
capabilities.

4) Relationship with developers:

A big challenge. It requires a very skilled tester to handle this relationship positively while still
getting the work completed the tester's way. There are simply hundreds of excuses developers or
testers can make when they do not agree on some point. For this the tester also requires good
communication, troubleshooting and analysis skills.

5) Regression testing:
As the project keeps expanding, the regression testing work simply becomes uncontrolled. There is
pressure to handle current functionality changes, checks of previously working functionality, and
bug tracking.

6) Lack of skilled testers:

I will call this a 'wrong management decision' while selecting or training testers for the project task
in hand. Unskilled testers may add more chaos than simplification to the testing work. This results
in incomplete, insufficient and ad-hoc testing throughout the testing life cycle.

7) Testing always under time constraint:

"Hey tester, we want to ship this product by this weekend, are you ready for completion?" When
this order comes from the boss, the tester simply focuses on task completion and not on test
coverage and quality of work. There is a huge list of tasks that need to be completed within the
specified time, including writing, executing, automating and reviewing the test cases.

8) Which tests to execute first?

If you are facing the challenge stated in point 7, how will you decide which test cases should be
executed, and with what priority? Which tests are more important than others? This requires good
experience to work under pressure.

9) Automation testing:
Many sub challenges - Should automate the testing work? Till what level automation should be
done? Do you have sufficient and skilled resources for automation? Is time permissible for
automating the test cases? Decision of automation or manual testing will need to address the pros
and cons of each process.

10) Decision to stop the testing:


When to stop testing? Very difficult decision. Requires core judgment of testing processes and
importance of each process. Also requires ‘on the fly’ decision ability.

11) One test team under multiple projects:


Challenging to keep track of each task. Communication challenges. Many times results in failure
of one or both the projects.

12) Reuse of Test scripts:


Application development methods are changing rapidly, making it difficult to manage the test
tools and test scripts. Test script migration or reuse is very essential but difficult task.

These are some top software testing challenges we face daily. Project success or failure
depends largely on how you address these basic issues.

For further reference and detailed solutions on these challenges refer book “Surviving the Top
Ten challenges of Software Testing” written by William E. Perry and Randall W. Rice.

23. Entry and Exit criteria for testing

Entry Criteria

• Development of the particular component is completed


• Specifications for product are complete and approved
• All test cases are documented

Exit Criteria

• All test cases completed successfully


• No outstanding critical defects
• All test results have been documented

System Testing

Entry Criteria

• Unit Testing has been completed


• Development has been completed on the entire product
• Specifications for the product have been completed and approved
• All Test cases are documented

Exit Criteria

• All test cases completed successfully


• All defect recorded in Test director
• No outstanding critical defects
• All test results have been documented
• All code has been migrated into the model environment

System Integration Testing

Entry Criteria

• Unit and System testing has been completed and signed-off


• All code and applications are present in the model environment
• SIT test plan is approved
• All test cases have been documented and approved

Exit Criteria

• No outstanding critical defects, unless agreed by UAT manager


• Less than 10 major defects outstanding
• All test cases have been completed
• SIT summary report has been approved
• SIT sign-off meeting attended by SIT manager, UAT manager/Test Principal and project manager, where
SIT results and findings are presented and a decision to move forward has been made

User Acceptance Testing

Entry Criteria

• The application works functionally as defined in the specifications


• No high category defects outstanding
• All areas have had testing started on them unless pre agreed by UAT manager/Test principal, SIT manager
and Project managers
• Entire system functioning and all new components available unless previously agreed between UAT
manager/Test Principal, SIT manager and project managers
• All test cases are documented and reviewed prior to the commencement of UAT

Exit Criteria

• No outstanding critical defects


• Minimal outstanding medium category defects with plans in place to fix
• 90% of Business process is working
• All critical Business processes are working
• All test results recorded and approved
• UAT test summary report documented and approved
• UAT close off meeting held.

Unit testing is the responsibility of the development team. Sometimes QA


team checks for the code coverage.

24. QA in web technologies (Website testing)


WebSite (Web Application) Testing
Dr. Edward Miller

ABSTRACT

The instant worldwide audience of any Web Browser Enabled Application --


or a WebSite -- makes its quality and reliability crucial factors in its success.
Correspondingly, the nature of WebSites and Web Applications pose unique
software testing challenges. Webmasters, Web applications developers, and
WebSite quality assurance managers need tools and methods that meet
their specific needs. Mechanized testing via special purpose Web testing
software offers the potential to meet these challenges. Our technical
approach, based on existing Web browsers, offers a clear solution to most of
the technical needs for assuring WebSite quality.

BACKGROUND

WebSites impose some entirely new challenges in the world of software


quality! Within minutes of going live, a Web application can have many
thousands more users than a conventional, non-Web application. The
immediacy of the Web creates immediate expectations of quality and rapid
application delivery, but the technical complexities of a WebSite and
variances in the browser make testing and quality control that much more

difficult, and in some ways, more subtle, than "conventional" client/server or


application testing. Automated testing of WebSites is an opportunity and a
challenge.

DEFINING WEBSITE QUALITY & RELIABILITY

Like any complex piece of software there is no single, all inclusive quality
measure that fully characterizes a WebSite (by which we mean any web
browser enabled application).

Dimensions of Quality. There are many dimensions of quality; each


measure will pertain to a particular WebSite in varying degrees. Here are
some common measures:

 Timeliness: WebSites change often and rapidly. How much has a WebSite changed
since the last upgrade? How do you highlight the parts that have changed?
 Structural Quality: How well do all of the parts of the WebSite hold together? Are all
links inside and outside the WebSite working? Do all of the images work? Are there
parts of the WebSite that are not connected?

 Content: Does the content of critical pages match what is supposed to be there? Do
key phrases exist continually in highly-changeable pages? Do critical pages maintain
quality content from version to version? What about dynamically generated HTML
(DHTML) pages?

 Accuracy and Consistency: Are today's copies of the pages downloaded the same as
yesterday's? Close enough? Is the data presented to the user accurate enough? How
do you know?

 Response Time and Latency: Does the WebSite server respond to a browser request
within certain performance parameters? In an e-commerce context, how is the end-to-end
response time after a SUBMIT? Are there parts of a site that are so slow the
user discontinues working? (A minimal end-to-end timing sketch follows this list.)

 Performance: Is the Browser --> Web --> WebSite --> Web --> Browser connection
quick enough? How does the performance vary by time of day, by load and usage? Is
performance adequate for e-commerce applications? Taking 10 minutes -- or maybe
even only 1 minute -- to respond to an e-commerce purchase may be unacceptable!
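
As a minimal sketch of how such an end-to-end response time could be measured outside the browser (assuming libcurl is available; the URL is a placeholder):

#include <chrono>
#include <iostream>
#include <curl/curl.h>

// Discard the downloaded body; we only care about the timing here.
static size_t discardBody(char*, size_t size, size_t nmemb, void*) {
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* handle = curl_easy_init();
    curl_easy_setopt(handle, CURLOPT_URL, "https://www.example.com/");  // placeholder URL
    curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, discardBody);

    auto start = std::chrono::steady_clock::now();
    CURLcode rc = curl_easy_perform(handle);           // full request/response cycle
    auto end = std::chrono::steady_clock::now();

    if (rc == CURLE_OK) {
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
        std::cout << "End-to-end response time: " << ms << " ms\n";
    } else {
        std::cout << "Request failed: " << curl_easy_strerror(rc) << "\n";
    }

    curl_easy_cleanup(handle);
    curl_global_cleanup();
    return 0;
}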

Impact of Quality. Quality remains in the mind of the WebSite user. A


poor quality WebSite, one with many broken pages and faulty images, with
Cgi-Bin error messages, etc., may cost a lot in poor customer relations, lost
corporate image, and even in lost sales revenue. Very complex, disorganized
WebSites can sometimes overload the user.

The combination of WebSite complexity and low quality is potentially lethal


to Company goals. Unhappy users will quickly depart for a different site;
and, they probably won't leave with a good impression.

WEBSITE ARCHITECTURAL FACTORS

A WebSite can be quite complex, and that complexity -- which is what


provides the power, of course -- can be a real impediment in assuring
WebSite Quality. Add in the possibilities of multiple WebSite page authors,
very-rapid updates and changes, and the problem compounds.

Here are the major pieces of WebSites as seen from a Quality perspective.

Browser. The browser is the viewer of a WebSite and there are so many
different browsers and browser options that a well-done WebSite is probably
designed to look good on as many browsers as possible. This imposes a kind
of de facto standard: the WebSite must use only those constructs that work
with the majority of browsers. But this still leaves room for a lot of
creativity, and a range of technical difficulties. And, multiple browsers'
renderings and responses to a WebSite have to be checked.

Display Technologies. What you see in your browser is actually composed


from many sources:

 HTML. There are various versions of HTML supported, and the WebSite ought to be
built in a version of HTML that is compatible. This should be checkable.
 Java, JavaScript, ActiveX. Obviously JavaScript and Java applets will be part of any
serious WebSite, so the quality process must be able to support these. On the
Windows side, ActiveX controls have to be handled well.

 Cgi-Bin Scripts. This is a link from a user action of some kind (typically, from a FORM
passage or otherwise directly from the HTML, and possibly also from within a Java
applet). All of the different types of Cgi-Bin Scripts (perl, awk, shell-scripts, etc.)
need to be handled, and tests need to check "end to end" operation. This kind of a
"loop" check is crucial for e-commerce situations.

 Database Access. In e-commerce applications you are either building data up or


retrieving data from a database. How does that interaction perform in real world
use? If you give in "correct" or "specified" input does the result produce what you
expect?

Some access to information from the database may be appropriate,


depending on the application, but this is typically found by other
means.

Navigation. Users move to and from pages, click on links, click on


images (thumbnails), etc. Navigation in a WebSite is often complex
and has to be quick and error free.

Object Mode. The display you see changes dynamically; the only
constants are the "objects" that make up the display. These aren't real

objects in the OO sense; but they have to be treated that way. So, the
quality test tools have to be able to handle URL links, forms, tables,
anchors, buttons of all types in an "object like" manner so that
validations are independent of representation.

Server Response. How fast the WebSite host responds influences


whether a user (i.e. someone on the browser) moves on or gives up.
Obviously, InterNet loading affects this too, but this factor is often
outside the Webmaster's control at least in terms of how the WebSite
is written. Instead, it seems to be more an issue of server hardware
capacity and throughput. Yet, if a WebSite becomes very popular --
this can happen overnight! -- loading and tuning are real issues that
often are imposed -- perhaps not fairly -- on the WebMaster.

Interaction & Feedback. For passive, content-only sites the only real
quality issue is availability. For a WebSite that interacts with the user,
the big factor is how fast and how reliable that interaction is.

Concurrent Users. Do multiple users interact on a WebSite? Can they


get in each others' way? While WebSites often resemble client/server
structures, with multiple users at multiple locations a WebSite can be
much different, and much more complex, than complex applications.

WEBSITE TEST AUTOMATION REQUIREMENTS

Assuring WebSite quality requires conducting sets of tests,


automatically and repeatably, that demonstrate required properties
and behaviors. Here are some required elements of tools that aim to
do this.

Test Sessions. Typical elements of tests involve these characteristics:

o Browser Independent. Tests should be realistic, but not be dependent on a


particular browser, whose biases and characteristics might mask a WebSite's
problems.
o No Buffering, Caching. Local caching and buffering -- often a way to improve
apparent performance -- should be disabled so that timed experiments are a
true measure of the Browser-Web-WebSite-Web-Browser response time.

o Fonts and Preferences. Most browsers support a wide range of fonts and
presentation preferences, and these should not affect how quality on a
WebSite is assessed or assured.

o Object Mode. Edit fields, push buttons, radio buttons, check boxes, etc. All
should be treatable in object mode, i.e. independent of the fonts and
preferences.

Object mode operation is essential to protect an investment in


test suites and to assure that test suites continue operating
when WebSite pages experience change. In other words, when
buttons and form entries change location on the screen -- as
they often do -- the tests should still work.

However, when a button or other object is deleted, that error


should be sensed! Adding objects to a page clearly implies re-
making the test.

o Tables and Forms. Even when the layout of a table or form varies in the
browser's view, tests of it should continue independent of these factors.
o Frames. Windows with multiple frames ought to be processed simply, i.e. as if
they were multiple single-page frames.

Test Context. Tests need to operate from the browser level for two
reasons: (1) this is where users see a WebSite, so tests based in
browser operation are the most realistic; and (2) tests based in
browsers can be run locally or across the Web equally well. Local
execution is fine for quality control, but not for performance
measurement work, where response time including Web-variable
delays reflective of real-world usage is essential.

WEBSITE DYNAMIC VALIDATION

Confirming validity of what is tested is the key to assuring WebSite


quality -- the most difficult challenge of all. Here are four key areas
where test automation will have a significant impact.

Operational Testing. Individual test steps may involve a variety of


checks on individual pages in the WebSite:

o Page Consistency. Is the entire page identical with a prior version? Are key
parts of the text the same or different?
o Table, Form Consistency. Are all of the parts of a table or form present?
Correctly laid out? Can you confirm that selected texts are in the "right
place".

o Page Relationships. Are all of the links on a page the same as they were
before? Are there new or missing links? Are there any broken links?

o Performance Consistency, Response Times. Is the response time for a user


action the same as it was (within a range)?

Test Suites. Typically you may have dozens or hundreds (or


thousands?) of tests, and you may wish to run tests in a variety of
modes:

o Unattended Testing. Individual and/or groups of tests should be executable


singly or in parallel from one or many workstations.
o Background Testing. Tests should be executable from multiple browsers
running "in the background" on an appropriately equipped workstation.

o Distributed Testing. Independent parts of a test suite should be executable


from separate workstations without conflict.

o Performance Testing. Timing in performance tests should be resolved to the


millisecond; this gives a strong basis for averaging data.

o Random Testing. There should be a capability for randomizing certain parts of


tests.

o Error Recovery. While browser failure due to user inputs is rare, test suites
should have the capability of resynchronizing after an error.

Content Validation. Apart from how a WebSite responds


dynamically, the content should be checkable either exactly or
approximately. Here are some ways that content validation could be
accomplished:

o Structural. All of the links and anchors should match with prior "baseline"
data. Images should be characterizable by byte-count and/or file type or
other file properties.
o Checkpoints, Exact Reproduction. One or more text elements -- or even all
text elements -- in a page should be markable as "required to match".

o Gross Statistics. Page statistics (e.g. line, word, byte-count, checksum, etc.). (See the sketch following this list.)

o Selected Images/Fragments. The tester should have the option to rubber


band sections of an image and require that the selection image match later
during a subsequent rendition of it. This ought to be possible for several
images or image fragments.
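
As a rough illustration of the byte-count/checksum idea, comparing a freshly saved copy of a page against a stored baseline file (file names are placeholders):

#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

// Gross statistics for one saved page: byte count plus a simple additive checksum.
struct PageStats {
    std::uintmax_t bytes = 0;
    std::uint32_t checksum = 0;
};

static PageStats statsOf(const std::string& path) {
    PageStats s;
    std::ifstream in(path, std::ios::binary);
    char c;
    while (in.get(c)) {
        ++s.bytes;
        s.checksum += static_cast<unsigned char>(c);   // crude checksum, good enough for change detection
    }
    return s;
}

int main() {
    PageStats baseline = statsOf("baseline_page.html");   // placeholder file names
    PageStats current  = statsOf("current_page.html");

    if (baseline.bytes == current.bytes && baseline.checksum == current.checksum)
        std::cout << "Page matches the baseline statistics.\n";
    else
        std::cout << "Page differs from the baseline (possible unintended change).\n";
    return 0;
}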

Load Simulation. Load analysis needs to proceed by having a special


purpose browser act like a human user. This assures that the
performance checking experiment indicates true performance -- not
performance on simulated but unrealistic conditions. There are many
"http torture machines" that generate large numbers of http requests,
but that is not necessarily the way real-world users generate requests.

Sessions should be recorded live or edited from live recordings to


assure faithful timing. There should be adjustable speed up and slow
down ratios and intervals.

Load generation should proceed from:



o Single Browser Sessions. One session played on a browser with one or


multiple responses. Timing data should be put in a file for separate analysis.
o Multiple Independent Browser Sessions. Multiple sessions played on multiple
browsers with one or multiple responses. Timing data should be put in a file
for separate analysis. Multivariate statistical methods may be needed for a
complex but general performance model.

TESTING SYSTEM CHARACTERISTICS

Considering all of these disparate requirements, it seems evident that


a single product that supports all of these goals will not be possible.
However, there is one common theme and that is that the majority of
the work seems to be based on "...what does it [the WebSite] look like
from the point of view of the user?" That is, from the point of view of
someone using a browser to look at the WebSite.

This observation led our group to conclude that it would be worthwhile


trying to build certain test features into a "test enabled web browser",
which we called eValid.

Browser Based Solution. With this as a starting point we determined


that the browser based solution had to meet these additional
requirements:

o Commonly Available Technology Base. The browser had to be based on a well


known base (there appear to be only two or three choices).
o Some Browser Features Must Be Deletable. At the same time, certain
requirements imposed limitations on what was to be built. For example, if we
were going to have accurate timing data we had to be able to disable caching
because otherwise we are measuring response times within the client
machine rather than "across the web."

o Extensibility Assured. To permit meaningful experiments, the product had to


be extensible enough to permit timings, static analysis, and other information
to be extracted.

Taking these requirements into account, and after investigation of W3C's Amaya
Browser and the open-architecture Mozilla/Netscape Browser, we chose the IE Browser
as our initial base for our implementation of eValid.

User Interface. How the user interacts with the product is very
important, in part because in some cases the user will be someone
very familiar with WebSite browsing and not necessarily a testing
expert. The design we implemented takes this reality into account.

o Pull Down Menus. In keeping with the way browsers are built, we put all the
main controls for eValid on a set of Pull Down menus, as shown in the
accompanying screen shot.

Figure 1. eValid Menu Functions.

o "C" Scripting. We use interpreted "C" language as the control language


because the syntax is well known, the language is fully expressive of most of
the needed logic, and because it interfaces well with other products.
o Files Interface. We implemented a set of dialogs to capture critical information
and made each of them recordable in a text file. The dialogs are associated
with files that are kept in parallel with each browser invocation:

 Keysave File. This is the file that is being created -- the file is shown
line by line during script recording as the user moves around the
candidate WebSite.

 Timing File. Results of timings are shown and saved in this file.

 Messages File. Any error messages encountered are delivered to this


file. For example, if a file can't be downloaded within the user-specified
maximum time an error message is issued and the playback continues.
(This helps preserve the utility of tests that are partially unsuccessful.)

 Event File. This file contains a complete log of recording and playback
activities that is useful primarily to debug a test recording session or to
better understand what actually went on during playback.

Operational Features. Based on prior experience, the user interface


for eValid had to provide for several kinds of capabilities already
known to be critical for a testing system. Many of these are critically
important for automated testing because they assure an optimal
combination of test script reliability and robustness.

o Capture/Replay. We had to be able both to capture a user's actual behavior


online, and be able to create scripts by hand.
o Object Mode. The recording and playback had to support pure-Object Mode
operation. This was achieved by using internal information structures in a way
that lets the scripts (either recorded or constructed) to refer to objects that
are meaningful in the browser context.

A side benefit of this was that playbacks were reliable,


independent of the rendering choices made by the user. A script

plays back identically the same, independent of browser window


size, type-font choices, color mappings, etc.

o [Adjustable] True-Time Mode. We assured realistic behavior of the product by


providing for recording of user-delays and for efficient handling of delays by
incorporating a continuously variable "playback delay multiplier" that can be
set by the user.
o Playback Synchronization. For tests to be robust -- that is, to reliably indicate
that a feature of a WebSite is working correctly -- there must be a built-in
mode that assures synchronization so that Web-dependent delays don't
interfere with proper WebSite checking. eValid does this using a proprietary
playback synchronization method that waits for download completion (except
if a specified maximum wait time is exceeded).

o Timer Capability. To make accurate on-line performance checks we built in a


1 millisecond resolution timer that could be read and reset from the playback
script.

o Validate Selected Text Capability. A key need for WebSite content checking,
as described above, is the ability to capture an element of text from an image
so that it can be compared with a baseline value. This feature was
implemented by digging into the browser data structures in a novel way (see
below for an illustration). The user highlights a selected passage of the web
page and clicks on the "Validate Selected Text" menu item.

25. QA exam questions


26. Testing dot net applications
27. Initiatives & Best practices
Code quality initiative

 To educate the team on coding standards and guidelines

 Enforcing tool usage, like the Wipro code checker, which reports violations with respect to standard code quality.

 Using the customer-specified tool (NASA) which internally uses FindBugs, etc.

Review focus group and review effectiveness



 Guidelines and checklist for review process

 Forming review focus group based on the modules and expertise

 Forming rules and regulations for review process.

 Ensuring all defects found during review were closed before submitting

 Maintaining all the artifacts about review findings and fixes.

28. Initiatives & Best practices

Testing FAQS

1. The customer’s view of quality means:


a. Meeting requirements
b. Doing it the right way
c. Doing it right the first time
d. Fit for use
e. Doing it on time

2. The testing of a single program, or function, usually performed by the developer is


called:
a. Unit testing
b. Integration testing
c. System testing
d. Regression testing
e. Acceptance testing

3. The measure used to evaluate the correctness of a product is called the product:
a. Policy
b. Standard
c. Procedure to do work
d. Procedure to check work
e. Guideline

4. Which of the four components of the test environment is considered to be the most
important component of the test environment:
a. Management support
b. Tester competency
c. Test work processes
d. Testing techniques and tools

5. Effective test managers are effective listeners. The type of listening in which the tester is
performing an analysis of what the speaker is saying is called:
a. Discriminative listening

b. Comprehensive listening
c. Therapeutic listening
d. Critical listening
e. Appreciative listening

6. To become a CSTE, an individual has a responsibility to accept the standards of conduct


defined by the certification board. These standards of conduct are called:
a. Code of ethics
b. Continuing professional education requirement
c. Obtaining references to support experience
d. Joining a professional testing chapter
e. Following the common body of knowledge in the practice of software testing

7. Which of the following are risks that testers face in performing their test activities:
a. Not enough training
b. Lack of test tools
c. Not enough time for testing
d. Rapid change
e. All of the above

8. All of the following are methods to minimize loss due to risk. Which one is not a method
to minimize loss due to risk:
a. Reduce opportunity for error
b. Identify error prior to loss
c. Quantify loss
d. Minimize loss
e. Recover loss

9. Defect prevention involves which of the following steps:


a. Identify critical tasks
b. Estimate expected impact
c. Minimize expected impact
d. a, b and c
e. a and b

10. The first step in designing use case is to:


a. Build a system boundary diagram
b. Define acceptance criteria
c. Define use cases
d. Involve users
e. Develop use cases

11. The defect attribute that would help management determine the importance of the
defect is called:
a. Defect type
b. Defect severity

c. Defect name
d. Defect location
e. Phase in which defect occurred

12. The system test report is normally written at what point in software development:
a. After unit testing
b. After integration testing
c. After system testing
d. After acceptance testing

13. The primary objective of user acceptance testing is to:


a. Identify requirements defects
b. Identify missing requirements
c. Determine if software is fit for use
d. Validate the correctness of interfaces to other software systems
e. Verify that software is maintainable

14. If IT establishes a measurement team to create measures and metrics to be used in status reporting, that team should include individuals who have:
a. A working knowledge of measures
b. Knowledge in the implementation of statistical process control tools
c. A working understanding of benchmarking techniques
d. Knowledge of the organization’s goals and objectives
e. All of the above

15. What is the difference between testing software developed by a contractor outside your
country, versus testing software developed by a contractor within your country:
a. Does not meet people needs
b. Cultural differences
c. Loss of control over reallocation of resources
d. Relinquishment of control
e. Contains extra features not specified

16. What is the definition of a critical success factor:


a. A specified requirement
b. A software quality factor
c. Factors that must be present
d. A software metric
e. A high cost to implement requirement

17. The condition that represents a potential for loss to an organization is called:
a. Risk
b. Exposure
c. Threat
d. Control
e. Vulnerability

18. A flaw in a software system that may be exploited by an individual for his or her
advantage is called:
a. Risk
b. Risk analysis
c. Threat
d. Vulnerability
e. Control

19. The conduct of business on the Internet is called:


a. e-commerce
b. e-business
c. Wireless applications
d. Client-server system
e. Web-based applications

20. The following is described as one of the five levels of maturing a new technology into an IT organization’s work processes. The “People-dependent technology” level is equivalent to what level in SEI’s Capability Maturity Model:
a. Level 1
b. Level 2
c. Level 3
d. Level 4
e. Level 5

Answers to sample CSTE exam questions


1. (d) Fit for use
2. (a) Unit testing
3. (b) Standard
4. (a) Management support
5. (d) Critical listening
6. (a) Code of ethics
7. (e) All of the above
8. (c) Quantify loss
9. (d) a, b and c
10. (a) Build a system boundary diagram
11. (b) Defect severity
12. (c) After system testing
13. (c) Determine if software is fit for use
14. (e) All of the above
15. (b) Cultural differences
16. (c) Factors that must be present
17. (a) Risk
18. (d) Vulnerability

19. (b) e-business


20. (a) Level 1

ISV – Independent Software Vendor

Other topics:

STATIC TESTING: This type of testing is done during the verification process. It does not require a computer; the program is examined without being executed. Examples: reviews, walkthroughs.

DYNAMIC TESTING: This type of testing requires a computer and is done during the validation process. The software is tested by executing it. Examples: unit testing, integration testing, system testing.
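
As a minimal illustration of the difference, the sketch below is dynamic testing: the code under test (a hypothetical add() function, used only for illustration) is actually executed and its output checked. A static review of the same function would examine the source without running it.

    # Dynamic testing example: the code is actually executed.
    import unittest

    def add(a, b):
        return a + b            # hypothetical code under test

    class AddTests(unittest.TestCase):
        def test_add_two_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)   # verified by running the code

    if __name__ == "__main__":
        unittest.main()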

Version control systems used in the project:

Equivalence partitioning

Boundary value analysis
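
A quick illustration of these two techniques, assuming a hypothetical input field that accepts integers from 1 to 100: equivalence partitioning picks one representative value from each class (invalid-low, valid, invalid-high), while boundary value analysis tests the edges of the valid class and their neighbours.

    # Illustration only: assumes a hypothetical field accepting integers 1..100.
    def is_valid_age(value):
        return 1 <= value <= 100

    # Equivalence partitions: one representative value per class.
    partitions = {"below range": -5, "in range": 50, "above range": 150}

    # Boundary values: edges of the valid class and their neighbours.
    boundary_values = [0, 1, 2, 99, 100, 101]

    for name, v in partitions.items():
        print("partition:", name, v, "->", is_valid_age(v))
    for v in boundary_values:
        print("boundary:", v, "->", is_valid_age(v))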

What is padding in MS Project? Explain.

Important expectations from a test manager?

A good QA, Test, or combined QA/Test manager should:

1. be familiar with the software development process
2. be able to maintain the enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
3. be able to promote teamwork to increase productivity
4. be able to promote cooperation between software, test, and QA engineers
5. have the diplomatic skills needed to promote improvements in QA processes
6. have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
7. have people judgement skills for hiring and keeping skilled personnel
8. be able to communicate with technical and non-technical people, engineers, managers, and customers
9. be able to run meetings and keep them focused

Test optimization: how to do it?

• Use automation to reduce manual effort and cost.

• Use tools to reduce hardware cost; for example, VMware virtual machines can be used instead of procuring multiple physical test machines.

• Set up test case design and test execution guidelines to reduce review and rework time.

• Set up defect management guidelines to reduce back-and-forth between the development team and the test team, e.g., clear details in the defect, necessary screenshots, additional information, logs, environment details, build version, etc.

• Conduct reviews at an earlier stage of test development to save time in later phases.

• Provide training on the product and on test methodologies so the team better understands the scope and vision of the program.

• Look for reuse options (test cases, environment, test plan, etc.).

Difference between Test plan and Test strategy

Test Strategy:
A company-level document, developed by QA-level people such as the QA manager and the project manager. It defines the "testing approach" used to achieve the testing objectives. The test policy and test strategy are derived from the BRS and are then frozen (baselined).

Components in the Test Strategy are as follows:


1. Scope and objective
2. Business issues
3. Roles and responsibilities
4. Communication and status reporting
5. Test deliverables
6. Test approach
7. Test automation and tools
8. Testing measurements and metrics
9. Risks and mitigation
10. Defect reporting and tracking
11. Change and configuration management
12. Training plan

Test Plan:
The test plan is a frozen (baselined) document developed from the SRS, FS, and UC (use cases). After test team formation and risk analysis are complete, the Test Lead prepares the test plan document in terms of what to test, how to test, who tests, and when to test.
There is one Master Test Plan, which consists of the reviewed Project Test Plan and the Phase Test Plans; in general discussion, "the test plan" usually refers to the Project Test Plan.
Components are as follows:

1. Test Plan id

2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Testing tasks
8. Suspension criteria
9. Features pass or fail criteria
10. Test environment (Entry criteria, Exit criteria)
11. Test deliverables
12. Staff and training needs
13. Responsibilities
14. Schedule
15. Risk and mitigation
16. Approvals

This is one standard approach to preparing a test plan document, but details can vary from company to company.

Difference between testing, QA and QC:

• Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.

• Quality Control: A set of activities designed to evaluate a developed work product.

• Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)

Look Ahead Meeting:

• To discuss best practices to be implemented.

• To discuss what went well, what went wrong, and how we can improve.

• To identify pain areas and make plans to overcome them.

• Examples: training needs, infrastructure, identifying dependencies, stakeholders, tools and techniques, etc.

28. CSAT parameters

• Adherence to commitments

• Quality of deliverables

• Accuracy of status updates

• Timely escalation of issues

• Handling challenges

• Risk management

• Managing people issues and attrition

• Flexibility

• Delivering on schedule and within effort estimates

29. Defect management in Quality Center

When you submit a defect to a Quality Center project, it is tracked through these stages: New, Open, Fixed, and Closed. A defect may also be Rejected, or it may be Reopened after it is fixed.

When you initially report the defect to the Quality Center project, it is assigned the status New by default. A quality assurance or project manager reviews the defect and determines whether or not to consider it for repair. If the defect is refused, it is assigned the status Rejected. If the defect is accepted, the quality assurance or project manager determines a repair priority, changes its status to Open, and assigns it to a member of the development team. A developer repairs the defect and assigns it the status Fixed. You retest the application, making sure that the defect does not recur. If the defect recurs, the quality assurance or project manager assigns it the status Reopened. If the defect is repaired, the quality assurance or project manager assigns it the status Closed.
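
The lifecycle above is essentially a small state machine. The sketch below is not the Quality Center API; it is only an illustration of the status transitions described in this section.

    # Illustration of the defect statuses described above (status names
    # follow the text in this section; this is not a Quality Center API).
    ALLOWED_TRANSITIONS = {
        "New":      {"Open", "Rejected"},
        "Open":     {"Fixed"},
        "Fixed":    {"Closed", "Reopened"},
        "Reopened": {"Fixed"},
        "Rejected": set(),
        "Closed":   set(),
    }

    def change_status(current, new):
        if new not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError("Illegal transition: %s -> %s" % (current, new))
        return new

    status = "New"
    for step in ["Open", "Fixed", "Reopened", "Fixed", "Closed"]:
        status = change_status(status, step)
        print("Defect status is now:", status)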

30. Test Management Office (TMO)

• To have a single view of all the projects running across the program

• To have a single point of control

• To handle scope management, changes, and change requests

• For better people management and rotation across projects

• Customer expectation management

• Managing program-level risks, challenges, and issues

• Overall management responsibility for the One SIBS Testing Program - Stage 1 and Stage 2 implementation
• Coordinating with OCBC, development vendors, the PMO, and the Wipro testing team for testing activities
• Planning and administering testing for the overall One SIBS program
• Reviewing/approving plans for conformance to the program strategy, program plan, and schedule
• Anchoring the development of the Master Test Plan and Strategy
• Managing scope and change requests for the One SIBS Testing Program initiative
• Milestone reviews
• Managing program-level testing risks
• Escalation and issue management in testing
• Test program governance
• Dependency management across stakeholders

31. RASCI matrix

A Responsibility Assignment Matrix (RAM), also known as a RACI matrix (pronounced "ray-see") or Linear Responsibility Chart (LRC), describes the participation of various roles in completing tasks or deliverables for a project or business process. It is especially useful in clarifying roles and responsibilities in cross-functional/departmental projects and processes.

Responsible
Those who do the work to achieve the task. There is typically one role with a
participation type of Responsible, although others can be delegated to assist in the work
required (see also RASCI below for separately identifying those who participate in a
supporting role).
Accountable (also Approver or final Approving authority)
Those who are ultimately accountable for the correct and thorough completion of the
deliverable or task, and the one to whom Responsible is accountable. In other words, an
Accountable must sign off (Approve) on work that Responsible provides. There must be
only one Accountable specified for each task or deliverable.
Consulted
Those whose opinions are sought; and with whom there is two-way communication.
Informed
Those who are kept up-to-date on progress, often only on completion of the task or
deliverable; and with whom there is just one-way communication.
Support
Resources allocated to Responsible. Unlike Consulted, who may provide input to the
task, Support will assist in completing the task.
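
A small hypothetical example (the task and role assignments below are invented purely for illustration) showing how one testing task's RASCI assignments might be recorded:

    # Hypothetical example only: RASCI assignments for one testing task.
    rasci = {
        "Prepare Master Test Plan": {
            "Responsible": "Test Lead",
            "Accountable": "Test Manager",
            "Support":     "Senior Test Engineer",
            "Consulted":   "Development Lead",
            "Informed":    "PMO",
        }
    }
    for task, roles in rasci.items():
        print(task)
        for letter, who in roles.items():
            print("  %-12s %s" % (letter + ":", who))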

32. Program Management

Program management: the process of managing multiple interdependent projects that together enable an organization to achieve its intended business benefits. Projects deliver outputs (for example, an IT system that has been set up); programs create outcomes that meet business objectives (for example, an improvement in business metrics).

Program governance: the function that ensures program management objectives are met through performance reviews, eliminating risks, resolving issues, and enabling the team to manage the program effectively and efficiently.

Program Management Processes


• Service delivery processes – ITIL, CMMI

• Risk management process

• Technology (life cycle) management process

• Knowledge management

• Project management process - PMI

• Contract management process

• Finance management process

• Issue and escalation management process

• Communication management process

• Performance management process

• Relationship management process

• Change management process



Governance

• SLA Management: Scope Finalization, SLA Definition, SLA Assessment, SLA Assurance

• Activity Management: Activity Definition, Activity Sequencing, Schedule Development, Schedule Control

• Resource Management: Team Planning, People Acquisition, Team Development

• Quality Management: Quality Planning, Quality Assurance, Quality Control

• Financial Management: Estimation, Budgeting, Cost Control, HR

• Communication Management: Communication Planning, Performance Reporting, Communication Mechanism

• Risk Management: Risk Assessment, Impact Analysis, Risk Mitigation

• Contract Management: Legal and Regulatory Compliance, Contract Administration, Contract Closure

33. Finding faults early


It is commonly believed that the earlier a defect is found, the cheaper it is to fix.[16] The following table shows the cost of fixing a defect depending on the stage at which it is found.[17] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review.

                                     Time detected
Time introduced     Requirements   Architecture   Construction   System test   Post-release
Requirements        1×             3×             5–10×          10×           10–100×
Architecture        -              1×             10×            15×           25–100×
Construction        -              -              1×             10×           10–25×

34. Software quality assurance (SQA) vs Testing


Though controversial, software testing may be viewed as an important part of the software
quality assurance (SQA) process.[12] In SQA, software process specialists and auditors take a
broader view on software and its development. They examine and change the software
engineering process itself to reduce the amount of faults that end up in the delivered software:
the so-called defect rate.

What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane.

Although there are close links with SQA, testing departments often exist independently, and
there may be no SQA function in some companies.

Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

35. Severity vs Priority
Severity refers to the technical impact of the defect on the application (for example, if the application shuts down when one function is performed, the severity is high).

Priority refers to the business urgency of fixing the defect (for example, a defect in a build that is about to be delivered may be low severity but high priority). A classic illustration: an application crash in a rarely used administration screen is high severity but may be low priority, while a spelling mistake in the company name on the home page is low severity but high priority.

Severity is decided by the tester; priority is decided by the project manager.

36. Functional testing vs System testing vs User Acceptance testing

Functional testing is based on the functional requirements of the application, whereas system testing is end-to-end testing that covers all of the functionality along with performance, usability, database, and stress testing.

Functional testing is a subset of system testing; both are black-box testing.

User Acceptance Testing:

This type of testing is done by the customer/client, or by a testing team on the client's side, to determine whether the application has been developed as per the requirements they specified. The application is tested for all its features from an end user's perspective. It may be performed within the development organization or at the client site. Alpha testing and beta testing are examples of acceptance testing.

System Testing:

End-to-end testing, performed by the testers, to determine whether the application satisfies the requirements specified in the SRS. (System testing is covered in more detail in the earlier sections.)

37. Automation and Automation frameworks

"The framework is a tiered organization of the function libraries."

In other words, our test automation script simply contains calls to functions instead of individual statements. These functions are reusable, and any test script can be constructed by calling them.

The code for these library functions is found in the next tier (level). The functions themselves might be defined in terms of another set of functions, which are again described in the next tier.

This is called an automation framework. It can be 2-tier, 3-tier, ..., n-tier.
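
A minimal sketch of this tiered (function-library) idea, assuming hypothetical function names, locators, and a stand-in driver object (not tied to any specific tool's API): the top-level test script only calls business-level functions, which in turn call lower-tier UI actions.

    # Sketch of a tiered function-library framework (all names hypothetical).
    class FakeElement:
        def click(self): print("  click")
        def send_keys(self, text): print("  type:", text)

    class FakeDriver:
        """Stand-in for a real UI driver so the sketch can run as-is."""
        def find_element(self, locator):
            print("locate", locator)
            return FakeElement()

    # Tier 3 (lowest): generic UI actions.
    def click(driver, locator):
        driver.find_element(locator).click()

    def type_text(driver, locator, text):
        driver.find_element(locator).send_keys(text)

    # Tier 2: business-level functions built from the tier-3 actions.
    def login(driver, user, password):
        type_text(driver, "id=username", user)
        type_text(driver, "id=password", password)
        click(driver, "id=login-button")

    def create_order(driver, item, quantity):
        click(driver, "id=new-order")
        type_text(driver, "id=item", item)
        type_text(driver, "id=quantity", str(quantity))
        click(driver, "id=submit")

    # Tier 1 (top): the test script is just calls to library functions.
    def test_place_order(driver):
        login(driver, "qa_user", "secret")
        create_order(driver, "Widget", 2)

    test_place_order(FakeDriver())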

A test automation framework is a set of assumptions, concepts, and practices that provide support for automated software testing. Several basic framework types are described below.

There is no hard and fast rule for using a specific automation framework; it all depends on your project needs. Here is some guidance:

A data-driven approach is suitable for applications that have limited functionality but a large number of variations in terms of test data.

A functional framework is suitable for applications that have a variety of functionality but limited variations in terms of test data.

A hybrid test automation framework is suitable for applications that have a variety of functionality and a larger number of variations in terms of test data.

The record, enhance, and play back methodology is suitable for converting small to medium-sized manual scripts into equivalent automation scripts on a one-to-one basis.
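
A minimal sketch of the data-driven approach (the file name, columns, and login() routine below are hypothetical, used only for illustration): one test routine is executed once per row of external test data.

    # Data-driven sketch: one test routine, many data rows.
    # "login_data.csv" and login() are hypothetical stand-ins.
    import csv

    def login(username, password):
        # Stand-in for the real application call; accepts one known account.
        return "success" if (username, password) == ("qa_user", "secret") else "failure"

    with open("login_data.csv", newline="") as f:
        for row in csv.DictReader(f):   # expected columns: username,password,expected
            outcome = login(row["username"], row["password"])
            result = "PASS" if outcome == row["expected"] else "FAIL"
            print(result, row["username"])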
