Contents
1. Test strategy document
2. The whole QA process
3. Metrics used in testing
   Software Quality Metrics
   Defect Removal Efficiency
4. Various types of testing
   Performance vs. load vs. stress testing
   More on performance vs. load testing
5. Test automation - advantages
   Best Practices in Automated Testing
   Automated Testing Advantages, Disadvantages and Guidelines
6. Software Quality Management
7. What is Defect Tracking?
8. QA Plan - various sections explained
9. Effective Software Testing
10. FAQs
11. Software lifecycle models
    The Phases of the V-model
    Verification Phases
      Requirements analysis
      System Design
      Architecture Design
      Module Design
    Validation Phases
      Unit Testing
      Integration Testing
      System Testing
      User Acceptance Testing
12. Product testing & ISO certification
Test environment:
Plan for automation; a separate strategy document for automation can be added.
Metric plan:
Defect density
Residual defect density
Code coverage
No. of test cases executed per day
No. of test cases passed
No. of test cases failed.
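The counts in this metric plan reduce to simple arithmetic; a minimal sketch in Python (the function names and sample figures are illustrative, not from any real project):

```python
def defect_density(defects_found, size_kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def residual_defect_density(defects_remaining, size_kloc):
    """Estimated defects still latent in the shipped code, per KLOC."""
    return defects_remaining / size_kloc

def execution_summary(executed, passed):
    """Daily test-execution counts; failed = executed - passed."""
    return {"executed": executed, "passed": passed, "failed": executed - passed}

print(defect_density(45, 30))         # 1.5 defects per KLOC
print(execution_summary(120, 111))
```

Code coverage is normally collected by a tool rather than computed by hand, but the ratio reported is the same idea: lines exercised divided by total lines.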
Risk Management:
A risk is an obstacle that would prevent one from reaching a defined goal, leading to adverse business impact.
Identifying risks and coming up with a risk mitigation (avoidance) plan and a contingency (resolution) plan is called risk management.
The simplest way to identify the list of risks is to ask the team: in what ways can the project fail? All the relevant answers can be considered risks for the project.
Eg :
Templates:
Wipro recommends the use of the standard test templates available in veloci-Q; some project teams use the templates given by the customer.
Example :
Req id : 001
Traceability Matrix:
The traceability matrix defines the mapping between customer requirements and the test cases prepared by test engineers. It is also called the requirements traceability matrix or requirements validation matrix. The testing team uses it to verify how far the prepared test cases cover the requirements of the functionalities to be tested.
Req ID   No. of test cases   Tested   Tested Implicitly
1.1.1    1                   x
1.1.2    2                   x        x
1.1.3    2                   x        x
1.1.4    1                   x
1.1.5    2                   x        x
1.1.6    1                   x
1.1.7    1                   x
1.2.1    2                   x        x
1.2.2    2                   x        x
1.2.3    2                   x        x
1.3.1    1                   x
1.3.2    1                   x
1.3.3    1                   x
1.3.4    1                   x
1.3.5    1                   x
etc…
5.6.2    1                   x
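A traceability matrix like the one above can be kept as a plain mapping from requirement IDs to covering test cases, which also makes coverage gaps easy to spot; a sketch (the IDs are illustrative):

```python
# Map each requirement ID to the test cases that cover it.
traceability = {
    "1.1.1": ["TC-01"],
    "1.1.2": ["TC-02", "TC-03"],
    "1.1.3": [],          # not yet covered by any test case
}

def uncovered(matrix):
    """Requirements with no test case mapped to them."""
    return [req for req, cases in matrix.items() if not cases]

def coverage_pct(matrix):
    """Percentage of requirements covered by at least one test case."""
    covered = sum(1 for cases in matrix.values() if cases)
    return 100.0 * covered / len(matrix)

print(uncovered(traceability))    # ['1.1.3']
print(coverage_pct(traceability))
```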
Suspension criteria specify the circumstances under which testing can be suspended.
For example, the entry criteria to start system testing are: unit/integration testing must be completed, and the system test build should be stable.
Test exit criteria: no open high-severity bugs, and all test results and reports are available.
Test completion:
Decide based on the trend. For example, if you have completed 50% of testing and the defect rate is still very high, with high-severity defects occurring, you can ask for a schedule extension to accommodate retesting of the high-severity defects.
On the other hand, if the team has completed 95% of testing, the defect rate is very low, and all remaining defects are minor, you can declare test completion, but mention that 10 test cases were not executed.
Error estimation:
Error estimation helps you answer the question "Have I found enough defects/bugs?"
One of the most popular methods of error estimation is "error seeding". This involves intentionally seeding known errors into the code. If testing finds s of the S seeded errors and n unseeded (indigenous) errors, then:
Estimated total unseeded errors = n X (S / s)
Estimated remaining unseeded errors = n X (S / s) - n
Phase                     Unseeded errors found   Seeded errors found
Unit test                 65                      0
Integration/Module test   30                      60
System test               3                       35
Total                     98                      95
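The seeding formula can be applied directly to totals like those in the table; in this sketch the figure of 100 total seeded errors is an assumed example, since the table only shows how many were found:

```python
def estimate_indigenous_total(indigenous_found, seeded_total, seeded_found):
    """Estimated total indigenous (unseeded) errors: n * (S / s)."""
    return indigenous_found * seeded_total / seeded_found

# Totals from the table: 98 indigenous and 95 seeded errors found.
SEEDED_TOTAL = 100          # assumption for illustration only
estimate = estimate_indigenous_total(98, SEEDED_TOTAL, 95)
remaining = estimate - 98   # estimated indigenous errors not yet found
print(round(estimate, 1), round(remaining, 1))   # prints: 103.2 5.2
```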
QA plays a crucial role in any project, being involved right from the first phase. Suppose we have received a project proposal from a client.
1. In the first stage (requirements stage), we collect the requirements from the customer/client and review them to check achievability. High-level QA personnel (an SQA Analyst or SQA Manager) are involved here.
2. Once the SRS/FRS is prepared, it is reviewed in parallel by a QA Manager to check whether we are in a position to handle the execution with the given schedule and resources. The QA Analyst starts preparing system test cases here.
3. Once the SRS is converted into design documents (HLD and LLD), there is again a review by a QA Analyst/Architect to check whether the design is optimal and meets the standards. The QA person starts preparing integration-level test cases here.
4. Once the design is put into implementation, the QA person is involved in reviews and unit-level testing.
5. Once all the modules/units are coded, development starts integrating those units into a single unit. The QA person tests here using the integration test cases already prepared in step 3.
6. Once integration is done, QA people use system-level test cases to test the system behaviour. At the same time, the requirements traceability matrix is prepared to check that all the given requirements have been transformed into test cases.
7. Once the complete system is deployed on the offshore test server, the QA person conducts different types of testing to check the functionality of the application, e.g. pre-deployment testing.
As depicted above, the QA Analyst is involved in almost all stages of the PDC. If you are applying for a managerial level, you can stress the first three points more; if you are applying for a junior level, you can exclude the first two points. All the best -- Vijay Sarvepalli
Metric = Formula
Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC = thousands of lines of code, FP = function points)
Number of tests per unit size = Number of test cases per KLOC/FP
Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100
Effectiveness of testing to business = Loss due to problems / total resources processed by the system
Source Code Analysis = Number of source code statements changed / total number of tests
Test Planning Productivity = No. of test cases designed / actual effort for design and documentation
Test Execution Productivity = No. of test cycles executed / actual effort for testing
We best manage what we can measure. Measurement enables the organization to improve the software process; assists in planning, tracking and controlling the software project; and assesses the quality of the software thus produced. It is the measurement of specific attributes of the process, project and product that is used to compute the software metrics. Metrics are analyzed and provide a dashboard to management on the overall health of the process, project and product. Generally, the validation of the metrics is a continuous process spanning multiple projects. The kind of metrics employed generally account for whether the quality requirements have been achieved or are likely to be achieved during the software development process. As a quality assurance process, a metric needs to be revalidated every time it is used. Two leading firms, IBM and Hewlett-Packard, have placed a great deal of importance on software quality. IBM measures user satisfaction and software acceptability in eight dimensions: capability or functionality, usability, performance, reliability, installability, maintainability, documentation, and availability. For its software quality metrics, Hewlett-Packard follows the five Juran quality parameters: functionality, usability, reliability, performance and serviceability. In general, for most software quality assurance systems the common software metrics that are checked for improvement are source lines of code, cyclomatic complexity of the code, function point analysis, bugs per line of code, code coverage, number of classes and interfaces, and cohesion and coupling between the modules.
Software quality metrics focus on the process, project and product. By analyzing the metrics, the organization can take corrective action to fix those areas in the process, project or product which cause the software defects.
The de-facto definition of software quality consists of two major attributes: intrinsic product quality and user acceptability. The software quality metric encapsulates these two attributes, addressing the mean time to failure and defect density within the software components, and finally assesses user requirements and acceptability of the software. The intrinsic quality of a software product is generally measured by the number of functional defects in the software, often referred to as bugs, or by testing the software at run time for inherent vulnerability to determine the software "crash" scenarios. In operational terms, the two metrics are often described as defect density (rate) and mean time to failure (MTTF).
Although there are many measures of software quality, correctness, maintainability, integrity and usability
provide useful insight.
Correctness
A program must operate correctly. Correctness is the degree to which the software performs the required functions accurately. One of the most common measures is defects per KLOC, where KLOC means thousands (kilo) of lines of code. KLOC measures the size of a computer program by counting the number of lines of source code the program has.
Maintainability
Maintainability is the ease with which a program can be corrected if an error occurs. Since there is no direct way of measuring this, it is measured indirectly. MTTC (mean time to change) is one such measure: it measures, when an error is found, how much time it takes to analyze the change, design the modification, implement it and test it.
Integrity
This measures the system's ability to withstand attacks on its security. In order to measure integrity, two additional parameters, threat and security, need to be defined. Threat: the probability that an attack of a certain type will occur over a period of time. Security: the probability that an attack of a certain type will be repelled over a period of time. Integrity = summation [(1 - threat) X (1 - security)]
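The integrity formula sums over attack types; a sketch using the formula as stated above, with made-up threat/security probabilities:

```python
def integrity(attacks):
    """attacks: list of (threat, security) probability pairs, one per attack type.
    Integrity = sum of (1 - threat) * (1 - security) over attack types."""
    return sum((1 - t) * (1 - s) for t, s in attacks)

# Two hypothetical attack types: (threat, security) probabilities.
print(integrity([(0.25, 0.95), (0.50, 0.90)]))
```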
Usability
How usable is your software application? This important characteristic of your application is measured in terms of the following characteristics:
Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if the DRE is low during analysis and design, it means you should spend time improving the way you conduct formal technical reviews.
DRE = E / (E + D)
where E = number of errors found before delivery of the software and D = number of errors found after delivery.
The ideal value of DRE is 1, which means no defects were found after delivery. If you score low on DRE, you need to re-examine your existing process. In essence, DRE is an indicator of the filtering ability of quality control and quality assurance activities. It encourages the team to find as many defects as possible before they are passed to the next activity stage. Some of the metrics are listed here:
Test Coverage = Number of units (KLOC/FP) tested / total size of the system
Number of tests per unit size = Number of test cases per KLOC/FP
Defects per size = Defects detected / system size
Cost to locate defect = Cost of testing / number of defects located
Defects detected in testing = Defects detected in testing / total system defects
Defects detected in production = Defects detected in production / system size
Quality of Testing = No. of defects found during testing / (No. of defects found during testing + No. of acceptance defects found after delivery) * 100
System complaints = Number of third-party complaints / number of transactions processed
Test Planning Productivity = No. of test cases designed / actual effort for design and documentation
Test Execution Productivity = No. of test cycles executed / actual effort for testing
Test efficiency = Number of tests required / number of system errors
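DRE and the cost-type metrics above are simple ratios; a sketch with illustrative counts:

```python
def dre(errors_before_delivery, defects_after_delivery):
    """Defect Removal Efficiency: E / (E + D). 1.0 means nothing escaped."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

def cost_to_locate_defect(testing_cost, defects_located):
    """Average cost of finding one defect during testing."""
    return testing_cost / defects_located

print(dre(95, 5))                         # 0.95
print(cost_to_locate_defect(50000, 100))  # 500.0
```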
Measure: Metrics
2. Delivered defect quantities: normalized per function point (or per LOC); at product delivery (first 3 months or first year of operation); ongoing (per year of operation); by level of severity; by category or cause, e.g. requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.
3. Responsiveness (turnaround time) to users: turnaround time for defect fixes, by level of severity; time for minor vs. major enhancements; actual vs. planned elapsed time (by customers) in the first year after product delivery.
7. Complexity of delivered product: McCabe's cyclomatic complexity counts across the system; Halstead's measure; Card's design complexity measures; predicted defects and maintenance costs, based on complexity measures.
8. Test coverage: breadth of functional coverage; percentage of paths, branches or conditions actually tested; percentage by criticality level (perceived level of risk of paths); total lines of code exercised by the test suite / total lines of code * 100; ratio of the number of detected faults to the number of predicted faults. Tools: GCOV, javacoverage.
11. Re-work: re-work effort (hours, as a percentage of the original coding hours); re-worked LOC (source lines of code, as a percentage of the total delivered LOC); re-worked software components (as a percentage of the total delivered components).
12. Reliability: availability (percentage of time a system is available, versus the time the system is needed to be available); mean time between failures (MTBF); mean time to repair (MTTR); reliability ratio (MTBF / MTTR); number of product recalls or fix releases; …
Cyclomatic Complexity
At about the same time Halstead founded software science, McCabe proposed a topological or graph-theory measure of cyclomatic complexity as a measure of the number of linearly independent paths that make up a computer program. To compute the cyclomatic complexity of a program that has been graphed or flow-charted, the formula used is

M = V(G) = e - n + 2p

in which e is the number of edges in the graph, n is the number of nodes, and p is the number of connected components (usually 1 for a single program or module).
More simply, it turns out that M is equal to the number of binary decisions in the program plus 1. An n-way case statement would be counted as n - 1 binary decisions. The advantage of this measure is that it is additive for program components or modules. Common usage recommends that no single module have a value of M greater than 10. However, because on average every fifth or sixth program instruction executed is a branch, M strongly correlates with program size or LOC. As with the other early quality measures that focus on programs per se or even their modules, these mask the true source of architectural complexity: interconnections between modules. Later researchers have proposed structure metrics to compensate for this deficiency by quantifying program module interactions. For example, fan-in and fan-out metrics, which are analogous to the number of inputs to and outputs from hardware circuit modules, are an attempt to fill this gap. Similar metrics include the number of subroutine calls and/or macro inclusions per module, and the number of design changes to a module, among others. Kan reports extensive experimental testing of these metrics and also reports that, other than module length, the most important predictors of defect rates are the number of design changes and the complexity level, however it is computed. [9]
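McCabe's measure can be computed either from the flow graph or from the decision count; a sketch (the graph figures are illustrative):

```python
def cyclomatic_from_graph(edges, nodes, components=1):
    """M = V(G) = e - n + 2p for a control-flow graph."""
    return edges - nodes + 2 * components

def cyclomatic_from_decisions(binary_decisions):
    """M = number of binary decisions + 1; an n-way case counts as n - 1."""
    return binary_decisions + 1

# A small module whose flow graph has 9 edges and 7 nodes (one component),
# equivalently containing 3 binary decisions.
print(cyclomatic_from_graph(9, 7))    # 4
print(cyclomatic_from_decisions(3))   # 4
```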
Quality metrics based either directly or indirectly on counting lines of code in a program or its modules
are unsatisfactory. These metrics are merely surrogate indicators of the number of opportunities to
make an error, but from the perspective of the program as coded. More recently the function point has
been proposed as a meaningful cluster of measurable code from the user's rather than the
programmer's perspective. Function points can also be surrogates for error opportunity, but they can
be more. They represent the user's needs and anticipated or a priori application of the program rather
than just the programmer's a posteriori completion of it. A very large program may have millions of
LOC, but an application with 1,000 function points would be a very large application or system indeed.
A function may be defined as a collection of executable statements that performs a task, together with
declarations of formal parameters and local variables manipulated by those statements. A typical
function point metric developed by Albrecht [10] at IBM is a weighted sum of five components that characterize an application: external inputs, external outputs, external inquiries, internal logical files, and external interface files.

FC = sum of (wij X xij) over the component types

These wij represent the average weighting factors, which may vary with program size and complexity; xij is the number of each component type in the application.
The second step employs a scale of 0 to 5 to assess the impact of 14 general system characteristics (such as performance and reusability) in terms of their likely effect on the application.
The scores for these characteristics ci are then summed according to the following formula to find a value adjustment factor (VAF):

VAF = 0.65 + 0.01 X sum of ci

Finally, the number of function points is obtained by multiplying the number of function counts by the value adjustment factor:

FP = FC X VAF
This is actually a highly simplified version of a commonly used method that is documented in the International Function Point Users Group standard (IFPUG, 1999). [11]
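The simplified Albrecht calculation can be sketched as follows; the weights are the commonly quoted "average" weights, and the component counts are illustrative:

```python
# Common average weights for the five component types (Albrecht).
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def function_points(counts, characteristic_scores):
    """counts: component type -> number of occurrences.
    characteristic_scores: 14 values on a 0-5 scale."""
    fc = sum(AVG_WEIGHTS[k] * n for k, n in counts.items())   # function count
    vaf = 0.65 + 0.01 * sum(characteristic_scores)            # value adjustment
    return fc * vaf

counts = {"external_inputs": 10, "external_outputs": 7, "external_inquiries": 5,
          "internal_files": 4, "external_interfaces": 2}
print(function_points(counts, [3] * 14))   # FC = 149, VAF = 1.07
```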
Although function point extrinsic counting metrics and methods are considered more robust than
intrinsic LOC counting methods, they have the appearance of being somewhat subjective and
experimental in nature. As used over time by organizations that develop very large software systems
(having 1,000 or more function points), they show an amazingly high degree of repeatability and
utility. This is probably because they enforce a disciplined learning process on a software development
organization as much as any scientific credibility they may possess.
To the end user of an application, the only measures of quality are in the performance, reliability, and
stability of the application or system in everyday use. This is "where the rubber meets the road," as
users often say. Developer quality metrics and their assessment are often referred to as "where the
rubber meets the sky." This article is dedicated to the proposition that we can arrive at a priori user-
defined metrics that can be used to guide and assess development at all stages, from functional
specification through installation and use. These metrics also can meet the road a posteriori to guide
modification and enhancement of the software to meet the user's changing needs. Caution is advised
here, because software problems are not, for the most part, valid defects, but rather are due to
individual user and organizational learning curves. The latter class of problem places an enormous
burden on user support during the early days of a new release. The catch here is that neither alpha
testing (initial testing of a new release by the developer) nor beta testing (initial testing of a new
release by advanced or experienced users) of a new release with current users identifies these
problems. The purpose of a new release is to add functionality and performance to attract new users,
who initially are bound to be disappointed, perhaps unfairly, with the software's quality. The DFTS
approach we advocate in this article is intended to handle both valid and perceived software problems.
Very satisfied
Satisfied
Neutral
Dissatisfied
Very dissatisfied
Results are obtained for a number of specific dimensions through customer surveys. For example, IBM
uses the CUPRIMDA categories—capability, usability, performance, reliability, installability,
maintainability, documentation, and availability. Hewlett-Packard uses FURPS categories—
functionality, usability, reliability, performance, and serviceability. In addition to calculating
percentages for various satisfaction or dissatisfaction categories, some vendors use the net
satisfaction index (NSI) to enable comparisons across product lines. The NSI has the following
weighting factors:
Completely satisfied = 100%
Satisfied = 75%
Neutral = 50%
Dissatisfied = 25%
Completely dissatisfied = 0%
NSI then ranges from 0% (all customers are completely dissatisfied) to 100% (all customers are
completely satisfied). Although it is widely used, the NSI tends to obscure difficulties with certain
problem products. In this case the developer is better served by a histogram showing satisfaction rates
for each product individually.
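The NSI is a weighted average over survey responses; a sketch in which the 100% and 75% weights for the top two categories are assumed to follow the same 25-point spacing as the others, and the response counts are invented:

```python
NSI_WEIGHTS = {
    "completely satisfied": 1.00,
    "satisfied": 0.75,
    "neutral": 0.50,
    "dissatisfied": 0.25,
    "completely dissatisfied": 0.00,
}

def nsi(responses):
    """responses: category -> number of customers; returns NSI as 0-100%."""
    total = sum(responses.values())
    score = sum(NSI_WEIGHTS[c] * n for c, n in responses.items())
    return 100.0 * score / total

print(nsi({"completely satisfied": 40, "satisfied": 30, "neutral": 20,
           "dissatisfied": 5, "completely dissatisfied": 5}))   # 73.75
```

Note how two very different distributions can yield the same NSI, which is why a per-product histogram is the better diagnostic for problem products.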
ACCEPTANCE TESTING
Testing without knowledge of the internal workings of the item being tested. Tests are
usually functional.
COMPATIBILITY TESTING
Testing how well software performs in a particular hardware, software, operating
system, or network environment.
CONFORMANCE TESTING
Testing to determine whether an implementation conforms to the specification or
standard it claims to follow.
FUNCTIONAL TESTING
Black-box testing aimed at validating that an application meets its functional
requirements. Tests are geared to functionality rather than internal structure.
INTEGRATION TESTING
Testing in which modules are combined and tested as a group. Modules are typically
code modules, individual applications, client and server applications on a network, etc.
Integration Testing follows unit testing and precedes system testing.
LOAD TESTING
Load testing is a generic term covering Performance Testing and Stress Testing.
PERFORMANCE TESTING
Testing conducted to evaluate whether a system meets specified performance
requirements, such as response time and throughput, under a given workload.
REGRESSION TESTING
Retesting a modified program to verify that the changes work as intended and that
previously working functionality has not been broken.
SMOKE TESTING
A quick-and-dirty test that the major functions of a piece of software work without
bothering with finer details. Originated in the hardware testing practice of turning on a
new piece of hardware for the first time and considering it a success if it does not catch
on fire.
STRESS TESTING
Testing conducted beyond normal operational capacity, often to a breaking point,
to observe how and when the system fails and recovers.
SYSTEM TESTING
Testing the complete, integrated system to verify that it meets its specified
requirements.
UNIT TESTING
Functional and reliability testing in an Engineering environment. Producing tests for the
behavior of components of a product to ensure their correct behavior prior to system
integration.
Functionality testing
Performance testing
Load & Stress testing
Regression testing is any type of software testing that seeks to uncover software errors
by partially retesting a modified program. The intent of regression testing is to provide
assurance that a bug fix has corrected the error that was found, while providing a
general assurance that no other errors were introduced in the process of fixing the
original problem. Regression testing is commonly used to efficiently test bug fixes by
systematically selecting the appropriate minimum test suite needed to adequately cover
the affected software code or requirements change. Common methods of regression testing
include rerunning previously run tests and checking whether previously fixed faults have
re-emerged.
"One of the main reasons for regression testing is that it's often extremely difficult for a
programmer to figure out how a change in one part of the software will echo in other
parts of the software."
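Selecting the minimum regression suite for a change can be sketched with a module-to-tests mapping (the mapping and test names are illustrative):

```python
# Map source modules to the test cases that exercise them.
TESTS_BY_MODULE = {
    "login": ["test_login_ok", "test_login_bad_password"],
    "billing": ["test_invoice_total", "test_tax_rounding"],
    "reports": ["test_monthly_report"],
}

def regression_suite(changed_modules):
    """Minimum suite: every test touching a changed module. In practice you
    would also rerun tests for previously fixed bugs to catch re-emergence."""
    suite = []
    for module in changed_modules:
        suite.extend(TESTS_BY_MODULE.get(module, []))
    return sorted(set(suite))

print(regression_suite(["billing"]))   # ['test_invoice_total', 'test_tax_rounding']
```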
Since regression testing verifies the software application after a change has been made,
everything that may be impacted by the change should be tested during regression testing.
Generally, the following areas are covered during regression testing:
Introduction:
‘System Testing’ is the next level of testing. It focuses on testing the system as a whole.
This article attempts to take a close look at the System Testing process and analyze:
Why is System Testing done? What are the necessary steps to perform System Testing? How
can we make it successful?
In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the individual
components are working OK. The ‘Integration testing’ focuses on successful integration of all
the individual pieces of software (components or units of code).
Once the components are integrated, the system as a whole needs to be rigorously tested to
ensure that it meets the Quality Standards.
Thus System Testing builds on the previous levels of testing, namely Unit Testing and
Integration Testing.
- In the Software Development Life Cycle, System Testing is the first level where the system is tested as a whole
- The system is tested to verify whether it meets the functional and technical requirements
- The application/system is tested in an environment that closely resembles the production environment where it will finally be deployed
- System Testing enables us to test, verify and validate both the business requirements as well as the application architecture
When necessary, several iterations of System Testing are done in multiple environments.
As you may have read in the other articles in the testing series, this document typically describes
the following:
- The testing goals
- The key areas to be focused on while testing
- The testing deliverables
- How the tests will be carried out
- The list of things to be tested
- Roles and responsibilities
- Prerequisites to begin testing
- Test environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary
A Test Case describes exactly how the test should be carried out.
The System test cases help us verify and validate the system.
The System Test Cases are written such that:
- They cover all the use cases and scenarios
- The test cases validate the technical requirements and specifications
- The test cases verify whether the application/system meets the business and functional requirements specified
- The test cases may also verify whether the system meets the performance standards
Since a dedicated test team may execute the test cases, it is necessary that the System
Test Cases be detailed and unambiguous. Detailed test cases help the test executors do the
testing as specified without any ambiguity.
The format of the System Test Cases may be like that of all other test cases, as illustrated below:
Test Case ID
Test Case Description:
o What to Test?
o How to Test?
Input Data
Expected Result
Actual Result
Test Case ID | What to Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail
.            | .             | .            | .          | .               | .             | .
1) Test Coverage: System Testing will be effective only to the extent of the coverage of Test
Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered by the test
cases are sufficient. The Test cases should “cover” all scenarios, use cases, Business
Requirements, Technical Requirements, and Performance Requirements. The test cases should
enable us to verify and validate that the system/application meets the project goals and
specifications.
2) Defect Tracking: The defects found during the process of testing should be tracked.
Subsequent iterations of test cases verify if the defects have been fixed.
3) Test Execution: The Test cases should be executed in the manner specified. Failure to do so
results in improper Test Results.
4) Build Process Automation: A lot of errors occur due to an improper build. A 'build' is a
compilation of the various components that make up the application, deployed in the
appropriate environment. The test results will not be accurate if the application is not
built correctly or if the environment is not set up as specified. Automating this process
may help reduce manual errors.
5) Test Automation: Automating the test process could help us in many ways:
- Some scenarios can be simulated if the tests are automated, for instance
simulating a large number of users or simulating increasingly large amounts
of input/output data
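Simulating a large number of users, as mentioned above, becomes straightforward once tests are automated; a sketch that runs a stand-in user scenario on a thread pool and reports response times (the scenario function and user count are placeholders):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(user_id):
    """Stand-in for one automated user scenario; returns its response time."""
    start = time.perf_counter()
    time.sleep(0.01)                 # placeholder for the real scenario work
    return time.perf_counter() - start

def simulate_users(n_users):
    """Run n_users concurrent scenarios; return worst and average times."""
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        times = list(pool.map(transaction, range(n_users)))
    return max(times), sum(times) / len(times)

worst, average = simulate_users(20)
print(f"worst={worst:.3f}s average={average:.3f}s")
```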
6) Documentation: Proper Documentation helps keep track of Tests executed. It also helps
create a knowledge base for current and future projects. Appropriate metrics/Statistics can be
captured to validate or verify the efficiency of the technical design /architecture.
Performance: what is our peak processing capability (CPU/DB/memory within tolerance and
steady)?
Load: when does our peak processing capability begin to degrade (CPU/DB/memory begins
to run out)?
Stress: when does our processing capability degrade below our expectations
(CPU/DB/memory gone)?
Here's a good interview question for a tester: how do you define performance/load/stress testing?
Many times people use these terms interchangeably, but they have in fact quite different meanings.
This post is a quick review of these concepts, based on my own experience, but also using definitions
from testing literature -- in particular: "Testing computer software" by Kaner et al, "Software testing
techniques" by Loveland et al, and "Testing applications on the Web" by Nguyen et al.
From the referrer logs I see that this post comes up fairly often in Google searches. I'm updating it
with a link to a later post I wrote called 'More on performance vs. load testing'.
Performance testing
The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a
baseline for future regression testing. To conduct performance testing is to engage in a carefully
controlled process of measurement and analysis. Ideally, the software under test is already stable
enough so that this process can proceed smoothly.
A clearly defined set of expectations is essential for meaningful performance testing. If you don't know
where you want to go in terms of the performance of the system, then it matters little which direction
you take (remember Alice and the Cheshire Cat?). For example, for a Web application, you need to
know the expected load in terms of concurrent users or HTTP connections, and the acceptable
response times.
Once you know where you want to be, you can start on your way there by constantly increasing the
load on the system while looking for bottlenecks. To take again the example of a Web application,
these bottlenecks can exist at multiple levels, and to pinpoint them you can use a variety of tools:
- at the application level, developers can use profilers to spot inefficiencies in their code (for
example, in slow or frequently called methods)
- at the database level, developers and DBAs can use database-specific profilers and query
optimizers
- at the operating system level, system engineers can use utilities such as top, vmstat, iostat
(on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources such as CPU,
memory, swap, disk I/O; specialized kernel monitoring software can also be used
- at the network level, network engineers can use packet sniffers such as tcpdump, network
protocol analyzers such as ethereal, and various utilities such as netstat, MRTG, ntop and mii-tool
From a testing point of view, the activities described above all take a white-box approach, where the
system is inspected and monitored "from the inside out" and from a variety of angles. Measurements
are taken and analyzed, and as a result, tuning is done.
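Alongside tools like top and vmstat, the same kind of inside-out measurement can be taken programmatically. As a minimal sketch (assuming a Unix-type system, since Python's `resource` module is Unix-only), compare wall-clock time against CPU time consumed by a workload:

```python
import resource  # Unix-only; used here for "inside out" measurement
import time

def measure(workload):
    # Run a workload and report wall-clock seconds vs. user CPU seconds
    # consumed by this process -- a crude, white-box style measurement.
    cpu_start = resource.getrusage(resource.RUSAGE_SELF).ru_utime
    wall_start = time.perf_counter()
    workload()
    wall = time.perf_counter() - wall_start
    cpu = resource.getrusage(resource.RUSAGE_SELF).ru_utime - cpu_start
    return wall, cpu

wall_secs, cpu_secs = measure(lambda: sum(i * i for i in range(200_000)))
```

A large gap between wall-clock and CPU time hints that the workload is waiting on something (I/O, locks, the network) rather than computing.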
However, testers also take a black-box approach in running the load tests against the system under
test. For a Web application, testers will use tools that simulate concurrent users/HTTP connections and
measure response times. Some lightweight open source tools I've used in the past for this purpose are
ab, siege, httperf. A more heavyweight tool I haven't used yet is OpenSTA. I also haven't used The
Grinder yet.
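Whatever load tool is used, its raw output is essentially a set of response times. A small helper can reduce them to the numbers testers actually compare against expectations; the mean and the nearest-rank 95th percentile used here are common choices, not a prescription:

```python
def summarize(response_times):
    # Reduce raw response times (seconds) to mean and 95th percentile.
    ordered = sorted(response_times)
    mean = sum(ordered) / len(ordered)
    idx = max(0, round(0.95 * len(ordered)) - 1)  # nearest-rank percentile
    return mean, ordered[idx]

# 90 fast responses and 10 slow ones -- the percentile exposes the tail
mean, p95 = summarize([0.1] * 90 + [1.0] * 10)
```

Percentiles matter because a healthy-looking mean can hide a long tail of slow requests that real users will notice.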
When the results of the load test indicate that performance of the system does not meet its expected
goals, it is time for tuning, starting with the application and the database. You want to make sure your
code runs as efficiently as possible and your database is optimized on a given OS/hardware
configuration. TDD practitioners will find very useful in this context a framework such as Mike Clark's
jUnitPerf, which enhances existing unit test code with load test and timed test functionality. Once a
particular function or method has been profiled and tuned, developers can then wrap its unit tests in
jUnitPerf and ensure that it meets performance requirements of load and timing. Mike Clark calls this
"continuous performance testing". I should also mention that I've done an initial port of jUnitPerf to
Python.
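The jUnitPerf idea of wrapping an existing test with a time budget can be sketched in a few lines. This is an illustrative Python analogue of the concept, not the jUnitPerf API:

```python
import functools
import time

def timed_test(max_seconds):
    # Decorator: fail the wrapped test if it exceeds its time budget,
    # mirroring the "timed test" idea from jUnitPerf.
    def decorate(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = test_fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > max_seconds:
                raise AssertionError(
                    f"{test_fn.__name__} took {elapsed:.3f}s, "
                    f"budget was {max_seconds}s")
            return result
        return wrapper
    return decorate

@timed_test(max_seconds=1.0)
def test_lookup():
    # An ordinary unit test, now also a performance regression test.
    assert sum(range(1000)) == 499500
```

Once a function has been profiled and tuned, wrapping its unit test this way turns every test run into a check against the performance baseline.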
If, after tuning the application and the database, the system still doesn't meet its expected goals in
terms of performance, a wide array of tuning procedures is available at all the levels discussed
before. Here are some examples of things you can do to enhance the performance of a Web
application:
- publish highly-requested Web pages statically, so that they don't hit the database
- scale the database servers horizontally and split them into read/write servers and read-only
servers
- scale the Web and database servers vertically, by adding more hardware resources (CPU,
RAM, disks)
Performance tuning can sometimes be more art than science, due to the sheer complexity of the
systems involved in a modern Web application. Care must be taken to modify one variable at a time
and redo the measurements; otherwise, multiple changes can have subtle interactions that are hard to
untangle.
In a standard test environment such as a test lab, it will not always be possible to replicate the
production server configuration. In such cases, a staging environment is used which is a subset of the
production environment. The expected performance of the system needs to be scaled down
accordingly.
The cycle "run load test->measure performance->tune system" is repeated until the system under
test achieves the expected levels of performance. At this point, testers have a baseline for how the
system behaves under normal conditions. This baseline can then be used in regression tests to gauge
how well a new version of the software behaves under load.
Another common goal of performance testing is to establish benchmark numbers for the system under
test. There are many industry-standard benchmarks such as the ones published by TPC, and many
hardware/software vendors will fine-tune their systems in such ways as to obtain a high ranking in the
TPC top tens. It is common knowledge that one needs to be wary of any performance claims that do
not include a detailed specification of all the hardware and software configurations that were used in
obtaining those results.
Load testing
We have already seen load testing as part of the process of performance testing and tuning. In that
context, it meant constantly increasing the load on the system via automated tools. For a Web
application, the load is typically defined in terms of concurrent users or HTTP connections.
In the testing literature, the term "load testing" is usually defined as the process of exercising the
system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called
volume testing or longevity/endurance testing. For example:
- a specific case of volume testing is zero-volume testing, where the system is fed empty tasks
- an example of longevity/endurance testing is testing a client-server application by running the
client in a loop against the server over an extended period of time
Goals of load testing include:
- expose bugs that do not surface in cursory testing, such as memory management bugs,
memory leaks and buffer overflows
- ensure that the application meets the performance baseline established during performance
testing. This is done by running regression tests against the application at a specified maximum
load.
Although performance testing and load testing can seem similar, their goals are different. On one
hand, performance testing uses load testing techniques and tools for measurement and benchmarking
purposes and uses various load levels. On the other hand, load testing operates at a predefined load
level, usually the highest load that the system can accept while still functioning properly. Note that
load testing does not aim to break the system by overwhelming it, but instead tries to keep the
system constantly humming like a well-oiled machine.
In the context of load testing, I want to emphasize the extreme importance of having large datasets
available for testing. In my experience, many important bugs simply do not surface unless you deal
with very large entities such as thousands of users in repositories such as LDAP/NIS/Active Directory,
thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies
on file systems, etc. Testers obviously need automated tools to generate these large data sets, but
fortunately any good scripting language worth its salt will do the job.
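Acting on the large-dataset point takes only a few lines of scripting. This sketch fabricates synthetic user records in memory (the field names are hypothetical); the same records could just as easily be written out as LDIF entries, SQL inserts, or mailbox directories:

```python
import random
import string

def make_users(count, seed=42):
    # Generate `count` synthetic user records with unique logins.
    rng = random.Random(seed)  # seeded, so test runs are repeatable
    users = []
    for i in range(count):
        users.append({
            "login": f"user{i:06d}",          # unique, predictable login
            "uid": 10_000 + i,
            "password": "".join(rng.choices(string.ascii_letters, k=12)),
        })
    return users

users = make_users(5000)
```

Seeding the generator is deliberate: a repeatable dataset means a failing test can be reproduced exactly on the next run.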
Stress testing
Stress testing tries to break the system under test by overwhelming its resources or by taking
resources away from it (in which case it is sometimes called negative testing). The main purpose
behind this madness is to make sure that the system fails and recovers gracefully -- this quality is
known as recoverability.
Where performance testing demands a controlled environment and repeatable measurements, stress
testing joyfully induces chaos and unpredictability. To take again the example of a Web application,
here are some ways in which stress can be applied to the system:
- randomly shut down and restart ports on the network switches/routers that connect the
servers
- run processes that consume resources (CPU, memory, disk, network) on the Web and
database servers
I'm sure devious testers can enhance this list with their favorite ways of breaking systems. However,
stress testing does not break the system purely for the pleasure of breaking it, but instead it allows
testers to observe how the system reacts to failure. Does it save its state or does it crash suddenly?
Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last
good state? Does it print out meaningful error messages to the user, or does it merely display
incomprehensible hex codes? Is the security of the system compromised because of unexpected
failures?
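The "run processes that consume resources" idea can be sketched safely with a bounded burn: spawn a few workers that spin the CPU until a deadline, then verify they all finished. This is a deliberately polite toy; a real stress run would use separate processes (to sidestep Python's GIL) and far longer periods:

```python
import threading
import time

def burn_cpu(deadline):
    # Spin until the shared deadline passes, consuming CPU cycles.
    x = 0
    while time.monotonic() < deadline:
        x += 1

def apply_stress(n_workers=2, seconds=0.2):
    # Run n_workers CPU burners for a bounded period; return how long
    # the stress phase actually lasted.
    deadline = time.monotonic() + seconds
    workers = [threading.Thread(target=burn_cpu, args=(deadline,))
               for _ in range(n_workers)]
    started = time.monotonic()
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return time.monotonic() - started

elapsed = apply_stress()
```

The interesting part of stress testing is not the burner itself but observing how the system under test behaves, and recovers, while the burner is running.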
Conclusion
I am aware that I only scratched the surface in terms of issues, tools and techniques that deserve to
be mentioned in the context of performance, load and stress testing. I personally find the topic of
performance testing and tuning particularly rich and interesting, and I intend to post more articles on
this subject in the future.
More on performance vs. load testing
I recently got some comments/questions related to my previous blog entry on performance vs. load
vs. stress testing. Many people are still confused as to exactly what the difference is between
performance and load testing. I've been thinking more about it and I'd like to propose the following
question as a litmus test to distinguish between these two types of testing: are you actively profiling
your application code and/or monitoring the server(s) running your application? If the answer is yes,
then you're engaged in performance testing. If the answer is no, then what you're doing is load
testing.
Another way to look at it is to see whether you're doing more of a white-box type testing as opposed
to black-box testing. In the white-box approach, testers, developers, system administrators and DBAs
work together in order to instrument the application code and the database queries (via specialized
profilers for example), and the hardware/operating system of the server(s) running the application
and the database (via monitoring tools such as vmstat, iostat, top or Windows PerfMon). All these
activities pertain to performance testing.
The black box approach is to run client load tools against the application in order to measure its
responsiveness. Such tools range from lightweight, command-line driven tools such as httperf,
openload, siege, Apache Flood, to more heavy duty tools such as OpenSTA, The Grinder, JMeter. This
type of testing doesn't look at the internal behavior of the application, nor does it monitor the
hardware/OS resources on the server(s) hosting the application. If this sounds like the type of testing
you are doing, then you are doing load testing.
In practice though the 2 terms are often used interchangeably, and I am as guilty as anyone else of
doing this, since I called one of my recent blog entries "HTTP performance testing with httperf,
autobench and openload" instead of calling it more precisely "HTTP load testing". I didn't have access
to the application code or the servers hosting the applications I tested, so I wasn't really doing
performance testing, only load testing.
I think part of the confusion is that no matter how you look at these two types of testing, they have
one common element: the load testing part. Even when you're profiling the application and monitoring
the servers (hence doing performance testing), you still need to run load tools against the application,
so the load testing part is always present.
As far as I'm concerned, these definitions don't have much value in and of themselves. What matters
most is to have a well-established procedure for tuning the application and the servers so that you can
meet your users' or your business customers' requirements. This procedure will use elements of all the
types of testing mentioned here and in my previous entry: load, performance and stress testing.
Here's one example of such a procedure. Let's say you're developing a Web application with a
database back-end that needs to support 100 concurrent users, with a response time of less than 3
seconds. How would you go about testing your application in order to make sure these requirements
are met?
1. Start with 1 Web/Application server connected to 1 Database server. If you can, put both servers
behind a firewall, and if you're thinking about doing load balancing down the road, put the Web server
behind the load balancer. This way you'll have one of each type of device that you'll use in a real
production environment.
2. Run a load test against the Web server, starting with 10 concurrent users, each user sending a total
of 1000 requests to the server. Step up the number of users in increments of 10, until you reach 100
users.
3. While you're blasting the Web server, profile your application and your database to see if there are
any hot spots in your code/SQL queries/stored procedures that you need to optimize. I realize I'm
glossing over important details here, but this step is obviously highly dependent on your particular
application.
Also monitor both servers (Web/App and Database) via command line utilities mentioned before (top,
vmstat, iostat, netstat, Windows PerfMon). These utilities will let you know what's going on with the
servers in terms of hardware resources. Also monitor the firewall and the load balancer (many times
you can do this via SNMP) -- but these devices are not likely to be a bottleneck at this level, since
they usually can deal with thousands of connections before they hit a limit, assuming they're
properly configured.
This is one of the most important steps in the whole procedure. It's not easy to make sense of the
output of these monitoring tools; you need somebody who has a lot of experience in system/network
architecture and administration. On Sun/Solaris platforms, there is a tool called the SE Performance
Toolkit that tries to alleviate this task via built-in heuristics that kick in when certain thresholds are
reached.
4. Let's say your Web server's reply rate starts to level off around 50 users. Now you have a
repeatable condition that you know causes problems. All the profiling and monitoring you've done in
step 3 should have already given you a good idea about hot spots in your application, about SQL
queries that are not optimized properly, and about resource status at the hardware/OS level.
At this point, the developers need to take back the profiling measurements and tune the code and the
database queries. The system administrators can also increase server performance simply by throwing
more hardware at the servers -- especially, in my experience, more RAM at the Web/App server.
5. Let's say the application/database code, as well as the hardware/OS environment have been tuned
to the best of everybody's abilities. You re-run the load test from step 2 and now you're at 75
concurrent users before the reply rate starts to level off.
At this point, there's not much you can do with the existing setup. It's time to think about scaling the
system horizontally, by adding other Web servers in a load-balanced Web server farm, or adding other
database servers. Or maybe do content caching, for example with Apache mod_cache, or caching at
other levels of the stack.
One very important product of this whole procedure is that you now have a baseline number for your
application for this given "staging" hardware environment. You can use the staging setup for nightly
performance testing runs that will tell you whether changes in your application/database code have
caused the performance to degrade.
6. Repeat above steps in a "real" production environment before you actually launch your application.
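Steps 2 and 4 of the procedure above amount to a simple loop: step up the user count, record throughput, and stop when the reply rate levels off. A sketch, with a hypothetical `run_load` callable standing in for the real load tool:

```python
def find_knee(run_load, start=10, stop=100, step=10, tolerance=0.05):
    # Step up concurrent users; return the level at which throughput
    # stops improving by more than `tolerance` (the "knee" of the curve).
    previous = 0.0
    for users in range(start, stop + 1, step):
        throughput = run_load(users)  # requests/second at this load
        if previous and throughput < previous * (1 + tolerance):
            return users
        previous = throughput
    return stop

# Hypothetical server model that saturates at ~50 concurrent users:
# throughput grows linearly, then flattens.
fake_server = lambda users: min(users, 50) * 9.5
knee = find_knee(fake_server)
```

In a real run, `run_load` would invoke a tool such as httperf or ab and parse its reported reply rate; the loop and the leveling-off check stay the same.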
All this discussion assumed you want to get performance/benchmarking numbers for your application.
If you want to actually discover bugs and to see if your application fails and recovers gracefully, you
need to do stress testing. Blast your Web server with double the number of users, for example. Unplug
network cables randomly (or shut down/restart switch ports via SNMP). Take out a disk from a RAID
array while the system is running.
The conclusion? At the end of the day, it doesn't really matter what you call your testing, as long as
you help your team deliver what it promised in terms of application functionality and performance.
Performance testing in particular is more art than science, and many times the only way to make
progress in optimizing and tuning the application and its environment is by trial-and-error and
perseverance. Having lots of excellent open source tools also helps a lot.
Today, rigorous application testing is a critical part of virtually all software development projects. As
more organizations develop mission-critical systems to support their business activities, the need is
greatly increased for testing methods that support business objectives. It is necessary to ensure that these
systems are reliable, built according to specification and have the ability to support business processes.
Many internal and external factors are forcing organizations to ensure a high level of software quality and
reliability.
In the past, most software tests were performed using manual methods. This required a large staff of
test personnel to perform expensive and time-consuming manual test procedures. Owing to the size
and complexity of today’s advanced software applications, manual testing is no longer a viable option
for most testing situations.
By definition, testing is a repetitive activity. The methods that are employed to carry out testing
(manual or automated) remain repetitious throughout the development life cycle. Automation of
testing processes allows machines to complete the tedious, repetitive work while human personnel
perform other tasks. Automation eliminates the required “think time” or “read time” necessary for the
manual interpretation of when or where to click the mouse. An automated test executes the next
operation in the test hierarchy at machine speed, allowing tests to be completed many times faster
than the fastest individual. Automated tests also perform load/stress testing very effectively.
The cost of performing manual testing is prohibitive when compared to automated methods. The
reason is that computers can execute instructions many times faster and with fewer errors than
individuals. Many automated testing tools can replicate the activity of a large number of users (and
their associated transactions) using a single computer. Therefore, load/stress testing using automated
methods requires only a fraction of the computer hardware that would be necessary to complete a
manual test.
Automation allows the testing organization to perform consistent and repeatable tests. When
applications need to be deployed across different hardware or software platforms, standard or
benchmark tests can be created and repeated on target platforms to ensure that new platforms
operate consistently.
The productivity gains delivered by automated testing allow and encourage organizations to test more often and more
completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-
compliant software.
Results Reporting
Full-featured automated testing systems also produce convenient test reporting and analysis. These reports provide a
standardized measure of test status and results, thus allowing more accurate interpretation of testing outcomes.
Manual methods require the user to self-document test procedures and test results.
The introduction of automated testing into the business environment involves far more than buying and installing an
automated testing tool.
Typical Testing Steps: Most software testing projects can be divided into the following general steps:
Test Design: This step determines how the tests should be built to verify the required level of quality.
Test Environment Preparation: Technical environment is established during this step.
Test Construction: At this step, test scripts are generated and test cases are developed.
Test Execution: This step is where the test scripts are executed according to the test plans.
Test Evaluation: After the test is executed, the test results are compared to the expected results and evaluations can
be made about the quality of an application.
Most, but not all, types of tests can be automated. Certain types of tests (user comprehension tests, tests that run
only once, and tests that require constant human intervention) are usually not worth the investment incurred to
automate. The following are examples of criteria that can be used to identify tests that are prime candidates for
automation.
High path frequency – Automated testing can be used to verify the performance of application paths that are used
with a high degree of frequency when the software is running in full production. Examples include: creating
customer records.
Critical Business Processes – Mission-critical processes are prime candidates for automated testing. Examples
include: financial month-end closings, production planning, sales order entry and other core activities. Any
application with a high degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing – If a testing procedure can be reused many times, it is also a prime candidate for automation.
Applications with a Long Life Span – The longer an application is planned to be in production, the greater the
benefits are from automation.
In performing software testing, there are many tasks that need to be performed before or after the actual test. For
example, if a test needs to be executed to create sales orders against current inventory, goods need to be in
inventory. The tasks associated with placing items in inventory can be automated so that the test can run repeatedly.
Additionally, highly repetitive tasks not associated with testing can be automated utilizing the same approach.
There is no clear consensus in the testing community about which group within an organization should be
responsible for performing the testing function. It depends on the situation prevailing in the organization.
Automated Testing Advantages, Disadvantages and Guidelines
This article starts with a brief introduction to automated testing, then covers different methods in automated testing,
the benefits of automated testing, and the guidelines that automated testers must follow to get the benefits of
automation.
Introduction:
"Automated Testing" is automating the manual testing process currently in use. This requires that a formalized
manual testing process currently exists in the company or organization.
Automation is the use of strategies, tools and artifacts that augment or reduce the need of manual or human
involvement or interaction in unskilled, repetitive or redundant tasks.
Detailed test cases, including predictable "expected results", which have been developed from Business Functional
Specifications and Design documentation
A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases
are able to be repeated each time there are modifications made to the application.
Stress - determining the absolute capacities of the application and operational infrastructure.
Performance - providing assurance that the performance of the system will be adequate for both batch runs and online
transactions in relation to business projections and requirements.
Load - determining the points at which the capacity and performance of the system become degraded to the point
that hardware or software upgrades would be required.
Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error
Repeatable: You can test how the software reacts under repeated execution of the same operations.
Programmable: You can program sophisticated tests that bring out hidden information from the application.
Comprehensive: You can build a suite of tests that covers every feature in your application.
Reusable: You can reuse tests on different versions of an application, even if the user interface changes.
Better Quality Software: Because you can run more tests in less time with fewer resources
Fast: Automated Tools run tests significantly faster than human users.
Cost Reduction: As the number of resources required for regression testing is reduced.
These benefits can only be realized by choosing the right tools for the job and targeting the right areas of the
organization to deploy them. The areas where automation fits best must be chosen.
Automated testers must follow the following guidelines to get the benefits of
automation:
• Repeatable: Test can be run many times in a row without human intervention.
• Robust: Test produces same result now and forever. Tests are not affected by changes in the external environment.
• Sufficient: Tests verify all the requirements of the software being tested.
• Specific: Each test failure points to a specific piece of broken functionality; unit test failures provide "defect
triangulation".
• Independent: Each test can be run by itself or in a suite with an arbitrary set of other tests in any order.
• Traceable: To and from the code it tests and to and from the requirements.
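The "repeatable" and "independent" guidelines above usually come down to resetting state in a test fixture. A minimal Python unittest sketch, using an in-memory stand-in for the restorable test database mentioned earlier (the class and test names are hypothetical):

```python
import io
import unittest

class FakeOrderStore:
    # In-memory stand-in for a test database restorable to a known constant.
    def __init__(self):
        self.orders = []
    def add(self, order):
        self.orders.append(order)

class OrderTests(unittest.TestCase):
    def setUp(self):
        # Restore the store to a known constant before EVERY test, so
        # tests can run alone, repeatedly, or in any order.
        self.store = FakeOrderStore()

    def test_add_order(self):
        self.store.add("widget")
        self.assertEqual(self.store.orders, ["widget"])

    def test_starts_empty(self):
        # Passes regardless of what other tests did before it.
        self.assertEqual(self.store.orders, [])

result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OrderTests))
```

Because `setUp` rebuilds the fixture each time, neither test can leak state into the other, which is precisely what makes the suite repeatable and order-independent.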
Though the automation testing has many advantages, it has its own disadvantages too. Some of the disadvantages
are:
• Proficiency is required to write the automation test scripts.
• Debugging the test script is a major issue. If any error is present in the test script, it may sometimes lead to serious
consequences.
• Test maintenance is costly in the case of record-and-playback methods. Even if only a minor change occurs in the
GUI, the test script has to be re-recorded or replaced by a new test script.
• Maintenance of test data files is difficult if the test script tests many screens.
Some of the above disadvantages can outweigh the benefits gained from automated scripts. Though automation
testing has its pros and cons, it is widely adopted all over the world.
“Totality of characteristics of an entity that bears on its ability to satisfy stated and implied needs.”
This means that the Software product delivered should be as per the requirements defined. We now
examine a few more terms used in association with Software Quality.
Quality Planning:
In the Planning Process we determine the standards that are relevant for the Software Product and
the processes used to build it.
Software Quality Management, simply stated, comprises the processes that ensure that the Software Project
reaches its goals; in other words, that the Software Project meets the client's expectations.
The key processes of Software Quality Management fall into the following three categories:
1) Quality Planning
2) Quality Assurance
3) Quality Control
Software Quality Management comprises the Quality Planning, Quality Assurance and Quality
Control Processes. We shall now take a closer look at each of them.
1) Quality Planning
Quality Planning is the most important step in Software Quality Management. Proper planning ensures
that the remaining Quality processes make sense and achieve the desired results. The starting point for the
Planning process is the standards followed by the Organization. This is expressed in the Quality Policy
and Documentation defining the Organization-wide standards. Sometimes additional industry standards
relevant to the Software Project may be referred to as needed. Using these as inputs the Standards for the
specific project are decided. The Scope of the effort is also clearly defined. The inputs for the Planning
process are summarized as follows:
Using these as Inputs the Quality Planning process creates a plan to ensure that standards agreed upon are
met. Hence the outputs of the Quality Planning process are:
To create these outputs, namely the Quality Plan, various tools and techniques are used. These tools and
techniques are huge topics and Quality Experts dedicate years of research to them. We briefly
introduce these tools and techniques in this article.
a. Benchmarking: The proposed product standards can be decided using the existing performance
benchmarks of similar products that already exist in the market.
b. Design of Experiments: Using statistics we determine what factors influence the Quality or features of
the end product
c. Cost of Quality: This includes all the costs needed to achieve the required Quality levels. It includes
prevention costs, appraisal costs and failure costs.
d. Other tools: There are various other tools used in the Planning process such as Cause and Effect
Diagrams, System Flow Charts, Cost Benefit Analysis, etc.
All these help us to create a Quality Management Plan for the project.
2) Quality Assurance
The Input to the Quality Assurance Processes is the Quality Plan created during Planning.
Quality Audits and various other techniques are used to evaluate the performance of the project. This
helps us to ensure that the Project is following the Quality Management Plan. The tools and techniques
used in the Planning Process such as Design of Experiments, Cause and Effect Diagrams may also be
used here, as required.
3) Quality Control
The Quality Control Processes use various tools to study the Work done. If the Work done is found
unsatisfactory it may be sent back to the development team for fixes. Changes to the Development
process may be done if necessary.
If the work done meets the standards defined then the work done is accepted and released to the clients.
Importance of Documentation:
In all the Quality Management Processes special emphasis is put on documentation. Many software shops
fail to document the project at various levels. Consider a scenario where the Requirements of the
Software Project are not sufficiently documented. In this case it is quite possible that the client has a set
of expectations that the tester may not know about. Hence the testing team would not be able to test the
software developed for these expectations or requirements. This may lead to poor “Software Quality” as
the product does not meet the expectations.
Similarly consider a scenario where the development team does not document the installation
instructions. If a different person or a team is responsible for future installations they may end up making
mistakes during installation, thereby failing to deliver as promised.
Once again consider a scenario where a tester fails to document the test results after executing the test
cases. This may lead to confusion later. If there were an error, we would not be sure at what stage the
error was introduced: in the software at a component level, when integrating it with another component,
or due to the environment on a particular server. Hence documentation is the key for future analysis and
all Quality Management efforts.
Steps:
In a typical Software Development Life Cycle the following steps are necessary for Quality Management:
Various Software Tools have been developed for Quality Management. These Tools can help us track
Requirements and map Test Cases to the Requirements. They also help in Defect Tracking.
The Capability Maturity Model defines various levels of Organization based on the processes that they
follow.
Level 0
The following is true for “Level 0” Organizations -
There are no processes, no tracking mechanisms and no plans. It is left to the developer or any person
responsible for Quality to ensure that the product meets expectations.
Level 2 – Repeatable
There are processes within a team and the team can repeat them or follow the processes for all projects
that it handles.
However the process is not standardized throughout the Organization. All the teams within the
organization do not follow the same standard.
Level 3 – Well-Defined
In “Level 3” Organizations the processes are well defined and followed throughout the organization.
- The Goals are defined and the actual output is measured
- Metrics are collected and future performance can be predicted
A 1994 study in the US revealed that only about 9% of software projects were successful.
A large number of projects upon completion do not have all the promised features or they do not meet all
the requirements that were defined when the project was kicked off.
It is an understatement to say that an increasing number of businesses depend on software for their
day-to-day businesses. Billions of Dollars change hands every day with the help of commercial software.
Lots of lives depend on the reliability of the software for example running critical medical systems,
controlling power plants, flying air planes and so on.
Whether you are part of a team that is building a bookkeeping application or software that runs a power
plant you cannot afford to have less than reliable software.
Unreliable software can severely hurt businesses and endanger lives depending on the criticality of the
application. Even the simplest application, if poorly written, can degrade the performance of your
environment, such as the servers and the network, thereby causing an unwanted mess.
Software testing plays a crucial role in ensuring application reliability and project success.
Everything can and should be tested:
- Test in each phase of the development cycle so that defects (“bugs”) are eliminated as early as possible
- Test to ensure no defects creep through into the final product
- Above all, test to ensure that user expectations are met
The effectiveness of testing can be measured by the degree of success in achieving these goals.
Several factors influence the effectiveness of the software testing effort, and these ultimately determine
the success of the project.
A) Coverage:
- All the scenarios that can occur when using the software application
- Each business requirement that was defined for the project
- At specific levels of testing, every line of code written for the application
There are various levels of testing which focus on different aspects of the software application. The often-
quoted V model best explains this:
Unit Testing
Integration Testing
System Testing
The goal of each testing level is slightly different thereby ensuring the overall project reliability.
System testing is done in an environment similar to the production environment, i.e. the environment
where the product will finally be deployed.
There are various types of System Testing possible which test the various aspects of the software
application.
Track defects
Analyze
Having followed the above steps for the various levels of testing, the product is rolled out.
It is not uncommon to see defects (“bugs”) even after the product is released to production. An effective
testing strategy and process helps to minimize or eliminate these defects. The extent to which these
post-production defects (design defects, coding defects, etc.) are eliminated is a good measure of the
effectiveness of the testing strategy and process.
As the saying goes - 'the proof of the pudding is in the eating'
Methodologies:
VelociQ :
LEAN: Lean is a philosophy that shortens the timeline between customer order and
shipment by eliminating waste. By eliminating waste, quality is improved and production
time and cost are reduced. Lean in the software industry is quite similar to Agile
software development.
o CR scrubs
o CR standup meetings
o CR effort analysis
o Value stream mapping
o Review request and testing in parallel
o CR fixing guidelines document circulated
o Module specific debugging tips documented and trained
o Database maintained of CRs fixed to date
QA links:
http://ramya-moorthy.blogspot.com/2007/07/tips-for-developing-effective.html
http://www.tutorialspoint.com/perl/perl_oo_perl.htm
http://www.bjnet.edu.cn/tech/book/perl/ch19.htm
10. FAQS:
1. What are the performance requirements of the system/server application?
In terms of response time, concurrent users, concurrent sessions, simultaneous users, operations etc
Workload: the user load a web application is subjected to under real user access, or during a
performance test, and the way users are distributed across the various transaction flows.
Normally, the web server log files of a site can be analyzed to learn about its workload (if the site is
already in production). For web sites that are yet to go live for the first time, a workload model needs to
be developed based on discussions with business analysts, application experts, etc. It is very important
to know the workload of the application before conducting the performance test; conducting a
performance test without proper workload analysis can lead to misleading results.
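As a sketch of the log-analysis idea above: the workload mix can be estimated by counting how traffic is distributed across request paths in the access log. The log format assumed here (Common Log Format) and the sample lines are invented for illustration.

```python
import re
from collections import Counter

# Extract the request path from a Common Log Format line, e.g.
# '10.0.0.1 - - [01/Jan/2024] "GET /search HTTP/1.1" 200 512'
LOG_LINE = re.compile(r'"(?:GET|POST) (\S+)')

def workload_mix(log_lines):
    """Return each request path's share of total traffic."""
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m:
            hits[m.group(1)] += 1
    total = sum(hits.values())
    return {path: count / total for path, count in hits.items()}

sample = [
    '10.0.0.1 - - [01/Jan/2024] "GET /search HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2024] "GET /search HTTP/1.1" 200 498',
    '10.0.0.3 - - [01/Jan/2024] "POST /checkout HTTP/1.1" 200 120',
    '10.0.0.4 - - [01/Jan/2024] "GET /search HTTP/1.1" 200 505',
]
print(workload_mix(sample))  # /search: 0.75, /checkout: 0.25
```

The resulting percentages feed directly into the transaction-flow distribution used when scripting the performance test.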
Baseline test: test the performance for one user, then compare it with the performance for a larger
number of users.
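A minimal sketch of that baseline comparison: time the transaction for one user, then for N concurrent users, and compare per-transaction response times. The `transaction` function is a stand-in; in a real test it would hit the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    time.sleep(0.01)  # simulate a 10 ms server round-trip

def timed_transaction(_):
    """Run one transaction and return its wall-clock duration."""
    start = time.perf_counter()
    transaction()
    return time.perf_counter() - start

def avg_response_time(users):
    """Average per-transaction response time with `users` concurrent callers."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_transaction, range(users)))
    return sum(times) / len(times)

baseline = avg_response_time(1)   # the 1-user baseline
loaded = avg_response_time(20)    # the same transaction under 20 users
print(f"baseline={baseline*1000:.1f} ms, 20 users={loaded*1000:.1f} ms")
```

Against a real server, a large gap between the two figures indicates contention that the baseline alone would never reveal.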
performance testing :
vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.
netstat prints network connections, routing tables, interface statistics, masquerade connections and
multicast memberships.
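A sketch of collecting such counters during a performance run: invoke vmstat periodically and keep the fields of interest. The column order below assumes the default Linux vmstat layout; the canned sample row is invented for offline illustration.

```python
import subprocess

# Default Linux vmstat column names, in order.
VMSTAT_FIELDS = "r b swpd free buff cache si so bi bo in cs us sy id wa st".split()

def parse_vmstat_line(line):
    """Map one vmstat data row onto its column names."""
    values = [int(v) for v in line.split()]
    return dict(zip(VMSTAT_FIELDS, values))

def sample_vmstat():
    """Run `vmstat 1 2` and return the second (steady-state) sample."""
    out = subprocess.run(["vmstat", "1", "2"],
                         capture_output=True, text=True).stdout
    return parse_vmstat_line(out.strip().splitlines()[-1])

# Offline demo with a canned 17-column row, as on a typical Linux box:
row = "1 0 0 812344 20480 409600 0 0 5 12 150 300 7 2 90 1 0"
stats = parse_vmstat_line(row)
print(stats["us"], stats["id"])  # user CPU %, idle CPU %
```

Sampling this once per second during a load test gives the CPU, context-switch and IO-wait trends mentioned above.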
In PCNL:
For each patch release, load and stress testing is performed to simulate a large number of clients and
identify crashes, connection losses, etc.
OpenSTA graphs both virtual user response times and resource utilization information from all Web
Servers, Application Servers, Database Servers and Operating Platforms under test, so that precise
performance measurements can be gathered during load tests and analysis on these measurements can
be performed.
OpenSTA is Open Source software licensed under the GNU General Public License.
Creating a new process requires new resources and a new address space, whereas a thread can be
created within the address space of an existing process. This not only saves space and resources;
threads are also easier to create and delete, and many threads can exist within one process.
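The shared-address-space point can be illustrated with a small sketch: two threads write into the same in-memory structure with no inter-process communication needed, something separate processes could not do without explicit IPC.

```python
import threading

# Both worker threads append to the same list: threads share their
# process's address space, so no IPC mechanism is required.
shared = []

def worker(tag):
    for i in range(3):
        shared.append((tag, i))  # same object visible to every thread

threads = [threading.Thread(target=worker, args=(t,)) for t in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 6 entries: both threads wrote into one shared structure
```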
JMeter is an Apache Jakarta project that can be used as a load testing tool for analyzing and
measuring the performance of a variety of services, with a focus on web applications.
JMeter can be used as a unit test tool for JDBC database connections, FTP, LDAP, Webservices,
JMS, HTTP and generic TCP connections. JMeter can also be configured as a monitor, although
this is typically considered an ad-hoc solution in lieu of advanced monitoring solutions.
JMeter supports variable parameterization, assertions (response validation), per thread cookies,
configuration Variables and a variety of reports.
JMeter architecture is based on plugins. Most of its "out of the box" features are implemented
with plugins. Off-site developers can easily extend JMeter with custom plugins.
ISO, CMM and Six Sigma—the best of the industry's standards to enable quality—come together
in Wipro's integrated Quality System veloci-Q.
Since veloci-Q is built on the sound foundations of these quality models, your projects can run at
reduced risk, with improved productivity, fewer defects, on-time and on-budget delivery, and better
visibility.
Few important links in veloci-Q are mentioned below:
http://channelw.wipro.com/velociq/qs/policy/index.htm
http://channelw.wipro.com/velociq/qs/procedures/index.htm
http://channelw.wipro.com/velociq/qs/lcm/index.htm
E-Cube – Enabling Excellence in Execution – is an initiative that the IS and SEPG groups have
embarked on to enable excellence in the way projects are executed in Wipro.
E-Cube was launched with the objective of building an Enterprise Project Management
application tool that will provide the organization with:
a) Core project management functionality, e.g., managing resources, schedules, risks,
estimation, cost tracking, etc.
Agile software development is a conceptual framework for undertaking software engineering projects .
Process Improvement Proposal (PIP) is open to all employees in Wipro Technologies. It can be
any process-related issue, from a 'nice to have' feature to an interesting article with respect to
veloci-Q.
PIP is more specific to the Quality System.
PIPs can be raised with respect to Process Models, Procedures, Project Forms, Guidelines,
Coding Standards.
SQA
- Ensures that all the projects in the vertical follow the processes defined in the Wipro Quality
System.
- Conducts training on the quality system.
- Facilitates sharing of knowledge and best practices among all the projects within the division.
- Plans, conducts and organizes QIC meetings.
- Responds to all queries with respect to quality and process.
PDB
Project Data Bank (PDB) in veloci-Q is a rich repository of quantified project metrics,
measurements and learnings collected from closed projects across the organization.
In order to integrate knowledge collected from various projects, PDB has been merged with
KNet, offering a more powerful search and GUI.
b) WILL
c) WISE
MRM
Management Review Meetings (MRM) are organized at the organizational level or the vertical level.
The main purpose of conducting an MRM is to review the quality system of the organization and
implement changes if required.
At the organizational level the MRM is conducted once every year; at the vertical level it is
organized once every six months.
The MRM is conducted by management representatives, who are responsible for
implementation of the quality system in the organization.
Management representatives are appointed by the vice-chairman and president.
QIC
Quality Improvement Councils (QIC) are organized at the vertical level and group level.
The main purpose of conducting a QIC is to review the quality system of the organization and
implement changes if required.
QICs are conducted every month at the vertical/group level.
The QIC is conducted by management representatives, who are responsible for implementation
of the quality system in the organization.
Both MRM and QIC are essential for reviewing the quality system, norms and processes
followed by the organization.
Work Breakdown Structure is a bottom-up estimation technique. Activities carried out in the project are
decomposed into the smallest tasks technically feasible at the time of estimation.
Process Improvement Proposal (PIP)
Any proposals for changes to the QS are raised through PIPs. This is also referred to as a Procedure
Improvement Proposal. A PIP should be raised for incorporating a piloted process into the QS.
Look Ahead Meeting (LAM)
LAMs are planned meetings that should be conducted at the beginning of a phase (e.g. the start of the
planning, design, implementation and testing phases) to identify possible Defect Prevention activities in
the subsequent phase(s) about to start.
For resume:
Conducted Look Ahead Meetings (LAM). Participated in QIC meetings (Quality Improvements Council).
RE: In general, how do you see automation fitting into the overall process of testing?
--------------------------------------------------------------------------------
Automation comes into the picture only once the application has become stable. In maintenance
projects, where we work on continuous improvements and regression testing is demanded often,
running the developed automation scripts saves time.
-------------------------
FAQs
1. If the actual result doesn't match with expected result in this situation what should we do?
4. What is a use case? What is the difference between test cases and use cases?
5. What is the difference between a test case and a test script?
7. How do you test if you have minimal or no documentation about the product?
10. In general, how do you see automation fitting into the overall process of testing?
11. How do you deal with environments that are hostile to quality change efforts?
12. Describe to me the Software Development Life Cycle as you would define it?
13. Describe to me when you would consider employing a failure mode and defect analysis?
18. How have you used white-box and black-box techniques in your application?
19. 1) What are the demerits of WinRunner? 2) After we write the test data, what principles do we follow
when testing an application?
20. What is the job of Quality Assurance Engineer? Difference between the Testing & Quality Assurance
job.
http://www.exforsys.com/tutorials/testing/software-quality-management.html
During both of the above tests, we collect the product's system resource usage (CPU utilization, etc.) using
PerfMon on Windows and vmstat/mpstat/iostat/netstat/lsof/ps/sar on Unix/Linux. %CPU, context switches,
memory, IO wait/queue length and the number of established/TIME_WAIT sockets are counters that help
identify the product's bottlenecks and defects.
http://agiletesting.blogspot.com/2005/02/performance-vs-load-vs-stress-testing.html
Gokul -- for the largest repository of open source projects, go to sourceforge.net. Search for your favorite OS or
programming language, and take it from there.
Come up with baselines for your applications (load testing), then increase the load to see the behavior, which is
called stress testing.
In computer science, a memory leak is a particular type of unintentional memory consumption by a computer
program where the program fails to release memory when no longer needed. This condition is normally the result of
a bug in a program that prevents it from freeing up memory that it no longer needs.
This term has the potential to be confusing, since memory is not physically lost from the computer. Rather, memory
is allocated to a program, and that program subsequently loses the ability to access it due to program logic flaws.
A memory leak has symptoms similar to a number of other problems (see below) and generally can only be
diagnosed by a programmer with access to the program source code; however, many people refer to any unwanted
increase in memory usage as a memory leak, even if this is not strictly accurate.
There are several tools to detect memory leaks, PerfMon among them.
'V' Model
In this methodology, development and testing take place at the same time, with the same kind of
information in both teams' hands.
The typical V shows the development phases on the left-hand side and the testing phases on the
right-hand side.
Coding
Operational issues: the need to deploy bigger teams, early procurement, etc.
The V-model is a software development process that can be seen as an extension of the
waterfall model. Instead of moving down in a linear way, the process steps are bent upwards
after the coding phase to form the typical V shape. The V-model demonstrates the relationships
between each phase of the development life cycle and its associated phase of testing.
The V-model deploys a well-structured method in which each phase can be implemented using the
detailed documentation of the previous phase. Testing activities such as test design start at the
beginning of the project, well before coding, and therefore save a huge amount of project
time.
The V-model consists of a number of phases. The Verification Phases are on the Left hand side
of the V, the Coding Phase is at the bottom of the V and the Validation Phases are on the Right
hand side of the V .
In the Requirements analysis phase, the requirements of the proposed system are collected by
analyzing the needs of the user(s). This phase is concerned about establishing what the ideal
system has to perform. However it does not determine how the software will be designed or
built. Usually, the users are interviewed and a document called the user requirements document
is generated.
The user requirements document will typically describe the system’s functional, physical,
interface, performance, data, security requirements etc as expected by the user. It is one which
the business analysts use to communicate their understanding of the system back to the users.
The users carefully review this document as this document would serve as the guideline for the
system designers in the system design phase. The user acceptance tests are designed in this
phase. See also Functional requirements.
Systems design is the phase where system engineers analyze and understand the business of the
proposed system by studying the user requirements document. They figure out possibilities and
techniques by which the user requirements can be implemented. If any of the requirements are
not feasible, the user is informed of the issue. A resolution is found and the user requirement
document is edited accordingly.
The software specification document which serves as a blueprint for the development phase is
generated. This document contains the general system organization, menu structures, data
structures etc. It may also hold example business scenarios, sample windows, reports for the
better understanding. Other technical documentation like entity diagrams, data dictionary will
also be produced in this phase. The documents for system testing are prepared in this phase.
The design of the computer architecture and software architecture can also be referred
to as high-level design. The baseline in selecting the architecture is that it should realize everything in
the specification. The high-level design document typically consists of the list of modules, the brief
functionality of each module, their interface relationships, dependencies, database tables, architecture
diagrams, technology details, etc. The integration test design is carried out in this phase.
The module design phase can also be referred to as low-level design. The designed system is
broken up into smaller units or modules and each of them is explained so that the programmer
can start coding directly. The low level design document or program specifications will contain a
detailed functional logic of the module, in pseudocode:
database tables, with all elements, including their type and size
all interface details with complete API references
In the V-model of software development, unit testing implies the first stage of dynamic testing
process. According to software development expert Barry Boehm, a fault discovered and
corrected in the unit testing phase is more than a hundred times cheaper than if it is done after
delivery to the customer.
It involves analysis of the written code with the intention of eliminating errors. It also verifies
that the code is efficient and adheres to the adopted coding standards. Testing is usually white
box. It is done using the Unit test design prepared during the module design phase. This may be
carried out by software developers.
In integration testing the separate modules will be tested together to expose faults in the
interfaces and in the interaction between integrated components. Testing is usually black box as
the code is not directly checked for errors.
System testing will compare the system specifications against the actual system. The system test
design is derived from the system design documents and is used in this phase. Sometimes system
testing is automated using testing tools. Once all the modules are integrated several errors may
arise. Testing done at this stage is called system testing.
Acceptance testing is the phase of testing used to determine whether a system satisfies the
requirements specified in the requirements analysis phase. The acceptance test design is derived
from the requirements document. The acceptance test phase is the phase used by the customer to
determine whether to accept the system or not. Acceptance testing is used:
- To determine whether a system satisfies its acceptance criteria or not.
- To enable the customer to determine whether to accept the system or not.
- To test the software in the "real world" by the intended audience.
Purpose of acceptance testing:
- To verify that the system, or changes to it, meets the original needs.
2I process model
This process model can be chosen for projects like re-engineering of certain components, analysis
of problems and consultation, test runs, test case development, etc.
It is recommended to choose this model if each request spans fewer than 3 phases of any other
process model and each request requires less than 3 person-months of effort.
This process model helps projects that handle one-time testing, projects with a repetitive QA
cycle, and test certification projects.
Various agile practices like Scrum and XP combine to form the agile project management model.
Agile helps because:
- Testing starts very early in the life cycle, so maintenance cost comes down.
- Each iteration develops a shippable product (including design, coding, unit testing, etc.). Since all
phases are covered in the first iteration itself, issues and technical challenges are unearthed early
and can be addressed immediately; in a typical waterfall model this happens only at the end of
the life cycle.
XP – Extreme Programming
Whatever is good for the project, take it to extreme levels: reviews are good, so do reviews
every day, for every piece of code (and similarly for other practices).
Automatic updates, revenue, using the logo (benefits for Independent Software Vendors).
To qualify for the Works with Windows Vista logo, an application must meet the
following requirements:
A small tweak I would suggest to make this technique more effective is to reward the tester who
finds a quality bug, perhaps calling it the “Bug of the Week”. That way, quality bugs become the
main focus of software testers, rather than chasing quantity.
Obviously, you should not ignore the small UI bugs either.
Jeff Feldstein - How To Recruit, Motivate, and Energize Superior Test Engineers
ABSTRACT
The expectations today are for increasingly high-quality software, requiring more
sophisticated automation in testing. Test and QA teams must work more closely with
development to ensure that this sophisticated automation is possible. This has led to software
engineers applying creativity, talent and expertise not just to application development, but to
testing as well. This transition from manual testing to scripting to highly engineered test automation
changes the way we recruit, hire, motivate and retain great test engineering talent.
The speaker uses examples of how his team at Cisco changed the way it tests over the past six
years. In this class, he'll review eight points for why test is a better place for software
developers than software development, and he'll show how and when to express these points
to hire, motivate and retain top talent. You'll see how to inspire greater innovation and
creativity in your testing processes, and how to manage and inspire test and development
teams that are spread across different locations. You'll also learn the place of manual testing in
the new environment.
This not only builds their functional testing knowledge but also their project domain
knowledge.
2) Most testers can do functional testing and boundary-value analysis, but they may not know how to
measure test coverage, how to test a complete application, or how to perform load testing.
The company can provide training to its employees in those areas.
3) Involve them in all the project meetings, discussions, project design so that they can
understand the project well and can write the test cases well.
4) Encourage them to take on activities beyond the regular testing activities. Such
activities can include inter-team talks on their project experience and exploratory talks on
project topics.
Most important is to give them the freedom to think outside the box so that they can make better
decisions on testing activities like test planning, test execution and test coverage.
If you have a better idea to boost testers' performance, don't forget to comment!
Test engineers are just as capable as software engineers; they should be able to design equally well.
The dev team should review the test plan, and testers need to interact closely with developers.
o Customer interaction and business knowledge (the dev team will not have this).
o Test team members can do a better job of addressing customer problems at the
site because they have the big picture, whereas a dev team member knows only part of it.
A developer has a narrow view, whereas a test engineer has a bird's-eye view, the big
picture.
Stop treating QA (quality assurance: ISO standards, TL 9000 standards, standard procedures and
policies, entry and exit criteria) as something separate; it should be part of the dev and test teams.
Slide 17: CMMI is an appraisal method developed by the Software Engineering Institute in Pittsburgh to develop and
refine an organization's processes. CMMI is used as a benchmark for assessing different organizations on an
equivalent basis. CMMI describes the maturity of a company based on the projects the company is handling and the
related clients.
CMMI Level 1 – Initial: Processes are ad hoc. The organization does not provide a stable environment, and
success depends on individuals. Projects frequently exceed budget and schedule, over-commit, abandon
processes in times of crisis, and do not have repeatable successes.
CMMI Level 2 – Repeatable: Successes are repeatable, and existing practices are retained during times of
stress. Some basic project management is used to track cost and schedule. Processes may not repeat for
all projects, but minimum process discipline is in place to repeat earlier successes with similar
applications. There is still a risk of exceeding cost and time.
CMMI Level 3 – Defined: Standard processes are established and improved over time. An effective
project management system is implemented.
CMMI Level 4 – Quantitatively Managed: Precise measurements are used and quantitative quality goals
are set. Processes are controlled using statistical and other quantitative techniques.
CMMI Level 5 – Optimizing: Processes are improved through innovative technological improvements.
2. The time taken to test an application depends on the number of bugs we find in it.
3. Sometimes resolved bugs get reopened due to side effects.
Am I thinking the right way? If not, please suggest good planning; it would help a lot at my
workplace.
-----
Have you ever used PERT (Program Evaluation and Review Technique)? Considering what you
mentioned above, I believe it would be fairly simple to convert this into PERT.
You'd just need to add an estimate of the most likely duration; let's say 400 hours in this case.
With Expected time (E), Optimistic (O), Most likely (M) and Pessimistic (P): E = (O + 4M + P)/6,
which gives E = (350 + 4×400 + 500)/6 = 408.
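The PERT formula from the thread can be wrapped in a small helper; the 350/400/500-hour figures below are the example estimates from the reply above.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Three-point PERT estimate: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# The example from the thread: O=350, M=400, P=500 hours.
e = pert_estimate(350, 400, 500)
print(round(e))  # 408, matching the worked example above
```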
---
You are correct and unless we can somehow learn how to predict the future, we will probably
keep on struggling with those problems.
1. Give gross estimates for your work: "assuming everything is OK, this will take 2 weeks to
test/check."
This gives you a starting estimate: no fewer than X days are needed.
2. Continuing from 1, consider known risk factors and add time accordingly. For example, if this is a
new feature, tests don't exist yet and the development code is new... I smell trouble; the feature will
probably be pretty buggy at first.
3. Provide enough testing space between builds, and reduce it as the product matures.
---
Assumptions, Assumptions, Assumptions - they are key to estimates! I'd suggest that you do
your estimation keeping factors like reality, feasibility and achievability in mind and ensure they
are based on your assumptions. If the assumptions differ from what the case is in actuality, one
gets an opportunity to revise them.
Needless to say, assumptions also need to be realistic. Things like delayed builds will
always happen, hence I'd recommend keeping some contingency for these kinds of
deviations.
----
I also recommend a few things that might help lower the percentage of failure in the estimation:
1. Have a template with the basic test cases and your time to create and execute them, and have a
minimum base estimate. I understand that the major problem is the complexity of each
requirement, module or task.
2. Also create indicators, where you keep a record of all projects and their testing life cycle; for
example, the time spent validating/closing bugs, or the average time to report or close a bug.
This helps you make decisions on future estimates, because you can track the averages for bugs,
execution time and creation time for each cycle run.
Activities involved:
- Infrastructure issues
- Defect logging/meetings
- Defect retesting
- Test management
- Contingency effort
Note: discussions between the testing team and the development team are critical and consume
a good amount of time, so they have to be considered in the estimation.
Other factors:
Estimation basis:
Scheduled period for testing (design and execution), Nigeria: 97
Scheduled period for testing (design and execution), Namibia: 71
Total no. of test cases: 9,200
Working hours per day: 8
No. of working days per week: 5
Review effort: 15%
Test data / requirement preparation: 10%
Test management: 10%
Retesting effort: 20%
Defect logging: 5%
Defect meetings: 5%
Test audit effort: 5%
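The overhead percentages above can be rolled into a total effort figure. The 100-person-day base effort below is illustrative; the percentages mirror the table (review 15%, test data 10%, management 10%, retesting 20%, defect logging 5%, defect meetings 5%, audit 5%).

```python
# Each overhead is expressed as a fraction of the base design/execution effort.
OVERHEADS = {
    "review": 0.15,
    "test_data_preparation": 0.10,
    "test_management": 0.10,
    "retesting": 0.20,
    "defect_logging": 0.05,
    "defect_meetings": 0.05,
    "test_audit": 0.05,
}

def total_effort(base_person_days):
    """Base effort plus every overhead applied as a percentage of base."""
    return base_person_days * (1 + sum(OVERHEADS.values()))

print(round(total_effort(100), 1))  # 170.0: overheads add 70% on top of base
```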
At the end of the day, it is the quality of the product that matters. As part of the test team, we should
understand what kind of functionality changes are going in: are they trivial or major? If required, what
percentage of test cases needs to be rewritten? How much test case design, review and management
effort is involved? How will the testing dates be impacted? All these details need to be articulated as
part of the impact analysis.
The data then needs to be presented to the client, so they can take a decision.
➤ Click the Releases button on the sidebar. The Releases module enables you
to define releases and cycles for managing the testing process.
➤ Click the Requirements button on the sidebar. The Requirements module
enables you to specify your testing requirements. This includes defining
what you are testing, defining requirement topics and items, and analyzing
the requirements.
➤ Click the Test Plan button on the sidebar. The Test Plan module enables you
to develop a test plan based on your testing requirements. This includes
dividing your plan into categories, developing tests, automating tests where
beneficial, and analyzing the plan.
➤ Click the Test Lab button on the sidebar. The Test Lab module enables you
to run tests on your application and analyze the results.
➤ Click the Defects button on the sidebar. The Defects module enables you to
add defects, determine repair priorities, repair open defects, and analyze the
data.
• Define Scope -Use requirements documents to determine testing scope—test goals and
objectives.
• Detail Requirements- Describe each requirement, assign a priority level, and add
attachments if necessary.
• Test Plan: Add Test cases for each requirement and attach test scripts to the same.
• Define Test cases- Add a basic definition of each test to the test plan tree.
• Design Test Steps - Develop manual tests by adding steps to the tests in test plan tree.
Test steps describe the test operations, the points to check, and the expected outcome
of each test. Decide which tests to automate.
• Automate Tests - For tests that need to be automated, create test scripts with a Mercury
Interactive testing tool.
• Analyze Test cases - Review your tests to determine their suitability to your testing
goals
• Test Lab: Execute the test plans and schedule their execution.
• Create Test Sets -Determine which tests to include in each test set.
• Run Tests - Execute the tests in your test set automatically or manually
• Analyze Test Results - View the results of your test runs in order to determine whether
a defect has been detected in your application.
• Tasks
• Create/Edit Defects
• Review New Defects - Review new defects and determine which ones should be fixed
• Test New Build - Test a new build of application. Continue this process until defects are
repaired.
• Analyze Defect Data - Generate reports to assist in analyzing the progress of defect
repairs, and to help determine when to release the application
Quality Center (formerly TestDirector) is a web-based test management tool by Mercury (now HP).
Quality Center helps to organize and manage all phases of the application testing process like specifying
testing requirements, planning test cases, executing tests, and tracking defects.
Quality Assurance managers use the testing scope to determine the overall testing requirements
for the application under test. They define requirement topics and assign them to the QA testers
in the test team. Each QA tester uses Quality Center to record the requirement topics for which
they are responsible.
A requirement can be, for example, security, performance, a module, usability, scalability or language support.
Once Requirements are created in requirements tree, use the requirements as basis for defining
the tests in your test plan tree and running tests in a test set.
You can also create test plans, and add tests to test plan tree.
Also you can associate a defect with a test. Associate defect with a requirement.
When test engineers find any defect in the application, they will submit the defect into quality centre.
The Defect data can be accessed by the QA and support teams.
We can use Word as well as Excel to export or import test cases into Quality Center, with the help
of the following Quality Center add-ins:
Word Add-in
Excel Add-in
Gain real-time visibility into requirements coverage and associated defects to paint a clear picture of
business risk
Manage the release process and make more informed release decisions with real-time KPIs and
reports
Facilitate standardized testing and quality processes that boost productivity through workflows and
alerts
Lower costs by using QA testing tools to capture critical defects before they reach production
Quality Center (QC) is Mercury Interactive's web-based global test management tool,
which brings communication, organization, documentation and structure into every
testing project.
QC is well suited to multi-user environments with short timelines, enabling much
greater reuse of test information over time.
QC is an integrated enterprise application for organizing and managing the entire testing
process.
QC helps to maintain a repository of test cases that cover all the aspects of applications,
with each test case designed to fulfill a specific requirement of the application
It provides an efficient method of scheduling and executing test sets, collecting test
results and analyzing data.
It features a sophisticated system of tracking defects from initial detection till resolution.
QC automates the test management process, making it more efficient and cost effective.
Testers can test the application, report defects and developers can then fix them and ask
for retesting.
Code coverage is a measure used in software testing. It describes the degree to which the source
code of a program has been tested. It is a form of testing that inspects the code directly and is
therefore a form of white box testing[1]. Currently, the use of code coverage is extended to the
field of digital hardware, the contemporary design methodology of which relies on Hardware
description languages (HDLs).
Function coverage - Has each function (or subroutine) in the program been called?
Decision coverage (also known as branch coverage) - Has each control structure (such as an IF
statement) evaluated both to true and false?
Condition coverage (or predicate coverage) - Has each boolean sub-expression evaluated both
to true and false? This does not necessarily imply decision coverage.
Assume this function is a part of some bigger program and this program was run with some test
suite. If during this execution function 'foo' was called at least once, then function coverage for
this function is satisfied.
Statement coverage for this function will be satisfied if it was called e.g. as 'foo(7,1)'.
The tests with 'foo(7,1)' and 'foo(7,0)' calls will satisfy decision coverage.
Condition coverage can be satisfied with tests, which do 'foo(7,1)', 'foo(7,0)' and 'foo(4,0)'.
In languages like Pascal, where standard boolean operators are not short-circuit, condition
coverage does not necessarily imply decision coverage. For example, consider the following
fragment of code:
if a and b then
Condition coverage can be satisfied by two tests, a=true, b=false and a=false, b=true, yet
neither test makes the decision (a and b) evaluate to true, so decision coverage is not achieved.
To measure code coverage, the source code is instrumented and an instrumented build is
prepared. The set of test cases is then executed on the instrumented build to obtain the code
coverage details. E.g., gcov is a code coverage tool for C language code.
ClearQuality is part of Clarify Inc.'s Service Management System. While ClearSupport provides high
volume call tracking, ClearQuality provides defect tracking. Information it keeps includes priority, severity,
module and description. It allows related information to be attached by the user. In addition to Motif on
UNIX platforms, ClearQuality's client may be run from PCs and Macintosh machines. A supplier WWW
site is available at http://www.clarify.com/.
(1)- A complete document of customer and business requirements, specifying all the requirements
for development of the product.
(2)- The latest completed build and the URL of the application to be tested.
(3)- Software requirements for installing the application on PCs for testing or for database
connectivity.
(6)- If the build comes for retesting, it should be accompanied by a revised document
describing the updated changes incorporated in the software.
(7)- Clarity regarding which member of the development team should be contacted if any
clarification is required during the testing phase about the functionality of a module, or if
testers encounter a showstopper in the software.
(8)- After release of the bug list to the development team, an estimate of how much time they
will require to fix the bugs.
22. QA challenges
Differences of opinion between Dev and QA teams need to be sorted out with due diligence.
Unavailability of loads (versions) at the expected time impacts QA schedules; a contingency plan is
needed and must be executed accordingly.
The Dev team needs to provide release notes for each load they release, explaining the changes from the
previous load, known issues/limitations etc., which helps in better planning of testing.
-> Continuous education of testers on the importance of testing and how to find critical defects.
2) Regression testing:
As the project keeps expanding, the regression testing workload simply becomes uncontrolled.
Testers face pressure to handle current functionality changes, checks of previously working
functionality, and bug-tracking work. This results in incomplete, insufficient and ad-hoc testing
throughout the testing life cycle.
9) Automation testing:
This brings many sub-challenges - Should the testing work be automated? To what level should
automation be done? Do you have sufficient and skilled resources for automation? Does the
schedule permit automating the test cases? The decision between automated and manual testing
needs to address the pros and cons of each approach.
These are some top software testing challenges we face daily. Project success or failure
depends largely on how you address these basic issues.
For further reference and detailed solutions to these challenges, refer to the book “Surviving the Top
Ten Challenges of Software Testing” by William E. Perry and Randall W. Rice.
[Table: entry and exit criteria for each test level, including System Testing; the table body is not reproduced in this copy.]
ABSTRACT
BACKGROUND
Like any complex piece of software, there is no single, all-inclusive quality
measure that fully characterizes a WebSite (by which we mean any web-browser
enabled application).
Timeliness: WebSites change often and rapidly. How much has a WebSite changed
since the last upgrade? How do you highlight the parts that have changed?
Structural Quality: How well do all of the parts of the WebSite hold together? Are all
links inside and outside the WebSite working? Do all of the images work? Are there
parts of the WebSite that are not connected?
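A minimal sketch of such a structural check, using only Python's standard library to collect the links and images a page declares (fetching each URL to verify it actually responds is omitted to keep the sketch self-contained):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href/src targets so they can later be checked for breakage."""
    def __init__(self):
        super().__init__()
        self.links, self.images = [], []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "href" in a:
            self.links.append(a["href"])
        elif tag == "img" and "src" in a:
            self.images.append(a["src"])

# Illustrative page fragment (invented for this sketch).
page = '<a href="/index.html">home</a><img src="logo.png"><a href="http://example.com">x</a>'
c = LinkCollector()
c.feed(page)
print(c.links)   # ['/index.html', 'http://example.com']
print(c.images)  # ['logo.png']
```

In a real structural-quality pass, each collected URL would then be requested and its HTTP status compared against a baseline.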
Content: Does the content of critical pages match what is supposed to be there? Do
key phrases exist continually in highly-changeable pages? Do critical pages maintain
quality content from version to version? What about dynamically generated HTML
(DHTML) pages?
Accuracy and Consistency: Are today's copies of the pages downloaded the same as
yesterday's? Close enough? Is the data presented to the user accurate enough? How
do you know?
Response Time and Latency: Does the WebSite server respond to a browser request
within certain performance parameters? In an e-commerce context, how is the end-
to-end response time after a SUBMIT? Are there parts of a site that are so slow the
user discontinues working?
Performance: Is the Browser --> Web --> WebSite --> Web --> Browser connection
quick enough? How does the performance vary by time of day, by load and usage? Is
performance adequate for e-commerce applications? Taking 10 minutes -- or maybe
even only 1 minute -- to respond to an e-commerce purchase may be unacceptable!
Here are the major pieces of WebSites as seen from a Quality perspective.
Browser. The browser is the viewer of a WebSite and there are so many
different browsers and browser options that a well-done WebSite is probably
designed to look good on as many browsers as possible. This imposes a kind
of de facto standard: the WebSite must use only those constructs that work
with the majority of browsers. But this still leaves room for a lot of
creativity, and a range of technical difficulties. And, multiple browsers'
renderings and responses to a WebSite have to be checked.
HTML. There are various versions of HTML supported, and the WebSite ought to be
built in a version of HTML that is compatible. This should be checkable.
Java, JavaScript, ActiveX. Obviously JavaScript and Java applets will be part of any
serious WebSite, so the quality process must be able to support these. On the
Windows side, ActiveX controls have to be handled well.
Cgi-Bin Scripts. This is the link from a user action of some kind (typically, from a FORM
passage or otherwise directly from the HTML, and possibly also from within a Java
applet). All of the different types of Cgi-Bin Scripts (perl, awk, shell-scripts, etc.)
need to be handled, and tests need to check "end to end" operation. This kind of a
"loop" check is crucial for e-commerce situations.
Object Mode. The display you see changes dynamically; the only
constants are the "objects" that make up the display. These aren't real
objects in the OO sense; but they have to be treated that way. So, the
quality test tools have to be able to handle URL links, forms, tables,
anchors, buttons of all types in an "object like" manner so that
validations are independent of representation.
Interaction & Feedback. For passive, content-only sites the only real
quality issue is availability. For a WebSite that interacts with the user,
the big factor is how fast and how reliable that interaction is.
o Fonts and Preferences. Most browsers support a wide range of fonts and
presentation preferences, and these should not affect how quality on a
WebSite is assessed or assured.
o Object Mode. Edit fields, push buttons, radio buttons, check boxes, etc. All
should be treatable in object mode, i.e. independent of the fonts and
preferences.
o Tables and Forms. Even when the layout of a table or form varies in the
browser's view, tests of it should continue independent of these factors.
o Frames. Windows with multiple frames ought to be processed simply, i.e. as if
they were multiple single-page frames.
Test Context. Tests need to operate from the browser level for two
reasons: (1) this is where users see a WebSite, so tests based in
browser operation are the most realistic; and (2) tests based in
browsers can be run locally or across the Web equally well. Local
execution is fine for quality control, but not for performance
measurement work, where response time including Web-variable
delays reflective of real-world usage is essential.
o Page Consistency. Is the entire page identical with a prior version? Are key
parts of the text the same or different?
o Table, Form Consistency. Are all of the parts of a table or form present?
Correctly laid out? Can you confirm that selected texts are in the "right
place"?
o Page Relationships. Are all of the links on a page the same as they were
before? Are there new or missing links? Are there any broken links?
o Error Recovery. While browser failure due to user inputs is rare, test suites
should have the capability of resynchronizing after an error.
o Structural. All of the links and anchors should match with prior "baseline"
data. Images should be characterizable by byte-count and/or file type or
other file properties.
o Checkpoints, Exact Reproduction. One or more text elements -- or even all
text elements -- in a page should be markable as "required to match".
o Gross Statistics. Page statistics (e.g. line, word, byte-count, checksum, etc.).
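The gross-statistics idea above can be sketched in a few lines (a sketch only; the page contents here are invented), comparing today's page against a stored baseline:

```python
import hashlib

def page_stats(text: str) -> dict:
    """Line/word/byte counts plus a checksum, as in the checkpoint list above."""
    data = text.encode("utf-8")
    return {
        "lines": text.count("\n") + 1,
        "words": len(text.split()),
        "bytes": len(data),
        "md5": hashlib.md5(data).hexdigest(),
    }

baseline = page_stats("<html><body>hello world</body></html>")
current  = page_stats("<html><body>hello brave world</body></html>")

# Any statistic that drifted since the baseline flags the page for review.
changed = {k for k in baseline if baseline[k] != current[k]}
print(sorted(changed))  # ['bytes', 'md5', 'words']
```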
Taking these requirements into account, and after investigating W3C's Amaya
Browser and the open-architecture Mozilla/Netscape Browser, we chose the IE Browser
as our initial base for our implementation of eValid.
User Interface. How the user interacts with the product is very
important, in part because in some cases the user will be someone
very familiar with WebSite browsing and not necessarily a testing
expert. The design we implemented takes this reality into account.
o Pull Down Menus. In keeping with the way browsers are built, we put all the
main controls for eValid on a set of Pull Down menus, as shown in the
accompanying screen shot.
Keysave File. This is the file that is being created -- the file is shown
line by line during script recording as the user moves around the
candidate WebSite.
Timing File. Results of timings are shown and saved in this file.
Event File. This file contains a complete log of recording and playback
activities that is useful primarily to debug a test recording session or to
better understand what actually went on during playback.
o Validate Selected Text Capability. A key need for WebSite content checking,
as described above, is the ability to capture an element of text from an image
so that it can be compared with a baseline value. This feature was
implemented by digging into the browser data structures in a novel way (see
below for an illustration). The user highlights a selected passage of the web
page and clicks on the "Validate Selected Text" menu item.
Enforcing tool usage, e.g. the Wipro code checker, which reports violations with respect to standard code quality.
Using the customer-specified tool (NASA), which internally uses FindBugs etc.
Ensuring all defects found during review are closed before submitting.
Testing FAQs
3. The measure used to evaluate the correctness of a product is called the product:
a. Policy
b. Standard
c. Procedure to do work
d. Procedure to check work
e. Guideline
4. Which of the four components of the test environment is considered to be the most
important component of the test environment:
a. Management support
b. Tester competency
c. Test work processes
d. Testing techniques and tools
5. Effective test managers are effective listeners. The type of listening in which the tester is
performing an analysis of what the speaker is saying is called:
a. Discriminative listening
b. Comprehensive listening
c. Therapeutic listening
d. Critical listening
e. Appreciative listening
7. Which of the following are risks that testers face in performing their test activities:
a. Not enough training
b. Lack of test tools
c. Not enough time for testing
d. Rapid change
e. All of the above
8. All of the following are methods to minimize loss due to risk. Which one is not a method
to minimize loss due to risk:
a. Reduce opportunity for error
b. Identify error prior to loss
c. Quantify loss
d. Minimize loss
e. Recover loss
11. The defect attribute that would help management determine the importance of the
defect is called:
a. Defect type
b. Defect severity
c. Defect name
d. Defect location
e. Phase in which defect occurred
12. The system test report is normally written at what point in software development:
a. After unit testing
b. After integration testing
c. After system testing
d. After acceptance testing
15. What is the difference between testing software developed by a contractor outside your
country, versus testing software developed by a contractor within your country:
a. Does not meet people needs
b. Cultural differences
c. Loss of control over reallocation of resources
d. Relinquishment of control
e. Contains extra features not specified
17. The condition that represents a potential for loss to an organization is called:
a. Risk
b. Exposure
c. Threat
d. Control
e. Vulnerability
18. A flaw in a software system that may be exploited by an individual for his or her
advantage is called:
a. Risk
b. Risk analysis
c. Threat
d. Vulnerability
e. Control
20. The following is described as one of the five levels of maturing a new technology into an
IT organization’s work processes. The “People-dependent technology” level is equivalent to
what level in SEI’s Capability Maturity Model:
a. Level 1
b. Level 2
c. Level 3
d. Level 4
e. Level 5
Other topics:
Equivalence partition
Use tools to reduce hardware cost – e.g. instead of procuring multiple physical test
machines, you can use VMware.
Setup test case design and test execution guidelines to reduce review and rework time.
Conduct reviews at earlier stage of test development to save time in future phases.
Provide training on product, test methodologies to team for better understanding of scope and
vision of the program.
Look for reuse options (Test cases, environment, test plan etc).
Test Strategy:
This is a company-level document developed by QA-category people such as the QA lead and PM. It
defines the "testing approach" used to achieve the testing objectives. The test strategy is derived from the
frozen part of the BRS, from which we get the Test Policy and Test Strategy.
Test Plan:
The test plan is a frozen document developed from the SRS, FS and use cases. After completion of
testing-team formation and risk analysis, the Test Lead prepares the test plan document in terms of what
to test, how to test, who tests, and when to test.
There is one Master Test Plan, consisting of the reviewed Project Test Plan and Phase Test Plans, so the
discussion below generally concerns the Project Test Plan.
Components are as follows:
1. Test Plan id
88
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Testing tasks
8. Suspension criteria
9. Features pass or fail criteria
10. Test environment (Entry criteria, Exit criteria)
11. Test deliverables
12. Staff and training needs
13. Responsibilities
14. Schedule
15. Risk and mitigation
16. Approvals
This is one of the standard approaches to preparing a test plan document, but details can vary from
company to company.
Quality Assurance: A set of activities designed to ensure that the development and/or
maintenance process is adequate to ensure a system will meet its objectives.
Quality Control: A set of activities designed to evaluate a developed work product.
Testing: The process of executing a system with the intent of finding defects. (Note that the
"process of executing a system" includes test planning prior to the execution of the test cases.)
Adherence to commitment
Quality of deliverables
89
Handling challenges
Risk management
Flexibility
When you initially report the defect to the Quality Center project, it is
assigned the status New, by default. A quality assurance or project manager
reviews the defect and determines whether or not to consider the defect for
repair. If the defect is refused, it is assigned the status Rejected. If the defect is
accepted, the quality assurance or project manager determines a repair
priority, changes its status to Open, and assigns it to a member of the
development team. A developer repairs the defect and assigns it the status
Fixed. You retest the application, making sure that the defect does not recur.
If the defect recurs, the quality assurance or project manager assigns it the
status Reopened. If the defect is repaired, the quality assurance or project
manager changes its status to Closed.
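The status flow just described can be sketched as a small state machine (a sketch only; the final Closed status is standard Quality Center terminology, assumed here because the paragraph is truncated in this copy):

```python
# Allowed transitions between the Quality Center defect statuses
# described above: New/Open/Rejected/Fixed/Reopened/Closed.
TRANSITIONS = {
    "New":      {"Open", "Rejected"},
    "Open":     {"Fixed"},
    "Fixed":    {"Closed", "Reopened"},
    "Reopened": {"Fixed"},
}

def move(status, new_status):
    """Advance a defect to a new status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# A defect that recurs once before being fixed for good:
s = "New"
for step in ("Open", "Fixed", "Reopened", "Fixed", "Closed"):
    s = move(s, step)
print(s)  # Closed
```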
To have a single view of all the projects running across the program
• Overall management responsibility for the One SIBS Testing Program - Stage 1 and Stage 2 implementation
• Coordinating with OCBC, Development Vendors, PMO and Wipro Testing teams for testing activities
• Planning and administering testing for the overall One SIBS program
• Reviews/approves plans for conformance to program strategy and program plan and schedule
• Anchor the development of Master Test Plan and Strategy
• Managing scope and change requests for One SIBS Testing Program initiative
• Milestone reviews
• Managing Program Level Testing risks
• Escalation and Issue management in Testing
• Test Program Governance
Dependency Management across stakeholders
A Responsibility Assignment Matrix (RAM), also known as a RACI matrix (pronounced "ray-see") or
Linear Responsibility Chart (LRC), describes the participation by various roles in completing
tasks or deliverables for a project or business process. It is especially useful in clarifying roles
and responsibilities in cross-functional/departmental projects and processes.
Responsible
Those who do the work to achieve the task. There is typically one role with a
participation type of Responsible, although others can be delegated to assist in the work
required (see also RASCI below for separately identifying those who participate in a
supporting role).
Accountable (also Approver or final Approving authority)
Those who are ultimately accountable for the correct and thorough completion of the
deliverable or task, and the one to whom Responsible is accountable. In other words, an
Accountable must sign off (Approve) on work that Responsible provides. There must be
only one Accountable specified for each task or deliverable.
Consulted
Those whose opinions are sought; and with whom there is two-way communication.
Informed
Those who are kept up-to-date on progress, often only on completion of the task or
deliverable; and with whom there is just one-way communication.
Support
Resources allocated to Responsible. Unlike Consulted, who may provide input to the
task, Support will assist in completing the task.
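A RACI matrix is just a task-by-role table. This sketch (role and task names invented) encodes one as a dict and checks the rule above that every task has exactly one Accountable:

```python
# A tiny RACI matrix: task -> {role: participation code}.
raci = {
    "Write test plan":  {"Test Lead": "R", "QA Manager": "A", "PM": "C", "Dev Lead": "I"},
    "Execute test set": {"Tester": "R", "Test Lead": "A", "PM": "I"},
}

def validate(matrix):
    """Enforce the rule that each task has exactly one Accountable ('A')."""
    for task, roles in matrix.items():
        accountable = [r for r, code in roles.items() if code == "A"]
        if len(accountable) != 1:
            raise ValueError(f"{task!r} must have exactly one Accountable")
    return True

print(validate(raci))  # True
```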
Program management: The process of managing multiple interdependent projects to enable an
organization to achieve intended business benefits. Projects deliver outputs, such as an IT system set up;
programs create outcomes that meet business objectives, such as business-metrics improvement.
Program Governance: The function that ensures program management objectives are met through
performance reviews, eliminating risks, resolving issues, and enabling the team to manage the program
effectively and efficiently.
• Knowledge management
Governance
[Table: defect cost by time detected (Requirements, Architecture, Construction, System test, Post-release); the table body is not reproduced in this copy.]
What constitutes an "acceptable defect rate" depends on the nature of the software; a flight
simulator video game would have a much higher defect tolerance than software for an actual
airplane.
Although there are close links with SQA, testing departments often exist independently, and
there may be no SQA function in some companies.
34. Severity vs Priority
Severity is technical (i.e., if the application shuts down when we perform one function,
the severity level is high).
Priority is business-driven (a defect in a build about to be delivered may be of low
severity but high priority).
Severity is decided by the tester; priority is decided by the PM.
Functionality testing is based on the functional requirements of the application, whereas system testing is
end-to-end testing covering all of the functionality plus performance, usability, database and stress testing.
Functional testing is a subset of system testing, but both are black-box testing.
System Testing: