Software Testing
Software Quality
A May 2005 newspaper article reported that a major hybrid car manufacturer had
to install a software fix on 20,000 vehicles due to problems with invalid engine
warning lights and occasional stalling. In the article, an automotive software
specialist indicated that the automobile industry spends $2 billion to $3 billion per
year fixing software problems.
Media reports in January of 2005 detailed severe problems with a $170 million
high-profile U.S. government IT systems project. Software testing was one of the
five major problem areas according to a report of the commission reviewing the
project. In March of 2005 it was decided to scrap the entire project.
In July 2004 newspapers reported that a new government welfare management
system in Canada costing several hundred million dollars was unable to handle a
simple benefits rate increase after being put into live operation. Reportedly the
original contract allowed for only 6 weeks of acceptance testing and the system
was never tested for its ability to handle a rate increase.
Millions of bank accounts were impacted by errors due to installation of
inadequately tested software code in the transaction processing system of a
major North American bank, according to mid-2004 news reports. Articles about
the incident stated that it took two weeks to fix all the resulting errors, that
additional problems resulted when the incident drew a large number of e-mail
phishing attacks against the bank's customers, and that the total cost of the
incident could exceed $100 million.
While all projects will benefit from testing, some projects may not require
independent test staff to succeed.
Which projects may not need independent test staff? The answer depends on the
size and context of the project, the risks, the development methodology, the skill
and experience of the developers, and other factors.
For instance, if the project is a short-term, small, low risk project, with highly
experienced programmers utilizing thorough unit testing or test-first development,
then test engineers may not be required for the project to succeed.
In some cases an IT organization may be too small or new to have a testing staff
even if the situation calls for it. In these circumstances it may be appropriate to
instead use contractors or outsourcing, or adjust the project management and
development approach (by switching to more senior developers and agile test-
first development, for example).
What is Software?
Internet Explorer
Microsoft Word
Notepad
What is Hardware?
What is SDLC?
VBScript
JavaScript
Perl
Etc..
What is Data?
What is a Database?
What is a Browser?
Software program used to view and interact with various types of Internet
resources available on the World Wide Web.
Netscape and Internet Explorer are two common examples.
What is a Server?
Apache
WebLogic
WebSphere
IIS
Etc..
Load Balancing
A server that receives requests intended for another server and acts on
behalf of the client (as the client's proxy) to obtain the requested service. A
proxy server is often used when the client and the server are incompatible for
direct connection. For example, the client is unable to meet the security
authentication requirements of the server but should be permitted some
services.
What is a Protocol?
What is Networking?
What is Bandwidth?
Bandwidth is the amount of data that can be transferred over the network
in a fixed amount of time. On the Net, it is usually expressed in bits per second
(bps) or in higher units like Mbps (millions of bits per second). A 28.8 modem can
deliver 28,800 bps; a T1 line is about 1.5 Mbps.
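To make the units concrete, here is a small Python sketch (an illustration added to this text, not part of the original article) that computes transfer time from file size and link speed:

```python
def transfer_seconds(file_bytes, link_bps):
    """Seconds needed to move a file over a link, ignoring protocol overhead."""
    return (file_bytes * 8) / link_bps  # 8 bits per byte

# A 1 MB file (taking 1 MB = 1,000,000 bytes for simplicity):
modem_time = transfer_seconds(1_000_000, 28_800)    # 28.8k modem
t1_time = transfer_seconds(1_000_000, 1_500_000)    # T1 line, ~1.5 Mbps
```

At 28,800 bps the file takes roughly 278 seconds; over a T1 line, about 5.3 seconds.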
What is Firewall?
What is middleware?
What is Environment?
Development
Testing or QC
Staging or Pre-Production
Production
Etc..
What is an IP Address?
What is Configuration?
This is a general-purpose computer term that can refer to the way you
have your computer set up.
It is also used to describe the total combination of hardware components
that make up a computer system and the software settings that allow various
hardware components of a computer system to communicate with one another.
Poorly documented code - it's tough to maintain and modify code that is badly
written or poorly documented; the result is bugs. In many organizations
management provides no incentive for programmers to document their code or
write clear, understandable, maintainable code. In fact, it's usually the opposite:
they get points mostly for quickly turning out code, and there's job security if
nobody else can understand it ('if it was hard to write, it should be hard to read').
Black box testing - not based on any knowledge of internal design or code. Tests
are based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's
code. Tests are based on coverage of code statements, branches, paths,
conditions.
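As a small illustration (the function and its requirement are invented for this example), the same Python routine can be tested both ways: black-box tests come only from the stated rule, while a white-box test is written after reading the code so that every branch is executed:

```python
# Hypothetical requirement: orders of 50 or more ship free; otherwise the fee is 5.
def shipping_fee(order_total):
    if order_total >= 50:
        return 0
    return 5

# Black-box tests: derived only from the requirement, not from the code.
def test_black_box():
    assert shipping_fee(50) == 0      # boundary named in the requirement
    assert shipping_fee(49.99) == 5   # just below the boundary

# White-box test: chosen after reading the code, to cover both branches
# of the `if` statement.
def test_white_box_branches():
    assert shipping_fee(100) == 0     # takes the true branch
    assert shipping_fee(10) == 5      # takes the false branch
```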
unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires
detailed knowledge of the internal program design and code. Not always easily
done unless the application has a well-designed architecture with tight code; may
require developing test driver modules or test harnesses.
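A minimal sketch of such a test driver, using Python's standard unittest module as the harness (the function under test is invented for illustration):

```python
import unittest

# Hypothetical function under unit test.
def parse_version(text):
    """Split a 'major.minor' version string into an (int, int) tuple."""
    major, minor = text.split(".")
    return int(major), int(minor)

# The test driver: unittest discovers the test methods, runs them,
# and reports any failures.
class ParseVersionTest(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(parse_version("2.11"), (2, 11))

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```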
incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application's
functionality be independent enough to work separately before all parts of the
program are completed, or that test drivers be developed as needed; done by
programmers or by testers.
integration testing - testing of combined parts of an application to determine if
they function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing
is especially relevant to client/server and distributed systems.
functional testing - black-box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that
the programmers shouldn't check that their code works before releasing it (which
of course applies to any stage of testing.)
system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
end-to-end testing - similar to system testing; the 'macro' end of the test scale;
involves testing of a complete application environment in a situation that mimics
real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if
appropriate.
sanity testing or smoke testing - typically an initial testing effort to determine if a
new software version is performing well enough to accept it for a major testing
effort. For example, if the new software is crashing systems every 5 minutes,
bogging down systems to a crawl, or corrupting databases, the software may not
be in a 'sane' enough condition to warrant further testing in its current state.
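A smoke-test run can be sketched as a short list of must-pass checks; the checks below are simulated placeholders, not real probes:

```python
def run_smoke_tests(checks):
    """Run each named check; return the names of the checks that failed."""
    failures = []
    for name, check in checks:
        try:
            passed = check()
        except Exception:
            passed = False  # a crash counts as a failure
        if not passed:
            failures.append(name)
    return failures

# Simulated checks standing in for real probes of a new build.
checks = [
    ("application starts", lambda: True),
    ("database reachable", lambda: True),
    ("login page loads", lambda: False),  # simulated failing check
]
failed = run_smoke_tests(checks)
# If `failed` is non-empty, the build is not 'sane' enough for further testing.
```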
regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed,
especially near the end of the development cycle. Automated testing tools can be
especially useful for this type of testing.
acceptance testing - final testing based on specifications of the end-user or
customer, or based on use by end-users/customers over some limited period of
time.
load testing - testing an application under heavy loads, such as testing of a web
site under a range of loads to determine at what point the system's response time
degrades or fails.
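In practice load testing is done with dedicated tools, but the core idea can be sketched in a few lines of Python: fire many concurrent requests and record each response time (the request here is simulated with a short sleep):

```python
import threading
import time

def timed_call(fn, results):
    """Call fn once and record its elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    fn()
    results.append(time.perf_counter() - start)

def load_test(fn, concurrent_users):
    """Fire `concurrent_users` simultaneous calls and return all timings."""
    results = []
    threads = [threading.Thread(target=timed_call, args=(fn, results))
               for _ in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

def fake_request():
    """Stand-in for an HTTP request to the system under test."""
    time.sleep(0.01)

timings = load_test(fake_request, concurrent_users=20)
worst = max(timings)
```

Repeating the run with increasing user counts shows the point at which the worst-case response time degrades.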
stress testing - term often used interchangeably with 'load' and 'performance'
testing. Also used to describe such tests as system functional testing while under
unusually heavy loads, heavy repetition of certain actions or inputs, input of large
numerical values, large complex queries to a database system, etc.
performance testing - term often used interchangeably with 'stress' and 'load'
testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in
requirements documentation or QA or Test Plans.
usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video
recording of user sessions, and other techniques can be used. Programmers and
testers are usually not appropriate as usability testers.
install/uninstall testing - testing of full, partial, or upgrade install/uninstall
processes.
recovery testing - testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems.
failover testing - typically used interchangeably with 'recovery testing'.
security testing - testing how well the system protects against unauthorized
internal or external access, willful damage, etc; may require sophisticated testing
techniques.
compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
exploratory testing - often taken to mean a creative, informal software test that is
not based on formal test plans or test cases; testers may be learning the
software as they test it.
ad-hoc testing - similar to exploratory testing, but often taken to mean that the
testers have significant understanding of the software before testing it.
context-driven testing - testing driven by an understanding of the environment,
culture, and intended use of software. For example, the testing approach for life -
critical medical equipment software would be completely different than that for a
low-cost computer game.
user acceptance testing - determining if software is satisfactory to an end-user or
customer.
comparison testing - comparing software weaknesses and strengths to
competing products.
alpha testing - testing of an application when development is nearing completion;
minor design changes may still be made as a result of such testing. Typically
done by end-users or others, not by programmers or testers.
beta testing - testing when development and testing are essentially completed
and final bugs and problems need to be found before final release. Typically
done by end-users or others, not by programmers or testers.
mutation testing - a method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting
with the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources.
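A toy illustration of the idea in Python (real mutation tools rewrite source or bytecode automatically; here the 'mutation' is just a swapped comparison operator, and the age-check function is invented for the example):

```python
import operator

def make_is_adult(compare):
    """Build an age check from a comparison operator."""
    def is_adult(age):
        return compare(age, 18)
    return is_adult

original = make_is_adult(operator.ge)  # age >= 18
mutant = make_is_adult(operator.gt)    # mutated: age > 18

def suite_kills(mutant_fn, cases):
    """True if at least one (input, expected) case detects the mutant."""
    return any(mutant_fn(age) != expected for age, expected in cases)

weak_suite = [(17, False), (21, True)]     # misses the boundary age == 18
strong_suite = weak_suite + [(18, True)]   # boundary case added
```

The weak suite lets the mutant survive; adding the boundary case kills it, which is evidence the test data has improved.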
Test plan
A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test
plan is a useful way to think through the efforts needed to validate the
acceptability of a software product. The completed document will help people
outside the test group understand the 'why' and 'how' of product validation. It
should be thorough enough to be useful but not so thorough that no one outside
the test group will read it. The following are some of the items that might be
included in a test plan, depending on the particular project:
Title
Identification of software including version/release numbers
Revision history of document including authors, dates, approvals
Table of Contents
Purpose of document, intended audience
Objective of testing effort
Software product overview
Relevant related document list, such as requirements, design documents, other
test plans, etc.
Relevant standards or legal requirements
Traceability requirements
Relevant naming conventions and identifier conventions
Overall software project organization and personnel/contact-info/responsibilities
Test organization and personnel/contact-info/responsibilities
Assumptions and dependencies
Project risk analysis
Testing priorities and focus
Scope and limitations of testing
Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
Outline of data input equivalence classes, boundary value analysis, error classes
Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
Test environment validity analysis - differences between the test and production
systems and their impact on test validity.
Test environment setup and configuration issues
Software migration processes
Software CM processes
Test data setup requirements
Database setup requirements
Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
Test automation - justification and overview
Test tools to be used, including versions, patches, etc.
Test script/test code maintenance processes and version control
Problem tracking and resolution - tools and processes
Project test metrics to be used
Reporting requirements and testing deliverables
Software entrance and exit criteria
Initial sanity testing period and criteria
Test suspension and restart criteria
Personnel allocation
Personnel pre-training needs
Test site/location
Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
Relevant proprietary, classified, security, and licensing issues.
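The equivalence-class and boundary-value items listed above can be made concrete with a small example (the 1-100 input rule is invented for illustration): for a field accepting integers from 1 to 100, the classic heuristic tests each boundary, the values just outside it, and an interior value:

```python
LOW, HIGH = 1, 100  # hypothetical valid input range

def in_valid_range(value):
    return LOW <= value <= HIGH

# Boundary-value analysis: both boundaries, their neighbours, one interior value.
boundary_cases = [LOW - 1, LOW, LOW + 1, 50, HIGH - 1, HIGH, HIGH + 1]
expected = [False, True, True, True, True, True, False]

results = [in_valid_range(v) for v in boundary_cases]
```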
Test case
3-Tier Application
Introduction
Why 3-tier
What is 3-tier-architecture
Advantages
Introduction
Why 3-tier?
Unfortunately the 2-tier model shows striking weaknesses that make the
development and maintenance of such applications much more expensive.
The complete development effort accumulates on the PC: the PC both
processes and presents information, which leads to monolithic applications that
are expensive to maintain. That's why such a client is called a "fat client".
In a 2-tier architecture, business logic is implemented on the PC. Even though
the business logic never makes direct use of the windowing system,
programmers have to be trained for the complex API under Windows.
Windows 3.X and Mac-systems have tough resource restrictions. For this reason
applications programmers also have to be well trained in systems technology, so
that they can optimize scarce resources.
Increased network load: since the actual processing of the data takes place on
the remote client, the data has to be transported over the network. As a rule this
leads to increased network stress.
How transactions are conducted is controlled by the client, so advanced
techniques like two-phase commit cannot be used.
PCs are considered to be "untrusted" in terms of security, i.e. they are relatively
easy to crack. Nevertheless, sensitive data is transferred to the PC, for lack of an
alternative.
Data is only "offered" on the server, not processed. Stored-procedures are a form
of assistance given by the database provider. But they have a limited application
field and a proprietary nature.
Application logic can’t be reused because it is bound to an individual PC-
program.
The influences on change management are drastic: due to changes in business
policy or law (e.g. changes in VAT computation), processes have to be changed.
Thus possibly dozens of PC programs have to be adapted because the same
logic has been implemented numerous times. It is then obvious that in turn each
of these programs has to undergo quality control, because all programs are
expected to generate the same results again.
The 2-tier-model implies a complicated software-distribution-procedure: as all of
the application logic is executed on the PC, all those machines (maybe
thousands) have to be updated in case of a new release. This can be very
expensive, complicated, prone to error and time consuming. Distribution
procedures include the distribution over networks (perhaps of large files) or the
production of an adequate media like floppies or CDs. Once it arrives at the
user’s desk, the software first has to be installed and tested for correct execution.
Due to the distributed character of such an update procedure, system
management cannot guarantee that all clients work on the correct copy of the
program.
3- and n-tier architectures endeavour to solve these problems. This goal is
achieved primarily by moving the application logic from the client back to the
server.
From here on we will refer simply to 3-tier architecture, by which we mean
architectures with at least three tiers.
The following sections describe a simplified form of the reference architecture,
though in principle all possibilities are illustrated.
Client-tier
This tier is responsible for the presentation of data, receiving user events and
controlling the user interface. The actual business logic (e.g. calculating
value-added tax) has been moved to an application server. Today, Java applets
offer an alternative to traditionally written PC applications.
Application-server-tier
This tier is new, i.e. it isn’t present in 2-tier architecture in this explicit form.
Business-objects that implement the business rules "live" here, and are available
to the client-tier. This level now forms the central key to solving 2-tier problems.
This tier protects the data from direct access by the clients.
Object-oriented analysis (OOA), on which many books have been written, aims
at this tier: its goal is to record and abstract business processes in business
objects. This makes it possible to map out the application-server tier directly
from the CASE tools that support OOA.
Furthermore, the term "component" is also to be found here. Today the term
predominantly describes visual components on the client-side. In the non-visual area
of the system, components on the server-side can be defined as configurable
objects, which can be put together to form new application processes.
Data-server-tier
This tier is responsible for data storage. Besides the widespread relational
database systems, existing legacy-system databases are often reused here.
It is important to note that the boundaries between tiers are logical. It is quite
possible to run all three tiers on one and the same (physical) machine. What
matters is that the system is neatly structured, and that there is a well-planned
definition of the software boundaries between the different tiers.
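The logical split can be sketched in a few lines of Python, with all three tiers running in one process (the class names, the item catalogue, and the VAT rate are all invented for illustration):

```python
class DataTier:
    """Data-server tier: storage only, no business rules."""
    def __init__(self):
        self._net_prices = {"book": 10.0}

    def get_net_price(self, item):
        return self._net_prices[item]

class ApplicationTier:
    """Application-server tier: business rules such as VAT live here."""
    VAT_RATE = 0.19  # assumed rate, purely illustrative

    def __init__(self, data):
        self._data = data

    def gross_price(self, item):
        return self._data.get_net_price(item) * (1 + self.VAT_RATE)

class ClientTier:
    """Client tier: presentation only; all logic is delegated to the server."""
    def __init__(self, app):
        self._app = app

    def show_price(self, item):
        return f"{item}: {self._app.gross_price(item):.2f}"

client = ClientTier(ApplicationTier(DataTier()))
```

A change in VAT computation now touches only ApplicationTier, regardless of how many clients exist.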
The advantages of 3-tier architecture
The bug needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested, and determinations
made regarding requirements for regression testing to check that fixes didn't
create problems elsewhere. If a problem-tracking system is in place, it should
encapsulate these processes. A variety of commercial
problem-tracking/management software tools are available.
The best bet in this situation is for the testers to go through the process of
reporting whatever bugs or blocking-type problems initially show up, with the
focus being on critical bugs. Since this type of problem can severely affect
schedules, and indicates deeper problems in the software development process
(such as insufficient unit testing or insufficient integration testing, poor design,
improper build or release procedures, etc.) managers should be notified, and
provided with some documentation as evidence of the problem.
Web sites are essentially client/server applications - with web servers and
'browser' clients. Consideration should be given to the interactions between html
pages, TCP/IP communications, Internet connections, firewalls, applications that
run in web pages (such as applets, javascript, plug-in applications), and
applications that run on the server side (such as cgi scripts, database interfaces,
logging applications, dynamic page generators, asp, etc.). Additionally, there are
a wide variety of servers and browsers, various versions of each, small but
sometimes significant differences between them, variations in connection
speeds, rapidly changing technologies, and multiple standards and protocols.
The end result is that testing for web sites can become a major ongoing effort.
Other considerations might include:
What are the expected loads on the server (e.g., number of hits per unit time?),
and what kind of performance is required under such loads (such as web server
response time, database query response times). What kinds of tools will be
needed for performance testing (such as web load testing tools, other tools
already in house that can be adapted, web robot downloading tools, etc.)?
Who is the target audience? What kind of browsers will they be using? What
kind of connection speeds will they be using? Are they intra-organization (thus
with likely high connection speeds and similar browsers) or Internet-wide (thus
with a wide variety of connection speeds and browser types)?
What kind of performance is expected on the client side (e.g., how fast should
pages appear, how fast should animations, applets, etc. load and run)?
Will down time for server and content maintenance/upgrades be allowed? how
much?
What kinds of security (firewalls, encryptions, passwords, etc.) will be required
and what is it expected to do? How can it be tested?
How reliable are the site's Internet connections required to be? And how does
that affect backup system or redundant connection requirements and testing?
What processes will be required to manage updates to the web site's content,
and what are the requirements for maintaining, tracking, and controlling page
content, graphics, links, etc.?
Which HTML specification will be adhered to? How strictly? What variations will
be allowed for targeted browsers?
Will there be any standards or requirements for page appearance and/or
graphics throughout a site or parts of a site?
How will internal and external links be validated and updated? how often?
Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
How extensive or customized are the server logging and reporting requirements;
are they considered an integral part of the system and do they require testing?
How are cgi programs, applets, javascripts, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page.
The page layouts and design elements should be consistent throughout a site, so
that it's clear to the user that they're still within a site.
Pages should be as browser-independent as possible, or pages should be
provided or generated based on the browser-type.
All pages should have links external to the page; there should be no dead-end
pages.
The page owner, revision date, and a link to a contact person or organization
should be included on each page.
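The "no dead-end pages" rule above lends itself to automation; here is a rough sketch using Python's standard html.parser module (the sample pages are invented):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href targets of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def is_dead_end(html_text):
    """True if the page contains no outgoing links at all."""
    collector = LinkCollector()
    collector.feed(html_text)
    return not collector.links

linked_page = '<html><body><a href="/home">Home</a></body></html>'
dead_end_page = "<html><body>Thanks for visiting!</body></html>"
```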
A good test engineer has a 'test to break' attitude, an ability to take the point of
view of the customer, a strong desire for quality, and an attention to detail. Tact
and diplomacy are useful in maintaining a cooperative relationship with
developers, and an ability to communicate with both technical (developers) and
non-technical (customers, management) people is useful. Previous software
development experience can be helpful as it provides a deeper understanding of
the software development process, gives the tester an appreciation for the
developers' point of view, and reduces the learning curve in automated test tool
programming. Judgement skills are needed to assess high-risk areas of an
application on which to focus testing efforts when time is limited.
The same qualities a good tester has are useful for a QA engineer. Additionally,
they must be able to understand the entire software development process and
how it can fit into the business approach and goals of the organization.
Communication skills and the ability to understand various sides of issues are
important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as well
as to see 'what's missing' is important for inspections and reviews.
The life cycle begins when an application is first conceived and ends when it
is no longer in use. It includes aspects such as initial concept, requirements
analysis, functional design, internal design, documentation planning, test
planning, coding, document preparation, integration, testing, maintenance,
updates, retesting, phase-out, and other aspects.
1. Test Planning
2. Test Development
3. Test Execution
4. Test Results
5. Defects Generation
6. Reporting
Every step along the system development life cycle has its own risks and a
number of available techniques to improve process discipline and resulting
output quality. Moving through the development life cycle, you might encounter
the following major steps:
In addition, you might have support activities throughout the development effort
such as:
Written guidance for all these steps would constitute the core of your
methodology. You can see how it wouldn't take long to fill a number of big
binders with development processes and procedures. Hence, the importance of
selecting processes wisely - to address known risks - keeping the methodology
streamlined, and allowing for some discretion on the part of the project team.
Waterfall Methodologies Summarized
Waterfall Deliverables
Define → Design → Code → Test → Implement
Waterfall Strengths
Most of the benefits from using a waterfall methodology are directly related to its
underlying principles of structure. These strengths include:
- Ease in analyzing potential changes
While waterfall has advantages, its highly structured approach also leads to
disadvantages such as the following:
- Lack of flexibility
Even though the impact of changes can be analyzed more effectively using a
waterfall approach, the time required to analyze and implement each change can
be significant. This is simply due to the structured nature of the project, and this
is particularly acute when the changes are frequent or large.
Even with improvements in providing wire frame screen mockups and more
detailed flowcharts, the front-end planning process has a hard time effectively
predicting the most effective system design on the front-end. One of the most
challenging factors causing this difficulty is how unfamiliar most Subject Matter
Experts are with formal system design techniques. One complaint that I
have heard a lot is “the document looks impressive, but I don’t know if the system
will meet my needs until I see the actual screens.”
Even more disturbing is the inevitable loss of knowledge between the planning
and programming phases. Even with the most detailed documents, the analysts
and architects always have an implicit understanding of the project needs that
are very hard to transfer via paper documents. The information loss is
particularly harmful to the project when it is developing a relatively new system
as opposed to modifying an existing system.
Closely linked to the knowledge-loss effect is the fact that the waterfall
methodology discourages team cohesion. Many studies have found that truly
effective teams begin a project with a common goal and stay together to the end.
The tendency to switch out project staff from phase to phase weakens this
overall team cohesion. In fact, it is common for the project manager to be the
only person who sees a project from beginning to end. The effect on team productivity
is very hard to quantify, but may be illustrated with the following question: Would
you have a passion for quality if you knew that someone else would be
responsible for fixing your document or code in the next phase?
The most significant weakness is the possibility that a poor design choice will
not be discovered until the final phases of testing or implementation. The risk of
this occurring increases as project size and duration go up. Even dedicated
and competent people make simple mistakes. In the context of the rigid waterfall
timetable, mistakes made in the master design may not be discovered until six or
nine months of programming have been completed and the entire system is
being tested.
Iterative Methodologies Summarized
Iterative Sequence
Iteration 1 (20%) → Iteration 2 (20%) → Iteration 3 (30%) → Iteration 4 (25%) → Release (5%)
Iterative Deliverables
Iteration 1 → Iteration 2 → Iteration 3 → Iteration 4 → Release
Iterative Strengths
Feedback from the Subject Matter Experts or users can be based on an actual
working prototype of the system relatively early in the project life cycle. This
enables the SME to base his or her feedback on actually working with a limited
version of the final product. Much like it is easier to decide if a product meets your
needs if you can examine it in the store than if someone were to just describe it
to you, the SME is instantly able to identify potential problems with the
application as the developer is interpreting his requirements before too much
time has passed.
Since the development team receives feedback at early stages in the overall
development process, changes in requirements can be more easily incorporated
into the finished product. More importantly, if the SME determines that a feature
would not be as valuable, it can be omitted before too much development time
has been spent on integrating the particular component into the overall system.
In a similar way, since the team is deploying actual working prototype versions of
the application along the way, a flaw in the design should become more apparent
earlier in the project schedule. Instead of discovering a potential problem only
after the system goes to full-scale testing, more design flaws can be addressed
before they impact other features and require significant effort to correct.
Because each iteration actually functions (sometimes to a limited degree),
deploying parts of the system in a staged roll-out becomes much easier. Using
an iterative methodology, the team simply stabilizes an earlier iteration of the
component, collaborates with the SME to ensure it is stable and rolls it out.
Another advantage of doing a staged roll-out in this way is that actual production
use will generate more improvement suggestions to be incorporated in
subsequent iterations of the same component and/or other components.
The team approach stressed in the iterative methodology increases overall
motivation and productivity. Because the same people are involved from
beginning to end, they know that the design choices made will ultimately affect
their ability to successfully complete the project. Productivity will be enhanced
because of the sense of ownership the project team has in the eventual result.
While it may seem like the “empowerment” fad, many studies have found that a
team charged with a common goal tends to be much more productive than
groups of people with individual incentives and shifting assignments. One
example of such a study is Groups that Work by Gerard Blair.
The fact that an integrated team maintains a thorough understanding of the
project is a more tangible benefit. This effect arises simply by having the same
individuals involved from the very beginning and listening first hand to the
Subject Matter Experts describe their needs and objectives. The subsequent
feedback during each iteration of the project builds upon the initial understanding.
Since the same person is listening to the needs and writing the code, less time
needs to be spent authoring documents to describe those requirements for
eventual hand-off. This translates into more time spent writing and testing the
actual software.
Iterative Weaknesses
The drawbacks to using an iterative approach are worth considering and should
be weighed carefully when deciding on a methodology for a new project. Some
of the more serious weaknesses include:
¦ Difficulty coordinating larger teams
¦ Risk of a never-ending project if not managed properly
Iterative projects tend to be most effective with small, highly skilled teams. It is
much more difficult to ensure that the components mesh together smoothly
across larger, geographically distributed projects. While steps can be taken to
minimize the chances of failure, coordinating large iterative development efforts
is typically very hard to accomplish effectively because of the lack of detailed
planning documents.
Because there are no specific cut-off milestones for new features, an iterative
project runs the risk of continuing into perpetuity. Even though one of the
strengths is the ability to react to changing business needs, the project leader
must determine when the major business needs have been met. Otherwise, the
project will continue to adapt to ever-changing business needs and the software
will never be finished, so a completed product is never deployed to full
production use. This is a risk even in a staged roll-out situation because there
are always improvements possible to any software.
In any software project, there is always the tendency to borrow time from the final
system documentation tasks to resolve more defects or polish certain features
more. This risk increases on iterative projects because there is usually no
scheduled documentation period. The result is a system that is very hard to
maintain or enhance.
Similarly, in an iterative project it is much easier to fix a definite project
schedule or dollar budget than to determine exactly what features can be
built within that timeline. This is simply due to the fact that the features change
based on user feedback and the evolution of design.
Conclusion
I have clearly presented the tradeoffs between two basic approaches to software
development in order to show that no methodology is universally superior.
Instead, the approach that you should take on your next project should depend
on its particular needs and the constraints that you have to work with. While
certainly not an exhaustive reference on the subject of how a particular
methodology is structured, my purpose was to help you become more familiar
with the strengths and weaknesses inherent in each of the large schools of
thought prevalent in the software development community. Based on the issues
discussed, a few basic guidelines that may help point you toward the right decision
are listed below. Keep in mind that no methodology should ever be considered a
substitute for ensuring project members have the proper experience and skillset
for the task to be accomplished.
¦ The iterative methodology is usually better for new concepts
Spiral Methodology:
The spiral methodology reflects the relationship of tasks with rapid prototyping,
increased parallelism, and concurrency in design and build activities. The spiral
method should still be planned methodically, with tasks and deliverables
identified for each step in the spiral.
Documentation:
TABLE OF CONTENTS
1. Introduction
1.1 Document Purpose
1.2 Objectives
2. Project Scope
2.1 In Scope
2.2 Out Of Scope but Critical to Project Success
2.3 Out of Scope
3. Project Resources
4. Test Strategies/Techniques
4.1 Test Design
4.2 Test Data
5. Automation Coding Strategy
6. Test Suite Backup Strategy
7. Test Suite Version Control Strategy
8. Metrics Table
9. Project Tasks/Schedule
10. Tool Inventory
11. Hardware/Software Configuration
12. Naming Conventions
13. Defect Responsibility/Resolution
14. Exit Criteria
15. Goals and Deliverables
16. Glossary of Standard Terms
Introduction
Document Purpose
Table 4.2. Use Case listing with brief description and Test Case mapping.

Use Case ID   Description              Test Case
UC-1          Use Case 1 description   TC-1a, TC-1b

Table 4.3. Test Case listing with mapping to generative Use Case, description
and Requirement reference.

Test Case ID   Use Case ID   Description                Requirement
TC-1a          UC-1          Test Case 1a description   R1.1-R5.3
TC-1b          UC-1                                     R6.1-R10.3
TC-1c          UC-1                                     R10.3-R11
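The mappings in Tables 4.2 and 4.3 amount to a traceability structure, which can be kept in a machine-readable form. The sketch below (in Python, with the IDs taken from the tables above) records each test case's generative use case and requirement range, then inverts the mapping to list which test cases cover each use case.

```python
# Traceability mapping mirroring Tables 4.2/4.3: each test case points
# back to its generative use case and the requirements it exercises.
traceability = {
    "TC-1a": {"use_case": "UC-1", "requirements": "R1.1-R5.3"},
    "TC-1b": {"use_case": "UC-1", "requirements": "R6.1-R10.3"},
    "TC-1c": {"use_case": "UC-1", "requirements": "R10.3-R11"},
}

# Invert the mapping to see which test cases cover each use case
coverage = {}
for tc, info in traceability.items():
    coverage.setdefault(info["use_case"], []).append(tc)
```

Keeping the mapping in one place makes it easy to spot use cases with no covering test case as requirements evolve.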
Test Data
Description of data sets to be used in testing, origin of data sets, purpose for
using each set (e.g. different user data for different user permissions), where
data sets were obtained, whose expertise guided data set selection, etc.
This section describes the automation coding strategy that will be used for every
test script. Generic examples follow.
Automation of the test suite for the XX application will be performed using XX
Software’s XX suite (automation tool: XX; scripting language: XX). The
automation coding strategy that will be used in test suite building will include the
following rules:
Start and Stop Point: All Test Script navigation will start and finish on the XX
window/page of the XX application.
Browser Caption Verifications: Browser Captions will be verified on every window
that is encountered in the application. The execution of these verifications will
occur immediately after each window is loaded.
Object Properties: Properties of objects that must be verified will be retrieved
from application objects using the test tool’s data capture functionality. The
retrieved data will then be compared against validated data in test suite files.
Results will be output to the test log.
Maintainability: Scripting will adhere to modular coding practices and will test
the application following the strategy described above.
Test suite builds will employ RTTS’ proprietary language extension (rttsutil.dll).
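The rules above can be sketched as a minimal script skeleton. Everything in this sketch is illustrative: the driver class is a stub standing in for the automation tool's object interface, and the window name, caption, and validated-data file are placeholders, not real project artifacts.

```python
# Illustrative skeleton of the scripted verification pattern described
# above: verify the browser caption immediately after a window loads,
# capture an object property, and compare it against validated data.

# Stand-in for a validated-data file in the test suite
VALIDATED = {"login.title": "XX Application - Login"}

class StubDriver:
    """Placeholder for the automation tool's data capture interface."""
    def load(self, window):
        # In a real suite this would drive the application and return
        # the captured properties of the loaded window.
        return {"title": "XX Application - Login"}

def run_script(driver, log):
    win = driver.load("login")            # Start point: the XX window/page
    # Browser Caption Verification: runs immediately after window load
    expected = VALIDATED["login.title"]
    actual = win["title"]                 # Object property capture
    log.append(("caption", "PASS" if actual == expected else "FAIL"))
    return log                            # Results go to the test log

log = run_script(StubDriver(), [])
```

The same compare-against-validated-data step would repeat for each object property the test plan calls out.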
Test Suite Backup Strategy
List all paths to test artifacts here.
How will test suite data (code, external files, tool data, etc.) be backed up for
storage?
How often?
Where will backup location be?
How accessible will backups be?
How many backups will be kept at any given time?
Project Tasks/Schedule
Tool Inventory
Project Administration
Test Management
Capture/Playback
Defect/Issue Tracking
Requirements Management
Team Communications (email, WebEx)
Utilities (RTTS Utilities)
Hardware/Software Configuration
Naming Conventions
Field 1 represents the defined user stream through PROJECT by name. This
section varies in length from one to three characters.
Separation Character is an underscore.
Field 2 represents the section of PROJECT being tested.
Field 3 represents the type of transaction being tested.
Additional Character (if needed) represents a numeric counter for multiple scripts
of the same type and name.
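Assuming underscores separate all three fields (the text only states the separator between the first two), the convention can be sketched as a small helper plus a validation pattern. The field values here ("ADM", "LOGIN", "SMK") are invented examples, not actual project codes.

```python
# Sketch of the three-field script naming convention described above.
import re

def make_script_name(user_stream, section, txn_type, counter=None):
    """Build a script name: user stream, section, transaction type,
    plus an optional numeric counter for multiple scripts of one type."""
    name = f"{user_stream}_{section}_{txn_type}"
    if counter is not None:
        name += str(counter)          # additional character if needed
    return name

# Field 1 is one to three characters; fields joined by underscores
PATTERN = re.compile(r"^[A-Z]{1,3}_[A-Z]+_[A-Z]+\d*$")

name = make_script_name("ADM", "LOGIN", "SMK", 2)
assert PATTERN.match(name)
```

Validating names against a single pattern keeps the suite consistent as scripts multiply.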
Defect Responsibility/Resolution
Exit Criteria
The following exit criteria will be used for each stage of the testing process.
Testing can proceed to the next stage of the process when a sufficient proportion
of the current stage has been completed (for example, test case preparation
need not be completed before automated coding begins). The end of the project
should satisfy all exit criteria.
Deliverables
The following list describes the defined deliverables for each stage of the testing
process:
Test Case
Better tests produce more reliable results and also lower costs in three
categories:
1. Productivity - less time to write and maintain cases
2. Testability - less time to execute them
3. Scheduling reliability - more dependable estimates
Accurate: -
Appropriate: -
Traceable: -
You have to know what requirement the case is testing. It may meet all the
other standards, but if its result, pass or fail, doesn't matter, why bother?
Self-cleaning: -
Picks up after itself. It returns the test environment to the pre-test state.
Tests should be destructive, including trying to break a simulated
production environment in controlled, repeatable ways.
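A self-cleaning, destructive test might look like the following sketch. The "environment" is simulated with a dictionary so the example is self-contained; the snapshot-and-restore pattern is the point, not the names.

```python
# Sketch of a self-cleaning test: snapshot the environment, run a
# deliberately destructive test body, then restore the pre-test state.
import copy

# Stand-in for the test environment (e.g. a database)
database = {"suppliers": ["Acme"]}

def run_self_cleaning_test(test_body):
    snapshot = copy.deepcopy(database)   # record the pre-test state
    try:
        test_body(database)              # destructive test mutates state
    finally:
        database.clear()                 # pick up after ourselves:
        database.update(snapshot)        # restore the pre-test state

def destructive_test(db):
    # Break the environment in a controlled, repeatable way
    db["suppliers"].append("Bogus Supplier")
    assert "Bogus Supplier" in db["suppliers"]

run_self_cleaning_test(destructive_test)
# database is back to its pre-test state here
```

Test frameworks express the same idea with setup/teardown hooks or fixtures; the try/finally guarantees cleanup even when the test body fails.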
Format of test cases
Automated scripts:
A decision to use automated testing software is more related to the project
and organization doing the testing than to what is being tested. There are
some technical issues that must be met, varying from tool to tool, but most
applications can find a technical fit. Project management must
understand that writing automated cases takes longer than manual tests
because the manual tests must still be written first. When the interface
is stable, the tests can then be recorded.
The real payback of automated testing comes in the maintenance phase.
Myth: Step-by-step test cases take too long to write. We can't afford them.
Reality: They may or may not take longer to write, but they are easier to
maintain. They are the only way to test some functions adequately.
Myth: A matrix is always the best choice. Make it work.
Reality: A persistent problem is forcing a matrix where it doesn't fit.
Some setups or classes of input can't be put into a matrix with a like
group.
Myth: High tech is best. If you can automate test cases, do it.
Reality:
A decision to use automated testing should be based on many factors.
Myth: We don't have time to write manual test cases. Let's automate
them.
Reality: Automated test cases take longer to create than the other two
types.
For example, you may have tests for maintaining a supplier database.
Many, but not all, the steps would also apply to a shipper database. As
you get to know the software through requirements or prototypes,
strategize which functions work in such a way that you can clone the test
cases. Writing them as clones does not mean they need to be tested
together.
You can clone steps as well as test cases.
Word processing and test authoring software support cloning with features
such as "Save As," "Copy," and "Replace." It's very important to proofread
these cases to make sure all references to the original are
replaced in the clone.
Matrixes can also be cloned, especially if the setup section is the same.
The variables would be changes in the field names and values. Again,
make sure to proofread the new version.
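The clone-and-proofread workflow above can be mimicked in a few lines: replace the original term throughout, then check that no references to the original remain. The test-case text and the supplier/shipper terms are illustrative.

```python
# Clone a test case by search-and-replace, then "proofread" by checking
# for any leftover references to the original term.

original = ("Open the supplier database. Add a supplier record. "
            "Verify the supplier appears in the list.")

def clone_case(text, old, new):
    cloned = text.replace(old, new)
    leftovers = old in cloned        # proofread: any missed references?
    return cloned, leftovers

cloned, leftovers = clone_case(original, "supplier", "shipper")
```

An automated leftover check catches mechanical misses, but a human proofread is still needed for steps that genuinely differ between the two databases.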
Improving productivity with test management software
Test management software can be a significant productivity booster for
writing test cases.
• Before writing cases, and at every status meeting, find out where
the greatest risk of requirement changes is.
• Strategize what cases will and won't be affected by the change.
Write the ones that won't first.
• Build in variables or "to be decided" placeholders that you will
come back and fill in later.
• Make sure the budget owner knows the cost of revising test cases
that are already written. Quantify what it costs per case.
• Let project management set priorities for which cases should be
written or revised. Let them see you can't do it all and ask them to
decide where they have greatest risk.
• Release the not-quite-right test cases unrevised. Ask the testers to
mark up what has to be changed.
• Schedule more time to test each case, plus time for maintaining the
tests.
• If a testing date is moved up, get management to participate in the
options of how test cases will be affected. As in the changing
requirements challenge, let them choose what they want to risk.
• Add staff only if time permits one to two weeks of training before
they have to be productive, and only if you have someone to
mentor and review their work.
• Shift the order of writing cases so you write those first that will be
tested first. Try to stay one case ahead of the testers.
• You can pare the test cases down to just a purpose, the requirement
being tested, and a setup.
• Offer to have writers do the testing and write as they go. Schedule
more time for testing and finishing the writing after testing.
Quality Attributes