
Software Quality Assurance

Software QA involves the entire software development PROCESS -


monitoring and improving the process, making sure that any agreed-upon
standards and procedures are followed, and ensuring that problems are found
and dealt with. It is oriented to 'prevention'.

Software Testing

Testing involves operation of a system or application under controlled conditions


and evaluating the results (e.g., 'if the user is in interface A of the application while
using hardware B, and does C, then D should happen'). The controlled conditions
should include both normal and abnormal conditions. Testing should intentionally
attempt to make things go wrong to determine if things happen when they
shouldn't or things don't happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and
testing. Sometimes they're the combined responsibility of one group or individual.
Also common are project teams that include a mix of testers and developers who
work closely together, with overall QA processes monitored by project managers.
It will depend on what best fits an organization's size and business structure.

Software Quality

Quality software is reasonably bug-free, delivered on time and within budget,


meets requirements and/or expectations, and is maintainable. However, quality is
obviously a subjective term. It will depend on who the 'customer' is and their
overall influence in the scheme of things. A wide-angle view of the 'customers' of
a software development project might include end-users, customer acceptance
testers, customer contract officers, customer management, the development
organization's management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of
'customer' will have their own slant on 'quality' - the accounting department might
define quality in terms of profits while an end-user might define quality as user-
friendly and bug-free.

Some recent major computer system failures caused by software bugs

A May 2005 newspaper article reported that a major hybrid car manufacturer had
to install a software fix on 20,000 vehicles due to problems with invalid engine
warning lights and occasional stalling. In the article, an automotive software
specialist indicated that the automobile industry spends $2 billion to $3 billion per
year fixing software problems.
Media reports in January of 2005 detailed severe problems with a $170 million
high-profile U.S. government IT systems project. Software testing was one of the
five major problem areas according to a report of the commission reviewing the
project. In March of 2005 it was decided to scrap the entire project.
In July 2004 newspapers reported that a new government welfare management
system in Canada costing several hundred million dollars was unable to handle a
simple benefits rate increase after being put into live operation. Reportedly the
original contract allowed for only 6 weeks of acceptance testing and the system
was never tested for its ability to handle a rate increase.
Millions of bank accounts were impacted by errors due to installation of
inadequately tested software code in the transaction processing system of a
major North American bank, according to mid-2004 news reports. Articles about
the incident stated that it took two weeks to fix all the resulting errors, that
additional problems resulted when the incident drew a large number of e-mail
phishing attacks against the bank's customers, and that the total cost of the
incident could exceed $100 million.

Does every software project need testers?

While all projects will benefit from testing, some projects may not require
independent test staff to succeed.
Which projects may not need independent test staff? The answer depends on the
size and context of the project, the risks, the development methodology, the skill
and experience of the developers, and other factors.

For instance, if the project is a short-term, small, low risk project, with highly
experienced programmers utilizing thorough unit testing or test-first development,
then test engineers may not be required for the project to succeed.

In some cases an IT organization may be too small or new to have a testing staff
even if the situation calls for it. In these circumstances it may be appropriate to
instead use contractors or outsourcing, or adjust the project management and
development approach (by switching to more senior developers and agile test-
first development, for example).

Inexperienced managers sometimes gamble on the success of a project by


skipping thorough testing or having programmers do post-development functional
testing of their own work, a decidedly high risk gamble.

For non-trivial-size projects or projects with non-trivial risks, a testing staff is


usually necessary. As in any business, the use of personnel with specialized
skills enhances an organization's ability to be successful in large, complex, or
difficult tasks. It allows for both a) deeper and stronger skills and b) the
contribution of differing perspectives. For example, programmers typically have
the perspective of 'what are the technical issues in making this functionality
work?'. A test engineer typically has the perspective of 'what might go wrong with
this functionality, and how can we ensure it meets expectations?'. Technical
people who can be highly effective in approaching tasks from both of those
perspectives are rare, which is why, sooner or later, organizations bring in test
specialists.

What is an Operating System?

The most important program that runs on a computer.


Every general-purpose computer must have an operating system to run other
programs.
Operating systems perform basic tasks, such as recognizing input from the
keyboard, sending output to the display screen, keeping track of files and
directories on the disk, and controlling peripheral devices such as disk drives and
printers.
Commonly used Operating systems are
Windows XP
Linux
Solaris

What is Software?

A program or set of instructions that controls the operation of a computer.


Distinguished from the actual hardware of the computer

Internet Explorer
Microsoft word
Notepad

What is Hardware?

The physical equipment of computing and computer-directed activities.


The physical components of a computer system.
Mouse
Modem
Hard drive
CD Rom

What is SDLC?

SDLC: System Development Life Cycle


A methodology used to develop, maintain, and replace information systems.
Typical phases in the SDLC are:
Analysis
Design
Development
Integration and Testing
Implementation, etc
What is an application?

A software program designed to perform a specific task or group of tasks, such


as word processing, communications, or database management
2 Tier Application
3 Tier Application
N Tier Application

What is a Scripting Language?

A programming language in which programs are a series of commands that are
interpreted and executed one by one. A separate compilation phase is not
required, at the price of lower performance.

VB script
Java Script
Perl
Etc..
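
As a small, hypothetical illustration (using Python, another common scripting
language not listed above), a script is simply a series of statements that the
interpreter reads and executes one by one, with no separate compilation step:

# greet.py - run directly with an interpreter, e.g. "python greet.py".
# There is no separate compile/link step; each statement is interpreted
# and executed in turn.
names = ["Alice", "Bob"]
for name in names:
    print("Hello, " + name)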

What is Data?

Information stored on the computer system, used by applications to accomplish


tasks.
Types of Data:
Text
Numbers
Alpha Numeric
Pictures
Music file
Movie file
Animation

What is a Database?

A database is a collection of information/data stored in a computer in a


systematic way, such that a computer program can consult it to answer
questions.
The software used to manage and query a database is known as a database
management system (DBMS). The properties of database systems are studied in
information science.
Type of data storage methods/formats
File System in different formats
DBMS
RDBMS (Relational Database Management system)
Etc ..
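
As a rough sketch of a program "consulting" a database through a DBMS, the
snippet below uses Python's built-in sqlite3 module as a stand-in RDBMS; the
table and its rows are made-up illustration data:

# Minimal sketch: storing and querying data through a relational DBMS.
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("CREATE TABLE employees (name TEXT, department TEXT)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Alice", "QA"), ("Bob", "Development")])

# The program 'consults' the database to answer a question.
for (name,) in conn.execute(
        "SELECT name FROM employees WHERE department = ?", ("QA",)):
    print(name)
conn.close()
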
What is a Client?

A client is a system that accesses a (remote) service on another


computer by some kind of network.
The term was first applied to devices that were not capable of running their own
stand-alone programs, but could interact with remote computers via a network.
These dumb terminals were clients of the time-sharing mainframe computer.
Some examples are
Web Client
Java Client
Etc..

What is a Browser?

Software program used to view and interact with various types of Internet
resources available on the World Wide Web.
Netscape and Internet Explorer are two common examples.

What is a Server?

A computer that delivers information and software to other computers


linked by a network.

What is a Web Server?

A computer that is connected to the Internet and stores files written in


HTML (hypertext markup language) that is publicly available through an Internet
connection.

Apache
Web logic
Web Sphere
IIS
Etc..

What is application Server?

An application server is a software platform that delivers content to the


Web.
This means that an application server interprets site traffic and constructs pages
based on a dynamic content repository.
This content is typically personalized based on site visitor information, such as
the content he/she has viewed up to that point, his/her past buying history, or
preferences he/she has set during previous visits.
Commonly used servers are
Tomcat
Web Logic
Web Sphere

Load Balancing

Distributing processing and communications activity evenly across a


computer network so that no single device is overwhelmed.
Load balancing is especially important for networks where it's difficult to predict
the number of requests that will be issued to a server.
Busy Web sites typically employ two or more Web servers in a load balancing
scheme.
If one server starts to get swamped, requests are forwarded to another server
with more capacity.
Load balancing can also refer to the communications channels themselves.
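
A minimal sketch of the idea, assuming the simplest possible scheme
(round-robin, rather than the capacity-based forwarding described above); the
server names are placeholders:

# Round-robin load-balancing sketch: requests are handed to the servers
# in turn so that no single server is overwhelmed.
from itertools import cycle

servers = cycle(["web1.example.com", "web2.example.com", "web3.example.com"])

def route(request_id):
    server = next(servers)
    print(f"request {request_id} -> {server}")

for i in range(6):
    route(i)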

What is Proxy Server?

A server that receives requests intended for another server and that acts
on behalf of the client (as the client's proxy) to obtain the requested
service. A proxy server is often used when the client and the server are
incompatible for direct connection. For example, the client is unable to meet the
security authentication requirements of the server but should be permitted some
services.

What is a Protocol?

On the Internet "protocol" usually refers to a set of rules that define an


exact format for communication between systems.
HTTP protocol defines the format for communication between web browsers and
web servers.
IMAP protocol defines the format for communication between IMAP email
servers and clients
SSL protocol defines a format for encrypted communications over the Internet.
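
As an illustration of the kind of format a protocol defines, the sketch below
makes one HTTP request using Python's standard http.client module;
"example.com" is just a placeholder host:

# One HTTP request/response exchange between a client and a web server.
import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/")                     # request line + headers
response = conn.getresponse()                # status line + headers + body
print(response.status, response.reason)      # e.g. 200 OK
print(response.getheader("Content-Type"))
conn.close()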

What are Cookies?

Cookies are small files that can be created and written to by a


programming/scripting language.
Client-side cookies are JavaScript cookies that are read/written to a user's hard
drive by a JavaScript program that runs in the web browser when a user visits a
web site.
Server-side cookies may be created by languages such as PHP. ...
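
A small server-side sketch (using Python's standard http.cookies module rather
than PHP; the cookie name and value are made up) showing the header a server
would send and how an incoming cookie would be read back:

# Server-side cookie sketch: the server sends a Set-Cookie header, and the
# browser returns the cookie on later requests.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600        # keep for one hour

# Header the server would emit in its HTTP response:
print(cookie.output())                        # Set-Cookie: session_id=abc123; Max-Age=3600

# Parsing the Cookie header a browser sends back:
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)
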
What is XML?

Extensible Markup Language. A flexible way to create common


information formats and share both the format and the data on the World Wide
Web, intranets, and elsewhere.
XML is a formal recommendation from the World Wide Web Consortium (W3C)
similar to the language of today's Web pages, the Hypertext Markup Language
(HTML).
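
A tiny, made-up XML document and one way a program might read it, using
Python's standard xml.etree.ElementTree module:

# Parsing a small XML document; the element names are illustrative only.
import xml.etree.ElementTree as ET

document = """
<catalog>
  <book id="101"><title>Software Testing Basics</title></book>
  <book id="102"><title>Quality Assurance Notes</title></book>
</catalog>
"""

root = ET.fromstring(document)
for book in root.findall("book"):
    print(book.get("id"), book.findtext("title"))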

What is Networking?

A system of connecting computer systems or peripheral devices, each one


remote from the other.

What is Bandwidth?

Bandwidth is the amount of data that can be transferred over the network
in a fixed amount of time. On the Net, it is usually expressed in bits per second
(bps) or in higher units like Mbps (millions of bits per second). A 28.8 modem can
deliver 28,800 bps; a T1 line is about 1.5 Mbps.
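
A rough worked example of what those figures mean in practice (transfer time
for a 1 MB file at the two bandwidths quoted above, ignoring protocol overhead):

# 1 MB = 8,000,000 bits; divide by the line rate to get seconds.
file_bits = 1_000_000 * 8

for name, bps in [("28.8 kbps modem", 28_800), ("T1 line", 1_500_000)]:
    print(f"{name}: about {file_bits / bps:.0f} seconds")
# 28.8 kbps modem: about 278 seconds
# T1 line: about 5 seconds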

What is Firewall?

A firewall is either the program or the computer it runs on, usually an


Internet gateway server, that protects the resources of one network from users
from other networks. Typically, an enterprise with an intranet that allows its
workers access to the wider Internet will want a firewall to prevent outsiders from
accessing its own private data resources.

What is middleware?

Software that connects two otherwise separate applications. Middleware is


sometimes called "plumbing" because it connects two sides of an application and
passes data between them.
Oracle's SQL*Net connects clients to the database server. Oracle's Gateways
connect different types of databases (for example Oracle to SQL Server or DB2)
JMS
Etc..

What is Environment?

A collection of hardware, software, network communications and


procedures that work together to provide a discrete type of computer service.
There may be one or more environments on a physical platform, e.g. test and
production. Each environment has unique features and characteristics that
dictate how it is administered.

Development
Testing or QC
Staging or Pre-Production
Production
Etc..

What is Network Drive?

A connection to the hard drive of a remote computer, allowing you to


access shared files and directories.
You can establish a network drive connection to a directory in the file space.

What is Version Control?

Each time content is changed and checked back into a content


management system, a copy of the content is saved and its identifier (version
number) is incremented to indicate its difference from the previous copy.
Commonly used applications are:
Visual Source Safe
Perforce
Etc..

What is a Network Printer?

A printer available for use by workstations on a network. A network printer


either has its own built-in network interface card, or it is connected to a print
server on the network.
It is commonly shared by a group of people located near the printer.
Access to some printers, such as color, laser, and other specialty printers, may
be restricted, depending on the organization.

What is an IP Address?

This is a unique string of numbers that identifies a computer or server on


the Internet.
These numbers are normally shown in groups separated by periods.
Example: 216.239.51.
Hosting accounts for websites can have either a shared or unique IP address

What is host name?

In the Internet suite of protocols, the name that is given to a machine.


Sometimes, host name is used to mean fully qualified domain name (FQDN).
Other times, it is used to mean the most specific sub name of a fully qualified
domain name.
For example, if rchland.vnet.ibm.com is the fully qualified domain name, either of
the following can be considered the host name: (a) rchland.vnet.ibm.com, or (b)
rchland

What is Configuration?

This is a general-purpose computer term that can refer to the way you
have your computer set up.
It is also used to describe the total combination of hardware components
that make up a computer system and the software settings that allow various
hardware components of a computer system to communicate with one another.

Why does software have bugs?

Miscommunication or no communication - as to specifics of what an application


should or shouldn't do (the application's requirements).

Software complexity - the complexity of current software applications can be


difficult to comprehend for anyone without experience in modern-day software
development. Multi-tiered applications, client-server and distributed applications,
data communications, enormous relational databases, and sheer size of
applications have all contributed to the exponential growth in software/system
complexity.
Programming errors - programmers, like anyone else, can make mistakes.

Changing requirements (whether documented or undocumented) - the end-user


may not understand the effects of changes, or may understand and request them
anyway - redesign, rescheduling of engineers, effects on other projects, work
already completed that may have to be redone or thrown out, hardware
requirements that may be affected, etc. If there are many minor changes or any
major changes, known and unknown dependencies among parts of the project
are likely to interact and cause problems, and the complexity of coordinating
changes may result in errors.

Enthusiasm of engineering staff may be affected. In some fast-changing


business environments, continuously modified requirements may be a fact of life.
In this case, management must understand the resulting risks, and QA and test
engineers must adapt and plan for continuous extensive testing to keep the
inevitable bugs from running out of control.
Time pressures - scheduling of software projects is difficult at best, often
requiring a lot of guesswork. When deadlines loom and the crunch comes,
mistakes will be made.

Egos - people prefer to say things like:


'no problem' 'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'

Poorly documented code - it's tough to maintain and modify code that is badly
written or poorly documented; the result is bugs. In many organizations
management provides no incentive for programmers to document their code or
write clear, understandable, maintainable code. In fact, it's usually the opposite:
they get points mostly for quickly turning out code, and there's job security if
nobody else can understand it ('if it was hard to write, it should be hard to read').

Software development tools - visual tools, class libraries, compilers, scripting


tools, etc. often introduce their own bugs or are poorly documented, resulting in
added bugs.

What kinds of testing should be considered?

Black box testing - not based on any knowledge of internal design or code. Tests
are based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's
code. Tests are based on coverage of code statements, branches, paths,
conditions.
unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires
detailed knowledge of the internal program design and code. Not always easily
done unless the application has a well-designed architecture with tight code; may
require developing test driver modules or test harnesses (a minimal sketch
appears after this list).
incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application's
functionality be independent enough to work separately before all parts of the
program are completed, or that test drivers be developed as needed; done by
programmers or by testers.
integration testing - testing of combined parts of an application to determine if
they function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing
is especially relevant to client/server and distributed systems.
functional testing - black-box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that
the programmers shouldn't check that their code works before releasing it (which
of course applies to any stage of testing.)
system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
end-to-end testing - similar to system testing; the 'macro' end of the test scale;
involves testing of a complete application environment in a situation that mimics
real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if
appropriate.
sanity testing or smoke testing - typically an initial testing effort to determine if a
new software version is performing well enough to accept it for a major testing
effort. For example, if the new software is crashing systems every 5 minutes,
bogging down systems to a crawl, or corrupting databases, the software may not
be in a 'sane' enough condition to warrant further testing in its current state.
regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed,
especially near the end of the development cycle. Automated testing tools can be
especially useful for this type of testing.
acceptance testing - final testing based on specifications of the end-user or
customer, or based on use by end-users/customers over some limited period of
time.
load testing - testing an application under heavy loads, such as testing of a web
site under a range of loads to determine at what point the system's response time
degrades or fails.
stress testing - term often used interchangeably with 'load' and 'performance'
testing. Also used to describe such tests as system functional testing while under
unusually heavy loads, heavy repetition of certain actions or inputs, input of large
numerical values, large complex queries to a database system, etc.
performance testing - term often used interchangeably with 'stress' and 'load'
testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in
requirements documentation or QA or Test Plans.
usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video
recording of user sessions, and other techniques can be used. Programmers and
testers are usually not appropriate as usability testers.
install/uninstall testing - testing of full, partial, or upgrade install/uninstall
processes.
recovery testing - testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems.
failover testing - typically used interchangeably with 'recovery testing'
security testing - testing how well the system protects against unauthorized
internal or external access, willful damage, etc; may require sophisticated testing
techniques.
compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
exploratory testing - often taken to mean a creative, informal software test that is
not based on formal test plans or test cases; testers may be learning the
software as they test it.
ad-hoc testing - similar to exploratory testing, but often taken to mean that the
testers have significant understanding of the software before testing it.
context-driven testing - testing driven by an understanding of the environment,
culture, and intended use of software. For example, the testing approach for life-
critical medical equipment software would be completely different than that for a
low-cost computer game.
user acceptance testing - determining if software is satisfactory to an end-user or
customer.
comparison testing - comparing software weaknesses and strengths to
competing products.
alpha testing - testing of an application when development is nearing completion;
minor design changes may still be made as a result of such testing. Typically
done by end-users or others, not by programmers or testers.
beta testing - testing when development and testing are essentially completed
and final bugs and problems need to be found before final release. Typically
done by end-users or others, not by programmers or testers.
mutation testing - a method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting
with the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources.
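
As referenced in the 'unit testing' entry above, the following is a minimal sketch
of the kind of test a programmer might write, using Python's built-in unittest
module; add_tax() is a made-up function standing in for a real code module
under test:

# Minimal unit-test sketch; the function under test is invented for illustration.
import unittest

def add_tax(amount, rate=0.10):
    """Return the amount plus tax, rounded to 2 decimal places."""
    return round(amount * (1 + rate), 2)

class AddTaxTests(unittest.TestCase):
    def test_normal_value(self):
        self.assertEqual(add_tax(100.0), 110.0)

    def test_zero_amount(self):
        self.assertEqual(add_tax(0.0), 0.0)

    def test_negative_amount(self):
        # An abnormal condition the test intentionally exercises.
        self.assertLess(add_tax(-10.0), 0)

if __name__ == "__main__":
    unittest.main()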

What is SEI? CMM? CMMI? ISO? IEEE? ANSI?

SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by


the U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity
Model Integration'), developed by the SEI. It's a model of 5 levels of process
'maturity' that determine effectiveness in delivering quality software. It is geared
to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any
organization, and if reasonably applied can be helpful. Organizations can receive
CMMI ratings by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic
efforts required by individuals to successfully
complete projects. Few if any processes in place;
successes may not be repeatable.

Level 2 - software project tracking, requirements management,


realistic planning, and configuration management
processes are in place; successful practices can
be repeated.

Level 3 - standard software development and maintenance processes


are integrated throughout an organization; a Software
Engineering Process Group is in place to oversee
software processes, and training programs are used to
ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes,


and products. Project performance is predictable,
and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The


impact of new processes and technologies can be
predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations


were assessed. Of those, 27% were rated at Level 1, 39% at 2,
23% at 3, 6% at 4, and 5% at 5. (For ratings during the period
1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5.) The median size of organizations was 100 software
engineering/maintenance personnel; 32% of organizations were
U.S. federal contractors or agencies. For those rated at
Level 1, the most problematical key process area was in
Software Quality Assurance.

ISO = 'International Organisation for Standardization' - The ISO 9001:2000


standard (which replaces the previous standard of 1994) concerns quality
systems that are assessed by outside auditors, and it applies to many kinds of
production and manufacturing organizations, not just software. It covers
documentation, design, development, production, testing, installation, servicing,
and other processes. The full set of standards consists of: (a)Q9001-2000 -
Quality Management Systems: Requirements; (b)Q9000-2000 - Quality
Management Systems: Fundamentals and Vocabulary; (c)Q9004-2000 - Quality
Management Systems: Guidelines for Performance Improvements. To be ISO
9001 certified, a third-party auditor assesses an organization, and certification is
typically good for about 3 years, after which a complete reassessment is
required. Note that ISO certification does not necessarily indicate quality
products - it indicates only that documented processes are followed.
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things,
creates standards such as 'IEEE Standard for Software Test Documentation'
(IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI
Standard 1008), 'IEEE Standard for Software Quality Assurance Plans'
(IEEE/ANSI Standard 730), and others.
ANSI = 'American National Standards Institute', the primary industrial standards
body in the U.S.; publishes some software-related standards in conjunction with
the IEEE and ASQ (American Society for Quality).
Other software development/IT management process assessment methods
besides CMMI and ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL,
MOF, and CobiT.

What steps are needed to develop and run software tests?

The following are some of the steps to consider:


Obtain requirements, functional design, and internal design specifications and
other necessary documents
Obtain budget and schedule requirements
Determine project-related personnel and their responsibilities, reporting
requirements, required standards and processes (such as release processes,
change processes, etc.)
Determine project context, relative to the existing quality culture of the
organization and business, and how it might impact testing scope, approaches,
and methods.
Identify the application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
Determine test approaches and methods - unit, integration, functional, system,
load, usability tests, etc.
Determine test environment requirements (hardware, software, communications,
etc.)
Determine testware requirements (record/playback tools, coverage analyzers,
test tracking, problem/bug tracking, etc.)
Determine test input data requirements
Identify tasks, those responsible for tasks, and labor requirements
Set schedule estimates, timelines, milestones
Determine input equivalence classes, boundary value analyses, error classes
Prepare test plan document and have needed reviews/approvals
Write test cases
Have needed reviews/inspections/approvals of test cases
Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking
processes, set up logging and archiving processes, set up or obtain test input
data
Obtain and install software releases
Perform tests
Evaluate and report results
Track problems/bugs and fixes
Retest as needed
Maintain and update test plans, test cases, test environment, and testware
through life cycle

Test plan

A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test
plan is a useful way to think through the efforts needed to validate the
acceptability of a software product. The completed document will help people
outside the test group understand the 'why' and 'how' of product validation. It
should be thorough enough to be useful but not so thorough that no one outside
the test group will read it. The following are some of the items that might be
included in a test plan, depending on the particular project:
Title
Identification of software including version/release numbers
Revision history of document including authors, dates, approvals
Table of Contents
Purpose of document, intended audience
Objective of testing effort
Software product overview
Relevant related document list, such as requirements, design documents, other
test plans, etc.
Relevant standards or legal requirements
Traceability requirements
Relevant naming conventions and identifier conventions
Overall software project organization and personnel/contact-info/responsibilities
Test organization and personnel/contact-info/responsibilities
Assumptions and dependencies
Project risk analysis
Testing priorities and focus
Scope and limitations of testing
Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
Outline of data input equivalence classes, boundary value analysis, error classes
Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
Test environment validity analysis - differences between the test and production
systems and their impact on test validity.
Test environment setup and configuration issues
Software migration processes
Software CM processes
Test data setup requirements
Database setup requirements
Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
Test automation - justification and overview
Test tools to be used, including versions, patches, etc.
Test script/test code maintenance processes and version control
Problem tracking and resolution - tools and processes
Project test metrics to be used
Reporting requirements and testing deliverables
Software entrance and exit criteria
Initial sanity testing period and criteria
Test suspension and restart criteria
Personnel allocation
Personnel pre-training needs
Test site/location
Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
Relevant proprietary, classified, security, and licensing issues.

Test case

A test case is a document that describes an input, action, or event and an


expected response, to determine if a feature of an application is working
correctly. A test case should contain particulars such as test case identifier, test
case name, objective, test conditions/setup, input data requirements, steps, and
expected results.
Note that the process of developing test cases can help find problems in the
requirements or design of an application, since it requires completely thinking
through the operation of the application. For this reason, it's useful to prepare test
cases early in the development cycle if possible.
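
One way the particulars listed above might be captured in a simple structure;
this is only a sketch and all field values are hypothetical:

# Sketch of a test-case record carrying the particulars listed above.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: str
    steps: list = field(default_factory=list)
    expected_result: str = ""

tc = TestCase(
    identifier="TC-042",
    name="Login with valid credentials",
    objective="Verify a registered user can log in",
    setup="Test user account exists; application is reachable",
    input_data="username=qa_user, password=secret",
    steps=["Open login page", "Enter credentials", "Click 'Log in'"],
    expected_result="User lands on the home page, logged in",
)
print(tc.identifier, "-", tc.name)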

3-Tier Application

Introduction
Why 3-tier
What is 3-tier-architecture
Advantages

Introduction

With the appearance of Local-Area-Networks, PCs came out of their


isolation, and were soon not only being connected mutually but also to servers.
Client/Server-computing was born.
Servers today are mainly file and database servers; application servers are the
exception. However, database-servers only offer data on the server;
consequently the application intelligence must be implemented on the PC
(client). Since there are only the architecturally tiered data server and client, this
is called 2-tier architecture. This model is still predominant today, and is actually
the opposite of its popular terminal-based predecessor that had its entire
intelligence on the host system.
One reason why the 2-tier model is so widespread, is because of the quality of
the tools and middleware that have been most commonly used since the 90’s:
Remote-SQL, ODBC, relatively inexpensive and well integrated PC-tools (like
Visual Basic, Power-Builder, MS Access, 4-GL-Tools by the DBMS
manufacturers). In comparison the server side uses relatively expensive tools. In
addition the PC-based tools show good Rapid-Application-Development (RAD)
qualities i.e. that simpler applications can be produced in a comparatively short
time. The 2-tier model is the logical consequence of the RAD-tools' popularity:
for many managers it was and is simpler to attempt to achieve efficiency in
software development using tools, than to choose the steep and stony path of
"brainware".

Why 3-tier?

Unfortunately the 2-tier model shows striking weaknesses that make the
development and maintenance of such applications much more expensive.
The complete development accumulates on the PC. The PC processes and
presents information which leads to monolithic applications that are expensive to
maintain. That’s why it’s called a "fat client".
In a 2-tier architecture, business logic is implemented on the PC. Even if the
business logic never makes direct use of the windowing system, programmers
have to be trained for the complex API under Windows.
Windows 3.X and Mac-systems have tough resource restrictions. For this reason
applications programmers also have to be well trained in systems technology, so
that they can optimize scarce resources.
Increased network load: since the actual processing of the data takes place on
the remote client, the data has to be transported over the network. As a rule this
leads to increased network stress.
How to conduct transactions is controlled by the client. Advanced techniques like
two-phase commit can't be used.
PCs are considered to be "untrusted" in terms of security, i.e. they are relatively
easy to crack. Nevertheless, sensitive data is transferred to the PC, for lack of an
alternative.
Data is only "offered" on the server, not processed. Stored-procedures are a form
of assistance given by the database provider. But they have a limited application
field and a proprietary nature.
Application logic can’t be reused because it is bound to an individual PC-
program.
The influences on change-management are drastic: due to changes in business
politics or law (e.g. changes in VAT computation) processes have to be changed.
Thus possibly dozens of PC-programs have to be adapted because the same
logic has been implemented numerous times. It is then obvious that each of these
programs in turn has to undergo quality control, because all programs are
expected to generate the same results again.
The 2-tier-model implies a complicated software-distribution-procedure: as all of
the application logic is executed on the PC, all those machines (maybe
thousands) have to be updated in case of a new release. This can be very
expensive, complicated, prone to error and time consuming. Distribution
procedures include the distribution over networks (perhaps of large files) or the
production of an adequate media like floppies or CDs. Once it arrives at the
user’s desk, the software first has to be installed and tested for correct execution.
Due to the distributed character of such an update procedure, system
management cannot guarantee that all clients work on the correct copy of the
program.
3- and n-tier architectures endeavour to solve these problems. This goal is
achieved primarily by moving the application logic from the client back to the
server.

What is 3- and n-tier architecture?

From here on we will only refer to 3-tier architecture, that is to say, at least 3-tier
architecture.
The following diagram shows a simplified form of reference-architecture, though
in principle, all possibilities are illustrated.

Client-tier

This tier is responsible for the presentation of data, receiving user events and controlling
the user interface. The actual business logic (e.g. calculating added value tax)
has been moved to an application-server. Today, Java applets offer an
alternative to traditionally written PC-applications.

Application-server-tier

This tier is new, i.e. it isn’t present in 2-tier architecture in this explicit form.
Business-objects that implement the business rules "live" here, and are available
to the client-tier. This level now forms the central key to solving 2-tier problems.
This tier protects the data from direct access by the clients.
Object-oriented analysis (OOA), on which many books have been written, aims
at this tier: it records and abstracts business processes as business objects.
This makes it possible to map out the application-server tier directly from the
CASE tools that support OOA.
Furthermore, the term "component" is also to be found here. Today the term
predominantly describes visual components on the client-side. In the non-visual area
of the system, components on the server-side can be defined as configurable
objects, which can be put together to form new application processes.

Data-server-tier

This tier is responsible for data storage. Besides the widespread relational
database systems, the databases of existing legacy systems are often reused here.
It is important to note that boundaries between tiers are logical. It is quite easily
possible to run all three tiers on one and the same (physical) machine. What
matters most is that the system is neatly structured, and that there is a
well-planned definition of the software boundaries between the different tiers.
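
A minimal sketch, with invented names, of the logical separation just described:
the client tier calls a business object in the application-server tier, which alone
consults the data tier (the VAT calculation mirrors the example used elsewhere
in this section):

# The client tier never touches the data tier directly: it calls a business
# object, which consults the data tier and returns a result.

# --- data-server tier (stand-in for a real database) -------------------
PRICES = {"item-1": 100.0, "item-2": 250.0}

def fetch_net_price(item_id):
    return PRICES[item_id]

# --- application-server tier (business object with the business rule) --
VAT_RATE = 0.20   # if the rate changes, only this tier changes

def gross_price(item_id):
    return fetch_net_price(item_id) * (1 + VAT_RATE)

# --- client tier (presentation only) ------------------------------------
def show_price(item_id):
    print(f"{item_id}: {gross_price(item_id):.2f} incl. VAT")

show_price("item-1")
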
The advantages of 3-tier architecture

As previously mentioned 3-tier architecture solves a number of problems that are


inherent to 2-tier architectures. Naturally it also causes new problems, but these
are outweighed by the advantages.

Clear separation of user-interface-control and data presentation from application-


logic. Through this separation more clients are able to have access to a wide
variety of server applications. The two main advantages for client-applications
are clear: quicker development through the reuse of pre-built business-logic
components and a shorter test phase, because the server-components have
already been tested.
Re-definition of the storage strategy won't influence the clients. RDBMSs offer a
certain independence from storage details for the clients. However, cases like
changing table attributes make it necessary to adapt the client’s application. In
the future, even radical changes, like let's say switching from an RDBMS to an
OODBS, won’t influence the client. In well designed systems, the client still
accesses data over a stable and well designed interface which encapsulates all
the storage details.
Business-objects and data storage should be brought as close together as
possible, ideally they should be together physically on the same server. This way
- especially with complex accesses - network load is eliminated. The client only
receives the results of a calculation - through the business-object, of course.
In contrast to the 2-tier model, where only data is accessible to the public,
business-objects can place application logic or "services" on the net. As an
example, an inventory number has a "test-digit", and the calculation of that digit
can be made available on the server.
As a rule servers are "trusted" systems. Their authorization is simpler than that of
thousands of "untrusted" client-PCs. Data protection and security is simpler to
obtain. Therefore it makes sense to run critical business processes, that work
with security sensitive data, on the server.
Dynamic load balancing: if bottlenecks in terms of performance occur, the server
process can be moved to other servers at runtime.
Change management: of course it’s easy - and faster - to exchange a component
on the server than to furnish numerous PCs with new program versions. To come
back to our VAT example: it is quite easy to run the new version of a tax-object in
such a way that the clients automatically work with the version from the exact
date that it has to be run. It is, however, compulsory that interfaces remain stable
and that old client versions are still compatible. In addition such components
require a high standard of quality control. This is because low quality
components can, at worst, endanger the functions of a whole set of client
applications. At best, they will still irritate the systems operator.
As shown on the diagram, it is relatively simple to use wrapping techniques in 3-
tier architecture. As implementation changes are transparent from the viewpoint
of the object's client, a forward strategy can be developed to replace legacy
systems smoothly. First, define the object's interface. However, the functionality is
not newly implemented but reused from an existing host application. That is, a
request from a client is forwarded to a legacy system and processed and
answered there. In a later phase, the old application can be replaced by a
modern solution. If it is possible to leave the business object’s interfaces
unchanged, the client application remains unaffected. A requirement for wrapping
is, however, that a procedure interface in the old application remains existent. It
isn’t possible for a business object to emulate a terminal. It is also important for
the project planner to be aware that the implementation of wrapping objects can
be very complex.

What should be done after a bug is found?

The bug needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested, and determinations
made regarding requirements for regression testing to check that fixes didn't
create problems elsewhere. If a problem-tracking system is in place, it should
encapsulate these processes. A variety of commercial problem-
tracking/management software tools are available.

The following are items to consider in the tracking process:


Complete information such that developers can understand the bug, get an idea
of its severity, and reproduce it if necessary.
Bug identifier (number, ID, etc.)
Current bug status (e.g., 'Released for Retest', 'New', etc.)
The application name or identifier and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics
Test case name/number/identifier
One-line bug description
Full bug description
Description of steps needed to reproduce the bug if not covered by a test case or
if the developer doesn't have easy access to the test case/test script/test tool
Names and/or descriptions of file/data/messages/etc. used in test
File excerpts/error messages/log file excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the problem
Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
Was the bug reproducible?
Tester name
Test date
Bug reporting date
Name of developer/group/organization the problem is assigned to
Description of problem cause
Description of fix
Code section/file/module/class/method that was fixed
Date of fix
Application version that contains the fix
Tester responsible for retest
Retest date
Retest results
Regression testing requirements
Tester responsible for regression tests
Regression testing results
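
One way such a bug record might be represented, carrying a subset of the items
listed above; this is only a sketch and all values are hypothetical:

# Sketch of a bug record for a problem-tracking process.
from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str
    status: str
    application: str
    version: str
    summary: str
    steps_to_reproduce: str
    severity: int          # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    assigned_to: str

bug = BugReport(
    bug_id="BUG-1234",
    status="New",
    application="Order Entry",
    version="2.3.1",
    summary="Save button disabled after editing an existing order",
    steps_to_reproduce="Open an order, change the quantity, observe the Save button",
    severity=2,
    reproducible=True,
    assigned_to="dev-team-orders",
)
print(bug.bug_id, bug.status, "severity", bug.severity)
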
A reporting or tracking process should enable notification of appropriate
personnel at various stages. For instance, testers need to know when retesting is
needed, developers need to know when bugs are found and how to get the
needed information, and reporting/summary capabilities are needed for
managers.

What is 'configuration management'?

Configuration management covers the processes used to control, coordinate,


and track: code, requirements, documentation, problems, change requests,
designs, tools/compilers/libraries/patches, changes made to them, and who
makes the changes.

What if the software is so buggy it can't really be tested at all?

The best bet in this situation is for the testers to go through the process of
reporting whatever bugs or blocking-type problems initially show up, with the
focus being on critical bugs. Since this type of problem can severely affect
schedules, and indicates deeper problems in the software development process
(such as insufficient unit testing or insufficient integration testing, poor design,
improper build or release procedures, etc.) managers should be notified, and
provided with some documentation as evidence of the problem.

How can it be known when to stop testing?

This can be difficult to determine. Many modern software applications are so


complex, and run in such an interdependent environment, that complete testing
can never be done. Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends

What if there isn't enough time for thorough testing?

Use risk analysis to determine where testing should be focused.


Since it's rarely possible to test every possible aspect of an application, every
possible combination of events, every dependency, or everything that could go
wrong, risk analysis is appropriate to most software development projects. This
requires judgement skills, common sense, and experience. (If warranted, formal
methods are also available.) Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance
expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?

What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the same
considerations as described previously in 'What if there isn't enough time for
thorough testing?' apply. The tester might then do ad hoc testing, or write up a
limited test plan based on the risk analysis.

How does a client/server environment affect testing?

Client/server applications can be quite complex due to the multiple dependencies


among clients, data communications, hardware, and servers, especially in multi-
tier systems. Thus testing requirements can be extensive. When time is limited
(as it usually is) the focus should be on integration and system testing.
Additionally, load/stress/performance testing may be useful in determining
client/server application limitations and capabilities. There are commercial tools
to assist with such testing.

How can World Wide Web sites be tested?

Web sites are essentially client/server applications - with web servers and
'browser' clients. Consideration should be given to the interactions between html
pages, TCP/IP communications, Internet connections, firewalls, applications that
run in web pages (such as applets, javascript, plug-in applications), and
applications that run on the server side (such as cgi scripts, database interfaces,
logging applications, dynamic page generators, asp, etc.). Additionally, there are
a wide variety of servers and browsers, various versions of each, small but
sometimes significant differences between them, variations in connection
speeds, rapidly changing technologies, and multiple standards and protocols.
The end result is that testing for web sites can become a major ongoing effort.
Other considerations might include:
What are the expected loads on the server (e.g., number of hits per unit time?),
and what kind of performance is required under such loads (such as web server
response time, database query response times). What kinds of tools will be
needed for performance testing (such as web load testing tools, other tools
already in house that can be adapted, web robot downloading tools, etc.)?
Who is the target audience? What kind of browsers will they be using? What kind
of connection speeds will they be using? Are they intra-organization (thus with
likely high connection speeds and similar browsers) or Internet-wide (thus with a
wide variety of connection speeds and browser types)?
What kind of performance is expected on the client side (e.g., how fast should
pages appear, how fast should animations, applets, etc. load and run)?
Will down time for server and content maintenance/upgrades be allowed? how
much?
What kinds of security (firewalls, encryptions, passwords, etc.) will be required
and what is it expected to do? How can it be tested?
How reliable are the site's Internet connections required to be? And how does
that affect backup system or redundant connection requirements and testing?
What processes will be required to manage updates to the web site's content,
and what are the requirements for maintaining, tracking, and controlling page
content, graphics, links, etc.?
Which HTML specification will be adhered to? How strictly? What variations will
be allowed for targeted browsers?
Will there be any standards or requirements for page appearance and/or
graphics throughout a site or parts of a site?
How will internal and external links be validated and updated? how often?
Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
How extensive or customized are the server logging and reporting requirements;
are they considered an integral part of the system and do they require testing?
How are cgi programs, applets, javascripts, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
Pages should be 3-5 screens max unless content is tightly focused on a single
topic. If larger, provide internal links within the page.
The page layouts and design elements should be consistent throughout a site, so
that it's clear to the user that they're still within a site.
Pages should be as browser-independent as possible, or pages should be
provided or generated based on the browser-type.
All pages should have links external to the page; there should be no dead-end
pages.
The page owner, revision date, and a link to a contact person or organization
should be included on each page.
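
A tiny sketch of one kind of automated check discussed above (verifying a page
responds and timing its response), using only Python's standard library; the URL
and the response-time threshold are placeholders:

# Request a page, verify the HTTP status, and time the response.
import time
import urllib.request

def check_page(url, max_seconds=2.0):
    start = time.time()
    with urllib.request.urlopen(url) as response:
        status = response.status
    elapsed = time.time() - start
    ok = (status == 200) and (elapsed <= max_seconds)
    print(f"{url}: status={status}, {elapsed:.2f}s, {'PASS' if ok else 'FAIL'}")

check_page("http://example.com/")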

What makes a good Software Test engineer?

A good test engineer has a 'test to break' attitude, an ability to take the point of
view of the customer, a strong desire for quality, and an attention to detail. Tact
and diplomacy are useful in maintaining a cooperative relationship with
developers, and an ability to communicate with both technical (developers) and
non-technical (customers, management) people is useful. Previous software
development experience can be helpful as it provides a deeper understanding of
the software development process, gives the tester an appreciation for the
developers' point of view, and reduces the learning curve in automated test tool
programming. Judgement skills are needed to assess high-risk areas of an
application on which to focus testing efforts when time is limited.

What makes a good Software QA engineer?

The same qualities a good tester has are useful for a QA engineer. Additionally,
they must be able to understand the entire software development process and
how it can fit into the business approach and goals of the organization.
Communication skills and the ability to understand various sides of issues are
important. In organizations in the early stages of implementing QA processes,
patience and diplomacy are especially needed. An ability to find problems as well
as to see 'what's missing' is important for inspections and reviews.

SOFTWARE LIFE CYCLE:

The life cycle begins when an application is first conceived and ends when it
is no longer in use. It includes aspects such as initial concept, requirements
analysis, functional design, internal design, documentation planning, test
planning, coding, document preparation, integration, testing, maintenance,
updates, retesting, phase-out, and other aspects.

SDLC (SOFTWARE DEVELOPMENT LIFE CYCLE)

SDLC has a total of seven phases:


• Initiate the Project (Initial Phase)
• Define the System (Analysis Phase)
• Design the system (after client approval)
• Build the System
• System Testing (Based on SRS – Software Requirements Specification)
• Deploy the system (Production)
• Support the system (Maintenance)

TESTING LIFE CYCLE:

1. Test Planning
2. Test Development
3. Test Execution
4. Test Results
5. Defects Generation
6. Reporting

Methodologies: What and Why?

Software engineering is the practice of using selected process techniques to


improve the quality of a software development effort. This is based on the
assumption, subject to endless debate and supported by patient experience, that
a methodical approach to software development results in fewer defects and,
therefore, ultimately provides shorter delivery times and better value. The
documented collection of policies, processes and procedures used by a
development team or organization to practice software engineering is called its
software development methodology (SDM) or system development life cycle
(SDLC).

Methodology as Risk Management

The challenge in selecting and following a methodology is to do it wisely -- to


provide sufficient process disciplines to deliver the quality required for business
success, while avoiding steps that waste time, squander productivity, demoralize
developers, and create useless administrivia. The best approach for applying a
methodology is to consider it as a means to manage risk. You can identify risks
by looking at past projects.

If your organization has been plagued by problems resulting from poor


requirements management, then a robust requirements management
methodology would be well advised. Once this problem has been solved,
through a repeatable process, the organization might then streamline its process,
while ensuring that quality is maintained.

Every step along the system development life cycle has its own risks and a
number of available techniques to improve process discipline and resulting
output quality. Moving through the development life cycle, you might encounter
the following major steps:

• Project charter and business case
• Definition of the business process and business requirements
• Documentation of user, functional and system requirements
• Top level architecture, technical approach, and system design
• System decomposition into component and unit specifications and design
• Coding, unit test planning, and unit test
• Generation of test data for unit testing and system testing
• System integration and testing
• Implementation, delivery and cut-over
• Training and user support
• System upgrades and routine software maintenance

In addition, you might have support activities throughout the development effort
such as:

• Configuration management (version identification, baseline management and change control)
• Requirements management and traceability
• Quality management (quality assurance, quality reviews, defect tracking)
• System engineering reviews (requirements review, prelim. and critical
design reviews, etc.)
• Support environment (development tools, libraries, files management,
data management)

Written guidance for all these steps would constitute the core of your
methodology. You can see how it wouldn't take long to fill a number of big
binders with development processes and procedures. Hence, the importance of
selecting processes wisely - to address known risks - keeping the methodology
streamlined, and allowing for some discretion on the part of the project team.
Waterfall Methodologies Summarized

Rather than try to give an all-encompassing definition for methodologies that
should be classified as waterfall approaches, it is easier to describe some common
characteristics. Primarily, a waterfall methodology structures a project into
distinct phases with defined deliverables from each phase. The phases are
always named something different, depending on which company is trying to
differentiate its own particular flavor, but the basic idea is that the first phase tries
to capture What the system will do (its requirements), the second determines
How it will be designed, in the middle is the actual programming, the fourth phase
is the full system Testing, and the final phase is focused on Implementation tasks
such as go-live, training, and documentation.
Waterfall Sequence (share of schedule): Define 15%, Design 15%, Code 35%, Test 30%, Implementation 5%

Waterfall Deliverables
Define: Requirements
Design: Screens, Database Objects, Test Plan
Code: UI, Logic, Reports
Test: Test Scripts, Defect Reports, User Feedback
Imp: Training, Documentation
Project Management (all phases): Project Charter, Status Reports, Change Requests

Typically waterfall methodologies result in a project schedule with 20-40% of the
time budgeted for the first two phases, 30-40% of the time to the programming,
and the rest allocated to testing and implementation time. The actual project
organization tends to be highly structured. Most medium to large size projects
will include a rigidly detailed set of procedures and controls to cover everything
from the types of communications to use in various situations, to authorizing and
tracking change orders, to the specific ways that defects are logged,
communicated, resolved, and re-tested.
Perhaps most importantly, waterfall methodologies also call for an evolution of
project staffing throughout the various phases. While typical consulting
companies will refer to the differences in staffing as simply “roles,” implying
that the same people could remain on the project and simply switch roles, the
reality is that the project staff constantly changes as it progresses. Reasons for
the change include economics, mentoring, and expertise - economics in the
sense that the project budget encourages the replacement of a relatively highly
paid architect with a lower paid staff programmer as soon as possible. On the
other hand, an architect with a particular skill set or an analyst with valuable
subject area knowledge may be demanded on another project. A fundamental
assumption is that the extensive project documentation and control procedures
enable relatively easy knowledge transfer to new project staff.
Waterfall Resources
Define: SMEs, Analysts, Proj. Mgr.
Design: SMEs, Analysts, Architects, Proj. Mgr.
Code: Architects, Coders, Proj. Mgr.
Test: Testers, Coders, Proj. Mgr.
Imp: Analysts, Coders, Proj. Mgr.

Waterfall Strengths

Most of the benefits from using a waterfall methodology are directly related to its
underlying principles of structure. These strengths include:
¦ Ease in analyzing potential changes

¦ Ability to coordinate larger teams, even if geographically distributed

¦ Can enable a precise dollar budget

¦ Less total time required from Subject Matter Experts

Because the requirements and design documents contain an abstract of the
complete system, the project manager can relatively quickly analyze what impact
a change will have on the entire system. An example might be if one developer
wanted to modify the fields in a database table or a class. The project manager
could look up what other components of the system rely on that particular table
or class and determine what side effects the change may have.
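As a rough illustration of this kind of impact analysis, the sketch below walks a dependency map to list everything that relies, directly or indirectly, on an item that is about to change. The component names and the map itself are hypothetical, not taken from any particular design document.

# Minimal sketch of a design-document dependency lookup (hypothetical data).
# Given a component that will change, list everything recorded as depending on it.

dependencies = {
    # component     -> components that rely on it
    "customer_table": ["customer_screen", "billing_report", "CustomerDAO"],
    "CustomerDAO":    ["order_service", "customer_screen"],
    "order_service":  ["order_screen"],
}

def impacted_by(component, dep_map):
    """Return every component directly or indirectly relying on `component`."""
    impacted, to_visit = set(), [component]
    while to_visit:
        current = to_visit.pop()
        for dependent in dep_map.get(current, []):
            if dependent not in impacted:
                impacted.add(dependent)
                to_visit.append(dependent)
    return sorted(impacted)

print(impacted_by("customer_table", dependencies))
# ['CustomerDAO', 'billing_report', 'customer_screen', 'order_screen', 'order_service']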
The same documents that take so much time to assemble at the front-end also
make dividing up and subsequently coordinating the work easier. Because the
design produced in a waterfall approach is so detailed, it is easier to ensure that
the pieces will integrate smoothly when the project nears the end of the
programming phase.
Even with a waterfall approach, the only way to ensure a precise up-front budget
cost is to have an external vendor submit a fixed bid for the remaining phases of
the project after one or two of the initial phases have been completed. In this
way, financial risk is contained even if the project takes longer than expected.
Perhaps one of the most compelling reasons for using the waterfall approach is
simply the relatively small amount of time required of the subject matter experts.
Because the SMEs in a project typically have other primary responsibilities, their
time is limited. The waterfall approach ensures that a significant involvement
from them is only required during the initial requirements phase as well as part of
the design, testing, and implementation phases. They have very little
involvement during the entire programming phase.
Waterfall Weaknesses

While waterfall has advantages, its highly structured approach also leads to
disadvantages such as the following:
¦ Lack of flexibility

¦ Hard to predict all needs in advance

¦ Intangible knowledge lost between hand-offs

¦ Lack of team cohesion

¦ Design flaws not discovered until the Testing phase

Even though the impact of changes can be analyzed more effectively using a
waterfall approach, the time required to analyze and implement each change can
be significant. This is simply due to the structured nature of the project, and this
is particularly acute when the changes are frequent or large.
Even with improvements such as wire-frame screen mockups and more
detailed flowcharts, the front-end planning process has a hard time predicting
the most effective system design up front. One of the most challenging factors
behind this difficulty is how unfamiliar most Subject Matter Experts are with
formal system design techniques. One complaint that I
have heard a lot is “the document looks impressive, but I don’t know if the system
will meet my needs until I see the actual screens.”
Even more disturbing is the inevitable loss of knowledge between the planning
and programming phases. Even with the most detailed documents, the analysts
and architects always have an implicit understanding of the project needs that
is very hard to transfer via paper documents. The information loss is
particularly harmful to the project when it is developing a relatively new system
as opposed to modifying an existing system.
Closely linked to the knowledge loss effect is the fact that the waterfall
methodology discourages team cohesion. Many studies have found that truly
effective teams begin a project with a common goal and stay together to the end.
The tendency to switch out project staff from phase to phase weakens this
overall team cohesion. In fact, it is common for the project manager to be the only
person that sees a project from beginning to end. The effect on team productivity
is very hard to quantify, but may be illustrated with the following question: Would
you have a passion for quality if you knew that someone else would be
responsible for fixing your document or code in the next phase?
The most significant weakness is the possibility that a poor design choice will
not be discovered until the final phases of testing or implementation. The risk of
this occurring increases as project size and duration go up. Even dedicated
and competent people make simple mistakes. In the context of the rigid waterfall
timetable, mistakes made in the master design may not be discovered until six or
nine months of programming have been completed and the entire system is
being tested.
Iterative Methodologies Summarized

The iterative family of methodologies shares a common emphasis on highly
concentrated teams with minimal structure and access to constant feedback from
the Subject Matter Experts. While it may appear that they lack design or testing,
these types of methodologies actually place a great deal of emphasis on them.
They just do it in a different way. Typically, a project will begin with the
integrated project team being assembled and briefed on the project objectives.
The team will consist of all of the essential roles from the very beginning and
each member may actually play multiple roles. Rather than a distinct
progression through phases, the iterative methodology emphasizes creating a
series of working prototypes for evaluation by the SMEs until the objectives are
accomplished and the system is ready for final release. During the process, it is
critical for the actual project lead as well as the senior members of the team to
balance the SME requests against the overall system constraints and platform
limitations to ensure that quality and performance objectives can be met. It is the
development team’s responsibility to offer constructive feedback to the SMEs in
order to suggest alternatives and work together to the best mutual solution.
Where possible, individual team members will be given complete ownership of a
particular component and charged with ensuring its usability, quality, and
performance. The senior team members are responsible for enforcing quality
and consistency standards. Even on large projects, the initial planning will
consist of only a broad outline of the business objectives and creating a
framework for the overall project components. In some cases, a set of core
features for the entire system will be built initially and subsequent features added
as the project progresses. In other cases, just certain modules will be built
entirely during the early part of the project and other components added over
time.

Iterative Sequence (approximate share of schedule): Iteration 1 20-25%, Iteration 2 20%, Iteration 3 25-30%, Iteration 4 25%, Release 5%
Iterative Resources
Iteration 1 through Release: SMEs, Analysts, Architect, Coders, Proj. Mgr. (the same team stays through every iteration)

Iterative Deliverables
Iterations 1-4: Tangible Software
Release: Improved Software, Training, Documentation
Project Management (all iterations): Project Charter, Status Reports, Budget Control, Timeline Control

Iterative Strengths

Many of the strengths of the iterative system are listed below:


¦ Rapid feedback from actual users

¦ Flexibility to address evolving requirements

¦ Design flaws discovered quickly

¦ Easy to roll-out new functionality in stages

¦ Higher motivation and greater productivity

¦ Very little knowledge loss between phases

Feedback from the Subject Matter Experts or users can be based on an actual
working prototype of the system relatively early in the project life cycle. This
enables the SME to base his or her feedback on actually working with a limited
version of the final product. Much like it is easier to decide if a product meets your
needs if you can examine it in the store than if someone were to just describe it
to you, the SME is instantly able to identify potential problems with the
application as the developer is interpreting his requirements before too much
time has passed.
Since the development team receives feedback at early stages in the overall
development process, changes in requirements can be more easily incorporated
into the finished product. More importantly, if the SME determines that a feature
would not be as valuable, it can be omitted before too much development time
has been spent developing or integrating the particular component into the overall system.
In a similar way, since the team is deploying actual working prototype versions of
the application along the way, a flaw in the design should become more apparent
earlier in the project schedule. Instead of discovering a potential problem only
after the system goes to full-scale testing, more design flaws can be addressed
before they impact other features and require significant effort to correct.
Because each iteration actually functions (sometimes to a limited degree),
deploying parts of the system in a staged roll-out becomes much easier. Using
an iterative methodology, the team simply stabilizes an earlier iteration of the
component, collaborates with the SME to ensure it is stable and rolls it out.
Another advantage of doing a staged roll-out in this way is that actual production
use will generate more improvement suggestions to be incorporated in
subsequent iterations of the same component and/or other components.
The team approach stressed in the iterative methodology increases overall
motivation and productivity. Because the same people are involved from
beginning to end, they know that the design choices made will ultimately affect
their ability to successfully complete the project. Productivity will be enhanced
because of the sense of ownership the project team has in the eventual result.
While it may seem like the “empowerment” fad, many studies have found that a
team charged with a common goal tends to be much more productive than
groups of people with individual incentives and shifting assignments. One
example of such a study is Groups that Work by Gerard Blair.
The fact that an integrated team maintains a thorough understanding of the
project is a more tangible benefit. This effect arises simply by having the same
individuals involved from the very beginning and listening first hand to the
Subject Matter Experts describe their needs and objectives. The subsequent
feedback during each iteration of the project builds upon the initial understanding.
Since the same person is listening to the needs and writing the code, less time
needs to be spent authoring documents to describe those requirements for
eventual hand-off. This translates into more time spent writing and testing the
actual software.
Iterative Weaknesses

The drawbacks to using an iterative approach are worth considering and should
be weighed carefully when deciding on a methodology for a new project. Some
of the more serious weaknesses include:
¦ Difficulty coordinating larger teams
¦ Can result in a never-ending project if not managed properly

¦ Tendency to not document thoroughly

¦ Predicting the precise features to be accomplished in a fixed time/budget

Iterative projects tend to be most effective with small, highly skilled teams. It is
much more difficult to ensure that the components mesh together smoothly
across larger, geographically distributed projects. While steps can be taken to
minimize the chances of failure, coordinating large iterative development efforts
is typically very hard to accomplish effectively because of the lack of detailed
planning documents.
Because there are no specific cut-off milestones for new features, an iterative
project runs the risk of continuing into perpetuity. Even though one of the
strengths is the ability to react to changing business needs, the project leader
must determine when the major business needs have been met. Otherwise, the
project will continue to adapt to ever-changing business needs and the work
will never end. This will result in never really deploying a finished product to full
production use. This is a risk even in a staged roll-out situation because there
are always improvements possible to any software.
In any software project, there is always the tendency to borrow time from the final
system documentation tasks to resolve more defects or polish certain features
more. This risk increases on iterative projects because there is usually no
scheduled documentation period. The result is a system that is very hard to
maintain or enhance.
In a similar way, in an iterative project it is much easier to fix a definite project
schedule or dollar budget than to determine exactly which features can be
built within that timeline. This is simply due to the fact that the features change
based on user feedback and the evolution of design.
Conclusion

I have clearly presented the tradeoffs between two basic approaches to software
development in order to show that no methodology is universally superior.
Instead, the approach that you should take on your next project should depend
on its particular needs and the constraints that you have to work with. While
certainly not an exhaustive reference on the subject of how a particular
methodology is structured, my purpose was to help you become more familiar
with the strengths and weaknesses inherent in each of the large schools of
thought prevalent in the software development community. Based on the issues
discussed, a few basic guidelines that may help point you in the right direction
are listed below. Keep in mind that no methodology should ever be considered a
substitute for ensuring project members have the proper experience and skillset
for the task to be accomplished.
¦ The iterative methodology is usually better for new concepts

¦ Waterfall is usually better for modifications to existing systems or building large-scale systems after proof-of-concept prototypes have been established

¦ However, some situations will require a hybrid approach

Spiral Methodology:

While the waterfall methodology offers an orderly structure for software
development, demands for reduced time-to-market make its strictly sequential
steps inappropriate. The next evolutionary step from the waterfall is where the various
steps are staged for multiple deliveries or handoffs. The ultimate evolution from
the waterfall is the spiral, taking advantage of the fact that development projects
work best when they are both incremental and iterative, where the team is able to
start small and benefit from enlightened trial and error along the way.

The spiral methodology reflects the relationship of tasks with rapid prototyping,
increased parallelism, and concurrency in design and build activities. The spiral
method should still be planned methodically, with tasks and deliverables
identified for each step in the spiral.
Documentation:

The reality is that increased processes usually result in increased
documentation. An improved process produces intermediate work products that
represent the elaboration of the product design at each step in the development
life cycle. Where possible, documentation should be generated using automated
tools, so outputs can contribute to generation of code structures or help generate
the code itself.

The difference between hacking and software engineering is professional
discipline applied with common sense. Software quality, reliability, and
maintainability are enhanced by having good documentation for requirements,
architecture, interfaces, detailed design, well-commented code, and good test
procedures. Requirements documentation practices should facilitate your
customer's understanding and review of the real requirements. Software project
planning should include estimating the time and resources to produce, review,
approve, and manage such documentation products.

Sample Software Documentation Work Products


Sample Test Plan

TABLE OF CONTENTS

1. Introduction
1.1 Document Purpose
1.2 Objectives
2. Project Scope
2.1 In Scope
2.2 Out Of Scope but Critical to Project Success
2.3 Out of Scope
3. Project Resources
4. Test Strategies/Techniques
4.1 Test Design
4.2 Test Data
5. Automation Coding Strategy
6. Test Suite Backup Strategy
7. Test Suite Version Control Strategy
8. Metrics Table
9. Project Tasks/Schedule
10. Tool Inventory
11. Hardware/Software Configuration
12. Naming Conventions
13. Defect Responsibility/Resolution
14. Exit Criteria
15. Goals and Deliverables
16. Glossary of Standard Terms
Introduction
Document Purpose

Document overview; high-level summary of major issues addressed.


This Test Plan reviews:
Existing project information.
Business Requirements and critical transactions to be tested.
Testing types and strategies to be implemented.
A proposed testing schedule.
Objectives
State the objective of the testing project, its duration, and its justification. General
comments concerning the objective of testing are appropriate (e.g. make the QA
function more efficient; lower testing cycle time; improve software quality;
enhance the testing process).
Project Scope
In Scope
State scope in detail and duration of process.
Out Of Scope but Critical to Project Success
State any out-of-scope critical project dependency. E.g.: Database snapshots for
test system that accurately reflect current user population.
Out of Scope
State in detail any out-of-scope activities. (Ex., Performance, stress, and volume
testing (beyond the gathering of timing information from automated script
executions) are out of project scope.)
Project Resources
Table 3.1. Project Roles and Responsibilities (Role; Responsibilities; Resource Name(s) assigned per project)

Testers: Plan testing activities; execute Test Cases; automate Test Cases; find, report and track defects; measure test effort; analyze results.
Developers: Deliver complete builds of the application; provide Testers with feedback regarding changes and new functionality; provide expertise and knowledge of the application-under-test; eliminate agreed-upon defects.
Business Analysts: Interview Users; create Business Requirements; create Test Scenarios and Test Cases.
Users: Describe and review Business Requirements; describe and review user profiles; perform User Acceptance Testing (UAT).
DBA: Provide access rights to the database; assist with extraction of data for testing purposes; provide a stable testing environment; assist with returning the database instance to a known preferred state; provide troubleshooting and knowledge.
Network Administrator: Provide network access privileges; general troubleshooting and knowledge.
Desktop Administrators: Installation of software; troubleshooting of hardware/software; information regarding the standard desktop.
Management: High-level problem solving; mediation of issues; interface of activities with different business units.
Test Strategies/Techniques
Test Design
Describe test types that are relevant to the project. Provide justification for their
relevance.

Table 4.1. Summary of Test Types


Test Type: Definition

Unit Test: Verifies the program (or module) logic and is based on knowledge of the program structure. Programmers perform this test type using the White Box technique.
Integration Test: Verifies the entire system’s functionality (including feeds to and from the system) according to the business and design specifications.
Business Requirements Test: Verifies that the specific requirements of the user are met. Also known as Business Rules testing.
Acceptance Testing: Verifies that the system meets the initial objectives and the user’s expectations. Used to prove that the system works. Known as positive testing.
Regression Testing: Verifies that fixes/modifications are correct and that no other parts of the system have been affected.
System Test: Testing the application architecture in a production-simulated environment for normal and worst-case situations. It includes:
  Volume testing - determine whether the program can handle the required volume of data, requests, etc.
  Load testing - identify peak load conditions at which the program will fail to handle required processing loads within the required time span.
  Performance testing - determine whether the program meets its performance requirements.
  Resource Usage testing - determine whether the program uses resources (memory, disk space, etc.) at levels which exceed expectations.
  Interoperability testing - assure the application can co-exist on the desktop with other running applications. Also known as Compatibility Testing.
  Security testing - show that the program’s security requirements have been met.
  Concurrency testing - verify that the system does not corrupt data when two or more users attempt to update the same record at the same time, or when two or more users update different records at the same time and set a unique field to the same value.
Graphical User Interface (GUI) Test: Verifies GUI features and elements and compares them to GUI standards and the test design.

How will test types proposed above be tested?

Table 4.2. Use Case listing with brief description and Test Case mapping.
Use Case ID Description Test Case
UC-1 Use Case 1 description TC-1a
TC-1b

Table 4.3. Test Case listing with mapping to generative Use Case, description
and Requirement
reference.
Test Case ID Use Case ID Description Requirement
TC-1a UC-1 Test Case 1a description R1.1-R5.3
TC-1b UC-1 R6.1-R10.3
TC-1c UC-1 R10.3-R11
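To show how these mappings can be put to work, here is a small illustrative sketch that builds a requirement-to-test-case coverage view and flags requirements with no covering test case. The requirement and test case IDs echo the sample tables above but the data itself is hypothetical.

# Illustrative sketch: derive a coverage view from Use Case / Test Case /
# Requirement mappings (IDs mirror the sample tables; the data is hypothetical).

test_cases = {
    "TC-1a": {"use_case": "UC-1", "requirements": ["R1.1", "R2.4", "R5.3"]},
    "TC-1b": {"use_case": "UC-1", "requirements": ["R6.1", "R10.3"]},
    "TC-1c": {"use_case": "UC-1", "requirements": ["R10.3", "R11"]},
}
all_requirements = ["R1.1", "R2.4", "R5.3", "R6.1", "R10.3", "R11", "R12"]

# Requirement -> test cases that exercise it
coverage = {req: [] for req in all_requirements}
for tc_id, tc in test_cases.items():
    for req in tc["requirements"]:
        coverage[req].append(tc_id)

uncovered = [req for req, tcs in coverage.items() if not tcs]
print("Coverage:", coverage)
print("Requirements with no test case:", uncovered)   # ['R12'] in this sample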

Test Data
Description of data sets to be used in testing, origin of data sets, purpose for
using each set (e.g. different user data for different user permissions), where
data sets were obtained, whose expertise guided data set selection, etc.

Automation Coding Strategy

This section describes the automation coding strategy that will be used for every
test script: Generic examples follow.
Automation of the test suite for the XX application will be performed using XX
Software’s XX suite (automation tool: XX; scripting language: XX). The
automation coding strategy that will be used in test suite building will include the
following rules:
Start and Stop Point: All Test Script navigation will start and finish on the XX
window/page of the XX application.
Browser Caption Verifications: Browser Captions will be verified on every window
that is encountered in the application. The execution of these verifications will
occur immediately after each window is loaded.
Object Properties: Properties of objects that must be verified will be retrieved
from application objects using the test tool’s data capture functionality. The
retrieved data will then be compared against validated data in test suite files.
Results will be output to the test log.
Maintainability: Scripting will adhere to modular coding practices and will
follow the strategy described above.
Test suite builds will employ RTTS’ proprietary language extension (rttsutil.dll).
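A minimal sketch of such a script is shown below, assuming Selenium WebDriver stands in for the unnamed automation tool; the start page URL, element IDs, captions, and expected values are placeholders, not part of this plan.

# Sketch of one modular automated test script following the strategy above,
# assuming Selenium WebDriver as the automation tool (hypothetical choice).
import logging
from selenium import webdriver
from selenium.webdriver.common.by import By

log = logging.getLogger("test_suite")
logging.basicConfig(level=logging.INFO)

START_PAGE = "https://example.test/login"   # agreed start and stop point for every script

def verify_caption(driver, expected_caption):
    """Browser caption verification, executed immediately after each page loads."""
    ok = driver.title == expected_caption
    log.info("Caption check '%s': %s", expected_caption, "PASS" if ok else "FAIL")
    return ok

def verify_property(driver, element_id, attribute, expected):
    """Retrieve an object property and compare it against validated data."""
    actual = driver.find_element(By.ID, element_id).get_attribute(attribute)
    ok = actual == expected
    log.info("Property %s.%s expected '%s' got '%s': %s",
             element_id, attribute, expected, actual, "PASS" if ok else "FAIL")
    return ok

def test_login_page_defaults():
    """Start on the agreed page, run the verifications, and finish on the same page."""
    driver = webdriver.Chrome()
    try:
        driver.get(START_PAGE)
        verify_caption(driver, "Login - Example Application")
        verify_property(driver, "username", "value", "")
        driver.get(START_PAGE)          # return to the start/stop point
    finally:
        driver.quit()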
Test Suite Backup Strategy
List all paths to test artifacts here.
How will test suite data (code, external files, tool data, etc.) be backed up for
storage?
How often?
Where will backup location be?
How accessible will backups be?
How many backups will be kept at any given time?

How long will backup data be kept? Will it be archived?


Test Suite Version Control Strategy
As test suites are modified for each build, how will test suite version
control/change management be addressed?
Will an external software tool be used for version control?
Will there be a need to run archived test suites against old builds of the
application? If so, how will this be facilitated?
Metrics Table
A central part of test planning is the gathering of metrics. An accurate collection
of metrics for all key project activities provides an assessment of the total effort
required for the project.

Table 8.1. Project Metrics


Activity Metric
Interview with a knowledgeable User to
characterize one user transaction
Walkthrough of the valid test case
Creation of a written test case by a business
analyst/SME
Automation and debugging of the script reflecting
the test case
Extraction of one requirement from requirements
documentation
Extraction of one requirement from user guide
documentation
Extraction of one requirement from release notes
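As a hypothetical example of how a completed metrics table feeds an effort estimate, the sketch below multiplies an assumed hours-per-activity metric by assumed planned counts; all of the numbers are illustrative, not measured values.

# Hypothetical example: once a metric (hours per unit of activity) has been
# measured, total project effort can be estimated from the planned counts.

metrics_hours = {                       # hours per single occurrence (illustrative)
    "user interview per transaction": 1.0,
    "test case written by analyst/SME": 1.5,
    "script automated and debugged": 4.0,
    "requirement extracted from documentation": 0.25,
}
planned_counts = {
    "user interview per transaction": 20,
    "test case written by analyst/SME": 120,
    "script automated and debugged": 120,
    "requirement extracted from documentation": 300,
}

total_hours = sum(metrics_hours[a] * planned_counts[a] for a in metrics_hours)
print(f"Estimated test effort: {total_hours:.0f} hours")   # 755 hours with these numbers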

Project Tasks/Schedule

Table 9.1. Project Schedule


Task: Comments
Test Plan Completed: None
Test Environment Prepared: Installation of Automated Tool
Requirements Processed by Tool: Sectioned by user paths through the application
Test Cases Planned: One per requirement
Test Cases Created: One per requirement
Test Cases Recorded and Executed: Recorded in Tool, executed against each build and release of PROJECT
Defects submitted and tracked: Submitted and tracked in Defect Tracking Tool
Test Cycle Evaluation: Continuous effort
Test Suite Backup Strategy: Continuous effort
Test Suite Version Control: Continuous effort
(The Resources and Projected Completion columns are left blank, to be filled in per project.)

Tool Inventory

Table 10.1. Software tools to be used in the Automated Testing of Project


Function Tool Name Vendor Version

Project Administration

Test Management

Capture/Playback

Defect/Issue Tracking

Requirements
Management

Team
Communications
(email, WebEx)

Utilities (RTTS
Utilities)

Hardware/Software Configuration

Table 11.1. Hardware/Software


System Resources
Resource Details
Test PC(s)
Network OS
Communication Protocol
Server – Database
Server - Web
Applications Server
Database
Automation Software
Other Software
Front-End Development
Tools
Naming Conventions
All Test Cases and Test Scripts created for this project will adhere to the
following naming convention. Each Test Script will have the same name as its
respective Test Case. We will use the following scheme, based upon … :
The name is built from up to six character positions plus an optional numeric counter:
FIELD 1: The initial characters describe the stream by name (NC = Name Clearance; QBC = Quote, Binder, Cert stream).
Separation character: Underscore.
FIELD 2: Area of the application where the transaction is performed.
FIELD 3: Type of transaction (I = Issue; R = Release; Q = Quote).
Numeric counter: 1 through 9 (only used in the case of more than one script of the same type and name).

Field 1 represents the defined user stream through PROJECT by name. This
section varies in length from one to three characters.
Separation Character is an underscore.
Field 2 represents the section of PROJECT being tested.
Field 3 represents the type of transaction being tested.
Additional Character (if needed) represents a numeric counter for multiple scripts
of the same type and name.
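A small sketch of the convention follows, for illustration only; the helper names and the validation pattern are assumptions, and only the stream and transaction codes come from the scheme above.

# Sketch of the naming scheme described above (helper names and regex are assumptions).
import re

STREAMS = {"NC": "Name Clearance", "QBC": "Quote, Binder, Cert stream"}
TRANSACTIONS = {"I": "Issue", "R": "Release", "Q": "Quote"}

def build_name(stream, area, transaction, counter=None):
    # Field 1 (stream) + underscore + Field 2 (area) + Field 3 (transaction) [+ counter]
    name = f"{stream}_{area}{transaction}"
    if counter is not None:
        name += str(counter)            # only used when more than one script shares the name
    return name

NAME_PATTERN = re.compile(r"^[A-Z]{1,3}_[A-Z]+[IRQ][1-9]?$")

def is_valid(name):
    return bool(NAME_PATTERN.match(name)) and name.split("_")[0] in STREAMS

print(build_name("QBC", "B", "I", 2))             # QBC_BI2
print(is_valid("QBC_BI2"), is_valid("XYZ_BI2"))   # True False (unknown stream)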

Defect Responsibility/Resolution

Possible defects identified through automated or manual testing will be discussed
with development team members and/or the Project Manager to verify that the
observed behavior constitutes a defect. Identified defects will be logged in defect
tracking software. Defects found manually will be coded into relevant automated
test scripts for inclusion in future regression testing. Once the development team
has corrected a defect, the defect will be retested using the same Test Script that
detected the defect. Validated fixes will be entered into the defect-tracking tool.
Accurate defect status data will be maintained in the defect-tracking tool by …. In
order to preserve data quality in the defect tracking process, … will serve as
gatekeeper for the defect database. Responsibilities include: evaluation of all
reported defects to verify the conditions under which they occur; reproducibility of
reported defects; accuracy of defect descriptions; uniqueness of logged defects.
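One possible way to picture this workflow is the sketch below; the status names and transitions are assumptions used for illustration, not statuses defined by this plan.

# Illustrative sketch of the defect life cycle described above (status names
# and transitions are assumptions, not taken from the plan itself).
ALLOWED_TRANSITIONS = {
    "New":      {"Open", "Rejected"},   # gatekeeper verifies the report and accepts/rejects
    "Open":     {"Fixed"},              # development corrects the defect
    "Fixed":    {"Closed", "Reopened"}, # retested with the same Test Script that found it
    "Reopened": {"Fixed"},
    "Closed":   set(),
    "Rejected": set(),
}

def move(defect, new_status):
    """Apply a status change only if the defect-tracking workflow allows it."""
    if new_status not in ALLOWED_TRANSITIONS[defect["status"]]:
        raise ValueError(f"{defect['id']}: {defect['status']} -> {new_status} not allowed")
    defect["status"] = new_status
    return defect

defect = {"id": "DEF-042", "status": "New", "script": "QBC_BI2"}
for step in ("Open", "Fixed", "Closed"):
    move(defect, step)
print(defect)   # {'id': 'DEF-042', 'status': 'Closed', 'script': 'QBC_BI2'}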

Exit Criteria

The following exit criteria will be used for each stage of the testing process.
Testing can proceed to the next stage of the process when a sufficient proportion
of the current stage has been completed (for example, test case preparation
need not be completed before automated coding begins). The end of the project
should satisfy all exit criteria.

Stage 1: Test Process Assessment – Delivery of a written assessment of the
current test process with recommendations for improvement.
Stage 2: Test Planning Stage – Test Plan delivery.
Stage 3: Test Design Stage – The application hierarchy, requirements hierarchy,
defined transactions, and detailed, written test cases approved.
Stage 4: Test Automation, Execution, and Defect Tracking Stage – 100% of test
cases are scripted and executed, 100% of produced documents are verified, and
100% of defects are retested and removed.
Stage 5: Evaluation and Improvement – Automated suite evaluation and
improvement.

Goals and Deliverables

Sample generic goals and deliverables follow.


Goals
The following list describes the defined goals for the test process:
To accomplish all tasks described in this test plan.
To install a measurable, improvable, repeatable, and manageable test process at
Client Company.
To decrease the time necessary to test new builds of Client Company’s
PROJECT.
To verify the functionality and content of the current version of the PROJECT
application.
To reduce the frequency of error associated with manual testing.
To find and successfully track 100% of defects present along the user path
defined in this plan.

Deliverables
The following list describes the defined deliverables for each stage of the testing
process:

Test Process Assessment - An assessment of the current test process with
recommendations for improvement.
Test Planning Stage - A complete Test Plan, including preliminary Test
Requirements.
Test Design Stage - Test Cases describing input, conditions, and expected
output for each requirement, verified Test Requirements.
Test Automation, Execution, and Defect Tracking stage – Test Scripts, logged
test results, defect/issue reports.
Evaluation and Improvement – Metrics proving the efficiency and benefit of
automated testing, Test Cycle Evaluation, Project summary/evaluation.

Glossary of Standard Terms


Table 16.1. Glossary
Term Definition
Test Scenario A path through an application to elicit the normal functioning of the
application. The path may be a user path, a path defined by
specific requirements or a path examining back-end functionality.
Examples: ‘make a deposit’ (path=common user path); ‘send
request to server for cost of mailing a package from point A to
point B’ (path=back-end path).
Test Case A text document that states the objectives and details of a test
scenario: the steps taken, specific test data used, test conditions,
and expected results.
Test Script A script containing the Automation Tool code that executes the
Test Scenario described in the corresponding Test Case.

TESTING TYPE DESCRIPTION


XML Testing: Validation of XML data content on a transaction-by-transaction basis. Where desirable, validation of formal XML structure (metadata structure) may also be included.
Java Testing (EJB, J2EE): Direct exercise of class methods to validate that both object properties and methods properly reflect and handle data according to the business and functional requirements of the layer. Exercise of transactions at this layer may be performed to measure both functional and performance characteristics.
Data Integrity Testing: Validation of system data at all data capture points in a system, including front-end, middle- or content-tier, and back-end database. Data integrity testing includes strategies to examine and validate data at all critical component boundaries.
GUI Testing: Validation of GUI characteristics against GUI requirements.
Issue/Defect Tracking: Tracking software issues and defects is at the core of the software quality management process. Software quality can be assessed at any point in the development process by tracking numbers of defects and defect criticality. Software readiness-for-deployment can be analyzed by following defect trends for the duration of the project.
Requirements Management: Requirements both define the shape of software (look-and-feel, functionality, business rules) and set a baseline for testing. As such, requirements management, or the orderly process of gathering requirements and keeping requirements documentation updated on a release-by-release basis, is critical to the deployment of quality software.
Interoperability Testing: Validation that applications in a given platform configuration do not conflict, causing loss of functionality.
Functional Testing: Validation of business requirements, GUI requirements and data handling in an application.
Security Testing: Validation that the security requirements of a system have been correctly implemented, including resistance to password cracking and Denial of Service (DoS) attacks, and that known security flaws have been properly patched.
Business Rules Testing: Validation that business rules have been properly implemented in a system, enforcing correct business practices on the user.
COM+ Testing: Direct exercise of COM methods to validate that both object properties and methods properly reflect and handle data according to the business and functional requirements of the COM layer. Exercise of transactions at this layer may be performed to measure both functional and performance characteristics.
Integration Testing: Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.
Network Latency Modeling: Analysis of the fundamental amount of time it takes a given message to traverse a given distance across a specific network. This factor influences all messages that traverse a network, and is key in modeling network behavior.
Transaction Characterization: Determining the footprint of business transactions. This includes bandwidth on the network, and CPU and memory utilization on back-end systems. Additionally used in Network Latency Modeling and Resource Usage Testing.
Load/Scalability Testing: Increase load on the target environment until requirements are exceeded or a resource is saturated. This is usually combined with other test types to optimize performance.
Performance Testing: Determining if the test environment meets requirements at set loads and mixes of transactions by testing specific business scenarios.
Stress Testing: Exercising the target system or environment at the point of saturation (depletion of a resource: CPU, memory, etc.) to determine if the behavior changes and possibly becomes detrimental to the system, application or data.
Configuration Testing: Encompasses testing various system configurations to assess the requirements and resources needed.
Volume Testing: Determining the volume of transactions that a complete system can process. Volume Testing is conducted in conjunction with Component, Configuration and/or Stress Testing.
Resource Usage Testing: Multi-user testing conducted beyond Transaction Characterization to determine the total resource usage of applications and subsystems or modules.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Infrastructure Testing: Verifying and quantifying the flow of data through the environment infrastructure.
Component Testing: The appropriate tests are conducted against the components individually to verify that each individual component can support its function without failure. This testing is typically conducted while the environment is being assembled to identify any weak links.
Fail over Testing: In environments that employ redundancy and load balancing, Fail over Testing analyzes the theoretical fail over procedure, and tests and measures the overall fail over process and its effects on the end-user.
Reliability Testing: Once the environment or application is working and optimized for performance, a longer period (24 to 48 hour) Reliability Test will determine if there are any long-term detrimental issues that may affect performance in production.
SLA Testing: Specialized business transaction testing to measure Service Level Agreements with third-party vendors. The typical agreement guarantees a specified volume of activity over a predetermined time period with a specified maximum response time.
Web Site Monitoring: Monitoring business transaction response times after production deployment to ensure end-user satisfaction.

Test Case

How to Write Better Test Cases


Poor test cases do indeed expose you to considerable risk.
They may cover the requirements in theory, but are hard to test and have
ambiguous results.

Better tests have more reliable results as well as lowering costs in three
categories:
1. Productivity - less time to write and maintain cases
2. Testability - less time to execute them
3. Scheduling reliability - better reliability in estimates

Elements of test cases


A test case is a set of actions with expected results based on
requirements for the system. The case includes these elements:
• The purpose of the test or description of what requirement is being
tested
• The method of how it will be tested
• The setup to test: version of application under test, hardware,
software, operating system, data files, security access, time of day,
logical or physical date, prerequisites such as other tests, and any
other setup information pertinent to the requirement(s)
being tested
• Actions and expected results, or inputs and outputs
• Any proofs or attachments (optional)
These same elements need to be in test cases for every level of testing –
• Unit
• Integration
• System, or acceptance testing.
They are valid for functional, performance, and usability testing.
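For illustration, the elements above could be captured in a structured record such as the following sketch; the field names and sample values are assumptions, not a prescribed format.

# One way to hold the elements of a test case in a structured form (illustrative only).
test_case = {
    "id": "TC-1a",
    "purpose": "Verify a standard deposit updates the account balance (R1.1)",
    "method": "Step-by-step functional test through the GUI",
    "setup": {
        "build": "2.3.1",
        "os": "Windows",
        "data": "customer_accounts_snapshot",
        "prerequisites": ["TC-0_login"],
    },
    "steps": [
        {"action": "Open account 1001 and enter a deposit of 50.00",
         "expected": "Confirmation message is displayed"},
        {"action": "Reopen account 1001",
         "expected": "Balance has increased by 50.00"},
    ],
    "proofs": ["screenshot of confirmation message"],   # optional attachments
}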

An alternate description of test cases is that the description, purpose, and
setup are the case or Specification.

The steps to accomplish it are called a script.

Quality of test cases


The quality of a written test case is objective and measurable.
It is simple to set up an objective checklist of the structural elements of
test cases -- purpose, method, setup, inputs and outputs. Then walk
through each case. Is the element there or not? In addition to their
structure, the cases must also meet these standards of quality:

Accurate: -

They test what their descriptions say they will test.


Economical: -
They have only the steps or fields needed for their purpose. They don't
give a guided tour of the software.
Repeatable, self-standing: -
A test case is a controlled experiment. It should get the same results every
time no matter who tests it. If only the writer can test it and get the result,
or if the test gets different results for different testers, it needs more work
in the setup or actions.

Appropriate: -

A test case has to be appropriate for the testers and environment. If it is


theoretically sound but requires skills that none of the testers have, it will
sit on the shelf. Even if you know who is testing the first time, you need to
consider down the road -- maintenance and regression.

Traceable: -

You have to know what requirement the case is testing. It may meet all the
other standards, but if its result, pass or fail, doesn't matter, why bother?

Self-cleaning: -

Picks up after itself. It returns the test environment to the pre-test state.
Tests should be destructive, including trying to break a simulated
production environment in controlled, repeatable ways.
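A minimal sketch of a self-cleaning test follows, assuming pytest and a stand-in in-memory "database"; the fixture snapshots the pre-test state and restores it after the deliberately destructive test runs.

# Sketch of a self-cleaning test: the fixture restores the pre-test state even
# when the test breaks things on purpose. All names and values are illustrative.
import copy
import pytest

PRODUCTION_LIKE_DB = {"account_1001": 100.00}      # stand-in for a test database

@pytest.fixture
def clean_environment():
    snapshot = copy.deepcopy(PRODUCTION_LIKE_DB)   # capture pre-test state
    yield PRODUCTION_LIKE_DB                       # run the (possibly destructive) test
    PRODUCTION_LIKE_DB.clear()                     # restore pre-test state afterwards
    PRODUCTION_LIKE_DB.update(snapshot)

def test_negative_deposit_visible_damage(clean_environment):
    db = clean_environment
    db["account_1001"] += -999999                  # destructive input on purpose
    assert db["account_1001"] != 100.00            # the damage is visible during the test
    # after the test, the fixture puts the environment back to its pre-test state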
Format of test cases

What does a test case look like?


They seem to fall into three major groups:
Ø Step-by-step
Ø Matrix
Ø Automated script.
While the automated script is by nature an online document, there is no
assumption that the other two must be paper-based. They, too, might be
online.

Best uses for each type of case


The most productive uses for step-by-step cases are:
• One-off test cases, each one different
• Business scenario goes from screen to screen
• Many processing rules
• GUI interfaces
• Input and output hard to represent in a matrix

The most productive uses for matrix cases are:


• Many variations of filling out a form, same fields, different values,
input files
• Same inputs, different platforms, browsers, configurations
• Character based screens
• Input and outputs best represented in a matrix
Nearly any test can be represented in a matrix, but the question to decide
is whether a matrix is the best way to test. It is most important that the
matrix be supported by a description, setup, and how to record results for
the test.
A variation of the matrix is a list of inputs. It can be inserted in a step-by-
step test or stand as a matrix with the explanatory elements of the test.
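For illustration, a matrix case maps naturally onto a parameterized test; the sketch below assumes pytest and a hypothetical deposit form, with a stub standing in for the application under test.

# Sketch of a matrix case as a parameterized test: same steps, different field
# values per row. The fields, values, and expected results are illustrative.
import pytest

def submit_deposit(amount, currency):
    """Stand-in for the application under test, so the sketch runs on its own."""
    return "accepted" if amount > 0 and currency == "USD" else "rejected"

@pytest.mark.parametrize(
    "amount, currency, expected",
    [
        (50.00,  "USD", "accepted"),
        (0.00,   "USD", "rejected"),    # boundary value
        (50.00,  "XYZ", "rejected"),    # invalid currency code
        (-10.00, "USD", "rejected"),    # negative amount
    ],
)
def test_deposit_form(amount, currency, expected):
    assert submit_deposit(amount, currency) == expected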

Automated scripts:
A decision to use automated testing software is more related to the project
and organization doing the testing than to what is being tested. There are
some technical issues that must be met, varying from tool to tool, but most
applications can find a technical fit. The project management must
understand that writing automated cases takes longer than manual tests
because the manual tests must still be written first. When the interface
is stable, then the tests can be recorded.
The real payback of automated testing comes in the maintenance phase
of the software lifecycle. Then the scripts can be executed repeatedly,
even unattended, for great savings in testing time.

Besides record/playback scripts, automated tools are used for
performance and load testing.
They may use manual step-by-step cases or matrixes which detail how
automated tools will be used to create virtual users, launch transaction
scripts, monitor performance, and other activities.
Choosing a test type
The preference for one type of test case over another is driven as much
by the culture and perceptions of the organization as by what is the best fit
for the software and test plan.

Myth: Step-by-step test cases take too long to write. We can't afford them.
Reality: They may or may not take longer to write, but they are easy to
maintain. They are the only way to test some functions adequately.
Myth: A matrix is always the best choice. Make it work.
Reality:
A persistent problem is putting together a matrix with proper set-up
information. Too often this information is omitted, or worse yet, if different
setups or classes of input can't be forced into a matrix with a like group,
they are not tested at all.

Myth: High tech is best. If you can automate test cases, do it.

Reality:
A decision to use automated testing should be based on many factors.

Myth: We don't have time to write manual test cases. Let's automate
them.
Reality: Automated test cases take longer to create than the other two
types.

Step-by-step cases tend to be more verbal, and matrixes more numeric.


Good training should build understanding and confidence to use all types
of cases, each where it is most productive. Often the most productive
route is to use all three types of cases: the first two for unit, integration,
and system testing, and automated scripts for regression testing.

Improving test cases

Improving testability of test cases


The definition of testability is easy to test -- accurately. Easy can be
measured by how long it takes to execute the test, and whether the tester
has to get clarification during the testing process. Accurately means that if
the tester follows the directions, the result of pass or fail will be correct.

Improving productivity with templates


A test case template is a form with labeled fields. This is a great way to
start improving test cases. It jump starts the writing process and supports
each of the elements of a good case. Here are some other benefits of
using templates:
• Prevents blank page panic
• Assists the disorganized
• Builds in standards
• Prints spiffy looking tests
• Assists testers to find information
• Can include other fields relating to testing process

Improving productivity with clones:


Cloning test cases means to model one test case on another one. A case
is a good candidate for cloning if it fits the need for a step-by-step case but
has variables that can easily be substituted.

For example, you may have tests for maintaining a supplier database.
Many, but not all, the steps would also apply to a shipper database. As
you get to know the software through requirements or prototypes,
strategize which functions work in such a way that you can clone the test
cases. Writing them as clones does not mean they need to be tested
together.
You can clone steps as well as test cases.
Word processing and test authoring software support cloning with features
such as "Save As," "Copy," and "Replace." It's very important to proofread
these cases to make sure all references to the original are
replaced in the clone.
Matrixes can also be cloned, especially if the setup section is the same.
The variables would be changes in the field names and values. Again,
make sure to proofread the new version.
Improving productivity with test management software

Software designed to support test authoring is the single greatest
productivity booster for writing test cases. It has these advantages over
word processing, database, or spreadsheet software:

• Makes writing and outlining easier


• Facilitates cloning of cases and steps
• Easy to add, move, delete cases and steps
• Automatically numbers and renumbers
• Prints tests in easy-to-follow templates
• Test authoring is usually included in off-the-shelf test management
software, or it could be custom written.
Additional functions:
• Exports tests to common formats
• Multi-user
• Tracks test writing progress, testing progress
• Tracks test results, or ports to database or defect tracker
• Links to requirements and/or creates coverage matrixes
• Builds test sets from cases
• Allows flexible security

The seven most common test case mistakes

1. Making cases too long
2. Incomplete, incorrect, or incoherent setup
3. Leaving out a step
4. Naming fields that changed or no longer exist
5. Unclear whether tester or system does action
6. Unclear what is a pass or fail result
7. Failure to clean up

Handling challenges to good test cases

• Before writing cases, and at every status meeting, find out where
the greatest risk of requirement changes is.
• Strategize what cases will and won't be affected by the change.
Write the ones that won't first.
• Build in variables or "to be decided" placeholders that you will come
back and fill in later.
• Make sure the budget owner knows the cost of revising test cases
that are already written. Quantify what it costs per case.
• Let project management set priorities for which cases should be
written or revised. Let them see you can't do it all and ask them to
decide where they have greatest risk.
• Release the not-quite-right test cases unrevised. Ask the testers to
mark up what has to be changed.
• Schedule more time to test each case, plus time for maintaining the
tests.
• If a testing date is moved up, get management to participate in the
options of how test cases will be affected. As in the changing
requirements challenge, let them choose what they want to risk.
• Add staff only if time permits one to two weeks of training before
they have to be productive, and only if you have someone to
mentor and review their work.
• Shift the order of writing cases so you write those first that will be
tested first. Try to stay one case ahead of the testers.
• You can skinny down the test cases to just a purpose, what
requirement is being tested, and a setup.
• Offer to have writers do the testing and write as they go. Schedule
more time for testing and finishing the writing after testing.

Protecting test case assets:


The most important activity to protect the value of test cases is to maintain
them so they are testable. They should be maintained after each testing
cycle, since testers will find defects in the cases as well as in the software.
When testing schedules are created, time should be allotted for the test
analyst or writer to fix the cases while programmers fix bugs in the
application.
Configuration management (CM) of cases should be handled by the
organization or project, rather than the test management. If the
organization does not have this level of process maturity, the test manager
or test writer needs to supply it. Either the project or the test manager
should protect valuable test case assets with the following configuration
management standards:
ü Naming and numbering conventions
ü Formats, file types
ü Versioning
ü Test objects needed by the case, such as databases
ü Read only storage
ü Controlled access
ü Off-site backup
Test management needs to have an index of all test cases. If one is not
supplied by CM, create your own.
A database should be searchable on keys of project, software, test name,
number, and requirement. A full-text search capability would be even better.
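A minimal sketch of such an index follows, using SQLite from the Python standard library; the table columns follow the keys listed above, and the sample rows are hypothetical.

# Sketch of a searchable test case index (columns follow the keys named above;
# the rows are illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test_cases (
    project TEXT, software TEXT, name TEXT, number TEXT, requirement TEXT)""")
conn.executemany(
    "INSERT INTO test_cases VALUES (?, ?, ?, ?, ?)",
    [
        ("PROJECT", "Quote module", "QBC_BI2", "TC-1a", "R1.1"),
        ("PROJECT", "Quote module", "QBC_BQ1", "TC-1b", "R6.1"),
    ],
)
rows = conn.execute(
    "SELECT name, number FROM test_cases WHERE requirement = ?", ("R1.1",)
).fetchall()
print(rows)   # [('QBC_BI2', 'TC-1a')]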
Leveraging test cases

Test cases as development assets have a life beyond testing. They
represent a complete picture of how the software works written in plain
English. Even if the focus is destructive, they must also prove that all
business scenarios work as required. Often the cases are written for
testers who are the business users so they use real world language and
terms. A set of test cases has tremendous value to others who are
working to learn or sell the software:
ü Business users
ü Technical writers
ü Help desk technicians
ü Trainers
ü Sales and marketing staff
ü Web administrators
All of these people have a stake in seeing the software succeed, and are
also potential testers.
Depending on the organization, good will and open communication
between test writers and these groups can greatly speed up the time to
production or release.

TEST CASE CHECKLIST

Quality Attributes

• Accurate - tests what the description says it will test.


• Economical - has only the steps needed for its purpose
• Repeatable, self-standing - gets the same results no matter who tests it.
• Appropriate - for both immediate and future testers
• Traceable - to a requirement
• Self cleaning - returns the test environment to a clean state

Structure and testability


• Has a name and number
• Has a stated purpose that includes what requirement is being tested
• Has a description of the method of testing
• Specifies setup information - environment, data, prerequisite tests,
security access
• Has actions and expected results
• States if any proofs, such as reports or screen grabs, need to be saved
• Leaves the testing environment clean
• Uses active case language
• Does not exceed 15 steps
• Matrix does not take longer than 20 minutes to test
• Automated script is commented with purpose, inputs, expected results
• Setup offers alternative to prerequisite tests, if possible
• Is in correct business scenario order with other tests
