This section of the document describes the Software Development Life Cycle (SDLC) for
small to medium database application development efforts. This chapter presents an
overview of the SDLC, alternative lifecycle models, and associated references. This chapter also
describes the internal processes that are common across all stages of the SDLC and describes
the inputs, outputs, and processes of each stage.
The relationship of each stage to the others can be roughly described as a waterfall, where
the outputs from a specific stage serve as the initial inputs for the following stage.
To follow the waterfall model, one proceeds from one phase to the next in a purely
sequential manner. For example,
After completing the “Project Planning” phase, one proceeds to the
"Requirements Definition" phase.
When and only when the requirements are fully completed, one proceeds to design.
This design should be a plan for implementing the requirements given.
When and only when the design is fully completed, an implementation of that design
is made by coders. Towards the later stages of this implementation phase, disparate
software components produced by different teams are integrated.
Thus the waterfall model maintains that one should move to a phase only when its
preceding phase is completed and perfected. Phases of development in the waterfall model
are thus discrete, and there is no jumping back and forth or overlap between them.
The central idea behind the waterfall model is that time spent early on making sure that
requirements and design are absolutely correct pays off in economic terms (it will save
much time and effort later). One should therefore make sure that each phase is 100%
complete and absolutely correct before proceeding to the next phase of program creation.
It is argued that the waterfall model in general can be suited to software projects which are
stable (especially with unchanging requirements) and where it is possible and likely that
designers will be able to fully predict problem areas of the system and produce a correct
design before implementation is started.
The waterfall model also requires that implementers follow the well made, complete design
accurately, ensuring that the integration of the system proceeds smoothly.
The waterfall model, however, is argued by many to be a bad idea in practice, mainly because
of the belief that it is impossible to get one phase of a software product's lifecycle
"perfected" before moving on to the next phases and learning from them (or at least, the
belief that this is impossible for any non-trivial program). For example, clients may not be
aware of exactly what requirements they want before they see a working prototype and can
comment upon it - they may change their requirements constantly, and program designers
and implementers may have little control over this.
If clients change their requirements after a design is finished, that design must be modified
to accommodate the new requirements, invalidating a good deal of effort if large
amounts of time have been invested in it.
In response to the perceived problems with the "pure" waterfall model, many modified
waterfall models have been introduced, namely Royce's final model, the sashimi model, and other
alternative models. These models may address some or all of the criticisms of the "pure"
waterfall model.
After the project is completed, the Primary Developer Representative (PDR) and Primary End-
User Representative (PER), in concert with other customer and development team personnel,
develop a list of recommendations for enhancement of the current software.
Prototypes
The software development team, to clarify requirements and/or design elements, may
generate mockups and prototypes of screens, reports, and processes. Although some of the
prototypes may appear to be very substantial, they're generally similar to a movie set:
everything looks good from the front but there's nothing in the back.
When a prototype is generated, the developer produces the minimum amount of code
necessary to clarify the requirements or design elements under consideration. No effort is
made to comply with coding standards, provide robust error management or integrate with
other database tables or modules. As a result, it is generally more expensive to retrofit a
prototype with the necessary elements to produce a production module than it is to develop
the module from scratch using the final system design document. For these reasons,
prototypes are never intended for business use, and are generally crippled in one way or
another to prevent them from being mistakenly used as production modules by end-users.
Allowed Variations
In some cases, additional information is made available to the development team that
requires changes in the outputs of previous stages. In this case, the development effort is
usually suspended until the changes can be reconciled with the current design, and the new
results are passed down the waterfall until the project reaches the point where it was
suspended.
The PER and PDR may, at their discretion, allow the development effort to continue while
previous stage deliverables are updated in cases where the impacts are minimal and strictly
limited in scope. In this case, the changes must be carefully tracked to make sure all their
impacts are appropriately handled.
Apart from the waterfall model, other popular SDLC models are the “Spiral” model and the “V”
model, which are explained in this section.
Spiral Lifecycle
The spiral model starts with an initial pass through a standard waterfall lifecycle, using a
subset of the total requirements to develop a robust prototype. After an evaluation period,
the cycle is initiated again, adding new functionality and releasing the next prototype. This
process continues with the prototype becoming larger and larger with each iteration, hence
the “spiral.”
The Spiral model is used most often in large projects and needs constant review to stay on
target. For smaller projects, the concept of agile software development is becoming a viable
alternative. Agile software development tends to be rather more extreme in its approach
than the spiral model.
V-Model
The V-model was originally developed from the waterfall software process model. The four
main process phases - requirements, specification, design and implementation - have a
corresponding verification and validation testing phase. Implementation of modules is tested
by unit testing, system design is tested by integration testing, system specifications are
tested by system testing, and finally acceptance testing verifies the requirements. The V-
model gets its name from the timing of the phases. Starting from the requirements, the
system is developed one phase at a time until the lowest phase, the implementation phase, is
finished. At this stage testing begins, starting from unit testing and moving up one test level
at a time until the acceptance testing phase is completed. During the development stage the
program will be tested at all levels simultaneously.
[Figure: the V-model.]
The different levels in the V-model are unit tests, integration tests, system tests and acceptance
tests. The unit tests and integration tests ensure that the system design is followed in the
code. The system and acceptance tests ensure that the system does what the customer
wants it to do. The test levels are planned so that each level tests different aspects of the
system.
Verification Vs Validation

Verification: Am I building the product right?
Validation: Am I building the right product?

Verification: The review of interim work steps and interim deliverables during a project to
ensure they are acceptable; to determine if the system is consistent, adheres to standards,
uses reliable techniques and prudent practices, and performs the selected functions in the
correct manner.
Validation: Determining if the system complies with the requirements and performs the
functions for which it is intended and meets the organization’s goals and user needs. It is
traditional and is performed at the end of the project.

Verification: Am I accessing the data right (in the right place; in the right way)?
Validation: Am I accessing the right data (in terms of the data required to satisfy the
requirement)?

Verification: Low level activity.
Validation: High level activity.

Verification: Performed during development on key artifacts, like walkthroughs, reviews and
inspections, mentor feedback, training, checklists and standards.
Validation: Performed after a work product is produced, against established criteria, ensuring
that the product integrates correctly into the environment.

Verification: Demonstration of consistency, completeness, and correctness of the software at
each stage and between each stage of the development life cycle.
Validation: Determination of correctness of the final software product by a development
project with respect to the user needs and requirements.
Software testing is the process of devising a set of inputs for a given piece of software and
checking whether the results produced by the software are in accordance with the expected results.
Software testing identifies the correctness, completeness and quality of the developed
application. Testing can find defects, but it cannot prove that there are none.
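As a minimal illustration (hypothetical Python; the function and values are invented), testing amounts to driving the unit with chosen inputs and comparing actual results against expected results:

    def discount(amount):
        # Hypothetical unit under test: 10% discount from 100 upward, else none.
        return amount * 0.10 if amount >= 100 else 0.0

    test_cases = [        # (input, expected result)
        (50, 0.0),        # below the threshold: no discount
        (100, 10.0),      # boundary value: discount starts to apply
        (200, 20.0),      # typical in-range value
    ]

    for value, expected in test_cases:
        actual = discount(value)
        ok = abs(actual - expected) < 1e-9   # tolerant float comparison
        print(f"{'PASS' if ok else 'FAIL'}: discount({value}) = {actual}, "
              f"expected {expected}")

Note that even a passing run proves nothing beyond the cases tried; it only fails to find a defect.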
Positive Approach: testing with valid inputs and conditions to confirm that the software does
what it is supposed to do.
Negative Approach: testing with invalid inputs and unexpected conditions to confirm that the
software does not do what it is not supposed to do.
Levels of Testing
Types of Testing
The test strategy consists of a series of tests that will fully exercise the product. The
primary purpose of these tests is to uncover limitations and defects and to measure the
product's full capability. A list of some of these tests with brief explanations is given below.
Sanity test
System test
Performance test
Security test
Functionality/Automated test
Stress and Volume test
Recovery test
Sanity Test
A sanity test is a brief check to determine whether a new build is stable enough to be
subjected to further, more rigorous testing.
System Test
System tests focus on the behavior of the product. User scenarios will be executed against
the system as well as screen mapping and error message testing. Overall, system tests will
test the integrated system and verify that it meets the requirements defined in the
requirements document.
Performance Test
Performance test will be conducted to ensure that the product’s response time meets user
expectations and does not fall outside the specified performance criteria. During these tests,
the response time will be measured under simulated heavy stress and/or volume.
Security Test
Security tests will determine how secure the product is. The tests will verify that unauthorized
user access to confidential data is prevented. It will also verify data storage security,
strength of encryption algorithms, vulnerability to hacking, etc.
Functionality/Automated Test
A suite of automated tests will be developed to test the basic functionality of the product and
perform regression testing on all areas of the system to identify and log all errors. The
automated testing tool will also assist in executing user scenarios thereby emulating several
users simultaneously.
Stress and Volume Test
The product will be subjected to high input conditions and a high volume of data simulating
peak load scenarios. The System will be stress tested based on expected load and
performance criteria specified by the customer.
Recovery Test
Recovery tests will force the system to fail in various ways (all scenarios created to resemble
real-life failures like disk failure, network failure, power failure, etc.) and verify that the
system recovers itself based on the customer’s requirements.
Documentation Test
Tests are conducted to verify the accuracy of user documentation. These tests will ensure
that the documentation is a true and fair reflection of the product in full and can be easily
understood by the reader.
Beta test
Beta tests are carried out on a product before it is commercially released. This is usually the
last set of tests carried out on the system and normally involves sending the product to beta
test sites outside the company for real-world exposure.
Acceptance Test
Once the product is ready for implementation, the Business Analysts and a team that
resembles the typical user profile use the application to verify that it conforms to all
expectations they have of the product. These tests are carried out to confirm that the
system performs to specifications, is ready for operational use and meets user expectations.
Load Testing
Load testing generally refers to the practice of modeling the expected usage of a software
program by simulating multiple users accessing the program's services concurrently. As such,
this testing is most relevant for multi-user systems, often ones built using a client/server
model, such as web servers. However, other types of software systems can be load-tested
as well. For example, a word processor or graphics editor can be forced to read an extremely
large document.
When the load placed on the system is raised beyond normal usage patterns, in order to test
the system's response at unusually high or peak loads, it is known as Stress testing. The load
is usually so great that error conditions are the expected result, although there is a gray area
between the two domains and no clear boundary exists when an activity ceases to be a load
test and becomes a stress test.
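As a rough sketch of the idea (hypothetical Python; the call to the system under test is stubbed out with a sleep), a load test can spawn many concurrent simulated users and measure their response times. Raising NUM_USERS well beyond the expected load turns the same script into a stress test:

    import threading
    import time

    def user_session(results, user_id):
        # Hypothetical stand-in for a real request to the system under test.
        start = time.perf_counter()
        time.sleep(0.01)                      # simulate the service doing work
        results[user_id] = time.perf_counter() - start

    NUM_USERS = 50                            # expected concurrent load
    results = {}
    threads = [threading.Thread(target=user_session, args=(results, i))
               for i in range(NUM_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"max response time: {max(results.values()):.4f}s for {NUM_USERS} users")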
Volume Testing
Volume Testing, as its name implies, is testing that purposely subjects a system (both
hardware and software) to a series of tests where the volume of data being processed is the
subject of the test. Such systems can be transaction processing systems capturing real-time
sales, or systems performing database updates and/or data retrieval.
Volume testing will seek to verify the physical and logical limits to a system's capacity and
ascertain whether such limits are acceptable to meet the projected capacity of the
organization’s business processing.
Usability testing
Usability testing is carried out pre-release so that any significant issues identified can be
addressed. Usability testing can be carried out at various stages of the design process. In the
early stages, however, techniques such as walkthroughs are often more appropriate.
Usability testing is not a substitute for a human-centered design process. Usability testing
comes in many flavors and should occur at different points in the development process.
Explorative testing gathers input from participants in the early stages of site development.
Based on the experience and opinions of target users, the development team can decide the
appropriate direction for the site's look and feel, navigation, and functionality.
Assessment testing occurs when the site is close to launch. Here you can get feedback on
issues that might present huge problems for users but are relatively simple to fix.
Evaluation testing can be useful to validate the success of a site subsequent to launch. A site
can be scored and compared to competitors, and this scorecard can be used to evaluate the
project's success.
Storage testing
Storage testing is used to study how memory and space are used by the program, either in resident
memory or on disk. If there are limits on these amounts, storage tests attempt to prove that the
program will exceed them.
Reliability testing
Reliability of an object is defined as the probability that it will not fail under specified
conditions, over a period of time. The specified conditions are usually taken to be fixed, while
the time is taken as an independent variable. Thus reliability is often written R (t) as a
function of time t, the probability that the object will not fail within time t.
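A commonly used instance of this definition (an assumption here, not something stated in the text) is the exponential model, which takes a constant failure rate lambda and gives R(t) = e^(-lambda*t). A minimal sketch in Python:

    import math

    def reliability(t, failure_rate):
        # R(t) = e^(-lambda * t): probability of no failure within time t,
        # assuming a constant failure rate (the exponential model).
        return math.exp(-failure_rate * t)

    # With one expected failure per 1000 hours, reliability over 100 hours:
    print(reliability(100, 1 / 1000))         # ~0.905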
Any computer user would probably agree that most software is flawed, and the evidence for
this is that it does fail. All software flaws are designed in: the software does not break;
rather, it was always broken. But unless conditions are right to excite a flaw, it will go
unnoticed and the software will appear to work properly.
Internationalization testing
Testing related to handling foreign text and data (for example, currency) within the program. This
would include sorting, importing and exporting text and data, correct handling of currency
and date and time formats, string parsing, upper and lower case handling, and so forth.
Compatibility Testing
It is the process of determining the ability of two or more systems to exchange information.
In a situation where the developed software replaces an already working program, an
investigation should be conducted to assess possible compatibility problems between the
new software and other programs or systems.
Install/uninstall testing
Configuration testing
Quality Monitor runs tests to automatically verify (before deployment) that all installation
elements – such as file extensions and shortcuts – are configured correctly and that all key
functionality will work in the intended environment.
For example, all of the application shortcuts are automatically listed. Each one can be
selected and run. If the shortcut launches the application as expected, a positive comment
can be logged and the test will be marked as successful.
Black Box Testing is testing a product without having the knowledge of the internal workings
of the item being tested. For example, when black box testing is applied to software
engineering, the tester would only know the "legal" inputs and what the expected outputs
should be, but not how the program actually arrives at those outputs. It is because of this
that black box testing can be considered testing with respect to the specifications; no other
knowledge of the program is necessary. The opposite of this would be white box (also known
as glass box) testing, where test data are derived from direct examination of the code to be
tested. For white box testing, the test cases cannot be determined until the code has
actually been written. Both of these testing techniques have advantages and disadvantages,
but when combined, they help to ensure thorough testing of the product.
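For instance (a hypothetical Python sketch, not tied to any particular project), a black box test of a leap-year routine uses only the specification - inputs and expected outputs - with no reference to how the routine arrives at its answers:

    # Black box view: only the spec is known, not the implementation.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Test cases derived purely from the specification.
    spec_cases = {1996: True, 1900: False, 2000: True, 2023: False}
    for year, expected in spec_cases.items():
        assert is_leap_year(year) == expected, f"failed for {year}"
    print("all black box cases pass")

A white box tester, by contrast, would read the implementation and design cases that exercise each condition in the expression.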
White box testing is a testing technique where knowledge of the internal structure and logic
of the system is necessary to develop hypothetical test cases. It's sometimes called structural
testing or glass box testing. It is also a test case design method that uses the control
structure of the procedural design to derive test cases.
If a software development team creates a block of code that will allow the system to process
information in a certain way, a test team would verify this structurally by reading the code
and, given the system's structure, seeing if the code could work reasonably. If they felt it could,
they would plug the code into the system and run an application to structurally validate the
system.
White box tests are derived from the internal design specification or the code. The knowledge
needed for the white box test design approach often becomes available during the detailed
design phase of the development cycle. Using white box testing methods, the tester can:
Guarantee that all independent paths within a module have been executed at least
once
Execute all loops at their boundaries and within their operational bounds
Logic errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed - errors tend to creep into our work when we design and
implement functions, conditions and control that are out of the mainstream. Everyday
processing tends to be well understood, while special case processing tends to fall through the
cracks. We often believe that a logical path is not likely to be executed when, in fact, it may be
executed on a regular basis - our assumptions about the logical flow of a program and data
may lead us to make design errors that are uncovered only when path testing begins.
Typographical errors are random - when a program is translated into its programming
language source code, it is likely that some typing errors will occur. A typo is as likely to
exist on an obscure logical path as on a mainstream path.
At first glance it would seem that very thorough white box testing would lead to a 100% correct
program. All we need to do is to define all logical paths, develop test cases to exercise them,
and evaluate their results, i.e. generate test cases to exercise the program logic exhaustively.
Unfortunately, exhaustive testing presents certain logistical problems. For even small
programs, the number of possible logical paths can be very large. Consider, for example, the
procedural design that might correspond to a 100-line Pascal program with a single loop that
may be executed no more than 20 times. There are approximately 10^14 possible paths that
may be executed!
To put this number in perspective, assume that a magic test processor ("magic", because
no such processor exists) has been developed for exhaustive testing. The processor can
develop a test case, execute it and evaluate the result in one millisecond. Working 24 hours a
day, 365 days a year, the processor would work for 3170 years to test such a program. This
would, undeniably, cause havoc in most development schedules.
Exhaustive testing is impossible for large software systems. White box testing should not,
however be dismissed as impractical. Limited number of important logical paths can be
selected and exercised. Important data structure can be probed for validity.
Execute specific path segments derived from data flow combinations of definitions
and uses of variables.
The goal for white box testing is to ensure that the internal components of a program are
working properly. A common focus is on structural elements such as statements and branches.
The tester develops test cases that exercise these structural elements to determine if defects
exist in the program structure. By exercising all of these structural elements, the tester hopes
to improve the chances of detecting defects.
The testers need a framework for deciding which structural elements to select as a focus of
testing, for choosing the appropriate test data and for deciding when the testing efforts are
adequate enough to terminate the process with confidence that the software is working
properly. The criteria can be viewed as representing minimal standards for testing a program.
Adequacy criteria assist testers by:
Helping them to select a test data set for the program based on the selected
properties.
Indicating to testers whether or not testing can be stopped for that program.
A program is adequately tested to a given criterion if all the target structural elements have
been exercised according to the selected criterion. Using the selected adequacy criterion a
tester can terminate testing, when the target structures have been exercised. Adequacy
criteria are usually expressed as statements that depict the property or feature of interest,
and the conditions under which testing can be stopped.
In addition to statement and branch adequacy criteria, other types of program-based test
adequacy criteria in use are path testing and loop testing, wherein all the paths and all
the loops are executed.
Statement Testing
Branch testing
Path testing
Loop testing
Statement testing
In this method every source language statement in the program is executed at least once so
that no statements are missed out. Here we need to be concerned about the statements
controlled by decisions. Statement testing is usually regarded as the minimum level of
coverage in the hierarchy of code coverage and is therefore a weak technique. It is based on
the notion that it would feel absurd to release a piece of software without having executed
every statement.
Branch testing
Branch testing is used to exercise the true and false outcomes of every decision in a software
program or a piece of code. It is a superset of statement testing, where the chance of achieving
code coverage is higher. The disadvantage of branch testing is that it is weak in handling
multiple conditions.
Multiple condition coverage is a testing metric that exercises all possible combinations of
true and false outcomes of the conditions within a decision in the control structure of a program.
This is a superset of branch testing.
The primary advantage of multiple condition testing is that it tests all feasible combinations of
outcomes for each condition within a program.
One drawback of this metric is that it provides no assistance in choosing data. Another, more
expensive, drawback of this coverage is that a decision involving n Boolean conditions has
2^n combinations, and hence requires 2^n test cases.
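To make the 2^n growth concrete, consider a hypothetical decision with n = 2 Boolean conditions (a Python sketch; the function and its conditions are invented for illustration). Branch coverage is satisfied by two test cases, one per outcome of the decision, but multiple condition coverage needs all 2^2 = 4 combinations:

    from itertools import product

    def approve(age_ok, credit_ok):
        # One decision combining two Boolean conditions with 'and'.
        if age_ok and credit_ok:
            return "approved"
        return "rejected"

    # Multiple condition coverage: all 2**n combinations of the n conditions.
    for age_ok, credit_ok in product([True, False], repeat=2):
        print(age_ok, credit_ok, "->", approve(age_ok, credit_ok))

    # Branch coverage alone would be satisfied by just two of these cases,
    # e.g. (True, True) for the true branch and (False, False) for the false one.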
Path testing
Path testing derives a set of test cases such that every path in the program (in the control flow
graph) is executed at least once.
The figure on page 25 shows the graphical representation of the flow of control in a program.
The number of paths in this representation can be found, and we can clearly see that all paths
are distinct. Note that the path shown by 1-2-5-6 is different from that depicted by 1-3-4-6.
The major disadvantage of path testing is that we cannot achieve 100% path coverage, as
there are potentially infinite paths in a program; if the code is large and complex,
path testing is not feasible.
Modified path testing, as the name suggests, is a modification of existing path testing and is
based on the number of distinct paths to be executed, which is in turn based on the McCabe
complexity number. The McCabe number is given as V = E - N + 2, where E is the number of
edges in the flow graph and N is the number of nodes.
Choosing the distinct paths is not easy, and different people may choose different distinct
paths.
Loop testing
Loop testing strategies focus on detecting common defects associated with loop structures
like simple loops, concatenated loops, nested loops, etc. Loop testing makes sure that all
the loops in the program have been traversed at least once during testing. Defects in these
areas are normally due to poor programming practices or inadequate reviewing.
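A common loop testing heuristic, sketched below in hypothetical Python, is to exercise a simple loop with zero iterations, exactly one iteration, a typical count, and the maximum count:

    def sum_first(values, n):
        # Sum the first n items of values using a simple loop.
        total = 0
        for i in range(n):
            total += values[i]
        return total

    data = list(range(1, 11))            # ten values: 1..10
    for n in (0, 1, 5, len(data)):       # skip the loop, run once, typical, upper bound
        print(f"n={n}: sum={sum_first(data, n)}")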
When a coverage-related testing goal is expressed as a percentage, it is often called the "degree of
coverage." The planned degree of coverage is usually specified in the test plan and then
measured when the tests are actually executed by a coverage tool. The planned degree of
coverage is usually specified as 100% if the tester wants to completely satisfy the commonly
applied test adequacy or coverage criteria.
Under some circumstances, the planned degree of coverage may be less than 100% possibly
due to the following reasons:
The time set aside for testing is not adequate to achieve 100% coverage.
There are not enough trained testers to complete coverage for all units.
The concept of coverage is not only associated with white box testing. Testers also use
coverage concepts to support black box testing where a testing goal might be to exercise or
cover all functional requirements, all equivalence classes or all system features. In contrast to
black box approaches, white box based coverage goals have stronger theoretical and
practical support.
The application of coverage analysis is associated with the use of control and data models to
represent structural elements and data. The logic elements are based on the flow of control
in a code. They are:
Program statements
Decisions or branches (these influence the program flow of control)
Conditions (expressions that evaluate to true/false and do not contain any other
true/false-valued expressions)
All structured programs can be built from three basic primes - sequential, decision and
iterative. Primes are nothing but representations of the flow of control in a software program,
which can take any one of these forms. The figure shows the graphical representation of the
primes mentioned above.
[Figure: graphical representations of the three primes - sequence, condition and iteration.]
Using the concept of a prime and the ability to use combinations of primes to develop
structured code, a control flow diagram for the software under test can be developed. The
tester can use the flow graph to evaluate the code with respect to its testability, as well as to
develop white box test cases. The direction of a transfer of control depends upon the outcome
of the condition.
Cyclomatic Complexity
Thomas McCabe coined the term cyclomatic complexity. Cyclomatic complexity is a
software metric that provides a quantitative measure of the logical complexity of a program.
When used in the context of the basis path testing method, the value computed for
cyclomatic complexity defines the number of independent paths in the basis set of a program
and provides us with an upper bound for the number of tests that must be conducted to
ensure that all statements have been executed at least once. The concept of cyclomatic
complexity can be well explained using a flow graph representation of a program, as shown
in the figure below.
[Figure: an example flow graph with numbered nodes and edges.]
In the diagram above circles denote nodes, and arrows denote edges. Each circle represents
a processing task (one or more source code statements) and arrows represent flow of
control.
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V for a flow graph is computed as V = E - N + 2, where E is the
number of edges and N is the number of nodes in the flow graph.
3. V can also be computed as V = P + 1, where P is the number of predicate (decision) nodes
in the flow graph.
The tester can use the value of V along with past project data to approximate the
testing time and resources required to test a software module.
The complexity value V along with control flow graph give the tester another tool for
developing white box test cases using the concept of path.
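As a small worked example (a hypothetical Python sketch using an invented four-node graph, not the figure above), V = E - N + 2 can be computed directly from an edge list:

    def cyclomatic_complexity(edges, nodes):
        # McCabe's measure: V = E - N + 2.
        return len(edges) - len(nodes) + 2

    # A flow graph with one decision: 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4.
    nodes = {1, 2, 3, 4}
    edges = {(1, 2), (1, 3), (2, 4), (3, 4)}
    print(cyclomatic_complexity(edges, nodes))   # V = 4 - 4 + 2 = 2 independent paths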
Bug life cycles are similar to software development life cycles. At any time during the
software development life cycle errors can be made during the gathering of requirements,
requirements analysis, functional design, internal design, documentation planning, document
preparation, coding, unit testing, test planning, integration, testing, maintenance, updates,
retesting and phase-out.
The bug life cycle begins when a programmer, software developer, or architect makes a mistake
and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and
is no longer in existence.
A test case specifies the pretest state of the implementation under test (IUT) and its
environment, the test inputs or conditions, and the expected result. The expected result
specifies what the IUT should produce from the test inputs. This specification includes
messages generated by the IUT, exceptions, returned values, and the resultant state of the
IUT and its environment. Test cases may also specify initial and resulting conditions for other
objects that constitute the IUT and its environment. Some more definitions are listed below.
“Test case is a set of test inputs, execution conditions, and expected results developed for a
particular objective, such as to exercise a particular program path or to verify compliance
with a specific requirement”
“Test case is a documentation specifying inputs, predicted results, and a set of execution
conditions for a test item” (as per IEEE Standard 829-1983 definition)
“Test cases are the specific inputs that you’ll try and the procedures that you’ll follow when
you test the software”
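The definitions above share the same ingredients: inputs, execution conditions, and expected results. As a hypothetical sketch (the field names are invented, not drawn from IEEE 829), those ingredients could be captured in a simple record:

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        identifier: str                        # e.g. "TC-001"
        objective: str                         # requirement or path being exercised
        preconditions: list = field(default_factory=list)
        inputs: dict = field(default_factory=dict)
        expected_result: str = ""

    tc = TestCase(
        identifier="TC-001",
        objective="verify that login rejects a wrong password",
        preconditions=["user 'alice' exists"],
        inputs={"username": "alice", "password": "wrong"},
        expected_result="error message shown; no session created",
    )
    print(tc.identifier, "-", tc.objective)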
There are two important elements in quality writing: the way it is written and what is written.
There are particular benefits from each element on its own, but your writing will be most
effective if you pay attention to both.
The ability to write well can provide a number of direct and indirect benefits to your testing
efforts. The following things are important to remember each time you write:
Better writing leads to better thinking
Always write in your own voice
Always keep your audience in mind
Use Cases
To write test cases, it is best if Use Cases are made available. A Use Case describes the
functional component in great detail. It forms the specification of the component. A Use
Case will contain information that will form the basis of the tests that need to be carried out for
the component, as it is the basis on which the software has been written. A Use Case
contains the following information:
Name. The name should implicitly express the user's intent or purpose of the
use case, such as "Enroll Student in Seminar."
Identifier [Optional]. A unique identifier, such as "UC1701," that can be used
in other project artifacts (such as your class model) to refer to the use case.
Description. Several sentences summarizing the use case.
Actors [Optional]. The list of actors associated with the use case. Although this
information is contained in the use case itself, it helps to increase the
understandability of the use case when the diagram is unavailable.
Status [Optional]. An indication of the status of the use case, typically one of:
work in progress, ready for review, passed review, or failed review.
Frequency. How often this use case is invoked by the actor. This is often a free-
form answer such as once per each user login or once per month.
Pre-conditions. A list of the conditions, if any, that must be met before a use
case may be invoked.
Post-conditions. A list of the conditions, if any, that will be true after the use
case finishes successfully.
Extended use case [Optional]. The use case that this use case extends (if
any). An extend association is a generalization relationship where an extending
use case continues the behavior of a base use case. The extending use case
accomplishes this by inserting additional action sequences into the base use-case
sequence. This is modeled using a use-case association with the <<extend>>
stereotype.
Where such Use Cases are not available, do not attempt to create them, as it is very difficult to
create accurate and complete Use Cases unless the design documents are available.
To create a test case when Use Cases are not available, the best approach is to start by
creating the work-flow of the application and to end up with test cases. The sequence of
activities is as follows:
a) Draw the work-flow diagram from the descriptive document that you created when
you navigated through the application.
b) For each component in the work-flow, create a list of the screens that are involved.
c) For each screen, create a list of fields that must be checked. For each field, create a
list of tests that need to be carried out.
d) Refer to the DB Schema and the DB Structures to test that all integrity checks are in
place and that all necessary fields appear on the screen and are active.
Software inspection is a verification process, which is carried out in a formal, systematic and
organized manner. Verification techniques are used during all phases of software
development life cycle to ensure that the product generation is happening the "right way".
The various verification techniques other than formal inspection are reviews and
walkthroughs. These processes work directly on the work product under development. The
other verification techniques, which also ensure product quality, are management reviews
and audits. We will be focusing more on reviews, walkthroughs and formal inspections.
[Figure: the inspection process - entry criteria drawn from policies and plans, a checklist,
invitation, preparation, the inspection meeting, reporting of results to the defect database and
the metric database, rework and follow-up, and exit.]
The figure given above reveals the several steps that are carried out during the inspection
process.
Entry criteria
The inspection process begins when the inspection pre-conditions are met as specified in the
inspection policies, procedures and plans. A personal pre-inspection should be performed
carefully by each team member. Errors, problems and open items should be noted by each
individual for each item on the list. When the actual meeting takes place, the document under
review is presented by a reader and is discussed as it is read. Attention is paid to issues related
to quality, adherence to standards, testability, traceability and satisfaction of the user's
requirements.
Checklist
The checklist varies with the software artifact being inspected. It contains items that the
inspection participants should focus their attention on, check and evaluate. The inspection
participants address each item on the checklist. The recorder records any discrepancies,
misunderstandings, errors and ambiguities, or in general any problem associated with an item.
The completed checklist is part of the review summary document.
Reviewers use the organization's standard checklists for all work products to look for common
errors. Specific checklists are also prepared for individual work products to increase review
effectiveness before the actual review/inspection. Checklists are dynamic and improve
over time in the organization through analysis of the root causes of common issues found
during the inspection process.
Invitation
The inspection leader invites each member participating in the meeting and distributes the
documents that are essential for the conduct of the meeting.
Preparation
The key item that the inspection leader prepares is the checklist of items that serves as the
agenda for the review, which helps in determining the areas of focus, the objectives and the
tactics to be used.
Reporting results
When the inspection meeting has been completed (all agenda items covered), the inspectors
are usually asked to sign a written document that is sometimes called a summary report.
The defect report is stored in the defect database. Inspection metrics are also recorded in the
metric database.
Recorder/scribe: records and documents problems, defects, findings, and recommendations.
Author: owner of the document; presents the review item and performs any needed rework on
the reviewed item.
Inspectors: attend review-training sessions, prepare for reviews, participate in meetings,
evaluate the reviewed item, and perform rework where appropriate.
o Reliability: Extent to which the work product can be expected to perform its
required functions.
o Stability: Ability of the work product to remain stable when a change occurs.
o Understandability: Ease with which the work product can be read and understood.
Code inspection
When a software code is inspected, one can check the code's adherence to the design
specification and the constraints it is supposed to handle. We can also check its usage of the
programming language.
Reviews
Review involves a meeting of a group of people, whose intention is to evaluate a software
related item. Reviews are a type of static testing technique that can be used to evaluate the
quality of software artifacts such as requirements document, a test plan, a design document
or a code component.
The composition of a review group may consist of managers, clients, developers, testers and
other personnel, depending on the type of artifact under review.
Review objectives
The general goals for the reviewers are to
- Identify problem components in the software artifact that need improvement.
Other review goals are informational, communicational and educational, whereby review
participants learn about the contents of the developing software artifacts, to help them
understand the role of their own work and to plan for the future stages of development.
Benefits of reviewing
Reviews often represent project milestones and support the establishment of a baseline for a
software artifact. Review data can also have an influence on test planning which can help test
planners, select effective classes of test and may also have an influence on testing goals. In
some cases, clients attend the review meetings and give feedback to development team so
reviews are also a means for client communication. To summarize, the benefits of review
programs are
Reviews are characterized by their less formal nature compared to inspections. Reviews are
more like a peer group discussion, with no specific focus on defect identification, data
collection and analysis. Reviews do not require elaborate preparation.
Walkthroughs
Walkthroughs are a type of technical review where the producer of the review material
serves as the review leader and actually guides the progress of the review. They are
normally applied to the following documents.
In the case of detailed design and code walkthroughs, test inputs may be selected and review
participants walk through the design or code with the set of inputs in a line-by-line manner.
The reader can compare this process to manual execution of the code. If the presenter gives
a skilled presentation of the material, the walkthrough participants are able to build a
comprehensive mental model of the detailed design or code and are able to evaluate its
quality and detect defects.
The primary intent of a walkthrough is to familiarize a team with a work product.
Walkthroughs are useful when one wants to impart a complex work product to a team. For
example, a walkthrough for a complex design document can be done by the lead designer to
the coding team.
- One can eliminate the checklist and preparation step for a walkthrough.
Benefits of Inspection
The benefits of inspection can be classified as direct and indirect benefits, which are
discussed below:
Direct benefits:
- By using various inspection techniques we can improve the development productivity by
25% to 30%.
- We can reduce the testing cost and in turn the time required for testing by 50% to
90%.
Indirect benefits:
The indirect benefits include the following:
Management benefits:
- The software inspection process gives visibility to the relevant facts and figures about
their software engineering environment earlier in the software life cycle.
Deadline benefits:
- Inspection will give early danger signals so that one can act appropriately and reduce the
impact to the deadline of the project.
- Also the software professionals can expect to live under less intense deadline pressure.
A unit test is a level of testing where a single component or unit is tested. Unit testing is
also called module testing. A unit is the smallest possible testable software component, which
can be a function or a group of functions, a class or collection of classes, a screen or collection
of screens, or a report. It can be characterized in several ways. For example, a unit in a
typical procedure-oriented software system:
Developers perform unit test in many cases soon after the module is completed. Some
developers also perform an informal review of the unit.
The module interface is tested to ensure that information properly flows into and out of the
program unit under test. Tests of data flow across the module interface are required
before any other test is initiated because, if data does not enter and exit properly, all other
tests are debatable.
The local data structures (groups of related data items) are examined to ensure that data
stored temporarily maintains its integrity during all steps in an algorithm's execution.
These include handling data types and boundary conditions, ensuring that data types like
characters, integers, etc. are properly checked along with boundary value checks.
The exceptions that arise during the running of the program need to be anticipated and
handled so that the program recovers and continues to function properly. An example is
when a network disconnect occurs: the program should inform the user of the disconnection
and allow him to restart his activities when the network is restored.
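For example (a hypothetical Python sketch with an invented parsing routine), a unit test can check boundary values and confirm that anticipated error conditions are raised and handled cleanly rather than crashing the program:

    def read_percentage(text):
        # Parse a percentage in the range 0..100; raise ValueError otherwise.
        value = int(text)                  # raises ValueError for non-numeric input
        if not 0 <= value <= 100:
            raise ValueError(f"{value} is outside 0..100")
        return value

    # Boundary values pass; out-of-range and malformed inputs raise cleanly.
    assert read_percentage("0") == 0
    assert read_percentage("100") == 100
    for bad in ("101", "-1", "abc"):
        try:
            read_percentage(bad)
        except ValueError as exc:
            print(f"handled as expected: {exc}")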
These errors include reading or writing beyond array bounds and not releasing allocated
memory (results in resource leaks during running).
These include all errors related to displaying and formatting data. For example, a name getting
truncated and displayed incorrectly on the screen, or using a positive number format for
negative values while using string-formatted output functions like the 'printf' function in the C
language.
Independent paths
Unit tests need to unearth non-reachable code segments. All independent paths through the
control structure need to be exercised to ensure that all statements in the module are
executed at least once.
Among the more common errors that are detected using unit tests, some are listed below.
- There are mixed mode operations that can give rise to errors in the program.
Selective testing of execution paths is an essential task during unit tests. Test cases
should be designed to uncover erroneous computations, incorrect comparisons and improper
control flow. Path and loop testing are effective techniques for uncovering a broad array of
path errors. They should also uncover potential defects in
We must be sure to design tests that execute every error handling path; if we do not, a
path may fail when it is invoked. Good design dictates that error conditions be anticipated
and that error handling paths be set up to reroute or cleanly terminate processing when an
error occurs.
Among the potential errors that should be tested when error handling is evaluated are:
- When the error noted in the unit code does not correspond to the error encountered.
- When an error description does not provide enough information to assist in the location
of the cause of the error.
[Figure: the unit test environment - a driver supplies test cases to the module to be tested,
stubs replace its subordinate modules, and results are collected. The tests exercise the
module's interface, local data structures, boundary conditions, independent paths and error
handling paths.]
The unit test environment is illustrated in the figure above. Since the component is not a stand-
alone program, driver and/or stub software must be developed for each unit test. In most
applications the driver is nothing other than a "main program" that accepts test case data,
passes such data to the component to be tested, and prints the relevant results. Stubs serve
to replace modules that are subordinate to (called by) the component under test. A stub or
"dummy subprogram" uses the subordinate module's interface, may do minimal data
manipulation, prints verification of entry, and returns control to the module under test.
Drivers and stubs represent overhead, as they are not part of the final deliverable. A judicious
decision must be taken on whether to write many stubs and drivers or to postpone some of
the testing to the integration phase.
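A minimal sketch of this arrangement in hypothetical Python (all names invented): the driver feeds test data to the module under test, while a stub stands in for a subordinate module that is not yet available:

    def fetch_tax_rate_stub(region):
        # Stub: replaces the real subordinate module. It prints verification
        # of entry and returns a fixed value so the unit above can be tested.
        print(f"stub called with region={region!r}")
        return 0.10

    def compute_total(price, region, fetch_tax_rate):
        # Module under test; its subordinate is passed in so a stub can replace it.
        return price * (1 + fetch_tax_rate(region))

    def driver():
        # Driver: a "main program" that supplies test data and prints results.
        for price, expected in [(100.0, 110.0), (0.0, 0.0)]:
            actual = compute_total(price, "EU", fetch_tax_rate_stub)
            print("PASS" if abs(actual - expected) < 1e-9 else "FAIL", actual)

    driver()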
Unit testing is simplified when a component with high cohesion is designed. When only one
function is addressed by a component, the number of test cases is reduced and errors can be
more easily predicted and uncovered.
Coverage analyzers
These are tools that help in coverage analysis of the software work product. Some of
them are: JCover, PureCoverage, and McCabe Toolset
Test framework
Test frameworks provide one with a standard set of functionality that will take care of
common tasks specific to a domain, so that the tester can quickly develop application
specific test cases using facilities provided by the framework. Frameworks contain
specialized APIs, services, and tools. This will allow the tester to develop test tools
without having to know the underlying technologies, thereby saving time. Some of
the test frameworks available are JVerify, JUnit, CppUnit, Rational Realtime, Cantata,
and JTest.
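As a representative of the xUnit family that JUnit and CppUnit belong to, here is a minimal sketch using Python's standard unittest framework (the function under test is invented for illustration):

    import unittest

    def classify(score):
        return "pass" if score >= 50 else "fail"

    class ClassifyTests(unittest.TestCase):
        def test_boundary_is_a_pass(self):
            self.assertEqual(classify(50), "pass")

        def test_below_boundary_fails(self):
            self.assertEqual(classify(49), "fail")

    if __name__ == "__main__":
        unittest.main()     # the framework discovers and runs the tests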
Static Analyzers
Static analyzers carry out analysis of the program without executing it. Static analysis
can be used to find the quality of the software by doing objective measurements on
attributes such as cyclomatic complexity, nesting levels, etc. Some of the commonly
used static analyzers are JStyle, QualityAnalyzer, and JTest.
Big Bang Approach
[Figure: a program structure with modules M1 through M8 arranged in a hierarchy.]
In this approach, all the modules are put together and combined into an integrated system at
once, and then tested as a whole. Here all modules are combined in advance and the entire
program is tested. The result is usually chaos, because a large set of errors is encountered.
Correction is difficult because isolation of the causes is complicated by the vast expanse of the
program. Once these errors are corrected, new defects pop up, and this becomes a
seemingly endless loop. To solve this problem we come up with another strategy,
which is called the incremental approach.
Incremental Approach
As the name suggests, here we assemble the different units one by one in a systematic,
incremental manner. Each module is tested before it is integrated with another unit, and this
continues until all the units are integrated into one system. We can classify
the incremental approach as:
Top down approach
Bottom up approach
Sandwich approach
Top-Down Integration
[Figure: top-down integration of modules M1 through M8.]
When integrating from the top down, one can:
1. Delay many tests until stubs are replaced with actual modules.
2. Develop stubs that perform limited functions that simulate the actual module.
This approach of developing stubs to perform limited functions is workable, but can lead to
significant overhead as stubs become more and more complex.
3. Integrate the software from the bottom of the hierarchy upward.
This approach, called the bottom-up approach, is discussed in the next section.
Bottom-Up Integration
As its name implies, in bottom-up integration, testing begins with the construction and
testing of atomic modules, i.e., components at the lower levels of the program structure.
Since components are integrated from the bottom up, the processing required for the
components subordinate to a given level is always available and the need for stubs is
eliminated.
Bottom-up integration is a technique in integration testing where modules are added to the
growing subsystem starting from the bottom or lower level. Bottom-up integration of modules
begins with testing the lowest-level modules. These modules do not call other modules.
Drivers are needed to test these modules. The next step is to integrate modules on the next
upper level of the structure. In the process of bottom-up integration, after a module has
been tested, its driver is replaced by an actual module (the next one to be integrated). This
next module to be integrated may also need a driver, and this will be the case until we reach
the highest level of the structure. Bottom-up integration follows the pattern illustrated in the
figure below:
[Figure: bottom-up integration - low-level modules are combined into clusters (Cluster 1,
Cluster 2 and Cluster 3), drivers D1, D2 and D3 exercise the clusters, and the clusters are then
integrated upward under the higher-level modules.]
Sandwich Integration
It is a strategy where both the top-down and bottom-up integration strategies are used to
integration test a program. This is a combined approach, which is useful when the program
structure is very complex and frequent use of drivers and stubs becomes unavoidable.
By using the sandwich approach on some portions of the code, we can eliminate the excessive
use of drivers and stubs, thereby simplifying the integration testing process. It is an approach
that uses the top-down approach for modules in the upper levels of the hierarchy and the
bottom-up approach for the lower levels.
[Figure: sandwich integration of modules M1 through M8.]
In the figure, the bottom-up approach is applied to modules M1, M2 and M3, while the top-
down approach is applied to M1, M4 and M8. The sandwich model has been widely used
since it uses both the top-down and bottom-up approaches.
Regression Testing
Regression tests are run every time a change is made to the software, so that we can verify
that the change has not broken or altered any other part of the software program.
Regression testing is an important strategy for reducing side effects.
Each time a new module is added as part of integration testing, the software changes. New
data flow paths are established, new I/O may occur and new control logic is invoked. These
changes may cause problems with functions that previously worked flawlessly. In the context
of an integration test strategy, the regression testing is the re-execution of some subset of
tests that have already been conducted to ensure that changes have not propagated
unintended side effects.
In a broader context, successful tests result in the discovery of errors, and errors must be
corrected. Whenever software is corrected, some aspect of the software configuration (the
program, its documentation, etc.) is changed. Regression testing is the activity that helps to
ensure that changes do not introduce unintended behavior or additional errors.
As integration testing proceeds, the number of regression tests can grow quite large.
Regression testing may be conducted manually or by using automated tools. It is impractical
and inefficient to re-execute every test for every program function once a change has occurred.
Therefore, the regression test suite should be designed to include only those tests that address
one or more classes of errors in each of the major program functions.
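One lightweight way to keep such a suite focused (a hypothetical Python sketch, not a prescribed tool or process) is to tag each test with the program functions it covers and re-run only the tests whose tags intersect the areas touched by a change:

    # Each test is tagged with the program functions (areas) it covers.
    tests = {
        "test_login_ok":      {"auth"},
        "test_login_lockout": {"auth", "security"},
        "test_report_totals": {"reporting"},
    }

    changed_areas = {"auth"}               # areas affected by the latest change

    # Select only the tests whose tags intersect the changed areas.
    regression_suite = [name for name, tags in tests.items()
                        if tags & changed_areas]
    print("re-run:", regression_suite)     # ['test_login_ok', 'test_login_lockout']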
Smoke Testing
Smoke testing is an integration testing strategy that is commonly used when "shrink-
wrapped" software products are being developed. It is designed as a pacing mechanism for
time-critical projects, allowing the software team to assess its projects on a frequent basis.
The smoke testing approach encompasses the following activities:
Software components that have been translated into code are integrated into a
"build." A build includes all data files, libraries, reusable modules and engineered
components that are required to implement one or more product components.
A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover "show stopper" errors that
have the highest likelihood of throwing the software project behind schedule.
The build is integrated with other builds and the entire product in its current form is
smoke tested daily. The integration approach may be top down or bottom up.
Treat the daily build as the heartbeat of the project. If there is no heartbeat then the project
is dead. The smoke test should exercise the entire system from end to end. It does not have
to be exhaustive but it should be capable of exposing major problems. The smoke test should
be thorough enough that if the build passes, you can assume that it is stable enough to be
tested more thoroughly.
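A daily smoke run can be as simple as a script that walks through a handful of end-to-end checks and reports the first "show stopper" it meets. The sketch below is hypothetical Python; the placeholder checks would call into the real build:

    def check_startup():                     # placeholders for real end-to-end
        return True                          # checks against the daily build
    def check_login():
        return True
    def check_core_flow():
        return True

    SMOKE_CHECKS = [check_startup, check_login, check_core_flow]

    def smoke_test():
        for check in SMOKE_CHECKS:
            if not check():
                print(f"SHOW STOPPER: {check.__name__} failed - build rejected")
                return False
        print("smoke test passed: build is stable enough for further testing")
        return True

    smoke_test()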
During integration testing it becomes important that, for every integrated module, the
necessary validation be carried out so that faults or defects are not injected and carried over
to other parts of the program. While integrating, we need to check several areas where issues
are likely to arise.
As integration testing is conducted, tester should identify issues in the following areas:
Critical modules:
Critical modules are those that:
Address several software requirements
Have a high level of control (reside relatively high in the program structure)
Are complex or error prone (cyclomatic complexity can be used as an indicator)
Have definite performance requirements
Use critical system resources like CPU, memory, I/O devices, etc.
Critical modules should be tested as early as possible. In addition regression tests should
focus on critical module function.
Interface integrity:
Internal and external interfaces are tested as each module or cluster is incorporated into the
structure.
Functional validity:
Tests designed to uncover functional errors associated with the system are conducted to
track the faults.
Information content:
Tests designed to uncover errors associated with local or global data structures are
conducted.
Non-functional issues:
Performance - Tests designed to verify performance bounds established during
software design are conducted. Performance issues such as time requirements for a
transaction should also be subjected to tests.
Resource hogging issues - When modules are integrated, one module may hog system
resources such as CPU, memory or I/O devices, starving the other modules; tests should
check for such contention.
Integration tests must be planned. Planning can begin when high-level design is complete, so
that the system architecture is defined. Other documents relevant to integration test planning
are the requirements document, the user manual, and usage scenarios. These documents contain
structure charts, state charts, data dictionaries, cross-reference tables, module interface
descriptions, data flow descriptions, messages and event descriptions, all necessary to plan
integration tests.
The strategy for integration should be defined. For a procedural-oriented system, the order of
integration of the units should be defined. This depends on the strategy selected.
Consider the fact that the testing objectives are to assemble components into subsystems
and to demonstrate that the subsystem functions properly with the integration test cases.
For object-oriented systems a working definition of a cluster or similar construct must be
described, and relevant test cases must be specified. In addition, testing resources and
schedules for integration should be included in the test plan.
As stated earlier in the section, one of the integration test goals is to build working subsystems,
and then combine these into the system as a whole. When planning for integration tests, the
planner selects subsystems to build based upon the requirements and user needs. Very often
subsystems selected for integration are prioritized. Those that represent key features, critical
features, and/or user-oriented functions may be given the highest priority. Developers may
want to show the clients that certain key subsystems have been assembled and are minimally
functional.
Exploratory Testing
The plainest definition of exploratory testing is test design and test execution at the same
time. This is the opposite of scripted testing (predefined test procedures, whether manual or
automated). Exploratory tests, unlike scripted tests, are not defined in advance and carried
out precisely according to plan. This may sound like a straightforward distinction, but in
practice it's murky. That's because "defined" is a spectrum. Even an otherwise elaborately
defined test procedure will leave many interesting details (such as how quickly to type on the
keyboard, or what kinds of behavior to recognize as a failure) to the discretion of the tester.
Likewise, even a free-form exploratory test session will involve tacit constraints or mandates
about what parts of the product to test, or what strategies to use. A good exploratory tester
will write down test ideas and use them in later test cycles. Such notes sometimes look a lot
like test scripts, even if they aren't. Exploratory testing is sometimes confused with "ad hoc"
testing. Ad hoc testing normally refers to a process of improvised, impromptu bug searching.
To the extent that the next test we do is influenced by the result of the last test we did, we
are doing exploratory testing. We become more exploratory when we can't tell what tests
should be run, in advance of the test cycle, or when we haven't yet had the opportunity to
create those tests. If we are running scripted tests, and new information comes to light that
suggests a better test strategy, we may switch to an exploratory mode (as in the case of
discovering a new failure that requires investigation). Conversely, we take a more scripted
approach when there is little uncertainty about how we want to test, new tests are relatively
unimportant, the need for efficiency and reliability in executing those tests is worth the effort
of scripting, and when we are prepared to pay the cost of documenting and maintaining
tests. The results of exploratory testing aren't necessarily radically different from those of
scripted testing, and the two approaches to testing are fully compatible.
Recurring themes in the management of an effective exploratory test cycle are the tester, the
test strategy, test reporting, and the test mission. The scripted approach to testing attempts to
mechanize the test process by taking test ideas out of a test designer's head and putting
them on paper. There's a lot of value in that way of testing. But exploratory testers take the
view that writing down test scripts and following them tends to disrupt the intellectual
processes that make testers able to find important problems quickly. The more we can make
testing intellectually rich and fluid, the more likely we will hit upon the right tests at the right
time. That's where the power of exploratory testing comes in: the richness of this process is
only limited by the breadth and depth of our imagination and our emerging insights into the
nature of the product under test.
Scripting has its place. We can imagine testing situations where efficiency and repeatability
are so important that we should script or automate them. For example, in the case where a
test platform is only intermittently available, such as a client-server project where there are
only a few configured servers available and they must be shared by testing and development.
The logistics of such a situation may dictate that we script tests carefully in advance to get
the most out of every second of limited test execution time. Exploratory testing is especially
useful in complex testing situations, when little is known about the product, or as part of
preparing a set of scripted tests. The basic rule is this: exploratory testing is called for any
time the next test you should perform is not obvious, or when you want to go beyond the
obvious.
The test process approach that will be followed is outlined below.
a. Organize Project involves creating a System Test Plan, Schedule & Test Approach, and
requesting/assigning resources.
b. Design/Build System Test involves identifying Test Cycles, Test Cases, Entrance & Exit
Criteria, Expected Results, etc. In general, test conditions/expected results will be
identified by the Test Team in conjunction with the Project Business Analyst or Business
Expert. The Test Team will then identify Test Cases and the data required. The test
conditions are derived from the Business Design and the Transaction Requirements
documents.
c. Design/Build Test Procedures includes setting up procedures such as Error
Management systems and Status reporting, and setting up the data tables for the
Automated Testing Tool.
d. Build Test Environment includes requesting/building hardware, software and data set-
ups.
e. Execute Project Integration Test
f. Execute Operations Acceptance Test
g. Signoff - Signoff happens when all pre-defined exit criteria have been achieved.
Test Phases
[Diagram: the test phases in sequence: Test Planning, Test Design, Test Execution, Test
Recording.]
Test Planning
A tactical test plan must be developed to describe when and how testing will occur. The test
plan should provide background information on the software being tested, on the test
objectives and risks, as well as on the business functions to be tested and the specific tests to
be performed.
Entrance Criteria
Hardware has been acquired and installed
Software has been acquired and installed
Test cases and test data have been identified and are available
Updated test cases
Complete functional testing database
Automated test cases (scripts) for regression testing
The Software Requirements Specification (SRS) and Test Plan have been signed off
Technical specifications for the system are available
Acceptance tests have been completed, with a pass rate of not less than 80%
Resumption Criteria
In the event that system testing is suspended, resumption criteria will be specified, and
testing will not re-commence until the software meets these criteria.
Test Design
This involves designing test conditions and test cases, often with the help of the automated
tools available on the market today. You produce a document that describes the tests you
will carry out. It is important to determine the expected results prior to test execution.
Test Execution
This involves actually running the specified tests on a computer system, either manually or by
using an automated test tool such as WinRunner, Rational Robot, or SilkTest.
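The tools named above are commercial capture/replay products. As a rough sketch of the same idea with a freely available present-day tool (Selenium WebDriver, our choice of example, exercising an invented login page):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()   # assumes a local Chrome installation
try:
    driver.get("https://example.com/login")   # hypothetical application page
    driver.find_element(By.NAME, "username").send_keys("tester")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Expected result, determined before execution (see Test Design above).
    assert "Welcome" in driver.page_source
finally:
    driver.quit()   # always release the browser, even on failure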
Test Recording
This involves keeping good records of the test activities that you have carried out. The
versions of the software you have tested and of the test design are recorded, along with the
actual results of each test.
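A minimal sketch of such a record, appended as one CSV row per test execution; the field names and file name are our own:

import csv
import datetime

def record_result(path, test_id, software_version, expected, actual):
    # Append one test-execution record; the verdict is derived, not hand-typed.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            test_id,
            software_version,
            expected,
            actual,
            "PASS" if expected == actual else "FAIL",
        ])

record_result("test_log.csv", "TC-042", "2.1.0", "250 rows loaded", "250 rows loaded")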
Exit Criteria
The system has the ability to recover gracefully after failure.
All test cases have been executed successfully.
The system meets all of the requirements described in the Software Requirements
Specification.
All severity-one and severity-two bugs found during QA testing have been resolved.
At least all high-exposure minor and insignificant bugs have been fixed, and a
resolution has been identified, with a plan in place to address the remaining bugs.
100% of test cases have been executed at least once.
All high-risk test cases (high-risk functions) have been successfully executed.
The system can be successfully installed in the pre-live and production environments
and has the ability to recover gracefully from installation problems.
The system must be successfully configured and administered from the GUI.
The system must co-exist with other production applications software.
The system must successfully migrate from the prior version.
The system must meet all stated security requirements. The system and data
resources must be protected against accidental and/or intentional modifications or
misuse.
Should errors/bugs be encountered, fixes will be applied and included in a scheduled
release cycle dependent upon the priority level of the error. This process will continue
until an acceptable level of stability and test coverage is achieved.
The system demonstrates maintainability.
A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way to
think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and 'how'
of product validation. It should be thorough enough to be useful but not so thorough that no
one outside the test group will read it. The following are some of the items that might be
included in a test plan, depending on the particular project:
Some type of unique company-generated number to identify this test plan, its level and the
level of software that it is related to. Preferably the test plan level will be the same as the
related software level. The number may also identify whether the test plan is a Master plan, a
Level plan, an integration plan or whichever plan level it represents. This is to assist in
coordinating software and test ware versions within configuration management.
1. References
2. Introduction
3. Test Items
4. Software Risk Issues
5. Features to be Tested
6. Features not to be Tested
7. Approach
8. Item Pass/Fail Criteria
9. Suspension Criteria and Resumption Requirements
10. Test Deliverables
11. Remaining Test Tasks
12. Environmental Needs
13. Staffing and Training Needs
14. Responsibilities
15. Schedule
16. Planning Risks and Contingencies
17. Approvals
18. Glossary
List all documents that support this test plan. Refer to the actual version/release number of
the document as stored in the configuration management system. Do not duplicate the text
from other documents as this will reduce the viability of this document and increase the
maintenance effort. Documents that can be referenced include:
Project Plan
Requirements specifications
High Level design document
Detail design document
Development and Test process standards
Methodology guidelines and examples
Corporate standards and guidelines
Introduction
State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is
essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that contain
information relevant to this project/process. If preferable, you can create a references section
to contain all reference documents.
Identify the Scope of the plan in relation to the Software Project plan that it relates to. Other
items may include, resource and budget constraints, scope of the testing effort, how testing
relates to other evaluation activities (Analysis & Reviews), and possibly the process to be
used for change control and communication and coordination of key activities.
As this is the "Executive Summary" keep information brief and to the point.
These are the items you intend to test within the scope of this test plan: essentially, a list of
what is to be tested. This can be developed from the software application inventories as well
as from other sources of documentation and information.
This can be controlled through a local Configuration Management (CM) process if you have one.
This information includes version numbers, configuration requirements where needed,
(especially if multiple versions of the product are supported). It may also include key delivery
schedule issues for critical elements.
This section can be oriented to the level of the test plan. For higher levels it may be by
application or functional area, for lower levels it may be by program, unit, module or build.
Identify what software is to be tested and what the critical areas are, such as:
1. Safety
2. Multiple interfaces
3. Impacts on the client
4. Government regulations and rules
There are also inherent software risks, such as complexity; these need to be identified.
Another key area of risk is a misunderstanding of the original requirements. This can occur at
the management, user and developer levels. Be aware of vague or unclear requirements and
requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify potential
areas within the software that are risky. If the unit testing discovered a large number of
defects or a tendency towards defects in a particular area of the software, this is an
indication of potential future problems. It is the nature of defects to cluster and clump
together. If it was defect ridden earlier, it will most likely continue to be defect prone.
One good approach to define where the risks are is to have several brainstorming sessions.
Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system does.
This is not a technical description of the software, but a USERS view of the functions.
Features not to be tested
This is a listing of what is not to be tested, from both the user's viewpoint of what the system
does and from a configuration management/version control view. This is not a technical
description of the software, but a user's view of the functions.
Identify why each feature is not to be tested; there can be any number of reasons.
Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the level of the
plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels
of plans. Overall rules and processes should be identified.
If this is a master test plan the overall project testing approach and coverage requirements
must also be identified.
MTBF, Mean Time Between Failures - if this is a valid measurement for the test
involved and if the data is available (a computation sketch follows this list).
SRE, Software Reliability Engineering - if this methodology is in use and if the
information is available.
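MTBF itself is a simple ratio: total operating time divided by the number of failures observed. A worked sketch with invented figures:

def mtbf(total_operating_hours, failure_count):
    # Mean Time Between Failures = operating time / number of failures.
    return total_operating_hours / failure_count

# Example: 1200 hours of test operation during which 8 failures were observed.
print(mtbf(1200, 8))   # 150.0 hours between failures, on average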
What are the Completion criteria for this plan? This is a critical aspect of any test plan and
should be appropriate to the level of the plan.
This could be an individual test case level criterion or a unit level plan or it can be general
functional requirements for higher level plans.
Is it possible to compare this to the total number of defects? This may be impossible,
as some defects are never detected.
o A defect is something that may cause a failure, and may be acceptable to
leave in the application.
o A failure is the result of a defect as seen by the User, the system crashes,
etc.
Specify what constitutes stoppage for a test or series of tests and what is the acceptable level
of defects that will allow the testing to proceed past the defects.
Testing after a truly fatal error will generate conditions that may be identified as defects but
are in fact ghost errors caused by the earlier defects that were ignored.
Test deliverables
One thing that is not a test deliverable is the software itself; that is listed under test items
and is delivered by development.
Remaining test tasks
If the project is being developed as a multi-party process, this plan may only cover a portion
of the total functions/features. This status needs to be identified so that those other areas
have plans developed for them, and to avoid wasting resources tracking defects that do not
relate to this plan.
When a third party is developing the software, this section may contain descriptions of those
test tasks belonging to both the internal groups and the external groups.
Environmental needs
Are there any special requirements for this test plan, such as special hardware, software, or
facilities?
Staffing and training needs
Identify what is to be tested and who is responsible for the testing and training.
Responsibilities
Who is in charge?
This issue includes all areas of the plan. Here are some examples:
Setting risks.
Selecting features to be tested and not tested.
Setting overall strategy for this level of plan.
Ensuring all required elements are in place for testing.
Providing for resolution of scheduling conflicts, especially if testing is done on the
production system.
Who provides the required training?
Who makes the critical go/no go decisions for items not covered in the test plans?
Schedule
A schedule should be based on realistic and validated estimates. If the estimates for the
development of the application are inaccurate, the entire project plan will slip, and testing, as
part of the overall project plan, will slip with it.
As we all know, the first area of a project plan to get cut when it comes to crunch
time at the end of a project is the testing. It usually comes down to a decision
between slipping the delivery date and cutting back the testing.
How slippage in the schedule is to be handled should also be addressed here.
If the users know in advance that a slippage in the development will cause a slippage
in the test and the overall delivery of the system, they just may be a little more
tolerant, if they know it’s in their interest to get a better tested application.
By spelling out the effects here you have a chance to discuss them in advance of
their actual occurrence. You may even get the users to agree to a few defects in
advance, if the schedule slips.
At this point, all relevant milestones should be identified with their relationship to the
development process identified. This will also help in identifying and tracking potential
slippage in the schedule caused by the test process.
It is always best to tie all test dates directly to their related development activity dates. This
prevents the test team from being perceived as the cause of a delay. For example, if system
testing is to begin after delivery of the final build, then system testing begins the day after
delivery. If the delivery is late, system testing starts from the day of delivery, not on a
specific date. This is called dependent or relative dating.
What are the overall risks to the project with an emphasis on the testing process?
Requirements definition will be complete by January 1, 20XX, and, if the requirements change
after that date, the following actions will be taken:
The test schedule and development schedule will move out an appropriate number of
days. This rarely occurs, as most projects tend to have fixed delivery dates.
The number of tests performed will be reduced.
The number of acceptable defects will be increased.
Management is usually reluctant to accept scenarios such as the one above even though they
have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is that
testing is cut back or omitted completely, neither of which should be an acceptable option.
Approvals
Who can approve the process as complete and allow the project to proceed to the next level
(depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
The audience for a unit test level plan is different than that of an integration, system
or master level plan.
The levels and type of knowledge at the various levels will be different as well.
Programmers are very technical but may not have a clear understanding of the
overall business process driving the project.
Users may have varying levels of business acumen and little technical skill.
Always be wary of users who claim high levels of technical skill, and of programmers
who claim to fully understand the business process. Such individuals can
cause more harm than good if they do not have the skills they believe they possess.
Glossary
Used to define terms and acronyms used in the document, and testing in general, to
eliminate confusion and promote consistent communications.
The bug needs to be communicated and assigned to developers who can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere. If a
problem-tracking system is in place, it should encapsulate these processes. A variety of
commercial problem-tracking/management software tools are available.
A bug report should give complete information, such that developers can understand the bug,
get an idea of its severity, and reproduce it if necessary (a sketch of such a record follows the
list below):
Bug identifier (number, ID, etc.)
Current bug status (e.g., 'Released for Retest', 'new', etc.)
The application name or identifier and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics
Test case name/number/identifier
One-line bug description
Full bug description
Description of steps needed to reproduce the bug if not covered by a test case or if
the developer doesn't have easy access to the test case/test script/test tool
Names and/or descriptions of file/data/messages/etc. used in test
File excerpts/error messages/log file excerpts/screen shots/test tool logs that would
be helpful in finding the cause of the problem
Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
Was the bug reproducible?
Tester name
Test date
Bug reporting date
Name of developer/group/organization the problem is assigned to
Description of problem cause
Description of fix
Code section/file/module/class/method that was fixed
Date of fix
Application version that contains the fix
Tester responsible for retest
Retest date
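A sketch of how such a report might be captured as a structured record; the field names below are illustrative, not the schema of any particular problem-tracking tool:

from dataclasses import dataclass

@dataclass
class BugReport:
    bug_id: str          # bug identifier
    status: str          # e.g. "new", "released for retest"
    application: str     # application name or identifier
    version: str
    summary: str         # one-line bug description
    description: str     # full description, including steps to reproduce
    severity: int        # e.g. 1 (critical) to 5 (low)
    reproducible: bool
    tester: str
    environment: str = ""
    assigned_to: str = ""
    fix_description: str = ""

report = BugReport(
    bug_id="BUG-1042", status="new", application="OrderEntry", version="2.1.0",
    summary="Total mis-rounds on discounted orders",
    description="1. Add an item at $9.99. 2. Apply a 15% discount. 3. Check total.",
    severity=2, reproducible=True, tester="J. Smith",
)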
Configuration management covers the processes used to control, coordinate, and track: code,
requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, changes made to them, and who makes the changes.
Deciding when to stop testing can be difficult. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing can never be
done. Common factors in deciding when to stop are:
Deadlines are reached (release deadlines, testing deadlines, etc.)
Test cases are completed with a certain percentage passed
The test budget is depleted
Coverage of code, functionality, or requirements reaches a specified point
The bug rate falls below a certain level
The beta or alpha testing period ends
Scope of Automation
Software must be tested to have confidence that it will work as it should in its intended
environment. Software testing needs to be effective at finding any defects which are there,
but it should also be efficient, performing the tests as quickly as possible.
Automating software testing can significantly reduce the effort required for adequate testing
or significantly increase the testing that can be done in limited time. Tests can be run in
minutes that would take hours to run manually. Savings as high as 80% of manual testing
effort have been achieved using automation.
At first glance, it seems that automating testing is an easy task: just buy one of the popular
test execution tools, record the manual tests, and play them back whenever you want.
Unfortunately, it doesn't work like that in practice. Just as there is more to software
design than knowing a programming language, there is more to automating testing than
knowing a testing tool.
[Diagram: categories of test tools mapped to development stages: requirement and
specification simulator tools; logical and physical design tools; test design tools; test
execution and comparison tools; dynamic analysis tools; and integration, performance, and
acceptance test tools.]
Management tools:
These include tools that assist in test planning, keeping track of what tests have been run,
etc.
Static tools:
They analyze code without executing it.
Coverage tools:
They help in assessing how much of the software under test has been exercised by a set of
tests.
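For example, coverage.py, a widely used coverage tool for Python (our choice of illustration, not a tool this document prescribes), reports which statements a set of tests exercised:

import coverage

def triangle_type(a, b, c):
    # Code under test: classify a triangle by its side lengths.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

cov = coverage.Coverage()
cov.start()
assert triangle_type(2, 2, 2) == "equilateral"   # exercises only one branch
cov.stop()
cov.report(show_missing=True)   # statement coverage per file, with missed lines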
Debugging tools:
These are not strictly testing tools, as debugging is not a part of testing; however, they are
used in testing when trying to isolate a low-level defect. They are of more use to a developer.
Driver
Drivers are tools used to control and operate the software being tested.
Stubs
Stubs are essentially the opposite of drivers and they receive or respond to the data that the
software sends. Stubs are frequently used when software needs to communicate with
external devices.
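A minimal sketch of both ideas together, assuming a hypothetical report module whose output normally goes to a printer:

# Code under test: formats a report and writes it to an output device.
def print_report(lines, device):
    for n, line in enumerate(lines, 1):
        device.write(f"{n:03d} {line}\n")

# Stub: stands in for the printer and, unlike a real device, lets the
# tester inspect exactly what the software sent to it.
class PrinterStub:
    def __init__(self):
        self.received = []
    def write(self, data):
        self.received.append(data)

# Driver: controls and operates the software being tested.
def run_driver():
    stub = PrinterStub()
    print_report(["alpha", "beta"], stub)
    assert stub.received == ["001 alpha\n", "002 beta\n"]

run_driver()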
Simulator
Simulators are used in place of actual systems and behave like the actual system. Simulators
are an excellent means of test automation when the actual system with which the software
interfaces is not available.
Emulator
An emulator is a device that is a plug-in replacement for the real device. A PC acting as a
printer, understanding the printer codes and responding to the software as though it were a
printer, is an emulator. The difference between an emulator and a stub is that the stub also
provides a means for the tester to view and interpret the data sent to it.
Execute
During this phase, the test cases are run.
Oracle
Test oracle is a mechanism to produce the predicted outcomes to compare with the actual
outcomes of the software under test.
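A sketch in which a trusted reference implementation serves as the oracle; here Python's built-in sorted predicts the outcome for a deliberately naive custom sort:

def my_sort(values):
    # The software under test: a simple selection-style sort.
    result = list(values)
    for i in range(len(result)):
        for j in range(i + 1, len(result)):
            if result[j] < result[i]:
                result[i], result[j] = result[j], result[i]
    return result

def test_against_oracle():
    data = [5, 3, 9, 1, 3]
    # The oracle produces the predicted outcome to compare with the actual one.
    assert my_sort(data) == sorted(data)

test_against_oracle()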
Test data
Data that exists (for example, in a database) before a test is executed and that is affected by
the software under test.
Cleanup
This task is done after a test or set of tests has finished or stopped, in order to leave the
system in a clean state for the next test or set of tests. It is particularly important where a
test has failed to complete.
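In pytest, for example, cleanup is commonly written as the code after a fixture's yield; it runs whether the test passed, failed, or stopped partway (a sketch; the in-memory database scenario is invented):

import sqlite3
import pytest

@pytest.fixture
def scratch_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE results (id INTEGER, value TEXT)")
    yield conn
    # Cleanup: runs even if the test failed, leaving a clean state.
    conn.close()

def test_insert(scratch_db):
    scratch_db.execute("INSERT INTO results VALUES (1, 'ok')")
    count = scratch_db.execute("SELECT COUNT(*) FROM results").fetchone()[0]
    assert count == 1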
Test automation can enable some testing tasks to be performed far more efficiently than
could ever be done by testing manually. Some of the benefits are included below.
Increased confidence
Knowing that an extensive set of automated tests has run successfully, there can be greater
confidence when the system is released, provided that the tests being run are good tests.
Some writers believe that test automation is so expensive relative to its value that it should
be used sparingly. Others, such as advocates of agile development, recommend automating
100% of all tests. A challenge with automation is that automated testing requires automated
test oracles (an oracle is a mechanism or principle by which a problem in the software can be
recognized). Such tools have value in load testing software (by signing on to an application
with hundreds or thousands of instances simultaneously), or in checking for intermittent
errors in software. The success of automated software testing depends on complete and
comprehensive test planning. Software development strategies such as test-driven
development are highly compatible with the idea of devoting a large part of an organization's
testing resources to automated testing. Many large software organizations perform
automated testing. Some have developed their own automated testing environments
specifically for internal development, and not for resale. The debate is still on.
Acceptance test: Formal tests conducted to determine whether or not a system satisfies its
acceptance criteria and to enable the customer to determine whether or not to accept a
system. This particular kind of testing is performed with the STW/Regression suite of tools.
Back-to-back testing: For software subject to parallel implementation, back-to-back
testing is the execution of a test on the similar implementations and comparing the results.
Basis paths: The set of non-iterative paths.
Black Box testing: A test method where the tester views the program as a black box, that
is the test is completely unconcerned about the internal behavior and structure of the
program. Rather the tester is only interested in finding circumstances in which the program
does not behave according to its specifications. Test data are derived solely from the
specifications without taking advantage of knowledge of the internal structure of the
program. Black-box testing is performed with the STW/Regression suite of tools.
Bottom-up testing: Testing starts with the lower-level units. Driver units must be created for
units not yet completed, each time a new higher-level unit is added to those already tested.
A set of units may be added to the software system at a time, and for enhancements the
software system may be complete before the bottom-up tests start. The test plan must
reflect the approach, though. The STW/Coverage suite of tools supports this type of testing.
Built-in testing: Any hardware or software device which is part of an equipment, subsystem,
or system and which is used for the purpose of testing that equipment, subsystem, or system.
Byte mask: A differencing mask used by EXDIFF that specifies to disregard differences
based on byte counts.
C0 coverage: The number of statements exercised in a module, divided by the total number
of statements present in the module, expressed as a percentage. For example, if tests
exercise 85 of a module's 100 statements, C0 coverage is 85%.
C1 coverage: The number of logical branches exercised in a test, divided by the total number
of logical branches known in the program, expressed as a percentage.
Call graph: The function call tree capability of S-TCAT. This utility shows the caller-callee
relationships of a program. It helps the user determine which function calls need to be
tested further.
Call pair: A connection between two functions in which one function "calls" (references)
another function.
Complexity: A relative measurement of the "degree of internal complexity" of a software
system, expressed possibly in terms of some algorithmic complexity measure.
Component: A part of a software system smaller than the entire system but larger than an
element.
45. What is your role in your current organization if you are a QA Engineer?
A: The QA Engineer's function is to use the system much like real users would, find all the
bugs, find ways to replicate the bugs, submit bug reports to the developers, and to provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
49. Which of these roles are the best and most popular?
A: In testing, Tester roles tend to be the most popular. The less popular roles include the
roles of System Administrator, Test/QA Team Lead, and Test/QA Managers.
87. What is the difference between system testing and integration testing?
A: System testing is high-level testing, and integration testing is lower-level testing.
Integration testing is completed first, not the system testing. In other words, upon
completion of integration testing, system testing is started, and not vice versa. For integration
testing, test cases are developed with the express purpose of exercising the interfaces
between the components. For system testing, on the other hand, the complete system is
configured in a controlled environment, and test cases are developed to simulate real life
scenarios that occur in a simulated real-life test environment. The purpose of integration
testing is to ensure that distinct components of the application still work in accordance with
customer requirements. The purpose of system testing, on the other hand, is to validate an
application's accuracy and completeness in performing the functions as designed, and to test
all functions of the system that are required in real life.
96. What black box testing types can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal software
design or code; it is based on requirements and functionality. Functional testing, system
testing, acceptance testing, closed box testing, and integration testing are all black box types
of testing.
100. What is the difference between software bug and software defect?
A: A 'software bug' is a nonspecific term that means an inexplicable defect, error, flaw,
mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g.
software defect and software failure, are more specific. While there are many who believe the
term 'bug' is a reference to insects that caused malfunctions in early electromechanical
computers (1950-1970), the term 'bug' had been a part of engineering jargon for many
decades before the 1950s; even the great inventor, Thomas Edison (1847-1931), wrote about
a 'bug' in one of his letters.
116. What techniques and tools can enable me to migrate from QC to QA?
A: Refer to above answers for question#115
120. What is the difference between bug and defect in software testing?
A: In software testing, the difference between bug and defect is small, and depends on your
company. For some companies, bug and defect are synonymous, while others believe bug is
a subset of defect. Generally speaking, we, software test engineers, discover BOTH bugs and
defects, before bugs and defects damage the reputation of our company. We, QA engineers,
use the software much like real users would, to find BOTH bugs and defects, to find ways to
replicate BOTH bugs and defects, to submit bug reports to the developers, and to provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
Therefore, we, software engineers, do not differentiate between bugs and defects. In our bug
reports, we include BOTH bugs and defects, and any differences between them are minor.
Difference number one: In bug reports, the defects are usually easier to describe. Difference
number two: In bug reports, it is usually easier to write the descriptions on how to replicate
the defects. Defects tend to require brief explanations only.
In grey box testing, the tester applies a limited number of test cases to the internal workings
of the software under test. In the remaining part of the grey box testing, one takes a black
box approach in applying inputs to the software under test and observing the outputs. Gray
box testing is a powerful idea. The concept is simple; if one knows something about how the
product works on the inside, one can test it better, even from the outside.
Grey box testing is not to be confused with white box testing, i.e. a testing approach that
attempts to cover the internals of the product in detail. Grey box testing is a test strategy
based partly on internals. The approach is known as grey box testing when one has some
knowledge, but not full knowledge, of the internals of the product being tested. In grey box
testing, just as in black box testing, you test from the outside of the product, but you make
better-informed testing choices because you know something about how the product works
on the inside.
126. What is the difference between data validity and data integrity?
A:
Difference number one: Data validity is about the correctness and reasonableness of
data, while data integrity is about the completeness, soundness, and wholeness of
the data that also complies with the intention of the creators of the data.
Difference number two: Data validity errors are more common, while data integrity
errors are less common.
Difference number three: Errors in data validity are caused by HUMANS (usually data
entry personnel) who enter, for example, 13/25/2005 by mistake, while errors in data
integrity are caused by BUGS in computer programs that, for example, cause the
overwriting of some of the data in the database when one attempts to retrieve a
blank value from the database.
133. How can I be effective and efficient, when I do black box testing of
ecommerce web sites?
A: When you're doing black box testing of e-commerce web sites, you're most efficient and
effective when you're testing the sites' Visual Appeal, Contents, and Home Pages. When you
want to be effective and efficient, you need to verify that the site is well planned. Verify that
the site is customer-friendly. Verify that the choices of colors are attractive. Verify that the
choices of fonts are attractive. Verify that the site's audio is customer friendly. Verify that the
site's video is attractive. Verify that the choice of graphics is attractive. Verify that every page
of the site is displayed properly on all the popular browsers. Verify the authenticity of facts.
Ensure the site provides reliable and consistent information. Test the site for appearance.
Test the site for grammatical and spelling errors. Test the site for visual appeal, choice of
browsers, consistency of font size, download time, broken links, missing links, incorrect links,
and browser compatibility. Test each toolbar, each menu item, every window, and every field.
134. What is the difference between top down and bottom up design?
A: Top down design proceeds from the abstract (entity) to get to the concrete (design).
Bottom up design proceeds from the concrete (design) to get to the abstract (entity). Top
down design is most often used in designing brand new systems, while bottom up design is
sometimes used when one is reverse engineering a design; i.e. when one is trying to figure
out what somebody else designed in an existing system. Bottom up design begins the design
with the lowest level modules or subsystems, and progresses upward to the main program,
module, or subsystem. With bottom up design, a structure chart is necessary to determine
the order of execution, and the development of drivers is necessary to complete the bottom
up approach. Top down design, on the other hand, begins the design with the main or top
level module, and progresses downward to the lowest level modules or subsystems. Real life
sometimes is a combination of top down design and bottom up design. For instance, data
modeling sessions tend to be iterative, bouncing back and forth between top down and
bottom up modes, as the need arises.
137. What is the difference between monkey testing and smoke testing?
A: Difference#1: Monkey testing is random testing, and smoke testing is a nonrandom
check to see whether the product "smokes" when it runs. Smoke testing is
nonrandom testing that deliberately exercises the entire system from end to end,
with the goal of exposing any major problems.
Difference#2: Monkey testing is performed by automated testing tools. On the other
hand, smoke testing, more often than not, is a manual check to see whether the
product "smokes" when it runs.
Difference#3: Monkey testing is performed by "monkeys", while smoke testing is
performed by skilled testers (to see whether the product "smokes" when it runs).
Difference#4: "Smart monkeys" are valuable for load and stress testing, but not very
valuable for smoke testing, because they are too expensive for smoke testing.
Difference#5: "Dumb monkeys" are inexpensive to develop, are able to do some
basic testing, but, if we use them for smoke testing, they find few bugs during smoke
testing.
138. Tell me about the process of daily builds and smoke tests
A: The idea behind the process of daily builds and smoke tests is to build the product every
day, and test it every day. The software development process at Microsoft and many other
software companies requires daily builds and smoke tests. According to their process, every
day, every single file has to be compiled, linked, and combined into an executable program.
And, then, the program has to be "smoke tested". Smoke testing is a relatively simple check
to see whether the product "smokes" when it runs. You should add revisions to the build only
when it makes sense to do so. You should establish a Build Group, and build *daily*; set
your *own standard* for what constitutes "breaking the build", create a penalty for
breaking the build, and check for broken builds *every day*. In addition to the daily builds,
you should smoke test the builds, and smoke test them *daily*. You should make the smoke
test evolve as the system evolves. You should build and smoke test *daily*, even when the
project is under pressure. Think about the many benefits of this process! The process of daily
builds and smoke tests minimizes the integration risk, reduces the risk of low quality,
supports easier defect diagnosis, improves morale, enforces discipline, and keeps pressure-
cooker projects on track. If you build and smoke test *daily*, success will come, even when
you're working on large projects!
142. Give me one test case that catches all the bugs!
A: If there is a "magic bullet", i.e. the one test case that has a good possibility to catch ALL
the bugs, or at least the most important bugs, it is a challenge to find it, because test cases
depend on requirements; requirements depend on what customers need; and customers can
have a great many different needs. As software systems become increasingly complex,
writing test cases becomes increasingly challenging. It is true that there are ways to create
"minimal test cases" which can greatly simplify the test steps to be executed. But, writing
such test cases is time consuming, and project deadlines often prevent us from going that
route. Often the lack of enough time for testing is the reason for bugs to occur in the field.
However, even with ample time to catch the "most important bugs", bugs still surface with
amazing spontaneity. The challenge is, developers do not seem to know how to avoid
providing the many opportunities for bugs to hide, and testers do not seem to know where
the bugs are hiding.
143. What is the difference between a test plan and a test scenario?
A:
Difference#1: A test plan is a document that describes the scope, approach, resources, and
schedule of intended testing activities, while a test scenario is a document that describes
both typical and atypical situations that may occur in the use of an application.
Difference#2: Test plans define the scope, approach, resources, and schedule of the
intended testing activities, while test procedures define test conditions, data to be used for
testing, and expected results, including database updates, file outputs, and report results.
Difference#3: A test plan is a description of the scope, approach, resources, and schedule of
intended testing activities, while a test scenario is a description of test cases that ensure that
a business process flow, applicable to the customer, is tested from end to end.
162. Give me five common problems that occur during software development.
A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new
features after development is underway, and poor communication.
1. Requirements are poorly written when they are unclear, incomplete, too general, or not
testable; such requirements will cause problems.
2. The schedule is unrealistic if too much work is crammed into too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good
until customers complain or the system crashes.
4. It is extremely common that new features are added after development is underway.
5. Communication is poor when requirements and changes are not clearly conveyed among
customers, developers, and testers.
Books
Roger S. Pressman: Software Engineering: A Practitioner's Approach, McGraw-Hill, 2001.
Ilene Burnstein: Practical Software Testing, Springer, 2002.
Website Links
http://www.stickyminds.com
http://en.wikipedia.org/wiki/Software_testing
http://www.reference.com/browse/wiki/Software_testing
http://www.softwareqatest.com/
http://www.testingstuff.com/autotest.html