flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation,
Analysis, Design (validation), Construction, Testing and Maintenance.
The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall.
It should be readily apparent that the waterfall development model has its origins in the
manufacturing and construction industries: highly structured physical environments in which after-the-fact
changes are prohibitively costly, if not impossible. Since no formal software development
methodologies existed at the time, this hardware-oriented model was simply adapted for software
development. Ironically, the use of the waterfall model for software development essentially
ignores the 'soft' in 'software'.
The first formal description of the waterfall model is often attributed to an article published in 1970
by Winston W. Royce (1929-1995), although Royce did not use the term "waterfall" in it.
Ironically, Royce presented the model as an example of a flawed, non-working model
(Royce 1970). This is in fact the way the term has generally been used in writing about software
development: as a way to criticize a commonly used software practice.
Waterfall Model Diagram
Let us now take a look at the different phases of the waterfall model diagram. One important
aspect worth mentioning before we start off with the waterfall model life cycle is that the
waterfall model is designed such that you cannot move on to the next phase of development
until the preceding phase is complete.
Requirement
Unless you know what you want to design, you cannot proceed with the project. Even a small
program that adds two integers, not just a big project, needs to be written with the output in
mind. In this stage, the requirements that the software is going to satisfy are specified. All
requirements are presented to the team of programmers. If this phase is completed
successfully, it ensures the smooth working of the remaining waterfall model phases, as the
programmer is not burdened with making changes at later stages because of changing requirements.
Analysis
As per the requirements, the software and hardware needed for the proper completion of the
project are analyzed in this phase. Everything from the programming language to be used for
the software, to the database system needed for its smooth functioning, is decided at this
stage.
Design
The algorithm or flowchart of the program, or of the software code to be written in the next
stage, is created now. This is a very important stage, which relies on the previous two stages
for its proper implementation, and its proper execution ensures the smooth working of the next
stage. If during the design phase it emerges that there are some more requirements for
designing the code, they are added to the list from the analysis phase, and the design phase is
carried out according to the new set of requirements.
Coding
Based on the algorithm or flowchart designed, the actual coding of the software is carried out.
This is the stage where the entire idea of the software or program to be designed is materialized.
Proper execution of the previous stages ensures a smooth implementation of this stage.
Testing
With the coding complete, the testing department now comes into the picture. It checks whether
there are any flaws in the designed software and whether the software has been designed as per
the specifications. Proper execution of this stage ensures that the client for whom the software
has been designed will be satisfied with the work. If there are any flaws, the problem is referred
back to the design phase. In the design phase, the changes are implemented, and then its
succeeding stages, coding and testing, are carried out again.
Acceptance
This is the last stage of software development using the waterfall model. Proper execution of
all the preceding stages ensures software that meets the requirements and, most importantly, a
satisfied client. However, at this stage you may need to provide the client with some support
regarding the software you have developed. If the client demands further enhancements to the
existing software, then the process needs to be started again, right from the first phase, i.e.,
requirements.
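The strict phase ordering described above can be sketched in code. This is a minimal, hypothetical illustration (the `WaterfallProject` class and its phase names are assumptions for the example, not part of any real tool): each phase must be completed before the next can begin, and skipping ahead is an error.

```python
# A sketch of the waterfall model's strict phase ordering: each phase must be
# completed before the next one can start, and there is no going back.
PHASES = ["Requirement", "Analysis", "Design", "Coding", "Testing", "Acceptance"]

class WaterfallProject:
    def __init__(self):
        self.completed = []  # phases finished so far, in order

    def complete_phase(self, phase):
        expected = PHASES[len(self.completed)]
        if phase != expected:
            # You cannot skip ahead or revisit: the model is non-iterative.
            raise RuntimeError(f"Cannot start '{phase}'; next phase must be '{expected}'")
        self.completed.append(phase)

project = WaterfallProject()
project.complete_phase("Requirement")
project.complete_phase("Analysis")
# project.complete_phase("Testing")  # would raise: Design and Coding are not done yet
```

Attempting to jump straight to Testing from here raises a `RuntimeError`, mirroring the model's rule that no phase may begin until its predecessor is complete.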
Software testing is an integral part of the software development life cycle (SDLC). Effectively and
efficiently testing a piece of code is as important as writing it, if not more so. So what is
software testing? For those of you who are new to software testing and quality assurance,
here's the answer to this question.
Software testing is the practice of subjecting a piece of code to both controlled and
uncontrolled operating conditions, in an attempt to observe the output and examine whether it is
in accordance with certain pre-specified conditions. Different sets of test cases and testing
strategies are prepared, all of which aim at achieving one common goal: removing all the bugs
and errors from the code and making the software error-free and capable of providing
accurate and optimal output. There are different types of software testing techniques and
methodologies. A software testing methodology is different from a software testing technique. We
will have a look at a few software testing methodologies later in this article.
Software testing methods can be implemented in two ways: manually or through automation.
Manual software testing is done by human software testers who manually check, test and
report errors or bugs in the product or piece of code. In automated software testing, the same
process is performed by a computer by means of automated testing software such as WinRunner,
LoadRunner, Test Director, etc. Commonly used software testing methodologies include:
• Waterfall model
• V model
• Spiral model
• RUP
• Agile model
• RAD
Waterfall Model
The waterfall model adopts a 'top down' approach regardless of whether it is being used for
software development or testing. The basic steps involved in this software testing methodology
are:
1. Requirement analysis
2. Test case design
3. Test case implementation
4. Testing, debugging and validating the code or product
5. Deployment and maintenance
In this methodology, you move on to the next step only after you have completed the present
step. There is no scope for jumping backward or forward or performing two steps simultaneously.
Also, this model follows a non-iterative approach. The main benefit of this methodology is its
simple, systematic and orthodox approach. However, it has many shortcomings, since bugs
and errors in the code are not discovered until the testing stage is reached. This can
often lead to a waste of time, money and valuable resources.
V Model
The V model gets its name from the fact that the graphical representation of the different test
process activities involved in this methodology resembles the letter 'V'. The basic steps involved
in this methodology are more or less the same as those in the waterfall model. However, this
model follows both a 'top-down' as well as a 'bottom-up' approach (you can visualize them
forming the letter 'V'). The benefit of this methodology is that in this case, both the development
and testing activities go hand-in-hand. For example, as the development team goes about its
requirement analysis activities, the testing team simultaneously begins with its acceptance testing
activities. By following this approach, time delays are minimized and optimum utilization of
resources is assured.
Spiral Model
As the name implies, the spiral model follows an approach in which there are a number of cycles
(or spirals) of all the sequential steps of the waterfall model. Once the initial cycle is completed, a
thorough analysis and review of the achieved product or output is performed. If it is not as per the
specified requirements or expected standards, a second cycle follows, and so on. This
methodology follows an iterative approach and is generally suited for very large projects having
complex and constantly changing requirements.
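The repeated cycles of the spiral model can be sketched as a loop: the sequential phases are executed, the output is reviewed, and another cycle follows if the product does not yet meet the requirements. The `meets_requirements` callback is a hypothetical stand-in for the real review.

```python
# A sketch of the spiral model's cycles: iterate the phases until the
# thorough analysis and review of the achieved product passes.
SPIRAL_PHASES = ["Planning", "Risk Analysis", "Engineering", "Customer Evaluation"]

def run_spiral(meets_requirements, max_cycles=10):
    """Iterate the spiral phases until the review passes; return cycle count and history."""
    history = []
    for cycle in range(1, max_cycles + 1):
        for phase in SPIRAL_PHASES:
            history.append((cycle, phase))
        if meets_requirements(cycle):  # thorough analysis and review of the output
            return cycle, history
    return max_cycles, history

# Example: suppose the product only satisfies the requirements on cycle 3.
cycles_needed, history = run_spiral(lambda cycle: cycle >= 3)
```

Three full cycles of all four phases are executed before the review passes, which is exactly the iterative behaviour that distinguishes the spiral model from the waterfall model.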
Agile Model
This methodology follows neither a purely sequential nor a purely iterative approach. It is a
selective mix of both, in addition to quite a few new developmental methods. Fast and
incremental development is one of the key principles of this
methodology. The focus is on obtaining quick, practical and visible outputs and results, rather
than merely following theoretical processes. Continuous customer interaction and participation is
an integral part of the entire development process.
This was a short overview of some commonly used software testing methodologies. With the
applications of information technology growing with every passing day, the importance of proper
software testing has grown manifold.
In order to overcome the cons of "The Waterfall Model", it was necessary to develop a new
software development model that could help in ensuring the success of a software project. One
such model was developed which incorporated the common methodologies followed in "The
Waterfall Model" but also eliminated almost every known risk factor. This
model is referred to as "The Spiral Model" or "Boehm's Model".
There are four phases in the "Spiral Model": Planning, Risk Analysis, Engineering and
Customer Evaluation. These four phases are followed iteratively, one after the other, in order to
eliminate the problems faced in "The Waterfall Model". Iterating the phases helps in
understanding the problems associated with a phase, dealing with those problems when the
same phase is repeated the next time, and planning and developing strategies to be followed
while iterating through the phases. The phases in the "Spiral Model" are:
Plan: In this phase, the objectives, alternatives and constraints of the project are determined
and documented. The objectives and other specifications are fixed in order to decide which
strategies/approaches to follow during the project life cycle.
Risk Analysis: This phase is the most important part of the "Spiral Model". In this phase, all
possible (and available) alternatives that can help in developing a cost-effective project are
analyzed, and strategies for using them are decided. This phase was added specifically to
identify and resolve all possible risks in the project development. If the risks indicate any
uncertainty in the requirements, prototyping may be used to proceed with the available data and
find a possible solution to deal with potential changes in the requirements.
Engineering: In this phase, the actual development of the project is carried out. The output of
this phase is passed through all the phases iteratively in order to obtain improvements to it.
Customer Evaluation: In this phase, the developed product is passed on to the customer in
order to receive the customer's comments and suggestions, which help in identifying and
resolving potential problems/errors in the software. This phase is very similar to the TESTING
phase.
The process progresses in a spiral, indicating the iterative path followed; progressively more
complete software is built as we iterate through all four phases. The first iteration in this
model is considered the most important, as in the first iteration almost all possible risk factors,
constraints and requirements are identified, and in the subsequent iterations all known
strategies are used to bring up a complete software system. The radial dimension indicates the
evolution of the product towards a complete system.
However, as every model has its own pros and cons, "The Spiral Model" has them too. As this
model was developed to overcome the disadvantages of the "Waterfall Model", following the
"Spiral Model" requires people highly skilled in planning, risk analysis and mitigation,
development, customer relations, etc. This, along with the fact that the process needs to be
iterated more than once, demands more time and makes it a somewhat expensive undertaking.
Functional Testing: In this type of testing, the software is tested for the functional requirements.
This checks whether the application is behaving according to the specification.
Performance Testing: This type of testing checks whether the system performs properly
according to the user's requirements. Performance testing comprises load and stress testing,
in which load is applied to the system internally or externally.
1. Load Testing: In this type of performance testing, the load on the system is raised beyond
normal limits in order to check the performance of the system when higher loads are applied.
2. Stress Testing: In this type of performance testing, the system is tested beyond its
normal expectations or operational capacity.
Usability Testing: This type of testing is also called 'testing for user-friendliness'. It checks the
ease of use of an application.
Regression Testing: Regression testing is one of the most important types of testing; it checks
that a small change in one component of the application does not affect the unchanged
components. Testing is done by re-executing previously passed test cases against the new
version of the application.
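A minimal regression-testing sketch using Python's standard `unittest` module follows. The `discounted_price` function and its "small change" (a new bulk discount) are hypothetical; the point is that tests which passed before the change are re-executed to confirm the unchanged member-discount behaviour still holds.

```python
import unittest

def discounted_price(price, is_member):
    base = price * 0.95 if price > 100 else price  # newly changed component (bulk discount)
    return base * 0.9 if is_member else base       # unchanged component (member discount)

class RegressionSuite(unittest.TestCase):
    # These tests passed before the change; re-running them checks that the
    # new bulk discount did not break the unchanged member-discount logic.
    def test_member_discount_unchanged(self):
        self.assertAlmostEqual(discounted_price(50, is_member=True), 45.0)

    def test_non_member_unchanged(self):
        self.assertEqual(discounted_price(50, is_member=False), 50)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If the change had accidentally altered the member discount, one of these previously passing tests would now fail, which is exactly the regression the technique is designed to catch.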
Smoke Testing: Smoke testing is used to check the testability of the application. It is also called
'build verification testing' or 'link testing'. It checks whether the application is ready
for further major testing and working, without dealing with the finer details.
Sanity Testing: Sanity testing checks the behavior of the system. This type of software
testing is also called narrow regression testing.
Parallel Testing: Parallel testing is done by comparing results from two different systems like old
vs new or manual vs automated.
Recovery Testing: Recovery testing is very necessary to check how fast the system is able to
recover from any hardware failure, catastrophic problem or any type of system crash.
Installation Testing: This type of software testing identifies the ways in which the installation
procedure can lead to incorrect results.
Configuration Testing: This testing is done to test for compatibility issues. It determines minimal
and optimal configuration of hardware and software, and determines the effect of adding or
modifying resources such as memory, disk drives and CPU.
Compliance Testing: This type of testing checks whether the system was developed in
accordance with standards, procedures and guidelines.
Error-Handling Testing: This software testing type determines the ability of the system to
properly process erroneous transactions.
Manual-Support Testing: This type of software testing covers the interface between people and
the application system.
Inter-Systems Testing: This type of software testing covers the interface between two or more
application systems.
Exploratory Testing: Exploratory testing is a type of software testing which is similar to ad-hoc
testing, and is performed to explore the software's features.
Volume Testing: This testing is done when a huge amount of data is processed through the
application.
Scenario Testing: This type of software testing provides a more realistic and meaningful
combination of functions, rather than artificial combinations that are obtained through domain or
combinatorial test design.
User Interface Testing: This type of testing is performed to check how user-friendly the
application is. The user should be able to use the application without any assistance from the
system personnel.
System Testing: System testing is the testing conducted on a complete, integrated system, to
evaluate the system's compliance with the specified requirements. This type of software testing
validates that the system meets its functional and non-functional requirements and is also
intended to test beyond the bounds defined in the software/hardware requirement specifications.
User Acceptance Testing: Acceptance testing is performed to verify that the product is
acceptable to the customer and fulfills the customer's specified requirements. This
testing includes Alpha and Beta testing.
1. Alpha Testing: Alpha testing is performed at the developer's site by the customer in a
closed environment. This testing is done after system testing.
2. Beta Testing: This type of software testing is done at the customer's site by the customer
in an open environment. The presence of the developer while performing these tests is
not mandatory. This is considered the last step in the software development life
cycle, as the product is almost ready.
Static and Dynamic Analysis: In static analysis, one goes through the code in order to find any
possible defects without executing it, whereas in dynamic analysis the code is executed and its
output is analyzed.
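The difference can be shown with a small, self-contained sketch: static analysis inspects the source without running it (here via Python's standard `ast` module), while dynamic analysis executes the code and observes its output. The `divide` function is a hypothetical example.

```python
import ast

SOURCE = """
def divide(a, b):
    return a / b
"""

# Static analysis: walk the syntax tree looking for a division operator,
# a possible ZeroDivisionError site, without ever executing the function.
tree = ast.parse(SOURCE)
division_found = any(
    isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div)
    for node in ast.walk(tree)
)

# Dynamic analysis: actually execute the code and observe the output.
namespace = {}
exec(SOURCE, namespace)
result = namespace["divide"](10, 4)  # 2.5
```

Static analysis flagged the risky division without running anything; dynamic analysis only tells us about the inputs we actually tried.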
Statement Coverage: This type of testing ensures that the code is executed in such a way that
every statement of the application is executed at least once.
Decision Coverage: This type of testing ensures that every decision point in the application is
executed at least once with both a true and a false outcome.
Condition Coverage: In this type of software testing, each individual condition is evaluated as
both true and false at least once.
Path Coverage: Each and every path within the code is executed at least once to get full path
coverage, which is one of the important parts of white box testing.
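The difference between statement and decision coverage can be seen on one small, hypothetical function: statement coverage only needs every line to run once, while decision coverage needs the `if` to go both ways.

```python
def classify(n):
    label = "small"
    if n > 10:       # decision point
        label = "large"
    return label

# Statement coverage: a single input taking the 'if' branch executes
# every statement at least once.
statement_inputs = [15]

# Decision (branch) coverage: the decision must evaluate both true and
# false, so a second input that skips the branch is also required.
decision_inputs = [15, 3]

results = [classify(n) for n in decision_inputs]
```

The input `15` alone achieves full statement coverage of `classify`, yet it never exercises the false outcome of the decision; adding `3` is what makes decision coverage complete.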
Integration Testing: Integration testing is performed when various modules are integrated with
each other to form a sub-system or a system. It mostly focuses on the design and construction
of the software architecture. Integration testing is further classified into Bottom-Up Integration
and Top-Down Integration testing.
1. Bottom-Up Integration Testing: In this type of integration testing, the lowest-level
components are tested first, with 'drivers' standing in for the higher-level components
that call them; testing then moves up level by level.
2. Top-Down Integration Testing: This is the opposite of the bottom-up approach: the
top-level modules are tested first, and the lower-level branches of each module are
tested step by step using 'stubs' until the lowest-level modules are reached.
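A small sketch of top-down integration testing with a stub follows. The `build_report` / `fetch_total` names are hypothetical: the top-level module is tested first, while a stub stands in for a lower-level module that is not integrated yet.

```python
def fetch_total_from_db():
    # Real lower-level module: not yet available during top-down testing.
    raise NotImplementedError

def build_report(fetch_total):
    # Top-level module under test; the data source is injected so that a
    # stub can replace the real lower-level module.
    total = fetch_total()
    return f"Total sales: {total}"

def fetch_total_stub():
    # Stub: returns a canned value so the top-level module can be exercised
    # before the database module exists.
    return 1250

report = build_report(fetch_total_stub)  # "Total sales: 1250"
```

In a bottom-up approach the roles would be reversed: `fetch_total_from_db` would be tested first, with a small driver program calling it in place of `build_report`.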
Security Testing: Security testing confirms how well a system protects itself against
unauthorized internal or external access and against willful damage to the code. It assures that
the program is accessed by authorized personnel only.
Mutation Testing: In this type of software testing, small changes (mutants) are deliberately
introduced into the code, and the existing tests are re-run to check whether they detect the
modification; a test suite that fails on a mutant is said to have 'killed' it.
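Mutation testing can be sketched in a few lines. Both versions of `is_adult` below are illustrative: the mutant changes `>=` to `>`, and a good test suite detects ('kills') the mutant by failing on it.

```python
def is_adult(age):
    return age >= 18          # original code

def is_adult_mutant(age):
    return age > 18           # mutant: '>=' mutated to '>'

def run_tests(fn):
    """The existing test suite; returns True if all tests pass."""
    return fn(18) is True and fn(17) is False

original_passes = run_tests(is_adult)           # suite passes on the original
mutant_killed = not run_tests(is_adult_mutant)  # suite fails on the mutant: killed
```

A mutant that survives (no test fails) would reveal a gap in the test suite, here the missing boundary check at exactly age 18.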
Software testing methodologies and different software testing strategies help you get through
this software testing process. These various software testing methods show you the outputs,
using the above mentioned software testing types, and help you check whether the software
satisfies the requirements of the customer. Software testing is indeed a vast subject, and one
can make a successful career in this field.
A perfect software product is built when every step is taken with full consideration that 'the
right product is developed in the right manner'. 'Software Verification & Validation' is one such
model, which helps system designers and test engineers confirm that the right product is built
the right way throughout the development process, and improves the quality of the software
product.
The 'Verification & Validation Model' ensures that certain rules are followed during the
development of a software product, and also ensures that the product that is developed fulfills
the required specifications. This reduces the risk associated with any software project up to a
certain level by helping in the detection and correction of errors and mistakes which are
unknowingly made during the development process.
What is Verification?
The standard definition of Verification goes like this: "Are we building the product RIGHT?" i.e.
Verification is a process that makes sure the software product is developed the right way.
The software should conform to its predefined specifications; as the product development goes
through different stages, an analysis is done to ensure that all required specifications are met.
Methods and techniques used in Verification and Validation should be designed carefully, and
their planning starts right at the beginning of the development process. The Verification part of
the 'Verification and Validation Model' comes before Validation and incorporates software
inspections, reviews, audits, walkthroughs, buddy checks, etc. in each phase of verification
(every phase of Verification is a phase of the Testing Life Cycle).
During Verification, the work product (the ready part of the software being developed, along
with various documents) is reviewed/examined personally by one or more persons in order to
find and point out the defects in it. This process helps in the prevention of potential bugs which
may cause the project to fail.
A few terms involved in Verification:
Inspection:
Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the
documents and work product during various phases of the product development life cycle. The
work product and related documents are presented to the inspection team, the members of
which bring different perspectives to the review. The bugs that are detected during the
inspection are communicated to the next level so that they can be taken care of.
Walkthroughs:
A walkthrough can be considered the same as an inspection, but without formal preparation (of
any presentation or documentation). During the walkthrough meeting, the presenter/author
introduces the material to all the participants in order to make them familiar with it. Although
walkthroughs can help in finding potential bugs, they are mainly used for knowledge sharing
and communication purposes.
Buddy Checks:
This is the simplest type of review activity used to find bugs in a work product during
verification. In a buddy check, one person goes through the documents prepared by another
person in order to find out whether that person has made any mistakes, i.e. to find bugs which
the author could not find previously.
What is Validation?
The standard definition of Validation asks: "Are we building the RIGHT product?"
i.e. whatever software product is being developed should do what the user expects it to do.
The software product should functionally do what it is supposed to; it should satisfy all the
functional requirements set by the user. Validation is done during or at the end of the
development process in order to determine whether the product satisfies the specified
requirements.
The Validation and Verification processes go hand in hand, but visibly the Validation process
starts after the Verification process ends (after coding of the product ends). Each Verification
activity (such as Requirement Specification Verification, Functional Design Verification, etc.)
has its corresponding Validation activity (such as Functional Validation/Testing, Code
Validation/Testing, System/Integration Validation, etc.).
All types of testing methods are basically carried out during the Validation process. Test plans,
test suites and test cases are developed, which are used during the various phases of the
Validation process. The phases involved in the Validation process are: Code Validation/Testing,
Integration Validation/Integration Testing, Functional Validation/Functional Testing, and
System/User Acceptance Testing/Validation.
Integration Validation/Testing:
Integration testing is carried out in order to find out whether different (two or more)
units/modules coordinate properly. This test helps in finding out whether there is any defect in
the interface between different modules.
Functional Validation/Testing:
This type of testing is carried out in order to find out whether the system meets the functional
requirements. In this type of testing, the system is validated for its functional behavior.
Functional testing does not deal with the internal coding of the project; instead, it checks
whether the system behaves as per the expectations.
Classification
The basic classification of the whole process is as follows:
• Planning
• Analysis
• Design
• Development
• Implementation
• Testing
• Deployment
• Maintenance
Each of the steps of the process has its own importance and plays a significant part in the
product development. The description of each of the steps can give a better understanding.
Planning
This is the first and foremost stage of development, and one of the most important. The basic
motive is to plan the total project and to estimate its merits and demerits. The planning phase
includes the definition of the intended system, development of the project plan, and parallel
management of the plan throughout the development.
A good, mature plan can create a very good start and can positively affect the complete
project.
Analysis
The main aim of the analysis phase is requirements gathering and analysis. Based on the
analysis of the project, and influenced by the results of the planning phase, the requirements
for the project are decided and gathered.
Once the requirements for the project are gathered, they are prioritized and made ready for
further use. The decisions taken in the analysis phase are driven entirely by the requirements
analysis. The proceedings after the current phase are defined.
Design
Once the analysis is over, the design phase begins. The aim is to create the architecture of the
total system. This is one of the important stages of the process and serves as a benchmark
stage, since the errors made up to and during this stage can be cleared here.
Many developers build a prototype of the entire software, representing it as a miniature model.
The flaws, both technical and design-related, can be found and removed, and the entire
process can be redesigned.
One of the main scenarios is the implementation of the prototype model in a full-fledged
working environment, yielding the final product or software.
Testing
The testing phase is one of the final stages of the development process and this is the phase
where the final adjustments are made before presenting the completely developed software to the
end-user.
In general, the testers work on finding and removing the logical errors and bugs. The test
conditions decided in the analysis phase are applied to the system, and if the output obtained
matches the intended output, the software is ready to be provided to the user.
Maintenance
The toughest job is encountered in the maintenance phase, which normally accounts for the
highest amount of money. The maintenance team is appointed to monitor changes in the
organization using the software and to report to the developers in case a need arises.
An information desk is also provided in this phase; it serves to maintain the relationship
between the user and the creator.
A test case is a set of conditions or variables and inputs that are developed for a particular goal or
objective to be achieved on a certain application to judge its capabilities or features.
It might take more than one test case to determine the true functionality of the application being
tested. Every requirement or objective to be achieved needs at least one test case. Some
software development methodologies like Rational Unified Process (RUP) recommend creating at
least two test cases for each requirement or objective: one for performing testing from a
positive perspective and the other from a negative perspective.
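The two-test-cases-per-requirement idea can be sketched with one hypothetical requirement, a `withdraw` function: one test case exercises it from the positive perspective (valid input, expected output), the other from the negative perspective (invalid input must be rejected).

```python
def withdraw(balance, amount):
    # Requirement (illustrative): a withdrawal must be positive and must not
    # exceed the current balance.
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

# Positive test case: a valid withdrawal succeeds with the expected result.
assert withdraw(100, 30) == 70

# Negative test case: an overdraft must be rejected, not silently allowed.
try:
    withdraw(100, 500)
    raise AssertionError("negative test failed: overdraft was allowed")
except ValueError:
    pass
```

Only writing the positive case would leave the rejection behaviour completely untested, which is exactly why methodologies such as RUP recommend at least one of each.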
1. Information
Information consists of general information about the test case. Information incorporates
Identifier, test case creator, test case version, name of the test case, purpose or brief
description and test case dependencies.
2. Activity
Activity consists of the actual test case activities. Activity contains information about the
test case environment, activities to be done at test case initialization, activities to be done
after test case is performed, step by step actions to be done while testing and the input
data that is to be supplied for testing.
3. Results
Results are outcomes of a performed test case. Results data consist of information about
expected results and the actual results.
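The three-part structure described above (Information, Activity, Results) can be modelled as a small data structure. All field names and values below are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Information: general data about the test case.
    identifier: str
    name: str
    creator: str
    version: str = "1.0"
    description: str = ""
    # Activity: steps to perform and the input data to supply.
    steps: list = field(default_factory=list)
    input_data: dict = field(default_factory=dict)
    # Results: expected outcome vs the actual outcome once performed.
    expected_result: str = ""
    actual_result: str = ""

    def passed(self):
        return self.actual_result == self.expected_result

tc = TestCase(
    identifier="TC-001", name="Login with valid credentials", creator="QA",
    steps=["open login page", "enter credentials", "submit"],
    input_data={"user": "alice", "password": "secret"},
    expected_result="dashboard shown",
)
tc.actual_result = "dashboard shown"  # recorded after performing the test
```

Comparing `expected_result` against `actual_result` is what turns the record into a pass/fail verdict.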
Designing Test Cases
Test cases should be designed and written by someone who understands the function or
technology being tested. A test case should include the three parts described above.
Designing test cases can be time-consuming in a testing schedule, but they are worth the time
because they can avoid unnecessary retesting or debugging, or at least reduce it.
Organizations can take the test case approach in their own context and according to their own
perspectives. Some follow a general step-by-step approach, while others may opt for a more
detailed and complex approach. It is very important for you to decide between the two extremes
and judge what would work best for you. Designing proper test cases is vital for your software
testing plans, as many bugs, ambiguities, inconsistencies and slip-ups can be uncovered in
time, and it also saves time otherwise spent on continuous debugging and re-testing.
Let us now try to see the answers to some of the prime questions aimed at elaborating the
concept a bit deeper.
We can use software testing as a generalized metric for quality. Depending upon the project time
frame, financial constraints & quality expectations, software testing activities can be planned.
Distinct Levels of Testing: The following five primary levels of testing have been defined:
2) Demonstrating: The process of showing that major features work with typical input.
3) Verifying: The process of finding as many faults in the application under test (AUT) as possible.
4) Validating: The process of finding as many faults as possible in the requirements, design and AUT.
Various industry experts have provided different definitions of testing that are described as under
Definition - 1: (As per IEEE 83a) "Testing is defined as the process of exercising or evaluating a
system or system component by manual or automated means to verify that it satisfies specified
requirements".
Definition - 2: (As per Myers) "Software testing is defined as the process of executing any
program or a system with an intent of finding errors in it."
Definition - 3: (As per Hetzel) "Software testing involves an activity aimed at evaluating a
capability or an attribute of any program or a system and determining that it meets the required
results"
c) Establishes adequate confidence in the program that it will do what it is expected to do.
Thus all these statements are ambiguous. Bound by such guidelines, we develop a natural
tendency to operate our system in a conventional, normal way so that it functions well: our
instinct is to unintentionally feed it correct or normal test data, so the system does not fail.
Moreover, it is very difficult to certify at any particular stage that the system has become free of
defects, because it is virtually impossible to find all the defects in any system with 100%
accuracy.
Hence, in a nutshell, it can be said that "Testing is an activity aimed at identifying errors."
b) Negative Testing: the application is operated and tested under abnormal conditions to see
whether the system crashes or fails. It involves the use of illegal or incorrect test data, with the
aim of intentionally causing the system to misbehave so that we are able to detect defects. In
short, here we check whether our system behaves in a way it should not, or fails to behave in
the way it is expected to.
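As a minimal sketch (the `parse_age` function and its limits are invented purely for illustration), a negative test deliberately feeds illegal data and passes only when the system rejects that data gracefully instead of accepting it or crashing:

```python
def parse_age(text):
    """Parse a user-supplied age string; reject anything outside 0-150."""
    try:
        age = int(text)
    except ValueError:
        raise ValueError("age must be a whole number")
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def negative_test(func, bad_input):
    """A negative test passes when the illegal input is cleanly rejected."""
    try:
        func(bad_input)
    except ValueError:
        return "PASS"  # the system refused the bad data gracefully
    return "FAIL"      # the system accepted data it should have rejected

print(negative_test(parse_age, "abc"))  # non-numeric input
print(negative_test(parse_age, "200"))  # out-of-range input
```

The complementary positive test would feed a normal value such as "30" and check that the correct result comes back.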
c) Positive aspect of Negative Testing: The prime objective of the testing effort is to unearth
errors or defects well before the actual user discovers them. Sometimes this may be a cause of
embarrassment for the testers or even the code developers. A key attribute of a good tester is
the ability to make a system fail successfully. A good tester's attitude must be destructive; he
must always hunt for the negative aspects of any system. This attitude is in exact contrast with
that of a developer or an author, who is always expected to be positive and constructive.
(From the first time a bug is detected until it is fixed and closed, it is assigned various statuses:
New, Open, Postpone, Pending Retest, Retest, Pending Reject, Reject, Deferred, and Closed.
For more information about the statuses used during a bug's life cycle, refer to the article
'Software Testing – Bug & Statuses Used During A Bug Life Cycle'.)
There are seven different life cycles that a bug can pass through:
This way, any bug that is found ends up with a status of Closed, Rejected, Deferred or
Postponed.
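The status transitions described above can be sketched as a small state machine. The transition map below is only illustrative: the exact transitions allowed vary from one organization (and one bug-tracking tool) to another.

```python
# Allowed transitions are illustrative; real bug-tracking workflows vary.
BUG_TRANSITIONS = {
    "New":            {"Open", "Reject", "Deferred"},
    "Open":           {"Postpone", "Pending Retest", "Pending Reject", "Deferred"},
    "Postpone":       {"Open"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Closed", "Open"},  # reopened if the fix did not hold
    "Pending Reject": {"Reject", "Open"},
    "Reject":         {"Closed"},
    "Deferred":       {"Open"},            # may be picked up in a later release
    "Closed":         set(),               # terminal status
}

def move_bug(current, new):
    """Advance a bug to a new status, enforcing the life-cycle rules."""
    if new not in BUG_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new
```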
Introduction to Software Testing Life Cycle
Every organization has to undertake testing of each of its products, but the way testing is
conducted differs from one organization to another; this is the life cycle of the testing process. It
is advisable to carry out the testing process from the initial phases of the Software Development
Life Cycle (SDLC), to avoid complications.
Software testing has its own life cycle that meets every stage of the SDLC. The software testing
life cycle diagram can help one visualize the various software testing life cycle phases. They are
1. Requirement Stage
2. Test Planning
3. Test Analysis
4. Test Design
5. Test Verification and Construction
6. Test Execution
7. Result Analysis
8. Bug Tracking
9. Reporting and Rework
10. Final Testing and Implementation
11. Post Implementation
Requirement Stage
This is the initial stage of the life cycle process, in which the developers take part in analyzing
the requirements for designing a product. Testers can also be involved, as they can think from
the users' point of view in ways the developers may not. Thus a panel of developers, testers
and users can be formed, and formal meetings of the panel can be held to document the
requirements discussed, which can then be used as the software requirements specification, or
SRS.
Test Planning
Test planning means preparing a plan well in advance in order to reduce later risks. Without a
good plan no work can lead to success, be it software-related or routine work. A test plan
document plays an important role in achieving a process-oriented approach. Once the
requirements of the project are confirmed, a test plan is documented; it typically defines the
scope, approach, resources and schedule of the testing activities.
Test Analysis
Once the test plan documentation is done, the next stage is to analyze what types of software
testing should be carried out at the various stages of SDLC.
Test Design
Test design is done based on the requirements of the project documented in the SRS. This
phase decides whether testing is to be done manually or through automation; for automation
testing, the different paths to be tested are identified first, and scripts are written where
required. An end-to-end checklist that covers all the features of the project is also needed.
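As a minimal sketch (the feature names are invented for illustration; a real checklist would come from the SRS), such an end-to-end checklist can be as simple as a table of features and whether each is covered by at least one designed test:

```python
# Feature names are hypothetical; a real checklist is derived from the SRS.
checklist = {
    "user login": True,       # covered by at least one designed test
    "password reset": True,
    "order checkout": False,  # no test designed yet: a coverage gap
}

def uncovered_features(checklist):
    """Return, sorted, the features that still lack a designed test."""
    return sorted(f for f, covered in checklist.items() if not covered)

print(uncovered_features(checklist))
```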
Test Execution
Planning and execution of the various test cases is done in this phase. Once unit testing is
completed, functional testing is carried out. At first, top-level testing is done to find top-level
failures, and these bugs are reported immediately to the development team to get the required
workaround. Test reports have to be documented properly, and the bugs have to be reported to
the development team.
Result Analysis
Once a bug is fixed by the development team, i.e. after successful execution of the test case,
the testing team retests it, comparing the expected values with the actual values, and declares
the result as pass or fail.
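A minimal sketch of this retest verdict (the record fields and names here are an assumption, not a standard format):

```python
def analyze_result(test_case_id, expected, actual):
    """Compare the expected value with the actual value and declare pass/fail."""
    verdict = "PASS" if expected == actual else "FAIL"
    return {"test_case": test_case_id, "expected": expected,
            "actual": actual, "verdict": verdict}

# After the fix, the retest compares what the spec expects with what the
# application now actually returns.
print(analyze_result("TC-01", expected=5, actual=5))
```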
Bug Tracking
This is one of the important stages, as the Defect Profile Document (DPD) has to be updated to
let the developers know about each defect and its current status. A well-written bug report
covers all of the details recorded in the DPD.
Post Implementation
Once the tests are evaluated, the errors that occurred during the various levels of the software
testing life cycle are recorded. Creating plans for improvement and enhancement is an ongoing
process; it helps to prevent similar problems from occurring in future projects. In short, planning
for the improvement of the testing process for future applications is done in this phase.
Manual Testing Interview Questions Set-
A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements
or design of an application, since it requires completely thinking through the operation of
the application. For this reason, it's useful to prepare test cases early in the development
cycle if possible.
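The particulars listed above can be captured in a single record. A minimal sketch (the field names and the sample login scenario are illustrative, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case: identifier, name, objective, setup, inputs, steps, expected result."""
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list
    expected_result: str

tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Login with valid credentials",
    objective="Verify that a registered user can sign in",
    setup="User 'alice' exists with a known password",
    input_data={"username": "alice", "password": "secret"},
    steps=["Open the login page", "Enter the credentials", "Click 'Sign in'"],
    expected_result="The user's dashboard is displayed",
)
```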
What are some recent major computer system failures caused by software bugs?
- A major U.S. retailer was reportedly hit with a large government fine in October of
2003 due to web site errors that enabled customers to view one another's online orders.
- News stories in the fall of 2003 stated that a manufacturing company recalled all their
transportation products in order to fix a software problem causing instability in certain
circumstances. The company found and reported the bug itself and initiated the recall
procedure in which a software upgrade fixed the problems.
- In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage
company could proceed; the lawsuit reportedly involved claims that the company was not
fixing system problems that sometimes resulted in failed stock trades, based on the
experiences of 4 plaintiffs during an 8-month period. A previous lower court's ruling that
"...six miscues out of more than 400 trades does not indicate negligence." was
invalidated.
- In April of 2003 it was announced that the largest student loan company in the U.S.
made a software error in calculating the monthly payments on 800,000 loans. Although
borrowers were to be notified of an increase in their required payments, the company will
still reportedly lose $8 million in interest. The error was uncovered when borrowers
began reporting inconsistencies in their bills.
- News reports in February of 2003 revealed that the U.S. Treasury Department mailed
50,000 Social Security checks without any beneficiary names. A spokesperson indicated
that the missing names were due to an error in a software change. Replacement checks
were subsequently mailed out with the problem corrected, and recipients were then able
to cash their Social Security checks.
- In March of 2002 it was reported that software bugs in Britain's national tax system
resulted in more than 100,000 erroneous tax overcharges. The problem was partly
attributed to the difficulty of testing the integration of multiple systems.
- A newspaper columnist reported in July 2001 that a serious flaw was found in
off-the-shelf software that had long been used in systems for tracking certain U.S. nuclear
materials. The same software had been recently donated to another country to be used in
tracking their own nuclear materials, and it was not until scientists in that country
discovered the problem, and shared the information, that U.S. officials became aware of
the problems.
- According to newspaper stories in mid-2001, a major systems development contractor
was fired and sued over problems with a large retirement plan management system.
According to the reports, the client claimed that system deliveries were late, the software
had excessive defects, and it caused other systems to crash.
- In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would
not run due to their inability to recognize the date '31/12/2000'; the trains were started by
altering the control system's date settings.
- News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn't work.
- In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district's CIO was fired. The school district decided to reinstate its original 25-year-old
system for at least a year until the bugs were worked out of the new system by the
software vendors.
- In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined that
spacecraft software used certain data in English units that should have been in metric
units. Among other tasks, the orbiter was to serve as a communications relay for the Mars
Polar Lander mission, which failed for unknown reasons in December 1999. Several
investigating panels were convened to determine the process failures that allowed the
error to go undetected.
- Bugs in software supporting a large commercial high-speed data network affected
70,000 business customers over a period of 8 days in August of 1999. Among those
affected was the electronic trading system of the largest U.S. futures exchange, which
was shut down for most of a week as a result of the outages.
- In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military
satellite launch, the costliest unmanned accident in the history of Cape Canaveral
launches. The failure was the latest in a string of launch failures, triggering a complete
military and industry review of U.S. space launch programs, including software
integration and testing processes. Congressional oversight hearings were requested.
- A small town in Illinois in the U.S. received an unusually large monthly electric bill of
$7 million in March of 1999. This was about 700 times larger than its normal bill. It
turned out to be due to bugs in new software that had been purchased by the local power
company to deal with Y2K software issues.
- In early 1999 a major computer game company recalled all copies of a popular new
product due to software problems. The company made a public apology for releasing a
product before it was ready.
Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This
is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the
land and employed as a physician to a great lord. The physician was asked which of his
family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion
someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known
among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes
form. His name is unknown outside our home."
If there are too many unrealistic 'no problem' responses, the result is bugs.
poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable,
maintainable code. In fact, it's usually the opposite: they get points mostly for quickly
turning out code, and there's job security if nobody else can understand it ('if it was hard
to write, it should be hard to read').
software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the inspection
is typically a document such as a requirements spec or a test plan, and the purpose is to
find problems and see what's missing, not to fix anything. Attendees should prepare for
this type of meeting by reading through the document; most problems will be found during
this preparation. The result of the inspection meeting should be a written report.
Thorough preparation for inspections is difficult, painstaking work, but is one of the most
cost effective methods of ensuring quality. Employees who are most skilled at
inspections are like the 'eldest brother' in the parable in 'Why is it often hard for
management to get serious about quality assurance?'. Their skill may have low visibility
but they are extremely valuable to any software development organization, since bug
prevention is far more cost-effective than bug detection.
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates
standards such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI
Standard 829), 'IEEE Standard for Software Unit Testing' (IEEE/ANSI Standard 1008),
'IEEE Standard for Software Quality Assurance Plans' (IEEE/ANSI Standard 730), and
others.
ANSI = 'American National Standards Institute', the primary industrial standards body in
the U.S.; publishes some software-related standards in conjunction with the IEEE and
ASQ (American Society for Quality).
Other software development process assessment methods besides CMM and ISO 9000
include SPICE, Trillium, TickIT, and Bootstrap.
other tools - for test case management, documentation management, bug reporting, and
configuration management.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards;
these may or may not apply to a particular situation:
minimize or eliminate use of global variables.
use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of
more than 20 characters is not out of line); be consistent in naming conventions.
use descriptive variable names - use both upper and lower case, avoid abbreviations, use
as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions.
function and method sizes should be minimized; less than 100 lines of code is good, less
than 50 lines is preferable.
function descriptions should be clearly spelled out in comments preceding a function's
code.
organize code for readability.
use whitespace generously - vertically and horizontally
each line of code should contain 70 characters max.
one code statement per line.
coding style should be consistent throughout a program (e.g., use of brackets, indentation,
naming conventions, etc.)
in adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments
(including header blocks) as lines of code.
no matter how small, an application should include documentation of the overall program
function and flow (even a few paragraphs is better than nothing), or if possible a separate
flow chart and detailed program documentation.
make extensive use of error handling procedures and status and error logging.
for C++, to minimize complexity and increase maintainability, avoid too many levels of
inheritance in class hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator overloading (note
that the Java programming language eliminates multiple inheritance and operator
overloading.)
for C++, keep class methods small, less than 50 lines of code per method is preferable.
for C++, make liberal use of exception handlers
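Although these rules target C and C++, most of them are language-independent. As a hedged sketch in Python (the function and its billing domain are invented purely for illustration), descriptive naming, a small function body, a leading description, and explicit error handling with logging look like this:

```python
import logging

logger = logging.getLogger("billing")

def compute_monthly_interest(principal_cents, annual_rate_percent):
    """Return one month's interest, in cents, on the given principal.

    Demonstrates descriptive names, a small body, a leading description,
    and explicit error handling with logging, as recommended above.
    """
    if principal_cents < 0 or annual_rate_percent < 0:
        logger.error("invalid input: principal=%s rate=%s",
                     principal_cents, annual_rate_percent)
        raise ValueError("principal and rate must be non-negative")
    monthly_rate = annual_rate_percent / 100.0 / 12.0
    return round(principal_cents * monthly_rate)
```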