
Q 1. Differentiate between verification and validation.

Verification:
1) It includes checking documents, design, codes and programs.
2) Verification is static testing.
3) It does not include the execution of the code.
4) Methods used in verification are reviews, walkthroughs, inspections and desk-checking.
5) It checks whether the software conforms to specifications or not.
6) It can find bugs in the early stage of development.

Validation:
1) It includes testing and validating the actual product.
2) Validation is dynamic testing.
3) It includes the execution of the code.
4) Methods used in validation are black box testing, white box testing and non-functional testing.
5) It checks whether the software meets the requirements and expectations of the customer or not.
6) It can find only the bugs that could not be found by the verification process.
Q 10. Differentiate between drivers and stubs.

Stubs:
1) Stubs are dummy modules used to simulate low-level modules.
2) Stubs are the called programs.
3) Stubs are used when sub-programs are under construction.
4) Stubs are used in the top-down approach.

Drivers:
1) Drivers are dummy modules used to simulate high-level modules.
2) Drivers are the calling programs.
3) Drivers are used when main programs are under construction.
4) Drivers are used in bottom-up integration.
Q 2. Give difference between walkthroughs and inspections.
Inspection:
1) It is formal.
2) Initiated by the project team.
3) A group of relevant persons from different departments participates in the inspection.
4) A checklist is used to find faults.
5) The inspection process includes overview, preparation, inspection, rework and follow-up.
6) There is a formalized procedure in each step.

Walkthrough:
1) It is informal.
2) Initiated by the author.
3) Usually team members of the same project participate in the walkthrough; the author himself acts as the walkthrough leader.
4) No checklist is used in the walkthrough.
5) The walkthrough process includes overview, little or no preparation, examination (the actual walkthrough meeting), rework and follow-up.
6) There is no formalized procedure in the steps.
Q 3. Give difference between Quality Assurance and Quality
Control.
Quality Assurance (QA):
1) It focuses on providing assurance that the quality requested will be achieved.
2) It is the technique of managing quality.
3) It is involved during the development phase.
4) It does not include the execution of the program.
5) It is a managerial tool.
6) It is process oriented.
7) Example: verification.

Quality Control (QC):
1) It focuses on fulfilling the quality requested.
2) It is the technique to verify quality.
3) It is not involved during the development phase.
4) It always includes the execution of the program.
5) It is a corrective tool.
6) It is product oriented.
7) Example: validation.
Q 4. Difference between manual testing and automation
testing.
Manual Testing:
1) Manual testing shows lower accuracy due to the higher possibility of human errors.
2) Manual testing needs time when testing is required at a large scale.
3) Manual testing takes more time to complete a cycle of testing, and thus the turnaround time is higher.
4) Manual testing costs more, as it involves hiring expert professionals to perform testing.
5) Manual testing ensures a high-end user experience for the end user of the software, as it involves human observation and cognitive abilities.
6) Manual testing should be used to perform exploratory testing, usability testing and ad-hoc testing to exhibit the best results.

Automation Testing:
1) Automation testing shows higher accuracy, as computer-based testing eliminates the chances of human error.
2) Automation testing easily performs testing at a large scale with the utmost efficiency.
3) Automation testing completes a cycle of testing in record time, and thus the turnaround time is much lower.
4) Automation testing saves costs, as once the test infrastructure is integrated it works for a long time.
5) Automation testing cannot guarantee a good user experience, since the machine lacks human observation and cognitive abilities.
6) Automation testing should be used to perform regression testing, load testing, performance testing and repeated execution for best results.
Q 5. Difference between top-down and bottom-up.
Top-Down Approach:
1) In this approach we focus on breaking the problem up into smaller parts.
2) Mainly used by structured programming languages such as COBOL, Fortran, C, etc.
3) Each part is programmed separately and may therefore contain redundancy.
4) Communication among modules is less.
5) It is used in debugging, module documentation, etc.
6) In the top-down approach, decomposition takes place.
7) Pros:
   Easier isolation of interface errors.
   It is beneficial when errors occur towards the top of the program.

Bottom-Up Approach:
1) In the bottom-up approach, we solve smaller problems and integrate them into a whole, complete solution.
2) Mainly used by object-oriented programming languages such as C++, C#, Python.
3) Redundancy is minimized by using data encapsulation and data hiding.
4) Modules must communicate with each other.
5) It is basically used in testing.
6) In the bottom-up approach, composition takes place.
7) Pros:
   Easy to create test conditions.
   Test results are easy to observe.
   It is suited if defects occur at the bottom of the program.
Q 6. What is the difference between load testing and stress
testing ?
Load Testing:
1) Load testing is performed to test the performance of the system or software application under an expected, specific load.
2) In load testing, the load limit is the threshold of a break.
3) In load testing, the performance of the software is tested with multiple numbers of users.
4) Huge number of users.
5) Load testing is performed to find out the upper limit of the system or application.
6) The factor tested during load testing is performance.
7) Load testing determines the operating capacity of a system or application.

Stress Testing:
1) Stress testing is performed to test the robustness of the system or software application under extreme load.
2) In stress testing, the load limit is above the threshold of a break.
3) In stress testing, the performance is tested under varying data amounts.
4) Too many users and too much data.
5) Stress testing is performed to find the behavior of the system under pressure.
6) The factors tested during stress testing are robustness and stability.
7) Stress testing ensures system security.
Q 7. Difference between alpha testing and beta testing.
Alpha Testing:
1) Performed at the developer's site.
2) Performed in a controlled environment, in the developer's presence.
3) Less probability of finding errors, as it is driven by the developer.
4) It is done during the implementation phase of the software.
5) It is not considered a live application of the software.
6) Less time consuming, as the developer can make the necessary changes in the given time.

Beta Testing:
1) Performed at the end user's site.
2) Performed in an uncontrolled environment, in the developer's absence.
3) High probability of finding errors, as it is used by end users.
4) It is done at the pre-release of the software.
5) It is considered a live application of the software.
6) More time consuming, as users have to report bugs, if any, via appropriate channels.
Q 8. Difference between black box and white box testing.
Black Box Testing:
1) It is a way of testing the software in which the internal structure, program or code is hidden and nothing is known about it.
2) Knowledge of the code implementation is not needed for black box testing.
3) It is mostly done by software testers.
4) No knowledge of implementation is needed.
5) It can be referred to as outer or external software testing.
6) It is a functional test of the software.
7) This testing can be initiated based on the requirement specifications document.
8) No knowledge of programming is required.

White Box Testing:
1) It is a way of testing the software in which the tester has knowledge about the internal structure, code or program of the software.
2) Knowledge of the code implementation is necessary for white box testing.
3) It is mostly done by software developers.
4) Knowledge of implementation is required.
5) It is the inner or internal software testing.
6) It is a structural test of the software.
7) This type of testing is started after the detailed design document is available.
8) It is mandatory to have knowledge of programming.
Q 9. Differentiate between static and dynamic testing.

Static Testing:
1) It is performed in the early stage of software development.
2) In static testing the code is not executed.
3) Static testing prevents defects.
4) Static testing is performed before code deployment.
5) Static testing is less costly.
6) Static testing involves a checklist for the testing process.
7) Example: verification.

Dynamic Testing:
1) It is performed at the later stage of software development.
2) In dynamic testing the whole code is executed.
3) Dynamic testing finds and fixes defects.
4) Dynamic testing is performed after code deployment.
5) Dynamic testing is highly costly.
6) Dynamic testing involves test cases for the testing process.
7) Example: validation.
Q 10. Define testing. List out any 4 objectives of software
testing.
Software testing is defined as performing Verification and
Validation of the Software Product for its correctness and
accuracy of working. Software Testing is the process of
executing a program with the intent of finding errors. A
successful test is one that uncovers an as-yet-undiscovered error.
Testing can show the presence of bugs but never their absence.
Testing is a support function that helps developers look good by
finding their mistakes before anyone else does.
Objectives of software testing are as follows:
1. Finding defects which may get created by the programmer
while developing the software.
2. Gaining confidence in and providing information about the
level of quality.
3. To prevent defects.
4. To make sure that the end result meets the business and user
requirements.
5. To ensure that it satisfies the BRS that is Business
Requirement Specification and SRS that is System
Requirement Specifications.
6. To gain the confidence of the customers by providing them
a quality product.
Q 11. Describe principles of software testing.
There are seven principles in software testing:
1. Testing shows the presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context-dependent
7. Absence of errors fallacy
1. Testing shows the presence of defects: The goal of
software testing is to make the software fail. Software
testing talks about the presence of defects and not about
their absence. Testing can show that defects are present,
but it cannot prove that the software is defect-free.
Even repeated testing can never ensure that software is
100% bug-free. Testing can reduce the number of defects,
but it cannot remove all of them.
2. Exhaustive testing is not possible: Testing the
functionality of the software with all possible inputs
(valid or invalid) and pre-conditions is known as exhaustive
testing. Exhaustive testing is impossible: the software can
never be tested with every possible test case. Only some test
cases can be run, on the assumption that if they pass, the
software will produce correct output for the remaining cases
as well. Testing every possible case would take impractical
amounts of cost and effort.
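A back-of-envelope calculation makes this concrete. The field size and test rate below are illustrative assumptions, not figures from the text: count every possible input for a single text field that accepts up to 10 lowercase letters.

```python
# Illustration of why exhaustive testing is impractical:
# count every possible input for one text field that accepts
# strings of 0..10 lowercase letters (assumed specification).

ALPHABET = 26
MAX_LEN = 10

# Sum of 26**k for k = 0..10: every string of each allowed length.
total_inputs = sum(ALPHABET ** k for k in range(MAX_LEN + 1))
print(f"{total_inputs:,} possible valid inputs for one small field")

# Even at an assumed 1,000 tests per second, this one field alone
# would take millennia - and this ignores invalid inputs and
# combinations with other fields.
seconds = total_inputs / 1000
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:,.0f} years at 1,000 tests/second")
```

This is why testers sample the input space with techniques like equivalence partitioning instead of enumerating it.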
3. Early Testing: Test activities shall start as early as
possible to find defects in the software, because a defect
detected in the early phases of the SDLC is much less
expensive to fix. For better software quality, testing
should start at the initial phase, i.e. at the requirement
analysis phase.
4. Defect clustering: In a project, a small number of modules
can contain most of the defects. Applied to software testing,
the Pareto principle states that 80% of software defects come
from 20% of the modules.
5. Pesticide paradox: Repeating the same test cases, again and
again, will not find new bugs. So it is necessary to review
the test cases and add or update test cases to find new bugs.
6. Testing is context-dependent: The testing approach
depends on the context of the software developed. Different
types of software need to perform different types of testing.
For example, The testing of the e-commerce site is different
from the testing of the Android application.
7. Absence of errors fallacy: If the software built is 99%
bug-free but does not follow the user requirements, it is
unusable. It is not enough for software to be 99% bug-free;
it must also fulfill all the customer requirements.
Q 12. Describe any 4 skills of a software tester.
• Document preparation: As a software tester, a part of your
job is to document events for the testing process. The
organisation may use specific templates or ask you to create
templates for documenting issues and testing methodologies.
Proper documentation allows the organisation to standardise
the debugging process and communicate it across teams.
• Database knowledge: Most software organisations use
databases, such as MySQL or DB2, to store vast amounts of
information. As a software tester, your job can include
verifying data from these databases. To do so, you often
require knowledge of running database queries.
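The database-verification skill above can be sketched with Python's built-in sqlite3 module. The table and column names ("orders", "status") are hypothetical examples, not from the text:

```python
# Sketch: a tester verifying application data with a database query.
# The schema and data here are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "shipped"), (2, "pending"), (3, "shipped")],
)

# A tester might run a query like this to check that the count shown
# in the UI matches what is actually stored in the database.
(shipped_count,) = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE status = 'shipped'"
).fetchone()

assert shipped_count == 2, "shipped-order count does not match test data"
conn.close()
```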
• Test preparation: One key aspect of the job is preparing
for the tests. Preparation usually requires specific skills,
such as creating a testing plan, developing test scenarios and
detailing test cases. Experience as a software tester is crucial
for developing this skill.
• Knowledge of testing procedures: Software testing
typically requires understanding the process behind
identifying and removing system issues. The organisation
may expect you to design a testing scenario based on the
team's budget, allotted time, customer priorities and the type
of application. Knowing the different software testing
models, such as waterfall, agile, V-model and spiral, can
help you design testing procedures effectively.
• Knowledge of automation tools: Software testing is of two
types, manual testing and automation testing. If you are a
manual tester looking to upgrade to an automated tester,
learning automation tools can help you excel. Some
automation testing tools include ACCELQ, TestComplete,
Virtuoso and testRigor.
Q 13. What is a test case? State its specification parameters.
A test case is a document, which has a set of test data,
preconditions, expected results and postconditions, developed for
a particular test scenario in order to verify compliance against a
specific requirement.
A test case acts as the starting point for test execution:
after applying a set of input values, the application has a
definitive outcome and leaves the system at some end point,
also known as the execution postcondition.
Typical Test Case Parameters:
1. Test Case ID, 2. Test Scenario, 3. Test Case Description, 4.
Test Steps, 5. Prerequisite, 6. Test Data, 7. Expected Result, 8.
Test Parameters, 9. Actual Result, 10. Environment Information
Example: Consider an input field that should accept a maximum of
10 characters. The test cases can be documented as follows; the
first case is a PASS while the second case is a FAIL.

Test case 1 (PASS):
Scenario: Verify that the input field accepts a maximum of 10 characters.
Test step: Log in to the application and key in 10 characters.
Expected result: Application should accept all 10 characters.
Actual outcome: Application accepts all 10 characters.

Test case 2 (FAIL):
Scenario: Verify that the input field accepts a maximum of 10 characters.
Test step: Log in to the application and key in 11 characters.
Expected result: Application should NOT accept all 11 characters.
Actual outcome: Application accepts all 11 characters.
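The two documented test cases can also be written as automated checks. `accepts_input` below is a hypothetical stand-in for the real application's input field, assumed to allow at most 10 characters:

```python
# The documented test cases as automated checks.
# `accepts_input` stands in for the application under test
# (hypothetical; the real check would drive the actual UI).
MAX_CHARS = 10

def accepts_input(text: str) -> bool:
    """Return True if the input field accepts the text."""
    return len(text) <= MAX_CHARS

# Test case 1 (PASS): exactly 10 characters are accepted.
assert accepts_input("a" * 10) is True

# Test case 2: 11 characters must be rejected. If the application
# accepted them, this assertion would fail - the FAIL outcome
# recorded in the table above.
assert accepts_input("a" * 11) is False
```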
Q 14. What is entry and exit criteria of software testing ?
A process model is a way to represent any given phase of software
development that prevents and minimizes the delay between defect
injection and defect detection/correction. Each phase is
characterized by:
• Entry criteria, which specify when that phase can be started,
  and include the inputs for the phase.
• Tasks, or the steps that need to be carried out in that phase,
  along with measurements that characterize the tasks.
• Verification, which specifies methods of checking that tasks
  have been carried out correctly.
Clear entry criteria make sure that a given phase does not start
prematurely, and the verification for each phase helps to prevent
defects, or at least minimize them.
Exit criteria stipulate the conditions under which one can
consider the phase as done, and include the outputs of the phase.
Exit criteria may include:
1. All test plans have been run
2. All requirements coverage has been achieved.
3. All severe bugs are resolved.
Q 15. Explain V-model.
• V model is known as Verification and Validation model.
• This model is an extension of the waterfall model.
• In the life cycle of V-shaped model, processes are executed
sequentially.
• Every phase completes its execution before the execution of
next phase begins.
• V Model is a highly disciplined SDLC model which has a
testing phase parallel to each development phase. The V
model is an extension of the waterfall model wherein
software development and testing is executed in a sequential
way. It is known as the Validation or Verification Model.

Advantages of V-model
• V-model is easy and simple to use.
• Many testing activities, e.g. planning and test design, are
  executed at the start, which saves time.
• Errors are detected at the start of the project, so there is
  less chance of errors occurring in the final phase of testing.
Disadvantages of V-model
• V-model is not suitable for large and composite projects.
• If the requirements are not constant then this model is not
acceptable.

Q 16. Define inspection and walkthrough.


Inspection:
1. Inspections are the most formal type of review.
2. They are highly structured and require training for each
participant.
3. Inspections differ from peer reviews and walkthroughs in
that the person who presents the material is not the original
programmer.
4. This forces someone else to learn and understand the
material being presented, potentially giving a different slant
and interpretation at the inspection meeting.
Walkthrough:
1. Walkthroughs are the next step up in formality from peer
reviews.
2. In a walkthrough, the programmer who wrote the code
formally presents (Walks through) it to a small group of five
or so other programmers and testers.
3. The reviewers should receive copies of the software in
advance of the review so they can examine it and write
comments and questions that they want to ask at the review.
4. Having at least one senior programmer as a reviewer is very
important.
Q 17. Explain black box testing with two advantages and
disadvantages
Black box testing or functional testing is a method which is used
to examine software functionality without knowing its internal
code structure. It can be applied to all software testing levels but
is mostly employed for the higher level acceptance and system
related ones.
To elaborate, a professional using this method to test an
application’s functionality will only know about the input and
expected output but not about the program which helps the
application reach the desired output. The professional will only
enter valid and invalid inputs and determine the expected outputs
without having any in-depth knowledge of the internal structure.

Black Box Testing Techniques


Test cases in the black box testing method are built around the
specifications, requirements, and design parameters of a
software. Some reliable techniques applied to create those test
cases are:
1. Boundary Value Analysis: The most commonly used black
box testing technique, Boundary Value Analysis or BVA is used
to find the error in the boundaries of input values rather than the
center.
2. Equivalence Class Partitioning: This technique is used to
reduce the number of possible inputs to small yet effective
inputs. Used to test an application exhaustively and avoid
redundancy of inputs, it is done by dividing inputs into classes
and getting value from each class.
3. Decision Table Based Testing: This approach is the most
rigorous one and is ideally suited when system behavior is
defined by combinations of conditions and their corresponding
actions.
4. Cause-Effect Graphing Technique: This technique
considers a system’s desired external behavior only. It helps in
selecting test cases which relate Causes to Effects to create test
cases. In the aforementioned statement, Cause implies a distinct
input condition which results in internal change in a system
while Effect implies an output condition brought by a
combination of causes.
5. Error Guessing: The success of this technique is solely
dependent on the experience of the tester. There are no tools and
techniques as such, but one can write test cases either while
reading the document or while encountering an undocumented
error during the testing.
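The first two techniques can be sketched together. The specification below (an integer field that must accept 1..100) is an assumed example, and `is_valid` is a hypothetical stand-in for the system under test:

```python
# Sketch: black box test design for an input that must accept
# integers from 1 to 100 inclusive (assumed specification).
LOW, HIGH = 1, 100

def is_valid(value: int) -> bool:
    """Hypothetical stand-in for the system under test."""
    return LOW <= value <= HIGH

# Equivalence class partitioning: one representative value per
# class instead of every possible input.
partitions = {
    "invalid, below range": (-5, False),
    "valid, in range":      (50, True),
    "invalid, above range": (150, False),
}
for name, (value, expected) in partitions.items():
    assert is_valid(value) == expected, name

# Boundary value analysis: errors cluster at the edges, so test
# just below, on, and just above each boundary.
boundaries = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundaries.items():
    assert is_valid(value) == expected, f"boundary value {value}"
```

Note how partitioning keeps the test set small while boundary analysis concentrates effort where defects are most likely.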

Advantages / Pros of Black Box Testing


• Unbiased tests because the designer and tester work
independently.
• Tester is free from any pressure of knowledge of specific
programming languages to test the reliability and
functionality of an application / software.
• Facilitates identification of contradictions and vagueness in
functional specifications.
Disadvantages / Cons of Black Box Testing
• Tests can be redundant if already run by the software
designer.
• Test cases are extremely difficult to be designed without
clear and concise specifications.
• Testing every possible input stream is not possible because
it is time-consuming and this would eventually leave many
program paths untested.
• Results might be overestimated at times.
• Cannot be used for testing complex segments of code.
Q 18. Explain White box testing with two advantages and
disadvantages
White box testing, also known as structural testing or code-based
testing, is a methodology which ensures and validates a software
application’s mechanisms, internal framework, and objects and
components. This method of testing not only verifies a code as
per the design specifications, but also uncovers an application’s
vulnerabilities. It is also known as transparent box, glass box,
and clear box testing as it clearly visualizes the software’s
internal mechanisms for a software engineering team.
During the white box testing phase, a code is run with
preselected input values to validate the preselected output values.
If a mismatch is found, it implies that the software application is
marred by a bug. This process also involves writing software
code stubs and drivers.

White Box Testing Techniques: The most important part of the
white box testing method is code coverage analysis, which
empowers a software engineering team to find the areas in a code
that are left unexecuted by a given set of test cases, thereby
helping to improve a software application's quality. There are
different techniques which can be used to perform the code
coverage analysis. Some of these are:
• Statement Coverage: This technique is used to test every
possible statement at least once. Cantata++ is the preferred
tool when using this technique.
• Decision Coverage: This includes testing every possible
decision condition and other conditional loops at least once.
TCAT-PATH, supporting C, C++, and Java applications, is
the go-to tool when this technique is followed.
• Condition Coverage: Every Boolean sub-expression (condition)
  in the code is evaluated to both true and false at least once.
• Decision/Condition Coverage: This is a mixed technique in
  which both decision and condition coverage are achieved at
  least once while the code is executed.
• Multiple Condition Coverage: In this type of white box
  testing technique, all possible combinations of condition
  outcomes within each decision are tested at least once.
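Decision coverage can be made concrete with a small example. The `discount` function below is a hypothetical unit under test, not from the text:

```python
# Sketch: decision coverage on an assumed function under test.
def discount(price: float, is_member: bool) -> float:
    """Members get 10% off; price must be non-negative."""
    if price < 0:                       # decision 1
        raise ValueError("price must be non-negative")
    if is_member:                       # decision 2
        price = price * 0.9
    return price

# Decision coverage: exercise each decision's True and False outcome.
assert abs(discount(100.0, is_member=True) - 90.0) < 1e-9   # decision 2 True
assert abs(discount(100.0, is_member=False) - 100.0) < 1e-9 # decision 2 False

try:                                    # decision 1 True
    discount(-1.0, is_member=False)
    raise AssertionError("expected ValueError for negative price")
except ValueError:
    pass
```

A single happy-path call would already execute most statements, but only the three calls together cover every decision outcome, which is why decision coverage is stronger than statement coverage.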
Advantages / Pros of White Box Testing
• Code optimization by revealing hidden errors
• Transparency of the internal coding structure which is
helpful in deriving the type of input data needed to test an
application effectively
• Covers all possible paths of a code thereby, empowering a
software engineering team to conduct thorough application
testing
• Enables programmer to introspect because developers can
carefully describe any new implementation
• Test cases can be easily automated
• Gives engineering-based rules to stop testing an application
Disadvantages / Cons of White Box Testing
• A complex and expensive procedure which requires the
adroitness of a seasoned professional, expertise in
programming and understanding of internal structure of a
code
• Updated test script required when the implementation is
changing too often
• Exhaustive testing becomes even more complex using the
white box testing method if the application is of large size
• Some conditions might be untested as it is not realistic to
test every single one
• Necessity to create full range of inputs to test each path and
condition make the white box testing method time-
consuming
• Defects in the code may not be detected or may be
introduced considering the ground rule of analyzing each
line by line or path by path.
Q 19. Define Quality Control and Quality Assurance
Quality Control:
• It is a product-oriented activity.
• A part of quality management focused on fulfilling quality
requirements.
• The operational techniques and activities used to fulfill
requirements for quality.
• Quality Control on the other hand is the physical verification
that the product conforms to these planned arrangements by
inspection, measurement etc.
• Quality Control is the process involved within the system to
ensure job management, competence and performance
during the manufacturing of the product or service to ensure
it meets the quality plan as designed.
• Quality Control just measures and determines the quality
level of products or services.
Quality Assurance:
• It is a process-oriented activity.
• A part of quality management focused on providing
confidence that quality requirements will be fulfilled.
• All the planned and systematic activities implemented
within the quality system that can be demonstrated to
provide confidence that a product or service will fulfill
requirements for quality
• Quality Assurance is fundamentally focused on planning
and documenting those processes to assure quality including
things such as quality plans and inspection and test plans.
• Quality Assurance is a system for evaluating performance,
service, of the quality of a product against a system,
standard or specified requirement for customers.
Q 20. What do you mean by stub and driver
Drivers: The module where the required inputs for the module
under test are simulated for the purpose of module or unit testing
is known as a Driver module. The driver module may print or
interpret the result produced by the module under test.

Stubs: The module under test may also call some other module
which is not ready at the time of testing. Dummy modules are
required to simulate it in place of the actual module. These
are called stubs.

For example, if a developer is developing a loop for the
searching functionality of an application, which is a very small
unit of the whole code of that application, then verifying that
this particular loop works properly is known as unit testing.
Importance:
• Stubs and Drivers works as a substitute for the missing or
unavailable module.
• They are specifically developed, for each module, having
different functionalities.
• Generally, developers and unit testers are involved in the
development of stubs and drivers.
• Their most common use may be seen in incremental integration
testing, where stubs are used in the top-down approach and
drivers in the bottom-up approach.
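A minimal sketch of the idea, with hypothetical module and function names: the stub replaces an unfinished lower-level module, and the driver simulates the higher-level caller of the module under test.

```python
# Sketch: a stub and a driver in unit testing (names are invented).

# --- Stub: dummy replacement for a lower-level module that is not
# --- ready, e.g. a real database lookup. It is the called program.
def fetch_user_stub(user_id: int) -> dict:
    return {"id": user_id, "name": "Test User"}

# --- Module under test: calls the (stubbed) lower-level module.
def greeting(user_id: int, fetch_user=fetch_user_stub) -> str:
    user = fetch_user(user_id)
    return f"Hello, {user['name']}!"

# --- Driver: simulates the higher-level caller. It feeds inputs to
# --- the module under test and checks (or prints) the results.
def driver() -> None:
    result = greeting(42)
    assert result == "Hello, Test User!"
    print("unit test passed:", result)

driver()
```

In top-down integration the stub stands in for missing lower modules; in bottom-up integration the driver stands in for missing upper modules.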
Q 21. Explain GUI testing with suitable example.
GUI Testing:
i. GUI testing is a testing technique in which the
application's user interface is tested to check whether the
application performs as expected with respect to user
interface behavior.
ii. GUI testing covers the application's behavior towards
keyboard and mouse movements, and how different GUI
objects such as toolbars, buttons, menu bars, dialog boxes,
edit fields and lists behave in response to user input.
GUI Testing Guidelines:
i. Check screen validations
ii. Verify all navigations
iii. Check usability conditions
iv. Verify data integrity
v. Verify the object states
vi. Verify the date field and numeric field formats

GUI Automation Tools: tools such as Selenium can be used to
automate these checks.
Q 22. Define defect with its effects.
A defect refers to trouble with the software product, with its
external behavior or its internal features. A defect is an
error in coding that causes a program to fail or to produce
incorrect/unexpected results.

Effects: Accurate and proper bug reporting is very important.


If the report contains all the necessary information, the bug is
easy to identify and is repeatable. People who correct a
specific defect can reproduce it so that it can be repaired. This
helps to avoid unnecessary misunderstandings, useless work,
and additional explanations. Reproducing the bug is necessary
to make sure that what we consider to be a bug is indeed a
software bug, not a user mistake or environment error.
Q 23. Explain types of defects.
Defects are defined as the deviation of the actual and expected
result of system or software application. Defects can also be
defined as any deviation or irregularity from the specifications
mentioned in the product functional specification document.
Defects are caused by the developer in the development phase of
the software. When a developer or programmer makes a mistake
during the development phase, it turns into a bug, which is
called a defect. It is basically caused by the developer's
mistakes.
A defect in a software product represents the inability and
inefficiency of the software to meet the specified requirements
and criteria, and subsequently prevents the software application
from performing the expected and desired work.
Types of Defects:
• Arithmetic Defects:
These include defects made by the developer in an arithmetic
expression, or mistakes in finding the solution of such an
expression. These defects are usually made by the programmer
due to excess work or lack of knowledge. Code congestion may
also lead to arithmetic defects, as the programmer is unable
to properly review the written code.
• Logical Defects:
Logical defects are mistakes made in the implementation of the
code. When the programmer doesn't understand the problem
clearly, or thinks about it in a wrong way, such defects occur.
Logical defects also occur when the programmer doesn't take
care of corner cases while implementing the code. They are
basically related to the core of the software.
• Syntax Defects:
Syntax defects are mistakes in the writing style of the code,
including small mistakes made by the developer while writing
the code. Developers often introduce syntax defects when some
small symbol is missed. For example, while writing code in
C++, there is a possibility that a semicolon (;) is missed.
• Multithreading Defects:
Multithreading means running or executing multiple tasks at
the same time, so debugging multithreaded processes can be
complex. In multithreaded processes, conditions such as
deadlock and starvation may arise and lead to the system's
failure.
• Interface Defects:
Interface defects are defects in the interaction between the
software and its users. The system may suffer from different
kinds of interface defects, in the form of a complicated
interface, an unclear interface or a platform-based interface.
• Performance Defects:
Performance defects occur when the system or software
application is unable to meet the desired and expected results,
i.e. when it does not fulfill the user's requirements. This also
includes the response of the system under varying load.
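Two of the defect types above can be made concrete in a few lines of C. The function names and the averaging/leap-year examples are illustrative, not taken from the text: `avg_buggy` shows an arithmetic defect (overflow before division), and `is_leap_buggy` shows a logical defect (a missed corner case).

```c
#include <limits.h>

/* Arithmetic defect (illustrative): a + b can overflow int before
   the division, which is undefined behavior in C. */
int avg_buggy(int a, int b) { return (a + b) / 2; }

/* Fixed: computes the midpoint without overflowing for
   same-sign inputs. */
int avg_fixed(int a, int b) { return a + (b - a) / 2; }

/* Logical defect (illustrative): the century corner cases of the
   Gregorian calendar are not handled. */
int is_leap_buggy(int year) { return year % 4 == 0; }

/* Fixed: 1900 is not a leap year, 2000 is. */
int is_leap_fixed(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}
```

Here `is_leap_buggy(1900)` wrongly returns 1; a tester who exercises the corner case exposes the logical defect.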
Q 24. Explain defect life cycle with the help of a suitable
diagram.
What is Defect Life Cycle?
Defect Life Cycle or Bug Life Cycle in software testing is the
specific set of states that a defect or bug goes through in its
entire life. The purpose of the defect life cycle is to easily
coordinate and communicate the current status of a defect as it
changes between various assignees, and to make the defect-fixing
process systematic and efficient.
Defect Status
Defect Status or Bug Status in the defect life cycle is the present
state that a defect or bug is currently in. The goal of defect
status is to precisely convey the current state or progress of a
defect or bug in order to better track and understand the actual
progress of the defect life cycle.
The number of states that a defect goes through varies from
project to project. The lifecycle below covers all the possible
states:
• New: When a new defect is logged and posted for the first
time. It is assigned a status as NEW.
• Assigned: Once the bug is posted by the tester, the test lead
approves the bug and assigns it to the developer team.
• Open: The developer starts analyzing and works on the
defect fix
• Fixed: When a developer makes a necessary code change
and verifies the change, he or she can make bug status as
“Fixed.”
• Pending retest: Once the defect is fixed, the developer hands
the code back to the tester for retesting. Since the retesting
is still pending on the tester's end, the status assigned is
"pending retest."
• Retest: Tester does the retesting of the code at this stage to
check whether the defect is fixed by the developer or not
and changes the status to “Re-test.”
• Verified: The tester re-tests the bug after it got fixed by the
developer. If there is no bug detected in the software, then
the bug is fixed and the status assigned is “verified.”
• Reopen: If the bug persists even after the developer has
fixed the bug, the tester changes the status to “reopened”.
Once again the bug goes through the life cycle.
• Closed: If the bug no longer exists, then the tester assigns the
status "Closed."
• Duplicate: If the defect is reported twice or corresponds to
the same concept as another bug, the status is changed to
"Duplicate."
• Rejected: If the developer feels the defect is not a genuine
defect, the status is changed to "Rejected."
• Deferred: If the present bug is not of prime priority and is
expected to be fixed in the next release, the status
"Deferred" is assigned to such bugs.
• Not a bug: If the reported issue does not affect the
functionality of the application, the status assigned is "Not a bug."
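The states above can be sketched as a small state machine. This is a minimal illustration covering the main path plus the reopen loop from the bullets; a real defect tracker allows more transitions (e.g. into Rejected, Deferred, or Duplicate), and the function name is illustrative.

```c
/* Simplified defect life cycle: the main path plus the reopen loop. */
typedef enum { NEW, ASSIGNED, OPEN, FIXED, PENDING_RETEST,
               RETEST, VERIFIED, REOPENED, CLOSED } DefectState;

/* Returns 1 if moving a defect from `from` to `to` is allowed
   in this simplified model, 0 otherwise. */
int can_transition(DefectState from, DefectState to) {
    switch (from) {
        case NEW:            return to == ASSIGNED;
        case ASSIGNED:       return to == OPEN;
        case OPEN:           return to == FIXED;
        case FIXED:          return to == PENDING_RETEST;
        case PENDING_RETEST: return to == RETEST;
        case RETEST:         return to == VERIFIED || to == REOPENED;
        case VERIFIED:       return to == CLOSED;
        case REOPENED:       return to == OPEN;  /* goes through the cycle again */
        default:             return 0;           /* CLOSED is terminal */
    }
}
```

For example, `can_transition(RETEST, REOPENED)` is allowed (the bug persisted), while `can_transition(CLOSED, OPEN)` is not, since Closed is terminal in this sketch.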
Q 25. Explain defect management process.
1. Defect Prevention: Implementation of techniques,
methodology and standard processes to reduce the risk of
defects.
2. Deliverable Baseline: Deliverables are considered to be
ready for further development. i.e. the deliverables meet exit
criteria.
3. Defect Discovery: To find the defect through the process of
verification and validation.
4. Defect Resolution: The defect is corrected or corrective action
is taken, and the tester is notified.
5. Process Improvement: To identify ways to improve the
process so as to prevent future occurrences of similar defects,
i.e. corrective and preventive action is taken for process
improvement.
6. Management Reporting: Reporting is about status of
application and processes.
Q 26. Explain defect template with its attributes.
Defect template: A defect report documents an anomaly
discovered during testing. It includes all the information
needed to reproduce the problem, including the author,
release/build number, open/close dates, problem area, problem
description, test environment, defect type, how it was detected,
who detected it, priority, severity, status, etc.
After uncovering a defect (bug), testers generate a formal
defect report. The purpose of a defect report is to state the
problem as clearly as possible so that developers can replicate
the defect easily and fix it.
DEFECT REPORT TEMPLATE
In most companies, a defect reporting tool is used and the
elements of a report can vary. However, in general, a defect
report can consist of the following elements.
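The attributes listed above can be collected in a simple record. The field names and sizes below are illustrative, not a standard template:

```c
#include <string.h>

/* Illustrative defect report record with typical attributes. */
struct DefectReport {
    int  id;
    char title[80];
    char area[32];        /* problem area */
    char severity[16];    /* e.g. "Critical", "Major", "Minor" */
    char priority[16];    /* e.g. "High", "Medium", "Low" */
    char status[16];      /* e.g. "New", "Assigned", "Closed" */
    char detected_by[32]; /* who detected it */
    char build[16];       /* release/build number */
};

/* A report is still open unless its status is "Closed". */
int report_is_open(const struct DefectReport *r) {
    return strcmp(r->status, "Closed") != 0;
}
```

A tracking tool would persist such records and drive the status field through the defect life cycle described earlier.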
Q 26. Explain how a test plan is prepared.
The test plan acts as the anchor for the execution, tracking and
reporting of the entire testing project, and covers:
• What needs to be tested: the scope of testing, including clear
identification of what will be tested and what will not be
tested.
• How the testing is going to be performed: breaking down the
testing into small, manageable tasks and identifying the
strategies to be used for carrying out those tasks.
• What resources are needed for testing: computer as well as
human resources.
• The timelines by which the testing activities will be
performed.
• Risks that may be faced in all of the above, with appropriate
mitigation and contingency plans.
Q 27. List out any 4 advantages of test planning.
• Serves as a guide to testing throughout the
development.
• We only need to define test points during the testing
phase.
• Serves as a valuable record of what testing was done.
• The entire test plan can be reused if regression testing is
done later on.
• The test plan itself could have defects just like
software!
Q 28. Explain the terms Mistake, Error, Defect, Bug, Fault
and Failure in relation with software testing.
Mistake: A mistake is a human action that produces an incorrect
result, such as a wrong step taken by an analyst or programmer.
Error: An error is a situation that arises when the development
team or the developer fails to understand a requirement
definition, and that misunderstanding gets translated into buggy
code. This term is mainly used by developers.
Defect: A defect refers to the situation when the application is
not working as per the requirements, and the actual and expected
results of the application or software are not in sync with each
other.
Bug: A bug refers to a defect, meaning that the software product
or application is not working as per the agreed requirements.
When a logical error causes the code to break, the result is a
bug. Automation and manual test engineers describe this situation
as a bug.
Fault: Sometimes, due to factors such as a lack of resources or
not following proper steps, a fault occurs in software, meaning
that the logic to handle errors was not incorporated into the
application. This is an undesirable situation, but it mainly
happens due to invalid documented steps or a lack of data
definitions.
Failure: A failure is the accumulation of several defects that
ultimately lead to software failure, resulting in the loss of
information in critical modules and making the system
unresponsive.
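The distinction between a fault and a failure can be seen in a couple of lines of C. The division example and function names are illustrative: the fault (a missing zero check) is present in the code at all times, but a failure occurs only when the faulty path is actually executed with b equal to zero.

```c
/* Fault: the zero check is missing. Dividing by zero is undefined
   behavior in C, so a failure shows up only when b == 0 at run
   time; for all other inputs the fault stays dormant. */
int divide_faulty(int a, int b) { return a / b; }

/* Fixed: the fault is removed by handling the b == 0 case. */
int divide_fixed(int a, int b) { return b != 0 ? a / b : 0; }
```

Note that `divide_faulty(10, 2)` works fine even though the fault exists, which is why faults do not necessarily equal failures.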
Q 28. Explain code functional testing and code coverage
testing with example
Code Functional Testing:
• Code Functional Testing involves tracking a piece of data
completely through the software.
• At the unit test level this would just be through an individual
module or function.
• The same tracking could be done through several integrated
modules or even through the entire software product
although it would be more time consuming to do so.
• During data flow tracking, a check is made that all variables
are properly declared and that the loops used are properly set
up and used.
For example:
#include <stdio.h>
int main(void)
{
    int i, fact = 1, n;
    printf("Enter the number: ");
    scanf("%d", &n);
    for (i = 1; i <= n; i++)
        fact = fact * i;
    printf("The factorial of the number is: %d\n", fact);
    return 0;
}
Code Coverage Testing:
• The logical approach is to divide the code just as you did in
black-box testing into its data and its states (or program
flow).
• By looking at the software from the same perspective, you
can more easily map the white-box information you gain to
the black-box cases you’ve already written.
• Consider the data first. Data includes all the variables,
constants, arrays, data structures, keyboard and mouse input,
files and screen input and output, and I/O to other devices
such as modems, networks, and so on.
For example:
#include <stdio.h>
int main(void)
{
    int i, fact = 1, n;
    printf("Enter the number: ");
    scanf("%d", &n);
    for (i = 1; i <= n; i++)
        fact = fact * i;
    printf("The factorial of the number is: %d\n", fact);
    return 0;
}
The declaration of data is completed with the variable
declaration and assignment statements. All the variables declared
are properly utilized.
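Statement coverage for the factorial example can be illustrated with a toy instrumentation scheme. The flags and names below are not from a real coverage tool; they simply record which statements executed. Running the function with n = 0 leaves the loop-body flag unset, showing that a single passing test case can still miss a statement.

```c
/* Toy statement-coverage instrumentation: each flag records
   whether the corresponding statement was executed. */
static int covered[3];

long factorial_instrumented(int n) {
    long fact = 1;            covered[0] = 1;  /* initialization ran */
    for (int i = 1; i <= n; i++) {
        fact = fact * i;      covered[1] = 1;  /* loop body ran */
    }
    covered[2] = 1;                            /* return point reached */
    return fact;
}
```

A coverage-directed tester would add a second case (any n >= 1) specifically to drive the uncovered loop body.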
Q 30. Define static and dynamic testing.
Static testing: In static testing the code is not executed.
Rather, the code, requirement documents, and design documents are
manually checked to find errors. The main objective of this
testing is to improve the quality of software products by finding
errors in the early stages of the development cycle.
Dynamic testing: Dynamic testing is done by executing the
program. The main objective of this testing is to confirm that
the software product works in conformance with the business
requirements.
Q 31. State any two examples of integration testing.
1. Verifying the interface link between the login page and the
home page, i.e. when a user enters the credentials and logs
in, they should be directed to the home page.
2. Check the interface link between the Login and Mailbox
module
3. Check the interface link between the Mailbox and Delete
Mails Module.
4. Verifying the interface link between the home page and the
profile page i.e. profile page should open up.
Q 32. Describe the Integration Testing.
Integration testing -- also known as integration and testing (I&T)
-- is a type of software testing in which the different units,
modules or components of a software application are tested as a
combined entity. However, these modules may be coded by
different programmers. The aim of integration testing is to test
the interfaces between the modules and expose any defects that
may arise when these components are integrated and need to
interact with each other.
What does integration testing involve?
Also known as string testing or thread testing, integration testing
involves integrating the various modules of an application and
then testing their behaviour as a combined, or integrated, unit.
Verifying if the individual units are communicating with each
other properly and working as intended post-integration is
essential.
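The login-to-mailbox interface from the examples above can be exercised with a stub standing in for an unfinished mailbox module. The module and function names here are hypothetical, sketching how a stub (dummy low-level module) lets the interface link be tested before the real module exists:

```c
#include <stddef.h>

/* Stub: a dummy low-level module that always reports success,
   used while the real Mailbox module is under construction. */
int mailbox_open_stub(const char *user) {
    (void)user;
    return 1;
}

/* Module under integration test: Login hands off to Mailbox
   through the function-pointer interface being verified. */
int login_and_open_mailbox(const char *user, const char *pass,
                           int (*open_mailbox)(const char *)) {
    if (user == NULL || pass == NULL)
        return 0;              /* login fails, mailbox never opened */
    return open_mailbox(user); /* interface link under test */
}
```

Once the real mailbox module is ready, it is passed in place of the stub and the same test exercises the genuine interface.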
Q 33. Different techniques for finding defects are:
1. Static technique 2. Dynamic technique 3. Operational technique
1. Static Techniques: Static techniques of quality control involve
checking the software product and related artifacts without
executing them. This is also termed desk checking or verification.
It may include reviews, walkthroughs, inspections, and audits.
Here, the work product is reviewed by the reviewer with the help
of a checklist, standards, any other artifacts, and knowledge and
experience, in order to locate defects with respect to the
established criteria. The static technique is so named because it
involves no execution of code, product, documentation, etc. This
technique helps in establishing conformance to the requirements
view.
2. Dynamic Techniques: Dynamic testing is a validation technique
which includes dummy or actual execution of work products to
evaluate them against expected behavior. It includes testing
methodologies such as unit testing and system testing. These
methods evaluate the product with respect to the requirements
defined and the designs created, and mark it as pass or fail.
3. Operational Techniques: Operational techniques typically
include auditing work products and projects to understand whether
the processes defined for development and testing are being
followed correctly, and whether they are effective. They also
include revisiting defects before and after fixing, and related
analysis. Operational techniques may include smoke testing and
sanity testing of a work product.
Q 34. With the help of diagram describe client-server testing.
Client-Server Software: Client-server software requires specific
forms of testing to prevent or predict catastrophic errors.
Servers go down, records lock, and I/O (input/output) errors and
lost messages can really cut into the benefits of adopting this
network technology. Testing addresses system performance and
scalability by understanding how systems respond to increased
workloads and what causes them to fail.
Software testing is more than just review. It involves the
dynamic analysis of the software being tested. It instructs the
software to perform tasks and functions in a virtual environment.
This examines compatibility, capability, efficiency, reliability,
maintainability, and portability. A certain amount of faults will
probably exist in any software. However, faults do not
necessarily equal failures. Rather, they are areas of slight
unpredictability that will not cause significant damage or
shutdown. They are more errors of semantics. Therefore, testing
usually occurs until a company reaches an acceptable defect rate
that doesn't affect the running of the program, or at least won't
until an updated version has been tested to correct the defects.
Client-Server Testing Techniques:
There are a variety of testing techniques that are particularly
useful when testing client and server programs. Risk driven
testing is time sensitive, which is important in finding the most
important bugs early on. It also helps because testing is never
allocated enough time or resources. Companies want to get their
products out as soon as possible. The prioritization of risks or
potential errors is the engine behind risk driven testing. In risk
driven testing the tester takes the system parts he/she wants to
test, modules or functions, for example, and examines the
categories of error impact and likelihood.
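The impact-and-likelihood prioritization described above can be sketched as a simple score. The 1-to-5 scales, names, and areas below are illustrative, not part of any standard:

```c
/* Risk-driven prioritization sketch: each candidate test area gets
   a score = impact x likelihood, and the highest-scoring areas
   are tested first. */
typedef struct {
    const char *area;
    int impact;      /* 1 (minor) .. 5 (catastrophic) */
    int likelihood;  /* 1 (rare)  .. 5 (frequent) */
} Risk;

int risk_score(const Risk *r) {
    return r->impact * r->likelihood;
}
```

Sorting modules by this score puts the likely, high-impact failures at the front of the test schedule, which matters when testing time is limited.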
Q 35. Explain metrics life cycle.
Software Testing metrics are quantitative steps taken to evaluate
the software testing process's quality, performance, and
progress. This helps us to accumulate reliable data about the
software testing process and enhance its efficiency. This will
allow developers to make proactive and precise decisions for
upcoming testing procedures.
What is a metric in software testing metrics?
A metric is the degree to which a system or one of its components
holds a given attribute. Testers don't define a metric just for
the sake of documentation; it serves greater purposes in software
testing. For example, developers can apply a metric to estimate
the time it takes to develop software. A metric can also be used
to determine the number of new features, modifications, etc.,
added to the software.
Importance of Software Testing Metrics
As mentioned, test metrics are crucial to measuring the quality
and performance of the software. With proper software testing
metrics, developers can−
• Determine what types of improvements are required to
deliver a defect-free, quality software
• Make sound decisions about the subsequent testing
phases, such as scheduling upcoming projects as well as
estimating the overall cost of those projects
• Evaluate the current technology or process and check
whether it needs further modifications
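One common product metric can be computed directly. Defect density (defects found per thousand lines of code, KLOC) is a standard example, though the function name here is illustrative:

```c
/* Defect density: defects found per thousand lines of code (KLOC).
   Returns 0 for a non-positive size to avoid division by zero. */
double defect_density(int defects_found, double kloc) {
    return kloc > 0.0 ? defects_found / kloc : 0.0;
}
```

For instance, 30 defects found in a 15 KLOC module gives a density of 2.0 defects per KLOC, which can then be compared across modules or releases.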
Types of software testing metrics
• Process Metrics: Process metrics define the
characteristics and execution of a project. These
characteristics are essential to the improvement and
maintenance of the process in the SDLC (Software
Development Life Cycle).
• Product Metrics: Product metrics define the size, design,
performance, quality, and complexity of a product. By using
these characteristics, developers can enhance their software
development quality.
• Project Metrics: Project metrics determine the overall quality
of a project. They are used to calculate costs, productivity,
and defects, and to estimate the resources and deliverables of
a project.
It is incredibly vital to identify the correct testing metrics for
the process. A few factors to consider:
• Choose your target audiences wisely before preparing
the metrics
• Define the goal behind designing the metrics
• Prepare metrics by considering the specific
requirements of the project
• Evaluate the financial gain behind each metric
Software testing can be further divided into manual and
automated testing.
In manual testing, the test is performed by QA analysts in a step-
by-step process. Meanwhile, in automated testing, tests are
executed with the help of test automation frameworks, tools, and
software.
Both manual and automated testing have their strengths and
weaknesses.