
D20MCA11140

Section A
Ques. 1: State the goal of a software tester.
Ans. 1:

Bug discovery: 
The immediate goal of testing is to find errors at any stage of software development. The more bugs discovered at an early stage, the better the success rate of software testing.

Quality: 
Since software is also a product, its quality is of primary importance from the users' point of view, and thorough testing ensures superior quality. Therefore, the first goal of understanding and performing the testing process is to enhance the quality of the software product. Though quality depends on various factors, such as correctness, integrity, and efficiency, reliability is the major factor in achieving quality. The software should be passed through a rigorous reliability analysis to attain high-quality standards. Reliability is a matter of confidence that the software will not fail, and this level of confidence increases with rigorous testing. The confidence in reliability, in turn, increases the quality, as shown in Figure 2.

Customer satisfaction: 
From the user's perspective, the prime concern of testing is customer satisfaction. If we want the
customer to be satisfied with the software product, then testing should be complete and thorough. Testing
should be complete in the sense that it must satisfy the user for all the specified requirements mentioned
in the user manual, as well as for the unspecified requirements, which are otherwise understood. A
complete testing process achieves reliability, which enhances the quality, and quality, in turn, increases
customer satisfaction, as shown in Figure 3.
Risk management: 
Risk is the probability that undesirable events will occur in a system. These undesirable events will
prevent the organization from successfully implementing its business initiatives. Thus, the risk is
basically concerned with the business perspective of an organization.


Risks must be controlled to manage them with ease. Software testing may act as a control, which can help in eliminating or minimizing risks (see Figure 4). Thus, managers depend on software testing to assist them in achieving their business goals.

Reduced maintenance cost: 
The maintenance cost of any software product is not its physical cost, as software does not wear out. The only maintenance cost in a software product comes from its failures due to errors. Post-release errors are costlier to fix, as they are difficult to detect. Thus, if testing has been done rigorously and effectively, then the chances of failure are minimized and, in turn, the maintenance cost is reduced.
Improved software testing
process: 
A testing process for one project may not be successful and there may be scope for improvement.
Therefore, the bug history and post-implementation results can be analyzed to find out snags in the
present testing process, which can be rectified in future projects. Thus, the long-term post-implementation
goal is to improve the testing process for future projects.
Bug prevention: 
This is the consequent action of bug discovery. From the behavior and interpretation of the bugs discovered, everyone in the software development team learns how to code safely so that the discovered bugs are not repeated in later stages or future projects. Though errors cannot be reduced to zero, they can be minimized. In this sense, bug prevention is a superior goal of testing.



Ques. 2: Give a short note on risk in testing.

Ans. 2:
Risk is defined as the probability of an unwanted incident. In software testing, risk analysis is the
process of identifying the risks in the application or software you have built and prioritizing them for
testing. After that, each risk is assigned a level, the risks are categorized, and the impact of each risk is
calculated.

Why use Risk Analysis?

In any software project, using risk analysis at the beginning highlights the potential problem areas.
Knowing the risk areas helps the developers and managers mitigate those risks. When a test plan is
created, the risks involved in testing the product should be taken into consideration, along with the
possible damage they may cause to the software and the corresponding solutions.

Now, what are the possible risks that you could encounter? Here is a list:

1. Use of new hardware
2. Use of new technology
3. Use of new automation tool
4. The sequence of code
5. Availability of test resources for the application

There are also certain risks that are unavoidable, enumerated below:

1. The time that you allocated for testing

2. A defect leakage due to the complexity or size of the application

3. Urgency from the clients to deliver the project

4. Incomplete requirements

In such cases, you have to tackle the situation with care. The following points can help:

 Conduct Risk Assessment review meeting

 Use maximum resources to work on high-risk areas

 Create a Risk Assessment database for future use

 Identify and notice the risk magnitude indicators: high, medium, low.

Now, what are these risk magnitude indicators? Well, here is an explanation.

High: the effect of the risk would be very high and not tolerable. The company might face a loss.

Medium: the risk is tolerable but not desirable. The company may suffer financially, but the exposure is limited.


Low: the risk is tolerable. There is little or no external exposure and no financial loss.

Risk Identification
There are different sets of risks included in the risk identification process. Those are as follows:

1. Business Risks: This risk is the most common risk associated with our topic. It is the risk that
may come from your company or your customer, not from your project.

2. Testing Risks: You should be well acquainted with the platform you are working on, along with
the software testing tools being used.

3. Premature Release Risk: A fair amount of knowledge is required to analyze the risk associated with
releasing unsatisfactory or untested software.

4. Software Risks: You should be well versed with the risks associated with the software
development process.

After identifying the risks associated with your software, the next step is to assess the risks, i.e., Risk
Assessment.

Risk Assessment
In the risk analysis process, this step proves to be the most important. It is complex and should be
tackled with the utmost care. After risk identification, assessment has to be dealt with systematically.

The perspectives of Risk Assessment

There are three perspectives of Risk Assessment:

 Effect

 Cause

 Likelihood

Effect – To assess risk by effect, you identify a condition, event, or action and try to determine its impact.

Cause – Assessing risk by cause is the opposite of assessing by effect. Start by scanning the problem and work back to the most probable reason behind it.

Likelihood – Assessing risk by likelihood means estimating the probability that a requirement won’t be satisfied.
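
As a simple illustration of how the likelihood and impact perspectives can be combined into a testing priority, here is a minimal sketch; the risk names, the 1-5 scales, and the thresholds are hypothetical assumptions, not part of the original note.

```python
# Minimal risk-prioritization sketch (hypothetical data and scales).
# Each identified risk is scored by likelihood (1-5) and impact (1-5);
# their product gives a simple "risk exposure" used to rank what to test first.

risks = [
    {"name": "Use of a new automation tool", "likelihood": 3, "impact": 4},
    {"name": "Incomplete requirements",      "likelihood": 4, "impact": 5},
    {"name": "Defect leakage (large app)",   "likelihood": 2, "impact": 5},
]

def exposure(risk):
    """Risk exposure = likelihood x impact (simple multiplicative model)."""
    return risk["likelihood"] * risk["impact"]

def magnitude(score, high=15, medium=8):
    """Map a numeric exposure onto the high/medium/low indicators above."""
    if score >= high:
        return "High"
    if score >= medium:
        return "Medium"
    return "Low"

# Highest-exposure risks are assessed and tested first.
for risk in sorted(risks, key=exposure, reverse=True):
    score = exposure(risk)
    print(f"{risk['name']:32s} exposure={score:2d} -> {magnitude(score)}")
```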


Ques. 3: How does load testing work?


Ans. 3:
Load testing is a type of performance testing that simulates a real-world load on any software,
application, or website. Without it, your application could fail miserably in real-world conditions.

Tools like Retrace exist to help you monitor application performance and fix bugs before your code ever gets to production.

Load testing examines how the system behaves during normal and high loads and determines if a system,
piece of software, or computing device can handle high loads given a high demand of end-users.

This type of testing is typically applied when a software development project nears completion.

How it Works
How to load test?

A load test can be done with end-to-end IT systems or smaller components like database servers or
firewalls.

It measures the speed or capacity of the system or component through transaction response time. When
the system components dramatically extend response times or become unstable, the system is likely to
have reached its maximum operating capacity.

When this happens, the bottlenecks should be identified and solutions provided.
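
To make the mechanics concrete, here is a rough sketch that fires a fixed number of concurrent requests at one endpoint and summarizes response times; the URL, user counts, and timeout are placeholder assumptions, and a real load test would use a dedicated tool such as the ones named under best practices below.

```python
# Minimal load-test sketch: simulate N concurrent "users" hitting one URL
# and summarize response times. The endpoint and numbers are placeholders;
# real tests use dedicated tools (WebLOAD, LoadView, LoadRunner, etc.).
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"   # hypothetical endpoint
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 5

def one_user(_):
    """Each simulated user sends a few sequential requests and records latency."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
                response.read()
            latencies.append(time.perf_counter() - start)
        except Exception:
            latencies.append(float("inf"))     # count failures as unusable latency
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_latencies = [lat for user_result in pool.map(one_user, range(CONCURRENT_USERS))
                         for lat in user_result]
    ok = [lat for lat in all_latencies if lat != float("inf")]
    print(f"requests sent: {len(all_latencies)}, failed: {len(all_latencies) - len(ok)}")
    if ok:
        print(f"median latency: {statistics.median(ok):.3f}s, worst: {max(ok):.3f}s")
```

A dramatic rise in the reported latencies as the user count is increased is the signal, described above, that the system is approaching its maximum operating capacity.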

Benefits of Load Testing

Benefits include the discovery of bottlenecks before production, scalability, reduction of system
downtime, improved customer satisfaction, and reduced failure costs. Specifically:

 Discovering bottlenecks before deployment. Evaluating a piece of software or a website before
deployment can highlight bottlenecks, allowing them to be addressed before they incur large real-world costs.

 Enhance the scalability of a system. It can help identify the limit of an application’s operating
capacity. This can aid in determining infrastructure needs as the system scales upward.

 Reduced risk for system downtime. It can be used to ferret out scenarios that can cause a system to
fail. This makes it a great tool for finding solutions to high-traffic problems before they arise in the
real world.

 Improved customer satisfaction. If a website’s response times are short even as it scales up to a
higher audience, one-time customers will be more apt to revisit.

 Reduced failure cost. Identifying concerns at the earliest stage possible, especially before launch,
decreases the cost of failures. By contrast, after-launch failures can incur exponentially greater costs.


Best Practices for Load Testing

The best tools and software are not all it takes to perform favorable load testing of your application.
What you need most is knowledge of the best practices for load testing. Here are a few tried and tested
practices:

 Identify business goals. A strong understanding of future goals for scope and volume will draw clear
guidelines to inform the process.

 Determine key measures for the application and web performance. Agree on criteria to track. Some
criteria include response times, throughput, resource utilization, maximum user load, and business
performance metrics.

 Choose a suitable tool. Select a tool that best caters to your needs. Some tools include but are not
limited to WebLOAD, LoadView, and LoadRunner.

 Create a test case. In writing a test case, make sure both positive and negative scenarios are taken into
account. Test cases must be accurate and capable of being traced to requirements.

 Understand your environment. Consider different types of deployments you might want to test.
Create configurations similar to typical production. Test different system capacities like security,
hardware, software, and networks.

 Run tests incrementally. During these tests, the system will ultimately fail. One key goal is
determining what volume results in failure, and spotlighting what fails first.

 Always keep end-users in mind. The satisfaction of customers and site visitors is crucial to the
achievement of business metrics. This plays into their willingness to revisit a site or re-access an
application.


Ques. 4: What is meant by Boundary Value Analysis?

Ans. 4: Boundary Value Analysis (BVA) is a Black-Box testing technique used to check the errors at the
boundaries of an input domain.
The name comes from the Boundary, which means the limits of an area. So, BVA mainly focuses on
testing both valid and invalid input parameters for a given range of a software component.

If (Min, Max) is the range given for a field validation, then the boundary values are as follows:

Invalid boundary check: { Min-1, Max+1 }

Valid boundary check: { Min, Min+1, Max-1, Max }

Boundary-value analysis is a software testing technique in which tests are designed to include representatives of boundary values in a range.

The idea comes from the boundary. Given that we have a set of test vectors to test the system, a topology
can be defined on that set. Those inputs which belong to the same equivalence class as defined by
the equivalence partitioning theory would constitute the basis.

Given that the basis sets are neighbors, there would exist a boundary between them. The test vectors on
either side of the boundary are called boundary values.

In practice, this would require that the test vectors can be ordered and that the individual parameters follow some kind of order (either partial order or total order).

Boundary value analysis is one of the most widely used test case design techniques for black-box testing. It is used to test boundary values because input values near the boundary have higher chances of error.

When testing with boundary value analysis, the tester focuses on whether the software produces correct output when boundary values are entered.

Boundary values are those that contain the upper and lower limit of a variable. Assume that age is a
variable of some function, with a minimum value of 18 and a maximum value of 30; both 18 and 30 will
be considered boundary values.
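
To make the age example concrete, the sketch below derives the usual boundary test values for the 18-30 range and checks them against a small validation function; the function itself (is_valid_age) is a hypothetical system under test invented for illustration.

```python
# Boundary value analysis sketch for an age field with valid range 18-30.
# The validator below is a hypothetical example function to test against.

MIN_AGE, MAX_AGE = 18, 30

def is_valid_age(age: int) -> bool:
    """Hypothetical system under test: accepts ages in [18, 30]."""
    return MIN_AGE <= age <= MAX_AGE

# Classic BVA values around each boundary of the valid range.
valid_boundaries = [MIN_AGE, MIN_AGE + 1, MAX_AGE - 1, MAX_AGE]   # 18, 19, 29, 30
invalid_boundaries = [MIN_AGE - 1, MAX_AGE + 1]                   # 17, 31

for age in valid_boundaries:
    assert is_valid_age(age), f"{age} should be accepted"
for age in invalid_boundaries:
    assert not is_valid_age(age), f"{age} should be rejected"

print("All boundary value checks passed.")
```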

The basic assumption of boundary value analysis is that test cases created using boundary values are most likely to cause an error.

It’s widely recognized that the input values at the extreme ends of the input domain cause more errors in
the system.

More application errors occur at the boundaries of the input domain. ‘Boundary Value Analysis’ Testing
technique is used to identify errors at boundaries rather than finding those that exist in the center of the
input domain.

Boundary Value Analysis is a natural extension of Equivalence Partitioning for designing test cases, where test cases are selected at the edges of the equivalence classes.


Ques. 5: Compare verification and validation.

Ans. 5:

Verification in Software Testing is a process of checking documents, design, code, and program in
order to check if the software has been built according to the requirements or not.

The main goal of the verification process is to ensure the quality of the software application, design, architecture, etc. The verification process involves activities like reviews, walk-throughs, and inspections.

Validation in Software Engineering is a dynamic mechanism of testing and validating if the software
product actually meets the exact needs of the customer or not.

The process helps to ensure that the software fulfills the desired use in an appropriate environment. The
validation process involves activities like unit testing, integration testing, system testing and user
acceptance testing.

Key Difference:
 Verification process includes checking of documents, design, code and program whereas
Validation process includes testing and validation of the actual product.

 Verification does not involve code execution, while validation involves code execution (a small illustrative sketch follows this list).

 Verification uses methods like reviews, walkthroughs, inspections and desk-checking whereas
Validation uses methods like black box testing, white box testing and non-functional testing.

 Verification checks whether the software conforms to a specification, whereas validation checks
whether the software meets the requirements and expectations.

 Verification finds bugs early in the development cycle, whereas validation finds the bugs that
verification cannot catch.

 Comparing validation and verification in software testing, the verification process targets software
architecture, design, database, etc., while the validation process targets the actual software product.

 Verification is done by the QA team, while validation is done by the testing team together with the
QA team.

 Comparing verification vs validation testing, the verification process comes before validation,
whereas the validation process comes after verification.
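
As a small, hedged illustration of the execution distinction called out in the list above, the sketch below first "verifies" a code artifact statically (no execution, only inspection against a simple stated requirement) and then "validates" it dynamically by running it; the module source and its requirement are made-up examples.

```python
# Illustrative contrast between verification (no execution) and validation
# (execution). The SOURCE module and its "specification" are hypothetical.
import ast

SOURCE = '''
def add(a, b):
    return a + b
'''

# Verification: statically review the artifact against the specification
# "a function named add must exist" -- the code is parsed, never run.
tree = ast.parse(SOURCE)
defined = {node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)}
assert "add" in defined, "specification violation: function 'add' is missing"

# Validation: execute the code and check it meets the user's actual need.
namespace = {}
exec(SOURCE, namespace)                # dynamic step: the code actually runs
assert namespace["add"](2, 3) == 5, "behaviour does not meet expectations"

print("verified (static review) and validated (dynamic execution)")
```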


Ques. 6: Write roles played by team lead in testing.


Ans. 6:
Software testing has gained importance over the years. It is increasingly crucial for projects to undergo rigorous
testing by an independent testing team. Software is tested at every level of its development.

The independent testing team is hired to test the software with a stringent plan, in an organized manner,
following standard techniques and using software testing tools. Listed here are the roles of the test
leader and test manager in the software testing process of IT projects.

The software testing team comprises a test manager, test leaders, and testers. The testers include entry-level software testers, senior testers, automation testers, performance testers, etc.

A test manager leads many testing groups, with each testing group led by a test leader. In some
projects, if there are only one or two testing groups, a test leader leads them and hiring a test manager is
optional. The roles of the test leader and test manager are similar.

The testers and testing groups are hired based on the testing workload in the team. The testing team works in
collaboration with the other IT team members, such as business analysts, architects, developers, and system
administrators. The typical organisation structure places the test manager above the test leaders, each of whom leads a group of testers.

Before understanding the roles and responsibilities of a test lead or test manager, we should first understand
what test management is. It is an important process for ensuring software quality, involving the processes of
testing and validating the software.

The test management practice includes organising and controlling the testing process, and also ensuring the
visibility, traceability, and control of the testing process so that it delivers high-quality software.

The software development life cycle includes software testing as one of its phases; this improves the
quality, reliability, and performance of the system and produces a good-quality product for a
competitive market.

To create an effective test process, we need a good test manager. The test manager or lead plays a central role
in the team and takes full responsibility for the project’s success.


The roles of Test leader and Test manager in software testing process of IT projects are listed below:

 Building and leading the testing team to the success of the project.
 Develop test strategy and test plans for projects
 Participate in developing and reviewing the test policies for organisation.
 Defining the scope of testing within the context of every release and every software testing level or
cycle.
 The use of resources in an effective way and managing the resources for software testing.
 Applying the appropriate test measurement and metrics for the software product and testing team. 
 Identify and resolve the project risks in the testing team, such as:
o Not enough time to test
o Not enough resources to test
o The project budget is low
o Testing teams are offshore
o The requirements are too complex


Ques. 7: Write note on Incident Status.


Ans. 7:
An incident report can be defined as a written description of an incident observed during testing. To
understand this better, let’s start with what an ‘incident’ is.

An incident in software testing can be defined as a variation or deviation observed in system behavior from
what is expected. It can be a deviation from a functional requirement or from the environment setup.

Very often an incident is referred to as a defect or a bug, but that is not always the case. An incident is basically
any unexpected behavior or response from the software that requires investigation.

Let’s study the difference between the two to understand better.

S.No. 1
Incident: Occurrence of any unexpected behavior while testing.
Defect: When the actual behavior does not match the expected behavior.

S.No. 2
Incident: It might or might not be required to fix it, depending on whether it is a defect or just a mistake or some environmental issue.
Defect: It needs to be fixed.

As mentioned in the table, an incident needs to be investigated, and based on the investigation the incident
can be promoted to a defect. Most often it turns out to be a defect, but sometimes it might occur due to
other factors such as:

 Human mistake.
 Missing or Obscure documented requirement.
 Environment issue such as no response from back-end server causing intermittent unexpected
behavior or error.

Now that we understand incidents, let’s move on to the incident report. When an incident is observed, a
descriptive report is prepared, logged, and tracked.

A software product may have many incidents, depending on its size; even a small product might have up to
100 reported incidents. It is difficult to keep track of all of them in an unorganized manner, so a report comes
in handy to differentiate and manage the incidents.
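
A minimal sketch of what one tracked incident record might hold is shown below; the field names, status values, and sample data are illustrative assumptions, since real teams record this in a defect-tracking tool rather than hand-rolled classes.

```python
# Illustrative incident record; field names and status values are assumptions,
# not a prescribed schema. Real projects track incidents in a dedicated tool.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentReport:
    incident_id: str
    summary: str
    observed_behavior: str
    expected_behavior: str
    reported_by: str
    reported_on: date
    severity: str = "Medium"            # e.g. High / Medium / Low
    status: str = "New"                 # New -> Investigating -> Promoted to defect / Closed
    environment: str = ""               # OS, browser, build number, back-end details, etc.
    attachments: list = field(default_factory=list)

    def promote_to_defect(self) -> None:
        """After investigation, an incident may be promoted to a defect."""
        self.status = "Promoted to defect"

report = IncidentReport(
    incident_id="INC-042",
    summary="Login button unresponsive on second attempt",
    observed_behavior="No response after clicking Login again",
    expected_behavior="User is logged in or shown an error message",
    reported_by="tester1",
    reported_on=date(2021, 3, 15),
)
report.promote_to_defect()
print(report.incident_id, "->", report.status)
```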

Some of the benefits of having an incident report are as follows:


 It is easy to manage and track the incidents.
 It provides detailed information to the developer or a third team, helping them understand the
problem and provide a solution for it.
 It helps in keeping or storing records for future reference while creating a regression test suite.
 A report can later help in categorizing the defects, aiding root cause analysis and identifying the
most problematic areas.
 There might be multiple testers testing the software, and it is possible that more than one person
notices the same incident. An incident report helps a tester avoid reporting a duplicate issue, as
the records can be checked before filing the incident.


Ques. 8: Mention the Gray box testing Techniques.


Ans. 8:
Gray box testing (a.k.a grey box testing) is a method you can use to debug software and evaluate
vulnerabilities. In this method, the tester has limited knowledge of the workings of the component being
tested.

This is in contrast to black box testing, where the tester has no internal knowledge, and white box testing,
where the tester has full internal knowledge.

Gray Box Testing Techniques:

Matrix Testing:

In the matrix testing technique, the business and technical risks defined by the developers in software
programs are examined. Developers define all the variables that exist in the program; each variable has an
inherent technical and business risk and can be used with varying frequencies during its life cycle.

Pattern Testing:

To perform this testing, previous defects are analyzed, and the cause of each failure is determined by looking
into the code. The analysis template includes the reasons for the defect. This helps in designing test cases that
proactively find other failures before they hit production.

Orthogonal Array Testing:

It is mainly a black-box testing technique. In orthogonal array testing, the test data have n numbers of
permutations and combinations. Orthogonal array testing is preferred when maximum coverage is
required from relatively few test cases and the test data is large. It is very helpful in testing complex
applications.
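
As a small illustration of the idea (with a made-up example of three two-valued parameters), the sketch below contrasts exhaustive combination testing with the classic L4(2^3) orthogonal array and verifies that every pairwise combination of values is still covered by only four test cases.

```python
# Orthogonal array illustration with three two-valued parameters (hypothetical example).
# Exhaustive testing needs 2*2*2 = 8 combinations; the L4(2^3) orthogonal array
# covers every pairwise combination of values with only 4 rows.
from itertools import combinations, product

values = {"browser": ["Chrome", "Firefox"],
          "os": ["Windows", "Linux"],
          "network": ["WiFi", "4G"]}
params = list(values)

exhaustive = list(product(*values.values()))            # all 8 test cases

# Standard L4(2^3) array, expressed as index rows into each parameter's value list.
l4_rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
orthogonal = [tuple(values[p][i] for p, i in zip(params, row)) for row in l4_rows]

def pairs_covered(cases):
    """Every (parameter, value) pair combination that appears in the given cases."""
    seen = set()
    for case in cases:
        for i, j in combinations(range(len(params)), 2):
            seen.add(((params[i], case[i]), (params[j], case[j])))
    return seen

assert pairs_covered(orthogonal) == pairs_covered(exhaustive)
print(f"exhaustive cases: {len(exhaustive)}, orthogonal-array cases: {len(orthogonal)}")
```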

Regression Testing:

Regression testing is testing the software after every change in the software to make sure that the changes
or the new functionalities are not affecting the existing functioning of the system. Regression testing is
also carried out to ensure that fixing any defect has not affected other functionality of the software.


Ques. 9: Classify types of Static analysis.


Ans. 9:
The testing process is categorized into two types: static testing and dynamic testing. Static testing differs from
dynamic testing in that it does not involve execution of the program or the software, whereas dynamic testing
is performed by executing the code of the software product.

By definition, static testing is a human testing technique that does not involve executing or running the
program or software product. Instead, it involves checking or monitoring the software at each phase of the
Software Development Life Cycle (SDLC) against the requirements or standards mentioned in the Software
Requirement Specification (SRS) of the software product.

Types of Static Testing:

1. Software Inspection:
The inspection process is performed in the earlier stages of the SDLC and is applied to a specific part of the
product, like the SRS, code, product design, etc. It involves manually examining the various components of
the product at these earlier stages. The software inspection process comprises six phases, which are as
follows:

 Planning for Inspection Meeting –
 This phase focuses on identifying the product to be inspected and the objective of this
inspection.
 A moderator – who manages the entire inspection process, is assigned in this phase.
 The assigned moderator checks if the product is ready for inspection or not. The
moderator also selects the inspection team and assigns them their roles.
 Moderator also schedules the inspection meeting and distributes the required material to
the inspection team.
 Overview –
 In this phase, the inspection team is given all the background information for the
inspection meeting.
 The author – the coder or designer responsible for developing the product – presents the
logic and reasoning for the product, including the product’s functions, its intended
purpose, and the approach or concept used while developing it.
 It is made sure that each member of the inspection team has understood and is familiar with
the objectives and purpose of the inspection meeting that is to be held.
 Individual Preparation by members –


 In this phase, the members of the inspection team individually prepare for the inspection
meeting by studying the material provided in the earlier phases.
 The team members identify the potential errors or bugs in the product and record them in
a log. The log is finally submitted to the moderator. The moderator then compiles all the
logs received from the members and sends a copy of it to the author.
 The inspector – who is the person responsible for checking and identifying errors and
inconsistencies in the documents or programs, reviews the product and records any
problems found in it (both general and area-specific). The inspector records the problems
or issues on a log along with the time spent on preparation.
 The moderator reviews the logs to check if the team is ready and prepared for the
inspection meeting or not.
 Finally, the moderator submits all the compiled logs to the author.
 Inspection Meeting –
 This phase involves the author’s discussion of the issues raised by the team members in the
compiled log.
 Members arrive at a decision of whether the issue raised is an error or not.
 The moderator concludes the meeting and provides a summary of the meeting – a list of
errors found in the product that are to be resolved by the author.
 Rework –
 Rework is carried out by the author according to the summary list presented by the
moderator in the previous phase.
 The author fixes all the bugs and reports back to the moderator.
 Follow-up –
 Moderator checks if all errors have been resolved or not. The moderator then prepares a
report. If all errors are fixed and addressed, the moderator releases the document.
 Otherwise, unresolved issues are added to the report and another inspection meeting is
scheduled.

2. Structured Walkthroughs:
This type of static testing is less formal and less rigorous in nature. It has a simpler process compared to
the inspection process.

It involves the following four steps:

 Organization –
This step involves assigning roles and responsibilities to the team selected for structural
walkthroughs. The team can consist of the following members :
 Coordinator – organizes and coordinates with all the members for the walkthrough
related activities.
 Presenter – introduces the item to be inspected.
 Scribe – notes down all the issues and suggestions put forward by the members.
 Tester – finds the defects or bugs in the item to be inspected.
 Maintenance Oracle – focuses on future maintenance of the product.
 Standards Bearer – evaluates conformance to the standards and guidelines.
 User Representative – represents the needs and concerns of the user.
 Preparation –
 In this step, focus lies on preparing for the structural walkthroughs which could include
thinking of basic test cases that would be required to test the product.


 Walkthrough –
 Walkthrough is performed by a tester who comes to the meeting with a small set of test
cases.
 Test cases are executed by the tester mentally and results are noted down on a paper or a
presentation media.
 Rework and Follow-up –
 This step is similar to that of the last two phases of the inspection process.
 If no bugs are detected, then the product is approved for release. Otherwise, errors are
resolved and again, a structured walkthrough is conducted.

3. Technical Reviews:
This is a higher-level technique compared to the inspection or walkthrough techniques because it also
involves management. It is used to assess and evaluate the product by checking its conformance to the
development standards, guidelines, and specifications.
It does not have a rigidly defined process, and most of the work is carried out by the moderator, as discussed
below:
 Moderator gathers and distributes the material and documentation to all team members.
 Moderator also prepares a set of indicators to evaluate the product with respect to the specifications
and already established standards and guidelines:
 consistency
 documentation
 adherence to standards
 completeness
 problem definition and requirements
 The results are recorded in a document which includes both defects as well as suggestions.
 Finally, the defects are resolved and the suggestions are taken into account for improving the product.


Ques. 10: List three main categories of software testing.


Ans. 10:

Software Testing is a method to check whether the actual software product matches expected
requirements and to ensure that software product is Defect free.

It involves execution of software/system components using manual or automated tools to evaluate one or
more properties of interest.

The purpose of software testing is to identify errors, gaps or missing requirements in contrast to actual
requirements.

Some prefer to define software testing in terms of White Box and Black Box Testing. In simple terms,
software testing means the Verification of the Application Under Test (AUT).


Types of Software Testing

Typically, testing is classified into three categories:

 Functional Testing
 Non-Functional Testing or Performance Testing
 Maintenance (Regression and Maintenance)

Functional Testing:
 Unit Testing
 Integration Testing
 Smoke
 UAT (User Acceptance Testing)
 Localization
 Globalization
 Interoperability
 So on

Non-Functional Testing:
 Performance
 Endurance
 Load
 Volume
 Scalability
 Usability
 So on

Maintenance:
 Regression
 Maintenance

1. Functional Testing
Functional testing involves the testing of the functional aspects of a software application. When you’re
performing functional tests, you have to test each and every functionality. You need to see whether you’re
getting the desired results or not.
Functional tests are performed both manually and using automation tools. For this kind of testing, manual
testing is easy, but you should use tools when necessary.

Some tools that you can use for functional testing are Micro Focus UFT (previously known as QTP, and
UFT stands for Unified Functional Testing), Selenium, JUnit, soapUI, Watir, etc.
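
As a tiny, hedged illustration of an automated functional check (shown with Python's built-in unittest rather than the Java-based JUnit named above; the apply_discount function under test is a made-up example):

```python
# Minimal automated functional test sketch using Python's unittest.
# The function under test (apply_discount) is a hypothetical example.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_regular_discount_is_applied(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_discount_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```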

2. Non-functional Testing
Non-functional testing is the testing of non-functional aspects of an application, such as performance,
reliability, usability, security, and so on. Non-functional tests are performed after the functional tests.
With non-functional testing, you can improve your software’s quality to a great extent. Functional tests
also improve the quality, but with non-functional tests, you have the opportunity to make your software
even better. Non-functional testing allows you to polish the software. This kind of testing is not about
whether the software works or not. Rather, it’s about how well the software runs, and many other things.
Non-functional tests are generally not run manually. In fact, it’s difficult to perform this kind of test
manually, so these tests are usually executed using tools.


Section B
Ques. 1: Illuminate the content of QA plan.

Ans. 1:
Quality assurance (QA) aims to keep the quality of a product or service at a specified level, focusing on
each stage of delivery or production to make sure there are no issues.

Maintaining a high-quality level is essential for the continued success of a business. A good quality
assurance program will support businesses’ goals by clearly identifying the areas in which they do well
and those that need some work.

Following a good quality assurance plan reduces the chance of costly mistakes and mitigates risk.

It should be noted that quality assurance is not quality control (QC). QA focuses on defect prevention
while QC focuses on identifying product defects. A QA plan differs from QC planning, though there may
be some overlap between the two.

Using a Quality Assurance Plan

Every company, no matter how big, needs to have procedures in place that make sure its products are
managed successfully. You can’t just have an idea and immediately begin selling the product or service
without having done some work to make sure it meets quality standards.

Every product or service should offer value to the person who purchases it. A quality assurance plan
(QAP) makes this happen, ensuring that the quality of the product meets the end-users’ needs and any
industry standards that are present. It also reduces the chance of errors in the production process and
highlights areas that need improving.

A quality control plan covers all aspects of project/product development by addressing different
functions, how they perform, and maintaining or making changes when necessary.

Using a quality assurance plan template is an effective way to make it adaptable depending on the
company’s needs. For example, it can be adapted to be specifically for product or software quality
assurance. Whether the organizational focus is on growth or customer care, each QAP focuses on those
defined areas.

Steps to Create a Quality Assurance Plan

If you’re building a QAP from scratch or making modifications to one that’s already in place, here are six
steps to follow.

1. Define Quality Objectives

The first step to quality control planning is to define your goals. One formula that is often used in this
process is the SMART formula.


Goals should be:

 Specific
 Measurable
 Appropriate
 Realistic
 Timely
They need to be clear, measurable, and something that can be realistically attained in a timely manner.

Quality goals also have to match what your customer is expecting. If the quality of your product is low, it
ends up failing both the customer and the company. You should also make sure that you adhere to any
government regulations that could apply to the project as well.

2. Roles and Responsibilities

When you decide to move forward with a quality plan, there are various options. Big companies may
have an entire department dedicated to QA whereas small ones may have to go outside the organization or
assign additional roles to current employees.

Training staff is important as is giving them jobs that meet their skill set. You want a team in place that
can detect and repair issues in the early part of product development. This staves off mistakes and makes
sure that the company meets its end-users’ expectations.

The QA team, no matter what form it takes, has to have processes and rules that everyone understands
and follows, making sure that every project is clear in its objectives and expectations.

3. Implement the Quality Assurance Plan

Once the plan is ready to go, it’s time to implement your new quality control procedures. Make sure all
staff members have been trained and are aware of the contents of the plan. It’s also essential that your
team has the tools and resources it needs to meet its targets.

The plan should be clear in how it backs the company’s mission and should be simple so there is no
question that everyone understands it and that each step of the process is followed. Consult with HR and
production to confirm that the QA plan meets risk management and documentation guidelines as well.

4. Examine the Results

Once the quality assurance plan is active, consider it a living document that can adapt and evolve as the
team sees fit. While the initial goals of the QA plan should be held at the forefront, the results are also used to
make sure the team’s objectives are being met.

If not, why and how do new policies and guidelines make this happen? Listen to the team, other
employees, and customers to make sure you are meeting everyone’s realistic expectations.

5. Make Adjustments

After listening to feedback and looking at the original goals, make adjustments as needed. While many
suggest a plan audit every two to three months, this can happen sooner if needed. If a flaw is found or
there are problems that need addressing, making adjustments sooner rather than later is helpful not only in
meeting the company’s goal but also in saving or increasing sales by putting an improved product or
service forward.

6. Keep Your Team in the Loop

Staff want to know that what they are doing is working. If the QA plan has improved the organization, let
them know. Positive or constructive feedback makes for a more solid team and lets them know they are
making a positive difference for the company.

That you are empowering staff to make a difference inspires them to continue to do a good job and gives
strong input into the QA process. So, have staff follow the plan, give both positive and negative
feedback, and listen so adjustments can be made.

Quality assurance planning can save your company time and money.


Ques. 2: Elaborate the importance of software quality in software testing.

Ans. 2:

In a manufacturing organization, the quality assurance department is responsible for assuring that the
products are built as designed, according to an approved project and quality process. Deviations from the
process are identified and corrected. Software quality assurance (SQA) can be seen in a similar way,
assuring that the necessary processes are written and followed and the resulting software (output) is free
from any anomalies or defects.

As part of the quality system, software quality assurance is important as it defines and measures the
adequacy of the software (SW) process, providing evidence that establishes confidence to produce SW
products of suitable quality for their intended purposes.

SQA also is a practice of monitoring the software engineering processes and methods used in a project to
ensure the proper quality of the software is achieved. It may include ensuring conformance to standards or
models, such as ISO 25010, which superseded ISO/IEC 9126, CMMI, or ASPICE.

There are seven reasons why QA brings value to the project and to the whole company as well.

1. Saves money and time: If bugs and defects are found in the early stages of development, you
will spend less money and time fixing them.
2. Stable and competitive product. QA processes and testing verify that the system meets the
different requirements, including functional, performance, reliability, security, usability, etc.
There are many devices, browsers, environments, and the product should work properly in any of
them.
3. Build constant processes: It is essential to note that an excellent QA team not only finds and
fixes bugs; their primary purpose is to create continuous processes that prevent the recurrence
of defects and, as a consequence, improve the quality of future systems. Good
quality adds value to your product and helps it stand out in the market.
4. Safety: When we create or develop something, we need to ask ourselves a simple question: is our
product (software, application, site, etc.) secure, efficient, and trustworthy?
5. Reputation: If you want your product to attract users, customers, or subscribers, you must be
sure that everything works properly before its release. If not, users will notice errors before you
do, which will impact your reputation and brand trust. QA engineers work throughout the
software development life cycle and apply different testing methodologies to ensure that your
product will not receive bad reviews.
6. It helps meet client expectations: QA makes sure that the result meets the business and user
requirements. It ensures application reliability and user satisfaction, which are key for
business development.
7. New suggestions and views on your project: Who would know the entire product better than
one who thoroughly examines all its pitfalls? QA Experts can always suggest improvements to
your product.


Not only for the consumer’s sake but for the sake of the software product itself, it is important that
businesses use a credible SQA process to establish a baseline of expectations for the product. 

Software quality assurance is a reliable means of producing high-quality software. There shouldn’t be
much need to justify why you would want high-quality software, but if that is not convincing, consider
the other reasons why improving software quality is of the utmost importance. 

First, a high-quality software product will save you time and money. As a business, providing a good
that’s worth buying is one of your primary objectives. 

If you’re building software, software quality assurance can confirm that your good, or software product,
is worth buying. 

The problem with a product that is not worth buying is that you lose money making the product, and you
don’t get a dime of that money back if there’s no profit earned. 

To that same end, delivering a high-quality product implies less maintenance over time because your
software product will be resilient in the first place. Therefore, you can spend the least amount of time and
money on upkeep, if the product needs future maintenance at all. 

Altogether, software quality assurance remains a key factor in scaling your business as well as
preserving a good reputation for your brand. 

Importance:

Saves Time and Money

The advantage of having systems and processes in place during development is that they anticipate and
prevent most bugs and flaws from developing in the first place. As a result, the errors that do surface are
relatively minor, and can be fixed easily.

On the other hand, without QA, most bugs would potentially be bigger and may only be caught in the
testing phase, or after the program was released. Fixing these bugs after the fact would require more time,
which in turn could cost more.

Maintains Product Quality

QA processes are designed to ensure that the software product works reliably and is stable. In addition,
there are Quality Control (QC) tests designed to test the functionality, performance, security, usability,
and more.

Furthermore, these tests also consider the fact that the user might not use the program as it was intended.
Part of this testing is to ‘idiot-proof’ the product so that improper use does not cause failure.

As a result, the final product has minimal defects and is guaranteed to work as intended.

Ensures Security


Whilst a software program might perform all functions as intended, it may not necessarily be completely
secure. If there are any weaknesses in its defences, the product and users’ data could be compromised.

One of the reasons QA is so important in software development is to ensure that your product is built with
security in mind, and has been tested properly to ensure that the safeguards in place work.

Protects Your Reputation

The quality of your software can reflect on your company and brand. By releasing a high-quality product
that offers excellent features with comprehensive security, you can build a positive reputation for your
business.

This is where the importance of QA in software development is most evident. It ensures that your product
serves as a fitting brand ambassador for your business.

Customer Satisfaction

In order to ensure satisfied customers, your product needs to fulfil their needs. It should have all the
features required and work properly. The role of QA is exactly that – to make sure that the software gives
your customers exactly what they expect.

The QA team would define the features of the deliverables and then work through each step of the
development process to ensure that they are being delivered. They then check to see if the software works
smoothly and without any problems. As a result, customers get a quality product that they are happy to
use.


Ques. 3: Explain in detail working process of Path Testing.

Ans. 3:

Path Testing is a method that is used to design the test cases. In path testing method, the control flow
graph of a program is designed to find a set of linearly independent paths of execution.

In this method Cyclomatic Complexity is used to determine the number of linearly independent paths and
then test cases are generated for each path.

It gives complete branch coverage but achieves that without covering all possible paths of the control flow
graph. McCabe’s cyclomatic complexity is used in path testing.
Path testing is a structural testing method that uses the source code of a program to find every possible
executable path.

It helps to determine all faults lying within a piece of code. This method is designed to execute all or
selected paths through a computer program.

Any software program includes multiple entry and exit points. Testing each of these points is
challenging as well as time-consuming. In order to reduce redundant tests and to achieve maximum
test coverage, basis path testing is used.

Path Testing Process:

 Control Flow Graph:
Draw the corresponding control flow graph of the program, in which all the executable paths are to be
discovered.

 Cyclomatic Complexity:
After generating the control flow graph, calculate the cyclomatic complexity of the program using the
formula V(G) = E - N + 2P, where E is the number of edges, N is the number of nodes, and P is the
number of connected components (P = 1 for a single program graph).


 Make Set:
Make a set of all the paths according to the control flow graph and the calculated cyclomatic complexity.
The cardinality of this set is equal to the calculated cyclomatic complexity.

 Create Test Cases:
Create a test case for each path of the set obtained in the above step (a small worked example follows).
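
A small worked example is sketched below: the grade function is a hypothetical program with two decisions, so its cyclomatic complexity is 3, and three linearly independent paths (hence three test cases) cover it.

```python
# Worked path-testing sketch for a small hypothetical function.
# grade(score) has two decisions, so V(G) = number of decisions + 1 = 3
# (the E - N + 2P formula gives the same value for its control flow graph),
# which means 3 linearly independent paths and 3 test cases.

def grade(score: int) -> str:
    if score >= 80:          # decision 1
        return "distinction"
    elif score >= 40:        # decision 2
        return "pass"
    return "fail"

# One test case per linearly independent path through the control flow graph:
test_cases = [
    (85, "distinction"),     # path where decision 1 is true
    (60, "pass"),            # path where decision 1 is false, decision 2 is true
    (20, "fail"),            # path where both decisions are false
]

for score, expected in test_cases:
    assert grade(score) == expected, (score, expected)

print("All", len(test_cases), "independent-path test cases passed.")
```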

Path Testing Techniques:

 Control Flow Graph (CFG) – The program is converted into a flow graph by representing
the code as nodes, regions, and edges.
 Decision to Decision path (D-D) – The CFG can be broken into various Decision-to-Decision
paths and then collapsed into individual nodes.
 Independent (basis) paths – An independent path is a path through a DD-path graph which
cannot be reproduced from other paths by other methods.


Ques. 4: Differentiate among the Performance, stress and load testing.


Ans. 4:

Performance testing
Performance testing is a type of testing for determining the speed of a computer, network or device. It
checks the performance of the components of a system by passing different parameters in different load
scenarios.

Performance testing is the testing that is performed to ascertain how the components of a system are
performing under a certain given situation.

Resource usage, scalability, and reliability of the product are also validated under this testing.

This testing is a subset of performance engineering, which is focused on addressing performance issues in
the design and architecture of a software product.

Performance Testing is the superset for both load & stress testing. Other types of testing included in
performance testing are Spike testing, Volume testing, Endurance testing, and Scalability testing.

Load testing
Load testing is the process that simulates actual user load on an application or website. It checks how the
application behaves during normal and high loads. This type of testing is applied when a development
project nears its completion.

Load testing is meant to test the system by constantly and steadily increasing the load on the system until
it reaches the threshold limit. This is a subset of performance testing.

Load testing can easily be done by employing any of the suitable automation tools available in the
market; WAPT and LoadRunner are two such well-known tools that aid in load testing. Load testing is also
known by names like volume testing and endurance testing.


However, Volume testing mainly focuses on databases. Endurance testing tests the system by keeping it
under a significant load for a sustained time period.
The sole purpose of load testing is to assign the system the largest job it can possibly handle to test the
endurance of the system and monitor the results. An interesting fact here is that sometimes the system is
fed with an empty task to determine the behavior of the system in a zero-load situation.

The attributes which are monitored in the load test include peak performance, server throughput, response
time under various load levels (below the threshold of break), adequacy of H/W environment, and how
many user applications it can handle without affecting the performance.

Stress testing
Stress testing is a type of testing that determines the stability and robustness of the system. It is a non-
functional testing technique. This technique uses an auto-generated simulation model that checks
hypothetical scenarios.

Under stress testing, various activities to overload the existing resources with excess jobs are carried out
in an attempt to break the system down. Negative testing, which includes removal of components from
the system, is also done as part of stress testing.

Also known as fatigue testing, this testing should capture the stability of an application by testing it
beyond its bandwidth capacity.

Thus, stress testing evaluates the behavior of an application beyond peak load and normal conditions.

The purpose of stress testing is to ascertain the failure of the system and to monitor how the system
recovers back gracefully. The challenge here is to set up a controlled environment before launching the
test so that you can precisely capture the behavior of the system repeatedly under the most unpredictable
scenarios.

Issues that will eventually come out as a result of stress testing may include synchronization issues,
memory leaks, race conditions, etc. If the stress test is checking how the system behaves in the situation
of a sudden ramp-up in the number of users, then it is called a spike test.

If the stress test is to check the system’s sustainability over a period of time through a slow ramp-up in
the number of users, it is called a soak test.
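
To contrast with the steady-load sketch shown earlier under load testing, the fragment below illustrates the stress-testing idea of ramping load upward in steps until the system starts failing or slowing beyond a threshold; the simulated request function, step sizes, and limits are all hypothetical assumptions.

```python
# Stress-test ramp-up sketch: increase concurrency step by step until the
# (simulated, hypothetical) system under test exceeds failure or latency limits.
import random
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY_LIMIT = 0.5        # seconds; assumed acceptable worst-case response time
MAX_FAIL_RATE = 0.05       # assumed acceptable failure rate

def fake_request(load_factor: float) -> float:
    """Stand-in for a real request; latency and failure rate grow with load."""
    if random.random() < 0.01 * load_factor:
        raise RuntimeError("simulated failure under load")
    time.sleep(0.01 * load_factor)
    return 0.01 * load_factor

def run_step(users: int):
    """Run one load step and return (failure_rate, worst_latency)."""
    failures, latencies = 0, []
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(fake_request, users / 10) for _ in range(users)]
        for future in futures:
            try:
                latencies.append(future.result())
            except RuntimeError:
                failures += 1
    return failures / users, max(latencies, default=0.0)

for users in range(10, 210, 20):        # ramp the simulated user count up in steps
    fail_rate, worst = run_step(users)
    print(f"{users:3d} users -> fail rate {fail_rate:.0%}, worst latency {worst:.2f}s")
    if fail_rate > MAX_FAIL_RATE or worst > LATENCY_LIMIT:
        print(f"breaking point reached at roughly {users} concurrent users")
        break
```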


Ques. 5: Classify Test Organization, explain each in detail.


Ans. 5:

Test designing is every test organization’s concern, and you can find numerous resources explaining the
techniques, tips, and tricks. Surely there will be minor customizations, and you will observe some
enhancements regarding mobile testing.

The parent classification of test design techniques is basically static versus dynamic testing.

These test design techniques are different by the work products (requirements, specifications, code,
system under test, end product, etc.) they are exercised upon and the execution base they possess.

Let me dive deeper into the subject and provide more details:

Static testing (nonexecutable testing) is reviewing test basis, planning, analysis and design
documents (work products), and code.

 Walkthrough is a static test technique (formal or informal) performed on any kind of requirements,
design, or project plan and is generally moderated by the owner of the work product. A walkthrough can
also be done for training people and establishing consensus.

 Informal review is an informal static test technique performed on any kind of requirements, design, code,
or project plan. During informal review, the work product is given to a domain expert and the
feedback/comments are reviewed by the owner/author.

 Technical review is a static test technique performed on any kind of requirements, design, code, or project
plan and is generally moderated by the technical lead. Technical review is a formal type of static test and
done for observing whether or not the work product meets the technical specifications/standards.

 Management review is a static test technique performed on project plan, risk charter, audit report, or any
progress report and is generally moderated by a project manager or lead. Participants are the decision
makers, and the related documents are evaluated for their adequacy and consistency.

 Audit is a static test technique performed on any kind of requirements, design, or project plan and is
generally moderated by an external team of experts. Because the assessors generally come from outside,
audits can be regarded as the most independent type of static test. Work products are evaluated for their
consistency with specific regulations, standards, guidelines, and/or formal procedures.

 Inspection is a static test technique performed on any kind of requirements, design, code, or project plan
and is generally led by a moderator. Because inspection is the most formal type of static test, there exist
specific roles for reading/tracing, recording, managing, and reviewing.

Dynamic testing (executable testing) tests a working (coding is over) software product or a component.

 Specification-based (requirement-based) is a dynamic test technique based on written procedures,
specifications, requirements, user manuals, use cases, screen prototypes, and business processes. It can
also be defined as black-box testing and has many subtechniques, such as equivalence partitioning,
boundary value analysis, pairwise testing, state transition testing, use case testing, user story testing,
decision table testing, combinatorial testing, classification tree testing, and so on.

 Structure-based (code-based) is a dynamic test technique based on the internal structure of the code,
database, architecture, or system flow. It can be defined as white-box testing and has many
subtechniques, such as statement testing, decision testing, multiple condition testing, path testing, branch
testing, and API testing. These techniques differ from each other in the coverage criteria they use.

 Model-based is a dynamic test technique based on the model representations of the software product. Test
cases are designed from models, not from the source code; consequently, model-based testing cannot be
taken as a white-box test activity. Instead, the models can be regarded as partial and abstract presentations
of the system under test.

 Risk-based is a dynamic test technique based on the risks of the software product. In order to identify the
risks well, several stakeholders such as developers, help-desk/call-center agents, auditors, users, business
analysts, testers, and operations staff should be included, and their views should be considered. For
detailed risk identification, you can use techniques such as brainstorming sessions, risk workshops,
benchmark metrics from past projects, expert interviews, and retrospective meetings.

 Defect-based is a dynamic test technique based on the type, classification, and clustering of defects.
Defect categories such as interface defects, timing problems, computational problems, text field
problems, date field problems, and data-related problems can be taken as foundations for defect-based
testing.

 Experience-based is a dynamic test technique based on the tester’s practice in a specific field. Error
guessing, checklist-based testing, attacks, and exploratory testing can be taken as experience-based test
design techniques.

 Platform-based is a dynamic test technique based on the characteristics and constraints of different mobile
platforms. OS settings, platform-specific features, application store behavior, and multitasking features
can be taken as platform-specific attributes a tester should focus on.

 Device-based is a dynamic test technique based on the characteristics and constraints of different mobile
devices such as smartphones, feature phones, tablets, or phablets (devices designed to combine the
functions of a smartphone and tablet). Device settings, hidden features, interruptions, notifications,
battery usage, gestures, session frequency, session duration, and device-specific features can be taken as
device-based attributes a tester should focus on.

