What is testing?

Testing is the process of evaluating a system or its component(s) with the intent to find
whether it satisfies the specified requirements or not.

Testing is executing a system in order to identify any gaps, errors, or missing requirements contrary to the actual requirements.

When to Start our Testing Process?

A well-defined start to testing reduces cost and rework and helps deliver error-free software to the client. In the Software Development Life Cycle (SDLC), testing can start as early as the requirements-gathering phase and continue until the deployment of the software.

When to Stop our Testing Process?

Unlike ‘when to start testing’, it is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that any software is 100% tested. The following are the general aspects that should be considered when deciding to stop testing:

 Testing deadlines.
 Completion of the test case execution.
 Completion of the functional testing.
 The bug (program code error) rate falls below a certain level and no high-priority bugs are identified.

Test Plan:

A test plan is a document detailing a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of the eventual workflow.

Test Plan Template:

(Name of the Product)

Prepared by:

(Names of Preparers)

(Date)

TABLE OF CONTENTS

1.0 INTRODUCTION

2.0 OBJECTIVES AND TASKS


2.1 Objectives
2.2 Tasks

3.0 SCOPE

4.0 Testing Strategy


4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing
5.0 Hardware Requirements

6.0 Environment Requirements


6.1 Main Frame
6.2 Workstation

7.0 Test Schedule

8.0 Control Procedures

9.0 Features to Be Tested

10.0 Features Not to Be Tested


11.0 Resources/Roles & Responsibilities

12.0 Schedules
13.0 Significantly Impacted Departments (SIDs)

14.0 Dependencies

15.0 Risks/Assumptions
16.0 Tools

17.0 Approvals

Verification & Validation


These two terms are very confusing for most people, who use them interchangeably. The
following table highlights the differences between verification and validation.

1. Verification addresses the concern: "Are you building it the right way?"
   Validation addresses the concern: "Are you building the right thing?"

2. Verification ensures that the software system meets all of the specified functionality.
   Validation ensures that the functionalities meet the intended behavior.

3. Verification takes place first and includes the checking of documentation, code, etc.
   Validation occurs after verification and mainly involves the checking of the overall product.

4. Verification is done by developers.
   Validation is done by testers.

5. Verification involves static activities, as it includes collecting reviews, walkthroughs, and inspections to verify the software.
   Validation involves dynamic activities, as it includes executing the software against the requirements.

6. Verification is an objective process, and no subjective decision should be needed to verify software.
   Validation is a subjective process and involves subjective decisions on how well a software works.
Most people find it hard to pin down the differences among Quality Assurance, Quality Control, and Testing. Although they are interrelated and can, to some extent, be considered the same activities, there are distinguishing points that set them apart. The following table lists the points that differentiate QA, QC, and Testing.

Quality Assurance:
 Includes activities that ensure the implementation of processes, procedures, and standards in the context of verifying the developed software against the intended requirements. Asks: "Are you building it the right way?"
 Focuses on processes and procedures rather than conducting actual testing on the system.
 Process-oriented activities.
 Preventive activities.
 It is a subset of the Software Test Life Cycle (STLC).

Quality Control:
 Includes activities that ensure the verification of the developed software with respect to documented (or, in some cases, undocumented) requirements. Asks: "Are you building the right thing?"
 Focuses on actual testing by executing the software, with the aim of identifying bugs/defects through the implementation of procedures and processes.
 Product-oriented activities.
 It is a corrective process.
 QC can be considered a subset of Quality Assurance.

Testing:
 Includes activities that ensure the identification of bugs/errors/defects in the software.
 Focuses on actual testing.
 Product-oriented activities.
 It is a preventive process.
 Testing is a subset of Quality Control.
A Comparison of Testing Methods
The following table lists the points that differentiate black-box testing, grey-box testing, and
white-box testing.

Black-Box Testing:
 The internal workings of the application need not be known.
 Also known as closed-box testing, data-driven testing, or functional testing.
 Performed by end-users and also by testers and developers.
 Testing is based on external expectations; the internal behavior of the application is unknown.
 The least exhaustive and least time-consuming.
 Not suited for algorithm testing.
 Can only be done by the trial-and-error method.

Grey-Box Testing:
 The tester has limited knowledge of the internal workings of the application.
 Also known as translucent testing, as the tester has limited knowledge of the insides of the application.
 Performed by end-users and also by testers and developers.
 Testing is done on the basis of high-level database diagrams and data flow diagrams.
 Partly exhaustive and time-consuming.
 Not suited for algorithm testing.
 Data domains and internal boundaries can be tested, if known.

White-Box Testing:
 The tester has full knowledge of the internal workings of the application.
 Also known as clear-box testing, structural testing, or code-based testing.
 Normally done by testers and developers.
 Internal workings are fully known, and the tester can design test data accordingly.
 The most exhaustive and time-consuming type of testing.
 Suited for algorithm testing.
 Data domains and internal boundaries can be better tested.

There are different levels during the process of testing. In this chapter, a brief description is
provided about these levels.

Levels of testing include different methodologies that can be used while conducting
software testing. The main levels of software testing are:

 Functional Testing
 Non-functional Testing

Functional Testing

This is a type of black-box testing that is based on the specifications of the software to be tested. The application is tested by providing input, and the results are then examined to confirm that they conform to the intended functionality. Functional testing of a software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
There are five steps that are involved while testing an application for functionality; a minimal sketch of these steps follows the table.

Steps Description

I The determination of the functionality that the intended application is meant to perform.

II The creation of test data based on the specifications of the application.

III The determination of the expected output based on the test data and the specifications of the application.

IV The writing of test scenarios and the execution of test cases.

V The comparison of actual and expected results based on the executed test cases.
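
The following is a minimal sketch of steps II-V in plain Python, not a prescribed tool: the function under test (apply_discount) and its specification are hypothetical examples.

# Steps II-V in miniature: test data derived from an (assumed) spec,
# executed against the function under test, actual vs. expected compared.

def apply_discount(price, percent):
    """Function under test: price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# (input arguments, expected output) pairs based on the specification.
test_data = [
    ((100.0, 10), 90.0),
    ((59.99, 0), 59.99),
    ((20.0, 100), 0.0),
]

for (price, percent), expected in test_data:
    actual = apply_discount(price, percent)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: apply_discount({price}, {percent}) = {actual}, expected {expected}")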

Unit Testing
This type of testing is performed by developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is different from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that individual parts
are correct in terms of requirements and functionality.
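
As an illustration, here is a minimal unit-test sketch using Python's built-in unittest module; the unit under test (is_leap_year) is a hypothetical example, not part of any particular project.

import unittest

def is_leap_year(year):
    """Unit under test: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    # Each test isolates one requirement of the unit.
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()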

Limitations of Unit Testing


Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application, and the same is the case with unit testing. There is a limit to the number of scenarios and test data that a developer can use to verify source code. After having exhausted all the options, there is no choice but to stop unit testing and merge the code segment with other units.

Integration Testing
Integration testing is defined as the testing of combined parts of an application to
determine if they function correctly. Integration testing can be done in two ways: Bottom-up
integration testing and Top-down integration testing.

Integration Testing Methods

1. Bottom-up integration
This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.

2. Top-down integration
In this testing, the highest-level modules are tested first, and progressively lower-level modules are tested thereafter.

In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing. The process concludes with multiple tests of the complete application, preferably in scenarios designed to mimic actual situations.
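
As a sketch of bottom-up integration, the example below combines two already unit-tested units (parse_record and compute_total, both hypothetical) and tests the combined module; the units and data format are illustrative assumptions.

import unittest

def parse_record(line):
    """Lower-level unit: parse 'name,qty,price' into a dict."""
    name, qty, price = line.split(",")
    return {"name": name, "qty": int(qty), "price": float(price)}

def compute_total(records):
    """Lower-level unit: sum qty * price over parsed records."""
    return sum(r["qty"] * r["price"] for r in records)

class TestOrderModuleIntegration(unittest.TestCase):
    def test_parse_then_total(self):
        # The integration point: the output of one unit feeds the other.
        lines = ["pen,2,1.50", "book,1,10.00"]
        total = compute_total([parse_record(l) for l in lines])
        self.assertAlmostEqual(total, 13.00)

if __name__ == "__main__":
    unittest.main()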

System Testing

System testing tests the system as a whole. Once all the components are integrated, the
application as a whole is tested rigorously to see that it meets the specified Quality
Standards. This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons:
 System testing is the first step in the Software Development Life Cycle, where the
application is tested as a whole.
 The application is tested thoroughly to verify that it meets the functional and
technical specifications.
 The application is tested in an environment that is very close to the production
environment where the application will be deployed.
 System testing enables us to test, verify, and validate both the business requirements as well as the application architecture.
Regression Testing
Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change. Regression testing is performed to verify that a fixed bug hasn't resulted in another functionality or business rule violation. The intent of regression testing is to ensure that a change, such as a bug fix, does not result in another fault being uncovered in the application; a sketch of this practice follows the list below.
Regression testing is important because of the following reasons:
 It minimizes the gaps in testing when an application with changes has to be tested.
 It tests the new changes to verify that the changes made did not affect any other area of the application.
 It mitigates risks when regression testing is performed on the application.
 Test coverage is increased without compromising timelines.
 It increases speed to market the product.
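
One common way to realize this in practice is to keep a test that reproduces every fixed bug in the suite, so the fix cannot silently regress. The sketch below assumes a hypothetical bug ticket (1042) and function (normalize_phone); both are illustrations, not a real project's code.

import unittest

def normalize_phone(raw):
    """After the fix for ticket 1042: strip spaces and dashes before validating."""
    digits = raw.replace(" ", "").replace("-", "")
    if not digits.isdigit() or len(digits) != 10:
        raise ValueError("expected a 10-digit phone number")
    return digits

class TestPhoneRegression(unittest.TestCase):
    def test_bug_1042_dashed_numbers_accepted(self):
        # This input used to be falsely rejected; the test pins the fixed behavior.
        self.assertEqual(normalize_phone("555-123-4567"), "5551234567")

if __name__ == "__main__":
    unittest.main()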

Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.
More ideas will be shared about the application and more tests can be performed on it to
gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not
only intended to point out simple spelling mistakes, cosmetic errors, or interface gaps, but
also to point out any bugs in the application that will result in system crashes or major
errors in the application.
By performing acceptance tests on an application, the testing team will deduce how the
application will perform in production. There are also legal and contractual requirements for
acceptance of the system.

Alpha Testing
This test is the first stage of testing and is performed within the teams (developer and QA teams). Unit testing, integration testing, and system testing, when combined together, are known as alpha testing. During this phase, the following aspects will be tested in the application:
 Spelling mistakes
 Broken links
 Unclear directions
 The application will be tested on machines with the lowest specifications to test loading times and any latency problems.

Beta Testing
This test is performed after alpha testing has been successfully performed. In beta testing,
a sample of the intended audience tests the application. Beta testing is also known as pre-
release testing. Beta test versions of software are ideally distributed to a wide audience
on the Web, partly to give the program a "real-world" test and partly to provide a preview of
the next release. In this phase, the audience will be testing the following:
 Users will install, run the application and send their feedback to the project team.
 Typographical errors, confusing application flow, and even crashes.
 After getting the feedback, the project team can fix the problems before releasing the software to the actual users.
 The more issues you fix that solve real user problems, the higher the quality of your
application will be.
 Having a higher-quality application when you release it to the general public will
increase customer satisfaction.
Non-Functional Testing
This section is based upon testing an application on its non-functional attributes. Non-functional testing involves testing a software against requirements that are non-functional in nature but still important, such as performance, security, user interface, etc.
Some of the important and commonly used non-functional testing types are discussed
below.

Performance Testing
It is mostly used to identify any bottlenecks or performance issues rather than to find bugs in a software. There are different causes that contribute to lowering the performance of a software:

 Network delay

 Client-side processing

 Database transaction processing

 Load balancing between servers

 Data rendering
Performance testing is considered one of the important and mandatory testing types in terms of the following aspects:

 Speed (i.e. Response Time, data rendering and accessing)

 Capacity

 Stability

 Scalability
Performance testing can be either qualitative or quantitative and can be divided into
different sub-types such as Load testing and Stress testing.

Load Testing
It is the process of testing the behavior of a software by applying maximum load, in terms of the software accessing and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of a software and its behavior at peak time.
Most of the time, load testing is performed with the help of automated tools such as Load
Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer,
Visual Studio Load Test, etc.
Virtual users (VUsers) are defined in the automated testing tool and the script is executed
to verify the load testing for the software. The number of users can be increased or
decreased concurrently or incrementally based upon the requirements.
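
The idea of virtual users can be sketched in plain Python: each thread below plays one virtual user calling the operation under test and recording its response time. The target function (handle_request) is a stand-in for a real request; dedicated tools like the ones above do this at much larger scale with reporting built in.

import threading
import time

def handle_request():
    time.sleep(0.01)  # stand-in for real server-side work

def virtual_user(n_calls, timings):
    # One virtual user: issue n_calls requests, record each response time.
    for _ in range(n_calls):
        start = time.perf_counter()
        handle_request()
        timings.append(time.perf_counter() - start)

timings = []
users = [threading.Thread(target=virtual_user, args=(20, timings))
         for _ in range(50)]  # 50 concurrent virtual users
for t in users:
    t.start()
for t in users:
    t.join()

print(f"{len(timings)} requests, "
      f"avg {sum(timings) / len(timings) * 1000:.1f} ms, "
      f"max {max(timings) * 1000:.1f} ms")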

Stress Testing
Stress testing includes testing the behavior of a software under abnormal conditions. For
example, it may include taking away some resources or applying a load beyond the actual
load limit.
The aim of stress testing is to test the software by applying load to the system and taking away the resources used by the software in order to identify the breaking point. This testing can be performed by testing different scenarios such as the following (a toy sketch of the idea follows the list):

 Shutting down or restarting network ports randomly
 Turning the database on or off
 Running different processes that consume resources such as CPU, memory, server, etc.
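
A toy way to see the "breaking point" in code: apply progressively heavier load to an operation until it exceeds a response-time budget. The operation (process_payload) and the 0.5-second budget are illustrative assumptions.

import random
import time

def process_payload(n):
    """Operation under stress: build and sort n random numbers."""
    data = [random.random() for _ in range(n)]
    data.sort()

BUDGET_SECONDS = 0.5
size = 100_000
while True:
    start = time.perf_counter()
    process_payload(size)
    elapsed = time.perf_counter() - start
    print(f"n={size}: {elapsed:.3f}s")
    if elapsed > BUDGET_SECONDS:
        print(f"Breaking point near n={size} (budget of {BUDGET_SECONDS}s exceeded)")
        break
    size *= 2  # double the load each round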

Usability Testing
Usability testing is a black-box technique and is used to identify any error(s) and
improvements in the software by observing the users through their usage and operation.
According to Nielsen, usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety, and satisfaction. According to him, the usability of a product is good and the system is usable if it possesses these factors.
Nigel Bevan and Macleod considered that usability is the quality requirement that can be
measured as the outcome of interactions with a computer system. This requirement can be
fulfilled and the end-user will be satisfied if the intended goals are achieved effectively with
the use of proper resources.
Molich, in 2000, stated that a user-friendly system should fulfill the following five goals: easy to learn, easy to remember, efficient to use, satisfactory to use, and easy to understand.
In addition to the different definitions of usability, there are some standards and quality
models and methods that define usability in the form of attributes and sub-attributes such
as ISO-9126, ISO-9241-11, ISO-13407, and IEEE std.610.12, etc.
UI vs Usability Testing
UI testing involves testing the Graphical User Interface of the software. UI testing ensures that the GUI functions according to the requirements and is tested in terms of color, alignment, size, and other properties.
On the other hand, usability testing ensures a good and user-friendly GUI that can be
easily handled. UI testing can be considered as a sub-part of usability testing.

Security Testing
Security testing involves testing a software in order to identify any flaws and gaps from a security and vulnerability point of view. Listed below are the main aspects that security testing should ensure (a small sketch of one such check follows the list):

 Confidentiality

 Integrity

 Authentication

 Availability

 Authorization

 Non-repudiation

 Software is secure against known and unknown vulnerabilities

 Software data is secure

 Software is according to all security regulations

 Input checking and validation

 SQL injection attacks

 Injection flaws

 Session management issues

 Cross-site scripting attacks

 Buffer overflow vulnerabilities

 Directory traversal attacks
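
As one concrete example from the list above, a security test for SQL injection can feed a classic injection string and assert that a parameterized query treats it as data rather than SQL. This is a minimal sketch against an in-memory SQLite database; the users table is a hypothetical schema.

import sqlite3
import unittest

class TestSqlInjection(unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE users (name TEXT)")
        self.db.execute("INSERT INTO users VALUES ('alice')")

    def test_injection_string_matches_no_rows(self):
        payload = "' OR '1'='1"
        # Parameterized query: the payload is bound as a plain value.
        rows = self.db.execute(
            "SELECT name FROM users WHERE name = ?", (payload,)
        ).fetchall()
        self.assertEqual(rows, [])  # the injection must not match every row

if __name__ == "__main__":
    unittest.main()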

Portability Testing
Portability testing includes testing a software with the aim of ensuring its reusability, so that it can be moved from one environment to another as well. The following are the strategies that can be used for portability testing:

 Transferring an installed software from one computer to another.

 Building executable (.exe) to run the software on different platforms.


Portability testing can be considered as one of the sub-parts of system testing, as this
testing type includes overall testing of a software with respect to its usage over different
environments. Computer hardware, operating systems, and browsers are the major focus
of portability testing. Some of the pre-conditions for portability testing are as follows:

 Software should be designed and coded keeping in mind the portability requirements.
 Unit testing has been performed on the associated components.
 Integration testing has been performed.
 Test environment has been established.
Smoke Testing and Sanity Testing

What is Smoke Testing?

Smoke testing is performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed "before" any detailed functional or regression tests are executed on the software build. The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing the software application.

In Smoke Testing, the test cases chosen cover the most important functionality or
component of the system. The objective is not to perform exhaustive testing, but to verify
that the critical functionalities of the system are working fine.
For example, a typical smoke test would be: verify that the application launches successfully, check that the GUI is responsive, etc.
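
A smoke suite can be as small as a couple of fast checks run before anything else. The sketch below uses Python's unittest; the two checked behaviors are stand-ins for a real application's launch and GUI response.

import unittest

def application_launches():
    return True  # stand-in: would start the real application

def home_screen_title():
    return "Welcome"  # stand-in: would query the real GUI

class SmokeTest(unittest.TestCase):
    def test_application_launches(self):
        self.assertTrue(application_launches())

    def test_gui_is_responsive(self):
        self.assertEqual(home_screen_title(), "Welcome")

if __name__ == "__main__":
    # failfast: stop at the first failure so a broken build is rejected early.
    unittest.main(failfast=True)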

What is Sanity Testing?


After receiving a software build with minor changes in code or functionality, sanity testing is performed to ascertain that the bugs have been fixed and that no further issues have been introduced due to these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and cost involved in more rigorous testing.
Smoke Testing vs Sanity Testing

1. Smoke testing is performed to ascertain that the critical functionalities of the program are working fine.
   Sanity testing is done to check that the new functionality works or the bugs have been fixed.

2. The objective of smoke testing is to verify the "stability" of the system in order to proceed with more rigorous testing.
   The objective of sanity testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.

3. Smoke testing is performed by the developers or testers.
   Sanity testing is usually performed by testers.

4. Smoke testing is usually documented or scripted.
   Sanity testing is usually not documented and is unscripted.

5. Smoke testing is a subset of regression testing.
   Sanity testing is a subset of acceptance testing.

6. Smoke testing exercises the entire system from end to end.
   Sanity testing exercises only the particular component of the entire system.

7. Smoke testing is like a general health check-up.
   Sanity testing is like a specialized health check-up.
Context Driven Testing, Rapid Software Testing, Exploratory Testing,
Session Based Testing, Defect Tracking, and Testing Using an Agile Methodology

Context Driven Testing

Context-driven testing is a model for developing and debugging computer software that takes into account the ways in which the programs will be used, or are expected to be used, in the real world.

Exploratory Testing
Exploratory testing is a hands-on approach in which testers are involved in minimum
planning and maximum test execution.

Test Harness
In software testing, a test harness or automated test framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.
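
A toy harness showing those two parts, as a sketch rather than a real framework: the "repository" is a list of test functions, and the execution engine runs each one, monitors outcomes, and reports a summary.

def test_addition():
    assert 2 + 2 == 4

def test_upper():
    assert "abc".upper() == "ABC"

# Test script repository: a real harness would discover scripts from
# files; here they are simply registered in a list.
repository = [test_addition, test_upper]

def execution_engine(tests):
    """Run every test, monitor its outcome, and collect the results."""
    results = {}
    for test in tests:
        try:
            test()
            results[test.__name__] = "PASS"
        except AssertionError as exc:
            results[test.__name__] = f"FAIL ({exc})"
    return results

for name, outcome in execution_engine(repository).items():
    print(name, outcome)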
1) Boundary Value Analysis (complements equivalence partitioning):

This technique is similar to Equivalence Partitioning, except that the test cases are created from the output domain as well as the input domain. One can form the test cases in the following way (a small helper for rule I follows the rules):

I. An input condition specifies a range bounded by values a and b → test cases should be made with values just above and just below a and b, respectively;

II. An input condition specifies various values → test cases should be produced to exercise the minimum and maximum numbers;

III. Rules I and II apply to output conditions as well; if internal program data structures have prescribed boundaries, produce test cases to exercise those data structures at their boundaries.
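
Rule I is easy to mechanize. The helper below is a sketch: given a range [a, b], it generates test values just below, at, and just above each boundary (the step size is an assumption about the input's granularity).

def boundary_values(a, b, step=1):
    """Values just below, at, and just above each boundary of [a, b]."""
    return [a - step, a, a + step, b - step, b, b + step]

# Example: an input condition specifying the range 1..100.
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]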
What is a bug?

In computer technology, a bug is a coding error in a computer program, and the process of finding bugs before program users do is called debugging.

What is the difference between bug, defect and error?

BUG:
A bug is the result of a coding error.
An error found before the software release is called a bug.

Bug life cycle:

1. The bug's status is 'NEW', and it is assigned to the respective development team (team lead or manager).
2. The team lead assigns it to a team member, so the status becomes 'ASSIGNED TO'.
3. The developer works on the bug, fixes it, and re-assigns it to the tester for testing. Now the status is 'RE-ASSIGNED'.
4. The tester checks whether the defect is fixed; if it is, he changes the status to 'VERIFIED'.
5. If the tester has the authority (depending on the company), he can change the status to 'FIXED' after verifying. If not, the test lead can verify it and change the status to 'FIXED'.
6. If the defect is not fixed, the tester re-assigns the defect back to the development team for re-fixing.
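
The life cycle above can be modeled as a small state machine; the status names follow the steps listed, while the allowed transitions are an illustrative assumption (real defect trackers configure their own workflows).

ALLOWED = {
    "NEW": ["ASSIGNED TO"],
    "ASSIGNED TO": ["RE-ASSIGNED"],              # developer fixes, hands to tester
    "RE-ASSIGNED": ["VERIFIED", "ASSIGNED TO"],  # verified, or back for re-fixing
    "VERIFIED": ["FIXED"],
}

class Bug:
    def __init__(self, title):
        self.title = title
        self.status = "NEW"

    def move_to(self, new_status):
        if new_status not in ALLOWED.get(self.status, []):
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status

bug = Bug("Login button unresponsive")
for status in ["ASSIGNED TO", "RE-ASSIGNED", "VERIFIED", "FIXED"]:
    bug.move_to(status)
    print(bug.title, "->", bug.status)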

Defect:
A defect is a deviation from the requirements. A defect does not necessarily mean there is a bug in the code; it could be a function that was defined in the requirements of the software but was not implemented.
An error found after the software release is called a defect.

A problem in an algorithm leads to failure. A defect applies to something that normally works but has something out of spec.

Error:
When the actual result is different from the expected result, it is called an error.

The actual terminologies, and their meaning, can vary depending on people, projects,
organizations, or defect tracking tools, but the following is a normally accepted
classification.

 Critical: The defect affects critical functionality or critical data. It does not have a
workaround. Example: Unsuccessful installation, complete failure of a feature.
 Major: The defect affects major functionality or major data. It has a workaround, but it is not obvious and is difficult. Example: A feature is not functional from one module, but the task is doable if 10 complicated indirect steps are followed in another module(s).
 Minor: The defect affects minor functionality or non-critical data. It has an easy
workaround. Example: A minor feature that is not functional in one module but
the same task is easily doable from another module.
 Trivial: The defect does not affect functionality or data. It does not even need a
workaround. It does not impact productivity or efficiency. It is merely an
inconvenience. Example: Petty layout discrepancies, spelling/grammatical errors.

What is severity?

Severity is the degree of impact a defect has on the development or operation of a component or system. The bug severity specifies how considerable (large) an effect a bug has on the application.

It is the extent to which the defect can affect the software. In other words, it defines the impact that a given defect has on the system.
For example: if an application or web page crashes when a remote link is clicked, a user clicking the remote link is rare, but the impact of the application crashing is severe. So the severity is high, but the priority is low.

What is Priority?

The priority is the level of (business) importance assigned to an element, e.g. a defect. The bug priority specifies how significant (important) the defect is, or how quickly it needs to be fixed.

Priority levels:

1. Blocker: The application crashes or fails to initialize; for example, a database connectivity issue where the scheme is unable to fetch data or fetches wrong data.
2. Critical: The most important feature in test is not working.
3. Major: The secondary feature in test is not as effective as anticipated.
4. Normal: The secondary feature in test has some not-so-imperative issues, or the minor feature in test is not operating as expected.
5. Minor: The secondary feature in test has some not-so-important issues.
6. Trivial: A minor feature in test has some UI issues or typos in the system.
7. Enhancement: Any amendment to the existing product that is not covered in the requirements but is important for look and feel or for business purposes.

Severity and priority combinations:

1. High severity, high priority: Suppose some data needs to be printed on paper or displayed on the screen, and we observe that the data is not printed at all. For illustration, a mobile number, which is a compulsory field, is entered by the user and is to be printed, but it is not printed even though the user has entered the data. This scenario is of high severity and high priority.

2. High severity, low priority: This is the second level of the combination of severity and priority. The data is printed, but not in the order we expected. For example, an address is to be printed as the building name followed by the flat number (e.g. 08), but in the actual implementation the order is not as required. This scenario is of high severity but low priority.

3. Low severity, high priority: This is the third level of the combination of severity and priority. The data is printed, but it gets trimmed at the end. For example, a mobile number, which is a mandatory field, is to be entered and printed, but it is not printed completely: the user enters ten digits, yet in the actual implementation only eight or nine digits are printed even though there are more digits to print. This scenario is of low severity and high priority.

4. Low severity, low priority: This is the fourth level of the combination of severity and priority. The data is printed, but some punctuation is misplaced. For example, an address is to be printed as the building name followed by a comma and the flat number (e.g. 2), but in the actual implementation the comma is missing. This scenario is of low severity and low priority.
Agile vs Scrum

Agile and Scrum are terms used in project management. The Agile methodology employs incremental and iterative work cadences that are also called sprints. Scrum, on the other hand, is a type of agile approach that is used in software development.

Agile

The Agile methodology is used in project management, and it helps project teams build software applications that are unpredictable in nature. Iterative and incremental work cadences called sprints are used in this methodology. It is a departure from the traditional sequential, or waterfall, model.

The benefit of using the Agile methodology is that the direction of the project can be assessed throughout its development cycle. The development is assessed with the help of iterations or sprints. At the end of each sprint, an increment of work is presented by the team developing the project. The focus is mainly on the repetition of work cycles and the product they yield. This is the reason why the Agile methodology is also called incremental and iterative.

In the Agile approach, each step of development, such as requirements, analysis, design, etc., is continually monitored through the lifecycle of the project, whereas this is not the case with the waterfall model. So by using the Agile approach, development teams can steer the project in the right direction.

Scrum

Scrum is a type of agile approach that is used in the development of software applications. It is just a framework, not a methodology or a full process. It does not provide detailed instructions for what needs to be done; rather, most of it is dependent on the team that is developing the software. Because the team developing the project knows how the problem can be solved, much is left to them.

Cross-functional and self-organizing teams are essential in the case of scrum. There is no team leader who assigns tasks to the team members; rather, the whole team addresses the issues or problems. It is cross-functional in the sense that everyone is involved in the project, right from the idea to the implementation.

As it is an agile approach, scrum also makes use of a series of iterations or sprints. Features are developed as part of a sprint, and at the end of each sprint the features are complete, right from coding and testing to their integration into the product. A demonstration of the functionality is provided to the owner at the end of each sprint so that feedback can be taken, which can be helpful for the next sprint.

The product is the primary object of a scrum project. At the end of each sprint, the system or product is brought to a shippable state by the team members.

What differentiates Scrum from other methodologies?

– Scrum has three roles: product owner, team members, and scrum master.
– Projects are divided into sprints, which typically last one, two, or three weeks.
– At the end of each sprint, all stakeholders meet to assess the progress and plan the next steps.
– The advantage of scrum is that a project's direction can be adjusted based on completed work, not on speculation or predictions.

The Scrum process includes the following steps:


Backlog refinement
This process allows all team members to share thoughts and concerns, and
properly understand the workflow.
Sprint planning
Every iteration starts with a sprint planning meeting. The product owner holds a conversation with the team and decides which stories are highest in priority, and which ones they will tackle first. Stories are added to the sprint backlog, and the team then breaks down the stories and turns them into tasks.
Daily Scrum
The daily scrum is also known as the daily standup meeting. It serves to tighten communication and ensure that the entire team is on the same page. Each member goes through what they have done since the last standup, what they plan to work on before the next one, and outlines any obstacles.
Sprint review meeting
At the end of a sprint, the team presents their work to the product owner. The
product owner goes through the sprint backlog and either accepts or rejects the
work. All uncompleted stories are rejected by the product owner.
Sprint retrospective meeting
Finally, after a sprint, the scrum master meets with the team for a retrospective meeting. They go over what went well, what did not, and what can be improved in the next sprint. The product owner is also present, and will listen to the team lay out the good and bad aspects of the sprint. This process allows the entire team to focus on its overall performance and identify strategies for improvement. It is crucial, as the scrum master can observe common impediments and work to resolve them.
