
BUG REVIEW PROCESS

FLAGGING BUGS
BUG REJECTION CAUSES

A GUIDE FOR TESTERS


V1.0 – 15th August 2019
Table of Contents
INTRODUCTION
DIFFERING STANDARDS
PROJECT SCOPE – AN IMPORTANT POINT
REVIEW STAGE 1
NOTE ON BUG DELETIONS
REVIEW STAGE 2
HOW WE DEAL WITH FLAGS
BUG REPRODUCIBILITY
INTRODUCTION TO REJECTIONS
A NOTE ON “CONVERSION” JOURNEYS
EXPECTATION vs BUGS
DUPLICATION
OUT OF SCOPE
OUT OF SCOPE – IMPACT
OUT OF SCOPE – URL
OUT OF SCOPE – USER JOURNEY
OUT OF SCOPE – PROHIBITED ACTIONS
TEST CASES
INTRODUCTION
The bug review process is multi-tiered and specifically designed to prevent issues that are not
genuine bugs from being passed to the client.
We see some interesting comments that testers leave in bugs, and we have developed this document to
answer some of the more common questions we see and to address some of the knowledge gaps that
testers have with regard to our processes.

DIFFERING STANDARDS
Some testers are understandably confused about the seemingly inconsistent way we review bugs from
one project to another.
Proof of Concept tests and Live Monitoring tests (previously Always On) have stricter review criteria
than normal exploratory tests. The extra restrictions are detailed in the project instructions. It is
essential that the instructions are read the first time you test on a project AND that every time you
revisit the project you check to see if they have changed.
Rushing into a project without first checking the instructions is a sure way to accumulate bug report
rejections for being out of scope – YOU certainly don’t want that to happen and it may surprise you to
learn we get no satisfaction from rejecting those reports either.
The reviewer of a project will also affect the way that the project is handled, at least in the early
stages.
So, to the business of what happens when you log a bug and “we” get our hands on it.

PROJECT SCOPE – AN IMPORTANT POINT


We work on behalf of customers with differing needs, who engage with us for varying reasons and
who desire different outcomes. This means that the scopes of the projects will vary, and that is out of our
hands. We would ask testers to understand this – the scope is dictated by our clients and not by the
staff in our office. Our job is to enforce the scope during the test cycle and apply it during the
reviews.
If a tester sees a bug and logs it, but has also provided evidence that they have gone out of scope, we cannot
pass that to the client, and so any bugs of this type will be rejected.
REVIEW STAGE 1
The primary reviewer for the project will have the first contact with your bug report. The primary
reviewer can be an external reviewer or a member of the office staff. They will check things such as:

• The report does not describe a duplicate issue
• The correct URL is being tested
• The bug has the correct impact / severity
• The screenshots and videos are clear and relevant
• The “write up” is complete, matches the video, and can be understood by the client
• The bug report has been raised in accordance with the project instructions
• In most cases, that the journey being undertaken is one that we would expect an average user to take
The reviewer will then attempt to independently reproduce the issue (see the section on reproducibility of
bugs later in this document).
The reviewer will then either:

• Approve the bug
• Ask for more information, where appropriate. You usually have 24 hours to provide the required information – bugs raised towards the end of a project or on a project with a short duration will require you to reply faster
• Offer you a chance to delete the bug
• Reject the bug
NOTE ON BUG DELETIONS

We do NOT normally allow rejected bugs to be deleted – if a bug is rejected because a tester was out
of scope, etc., it defeats our performance improvement model if we allow the bug to be deleted.
If we offer a deletion, we will offer it once only. These opportunities are given when there were
problems with the instructions or something generally outside of the control of the tester has occurred.
Our advice – if we offer a deletion – accept it.

REVIEW STAGE 2
Once a project has closed, or is approaching closure, members of the internal team will start to
validate “open” issues on the project. They will try to reproduce the issues previously approved by
the reviewer and perform checks to ensure that the points in Review Stage 1 were covered.
It is NOT uncommon for a bug report that was previously approved to be rejected at this stage. This
will simply be a result of another person identifying a problem that the reviewer did not spot at stage
1.
Once the approved issues on a project have been through at least 2 stages of review they will
generally remain approved.
During stage 2 we may also start looking at rejected bugs and some of those rejected may be reset and
approved at the discretion of the internal staff – this is particularly true of Proof of Concept tests.

HOW WE DEAL WITH FLAGS


Rejected issues may be flagged.
Flagged issues are reviewed by the internal staff. If the flag involves the decision of one of the
other internal staff members, it will be someone else dealing with the flag, UNLESS the issue was
notable and discussed by the team. Whilst the issue may be discussed between the person dealing
with the flag and the person who rejected it, the decision on what happens with the flag lies wholly
with the person dealing with the flag itself.
When dealing with flags we look at the bug report, why the issue was rejected, and what the
tester’s response to the rejection is.
If the tester response does not address the reason the reviewer initially rejected the bug then the bug
WILL remain rejected and the flag dismissed.
To successfully raise a flag you MUST convince the person dealing with the flag that the reviewer
was incorrect - you do that in the flag comment. The comment should be factual, to the point and
polite.
If you have nothing more to add than was in the original bug report then our advice is NOT to raise a
flag unless you can show that the reviewer was wrong.
One other point on bug reports and flags. We often see issues with single products or pages raised in a
bug report and then rejected as out of scope on Proof of Concept or Always On projects. The tester
will then raise a flag to say that the issue is more widespread. The flag is not the place for this – the
proper course of action is to put that information in the original bug report. Testers who do this kind
of thing will risk having the flag dismissed and the issue remaining rejected.

BUG REPRODUCIBILITY
One of the issues that frustrates testers is the issue of bug reproducibility. This is particularly true
when the tester can reproduce an issue 100% of the time and we cannot reproduce it once.
To explain this: the installation of software on a PC is complex and involves various modules (such as
.dll files, ActiveX components, etc.) all pointing to the correct places and being bug free. The installation of any
software on a PC can upset this delicate balance.
In addition to this, the tester environment (firewall, AV software, router config, port forwarding
settings, UPnP settings, ISP and potential government restrictions) can affect the way in which web
sites behave for an individual tester.
If the issue the tester is having is linked to any of the above then the tester will get a 100%
reproducibility rate BUT the reviewer and internal staff will not be able to reproduce the issue.
We try to reproduce issues independently to eliminate bugs that are induced by the local
environment of the tester.
Generally speaking – we must be able to reproduce an issue in the office for it to get approved. Some
clients and some projects occasionally have more relaxed rules that we apply.
INTRODUCTION TO REJECTIONS
The area in which there is most disagreement between the testers and our staff is on the topic of bug
rejections. We do notice that some testers put a lot of work into a bug report and then have it
rejected from time to time, and we appreciate that it can be disappointing.
What may surprise the testing community is that a lot of these rejections can be avoided: either the
effort spent creating the bug report can be successfully converted to a paid bug, or the tester can
recognise that the issue is not a bug and save themselves the effort.
This section is intended to give some insight into the more common reasons for the rejection of issues
and offers some explanation of those issues that it seems the community never quite understands.

A NOTE ON “CONVERSION” JOURNEYS


We have chosen to illustrate our points here with e-commerce sites as it’s easier to define a conversion
issue that way and that suits the purpose of this document. In e-commerce, conversion is the process
of a visitor selecting a product and taking it to checkout – ultimately ending in delivery of the goods.
Other sites will have other primary aims. On a university or college site, for example, the goal of the
site may be for visitors to request a prospectus and so selecting and ordering a prospectus will become
a conversion journey. There does not have to be money involved for a journey to be a conversion
journey.
A conversion journey will be whatever the site’s primary goal for the visitor happens to be.

EXPECTATION vs BUGS
This is one of the big 3 causes of bug rejections.
When a tester goes to a web site to test it is not a good idea to start comparing the site to other sites
you have visited. Many bugs have been rejected because testers have raised bug reports saying that
“it is common practice” or “usually”. These are not bugs.
A bug is where functionality that is provided fails to perform as the client expects – not where it fails to
perform as the user expects – although there is some common ground between the two.
Testers have to use their intelligence to determine if something is behaving as intended even if it’s not
what they expect.
As an example – look at www.rushplace.com
On most web sites the categories would disappear from the left side of the page when, say, the privacy
policy is selected, but on this site they do not. They are completely unnecessary on this page and
take up valuable canvas space. This is not, however, a bug.
This is user expectation based on experience of other web sites and further exploration of rushplace
will show that this behaviour is what the web site owner intends to happen.
Example from a recent Proof of Concept test:
A tester has logged a conversion medium bug because they cannot see the stock levels of the
items on a site so they do not know how many are available.
The main question here is: Is that functionality provided? If so, is there a reason it’s not
working?
The bug was rejected because it is not provided functionality but had been seen by the tester
on another site. Only defective functionality that is provided is considered a bug.

DUPLICATION
Issue duplications are also a major cause of rejections.
Before reporting an issue testers should check to see whether a functional duplicate has previously
been logged. It is not always easy to see from the bug titles what the issues are really about but we
cannot approve more than 1 bug for each functional issue.
Let us for a moment look at what a functional bug is….
A functional bug is an issue that prevents a feature or features of a web site from working correctly.
One of the characteristics of a functional bug is that it can manifest in multiple locations and does not
always manifest in the same way.
A good analogy is when you have a head cold. The virus is the functional bug and the blocked nose,
sneezing, coughing and fever are the varying manifestations.
We will accept one bug report that points to a single functional bug – any further reports that point to
the same functional issue will be rejected.
The location of the issue is also irrelevant for determining if an issue is a duplicate. If the same or
similar issue occurs on multiple pages then they are all considered the same issue.
Remember: We report on unique functional bugs – not every occurrence of them.

OUT OF SCOPE
Out of scope covers a whole range of situations in which we would reject an issue.
Some common out of scope situations are described in the following paragraphs.
OUT OF SCOPE – IMPACT
On Proof of Concept and Live Monitoring projects we have impact guidelines that restrict bugs to
conversion high+ issues. We frequently reject issues on these projects because the bug report does
not describe an issue that is high+ impact.
A conversion issue is where a user displays intent to purchase but does not complete the checkout
process because of an issue they have encountered. We can go a little further here and say that a user
is “highly likely not to complete a checkout” because of an issue.
When assessing the impact the tester must consider not just the journey they are taking BUT how
likely the entire user demographic of the site is to encounter the issue.
The rejections we make based on impact are generally because the tester has focussed on their own
individual experience of the issue discovered and not considered how that will affect all of the users
of the site.
Example from a recent Proof of Concept test:

• A tester has logged a conversion high bug where they have gone to a web site, clicked “blog” in the footer and tried to write an article and got a “404” error message.
• The main question here is: how many people would start off a purchase journey by visiting the footer and trying to write an article?
• The above bug was rejected as being out of scope – not a high enough impact.
Another point to note on bugs that are out of scope on impact: we get a lot of them flagged, with
the tester asserting that there is a valid bug. If we say that an issue does not meet the
impact requirements, we have agreed that it is an issue, just not one with a significant enough impact –
restating that it is an issue will not get the bug approved.
In the above cases you should raise a flag ONLY if you can sensibly suggest that the bug has a higher
impact than we are asserting.

OUT OF SCOPE – URL


The project link will take you to the site that we are testing. It is important to take a note of the URL
you land on because that is the test URL.
Typically, every page that starts with the test web site URL will be in scope in standard tests but
testers need to beware that a large number of web sites contain embedded links to other sites which
will not be in the scope of the test. This happens particularly on sites for large corporates, where they
link back and forth between sites for subsidiary companies etc., and there is no styling change to alert
the tester that this has happened.
It is commonly thought that if you land on the project URL then anywhere it takes you is in scope –
this is not true – always check the URL before reporting a bug.
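To make that rule concrete, here is a minimal sketch (not part of our tooling – the test URL and the page addresses are made-up examples) of the check to make before reporting a bug: a page is normally only in scope when its address still starts with the URL the project link landed on.

    from urllib.parse import urlparse

    # Hypothetical test URL, i.e. the address the project link landed on (example only).
    TEST_URL = "https://www.example-shop.com/"

    def is_in_scope(page_url: str, test_url: str = TEST_URL) -> bool:
        # A page is treated as in scope only when it sits on the same host
        # and under the same path as the test URL.
        test = urlparse(test_url)
        page = urlparse(page_url)
        return page.netloc == test.netloc and page.path.startswith(test.path)

    print(is_in_scope("https://www.example-shop.com/checkout"))       # True - same site
    print(is_in_scope("https://pay.example-payments.com/session/1"))  # False - embedded 3rd party site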
Something also to be aware of is where the test site has embedded elements from other sites – such as
payment APIs.
A lot of the time any issues here will be out of scope, as it will be the 3rd party site causing the issues, so
please be very careful when logging bugs related to this situation.

OUT OF SCOPE – USER JOURNEY


Our clients generally want us to test a site to see if we can find any issues that will be encountered by
their users. In order to do this the testers need to behave like an average user when testing the site.
We see a lot of bugs that we reject because a tester has added 20,000 pairs of shoes to the basket and
complained that the quantity field has cut off half of the last 0. This is NOT a normal user journey – what
normal user would buy 20,000 pairs of shoes on a retail site?
Remember, when testing, to stick to real world data as well as real world behaviour.
We had a bug rejected because delivery options were not showing correctly – going back over the
video the tester had used a nonsense postal address.
Search functions are often used with search terms that a real world user would not type and, if they
did, they would quickly type in the right thing.

OUT OF SCOPE – PROHIBITED ACTIONS


We will advertise in the “out of scope” sections of the project the areas we want testers not to test.
These areas may also be mentioned elsewhere in the project documentation.
Common prohibited actions are:

• Completion of Purchase
• Using Live Chat
• Using Contact Forms
• Performing Security Testing
This means that even if you do find something in these areas, you may also find yourself sanctioned, as
we placed them out of scope for good reasons.
We sometimes place form submissions out of scope – that means anywhere that you enter data and
click a button to “submit” the information. Forms are commonly used for newsletter subscriptions,
password resets, brochure requests etc and commonly have some statistical importance to our clients.

TEST CASES
Test cases are handled slightly differently to bugs partly because there is no flagging facility available
for rejected cases.
Test cases should be considered a script to follow and only issues found whilst following that script
will be approved as part of that test case.
Cases are commonly rejected because:

• They are run on the wrong device / browser
• Testers do not provide details of each action they perform on coverage cases
• Testers do not log or link a bug when they find a defect
• Testers do not provide meaningful answers for each stage of the test script
• Testers wander from the instructions
Test cases have to be approved or rejected whilst the project is running so that another tester can pick
up the script and run it if it is rejected.
Any comments/suggestions regarding this document can be e-mailed to support@digivante.com
