FLAGGING BUGS
BUG REJECTION CAUSES
DIFFERING STANDARDS
Some testers are understandably confused about the seemingly inconsistent way we review bugs from
one project to another.
Proof of Concept tests and Live Monitoring tests (previously Always On) have stricter review criteria
than normal exploratory tests. The extra restrictions are detailed in the project instructions. It is
essential that you read the instructions the first time you test on a project AND that you check for
changes every time you revisit the project.
Rushing into a project without first checking the instructions is a sure way to accumulate bug report
rejections for being out of scope – YOU certainly don’t want that to happen and it may surprise you to
learn we get no satisfaction from rejecting those reports either.
The reviewer of a project will also affect the way that the project is handled, at least in the early
stages.
So, to the business of what happens when you log a bug and “we” get our hands on it.
We do NOT normally allow rejected bugs to be deleted – if a bug is rejected because a tester was out
of scope, for example, allowing the report to be deleted would defeat our performance improvement model.
If we offer a deletion, we will offer it once only. These opportunities are given when there were
problems with the instructions or when something outside the tester's control has occurred.
Our advice – if we offer a deletion – accept it.
REVIEW STAGE 2
Once a project has closed, or is approaching closure, members of the internal team will start to
validate "open" issues on the project. They will try to reproduce the issues previously approved by
the reviewer and perform checks to ensure that the points in Review Stage 1 were covered.
It is NOT uncommon for a bug report that was previously approved to be rejected at this stage. This
is simply the result of another person identifying a problem that the reviewer did not spot at stage 1.
Once the approved issues on a project have been through at least 2 stages of review they will
generally remain approved.
During stage 2 we may also start looking at rejected bugs and some of those rejected may be reset and
approved at the discretion of the internal staff – this is particularly true of Proof of Concept tests.
BUG REPRODUCIBILITY
One thing that frustrates testers is bug reproducibility. This is particularly true
when the tester can reproduce an issue 100% of the time and we cannot reproduce it once.
To explain this: the installation of software on a PC is complex and relies on various modules (such as
DLL files, ActiveX components etc) all pointing to the correct places and being bug free. Installing any
software on a PC can upset this delicate balance.
In addition, the tester's environment (firewall, AV software, router configuration, port forwarding
settings, UPnP settings, ISP and potential government restrictions) can affect the way in which web
sites behave for an individual tester.
If the issue the tester is having is linked to any of the above, the tester will get a 100%
reproducibility rate BUT the reviewer and internal staff will not be able to reproduce the issue.
We try to reproduce issues independently to eliminate bugs that are induced by the tester's local
environment.
Generally speaking, we must be able to reproduce an issue in the office for it to be approved. Some
clients and projects occasionally have more relaxed rules, which we apply.
INTRODUCTION TO REJECTIONS
The area in which there is most disagreement between testers and our staff is the topic of bug
rejections. We do notice that some testers put a lot of work into a bug report only to have it
rejected, and we appreciate that it can be disappointing.
What may surprise the testing community is that a lot of these rejections can be avoided: either the
effort spent creating the bug report can be successfully converted into a paid bug, or the tester can
recognise that the issue is not a bug and save themselves the effort.
This section is intended to give some insight into the more common reasons for the rejection of issues
and offers some explanation of those issues that it seems the community never quite understands.
EXPECTATION vs BUGS
This is one of the big 3 causes of bug rejections.
When you go to a web site to test, it is not a good idea to start comparing the site to other sites
you have visited. Many bugs have been rejected because testers have raised reports saying that
"it is common practice" or "usually". These are not bugs.
A bug is where functionality that is provided fails to perform as the client expects – not if it fails to
perform as the user expects - although there is some common ground between the two.
Testers have to use their intelligence to determine if something is behaving as intended even if it’s not
what they expect.
As an example – look at www.rushplace.com
On most web sites the categories would disappear from the left side of the page when, say, the privacy
policy was selected, but on this site they do not. They are completely unnecessary on this page and
take up valuable canvas space. This is not, however, a bug.
This is user expectation based on experience of other web sites and further exploration of rushplace
will show that this behaviour is what the web site owner intends to happen.
Example from a recent Proof of Concept test:
A tester has logged a conversion medium bug because they cannot see the stock levels of the
items on a site so they do not know how many are available.
The main question here is: Is that functionality provided? If so, is there a reason it’s not
working?
The bug was rejected because the functionality is not provided; it had simply been seen by the tester
on another site. Only defective functionality that is provided is considered a bug.
DUPLICATION
Issue duplications are also a major cause of rejections.
Before reporting an issue, testers should check whether a functional duplicate has previously
been logged. It is not always easy to see from the bug titles what the issues are really about, but we
cannot approve more than one bug for each functional issue.
Let us for a moment look at what a functional bug is…
A functional bug is an issue that prevents a feature or features of a web site from working correctly.
One of the characteristics of a functional bug is that it can manifest in multiple locations and does not
always manifest in the same way.
A good analogy is when you have a head cold. The virus is the functional bug and the blocked nose,
sneezing, coughing and fever are the varying manifestations.
We will accept one bug report that points to a single functional bug – any further reports that point to
the same functional issue will be rejected.
The location of the issue is also irrelevant for determining if an issue is a duplicate. If the same or
similar issue occurs on multiple pages then they are all considered the same issue.
Remember: We report on unique functional bugs – not every occurrence of them.
OUT OF SCOPE
Out of scope covers a whole range of situations in which we would reject an issue.
Some common out of scope situations are described in the following paragraphs.
OUT OF SCOPE – IMPACT
On Proof of Concept and Live Monitoring projects we have impact guidelines that restrict bugs to
conversion high+ issues. We frequently reject issues on these projects because the bug report does
not describe an issue of high+ impact.
A conversion issue is where a user displays intent to purchase but does not complete the checkout
process because of an issue they have encountered. We can go a little further here and say that a user
is “highly likely not to complete a checkout” because of an issue.
When assessing the impact, the tester must consider not just the journey they are taking BUT how
likely the entire user demographic of the site is to encounter the issue.
The rejections we make based on impact are generally because the tester has focussed on their own
individual experience of the issue discovered and not considered how that will affect all of the users
of the site.
Example from a recent Proof of Concept test:
A tester has logged a conversion high bug where they have gone to a web site, clicked
“blog” in the footer and tried to write an article and got a “404” error message.
The main question here is: how many people would start off a purchase journey by
visiting the footer and trying to write an article?
The above bug was rejected as being out of scope – not a high enough impact.
Another point to note on bugs that are out of scope on impact: we get a lot of them flagged
where the tester is asserting that there is a valid bug. If we say that an issue does not meet the
impact requirements, we have already agreed that it is an issue but judged the impact insufficient;
restating that it is an issue will not get the bug approved.
In the above cases you should raise a flag ONLY if you can sensibly suggest that the bug has a higher
impact than we are asserting.
Project instructions commonly place certain areas out of scope, for example:
Completion of Purchase
Using Live Chat
Using Contact Forms
Performing Security Testing
This means that even if you do find something in these areas, you may also find yourself sanctioned,
as we placed them out of scope for good reasons.
We sometimes place form submissions out of scope – that means anywhere that you enter data and
click a button to “submit” the information. Forms are commonly used for newsletter subscriptions,
password resets, brochure requests etc and commonly have some statistical importance to our clients.
TEST CASES
Test cases are handled slightly differently to bugs partly because there is no flagging facility available
for rejected cases.
Test cases should be considered a script to follow and only issues found whilst following that script
will be approved as part of that test case.
Cases are commonly rejected because: