
Date of Issue: 5/22/2015
Incident: INCxxxxx
Sev: 2
Platform: GDW
Associated Project name: YYYYYYY
QA Manager: Bharath Ramanathan


Summary of incident and root cause: Issue: INCXXXX- .
SDSM: Steve Jones
Status of QA analysis: Completed


QA Test Coverage Summary (to include why issue was not found in Test)

Investigation of the QA scope indicated that the test coverage was complete.
o   For the DataStage upgrade testing, QA undertook a combined ST/SIT approach because there were no functional changes to any of the applications; we were upgrading from DataStage 7.5.3 to DataStage 9.1.

o   The test approach QA adopted was to do two batch runs, one with the older version of DataStage and one with the latest version. The expectation was that, on comparison of the data, the outputs should match (a minimal comparison sketch is included after this list).

o   In this case, the user IDs used for testing had all the access permissions, and all the jobs executed successfully without any warnings. On comparing the results with the batch output of the previous version of DataStage, the outputs matched, confirming a successful test.
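
Below is a minimal sketch of the dual-run comparison approach described in the list above. It is not taken from the project's actual test assets: the directory names, the *.csv file pattern, and the assumption that both batch runs export delimited files are illustrative only.

#!/usr/bin/env python3
"""Sketch of the dual-run comparison: outputs of the DataStage 7.5.3
batch run are compared file by file against the DataStage 9.1 run.
Paths and the *.csv pattern are assumptions, not project specifics."""
import filecmp
from pathlib import Path

OLD_RUN_DIR = Path("output/ds_7_5_3")  # hypothetical export of the 7.5.3 batch outputs
NEW_RUN_DIR = Path("output/ds_9_1")    # hypothetical export of the 9.1 batch outputs


def compare_batch_outputs(old_dir: Path, new_dir: Path) -> list:
    """Return a list of files that are missing or differ between the two runs."""
    mismatches = []
    for old_file in sorted(old_dir.glob("*.csv")):
        new_file = new_dir / old_file.name
        if not new_file.exists():
            mismatches.append(f"{old_file.name}: missing in the new run")
        elif not filecmp.cmp(old_file, new_file, shallow=False):
            mismatches.append(f"{old_file.name}: contents differ")
    return mismatches


if __name__ == "__main__":
    diffs = compare_batch_outputs(OLD_RUN_DIR, NEW_RUN_DIR)
    if diffs:
        print("Outputs do NOT match:")
        print("\n".join(diffs))
    else:
        print("Outputs match: upgrade comparison passed.")
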
Was QA Test Coverage level appropriate?

Yes
Were Specific Risks signed up to?

It was agreed as a risk that no NFT/OAT would be performed; this risk was to be mitigated by having Dry Runs in Production.
QA Recommendations

QA Recommendations: Perform OAT runs to ensure the build works correctly for the Production parameters/setup/user profiles. If OAT is waived, Dry Runs in Production need to cover these OAT tests, and proper analysis of the job logs must be done before moving into Production (a minimal log-check sketch is included after this block).

Action required: The ADM team performing the Dry Run needs to validate the OAT scenarios.

Benefit: Production-like setup/parameters/profiles will be tested once before Go Live.

Area/Platform owning the Action: ADM GDW
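
A minimal sketch of the log check recommended above follows. It assumes, purely for illustration, that the Dry Run job logs are exported as plain-text files into a single directory and that lines containing "WARNING" or "FATAL" are the entries to review before go-live; the directory name and keywords are not taken from the report.

#!/usr/bin/env python3
"""Sketch of a pre-go-live log check for the Production Dry Run.
The log directory, *.log pattern and severity keywords are assumptions."""
from pathlib import Path

DRY_RUN_LOG_DIR = Path("logs/dry_run")  # hypothetical export location of the Dry Run job logs
FLAGGED_LEVELS = ("WARNING", "FATAL")   # assumed keywords that mark entries needing review


def scan_job_logs(log_dir: Path) -> dict:
    """Return flagged log lines keyed by log file name."""
    findings = {}
    for log_file in sorted(log_dir.glob("*.log")):
        flagged = [
            line.rstrip()
            for line in log_file.read_text(errors="replace").splitlines()
            if any(level in line for level in FLAGGED_LEVELS)
        ]
        if flagged:
            findings[log_file.name] = flagged
    return findings


if __name__ == "__main__":
    results = scan_job_logs(DRY_RUN_LOG_DIR)
    if results:
        for name, lines in results.items():
            print(f"{name}: {len(lines)} flagged entries")
        raise SystemExit("Review flagged entries before moving into Production.")
    print("No warnings or fatal entries found in the Dry Run logs.")
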


QA Recommendations Reviewed By: Nigel Cordon
PMR attended by: Steve Jones
Change Reference Number (if applicable): CR00XXXXXX


QA Reason Code (Select from Drop Down List): QAFT11
QAFT Codes and Descriptions

QAFT01 Not tested – Environment Inadequate: Required configuration or interface connectivity to test the particular functionality/scenario is not available in the test environment; hence the scenario was not tested by the QA team.

QAFT02 Not tested – Functionality missed: No test coverage for the incident scenario in QA testing despite being in scope for QA; hence the scenario was not tested by the QA team.

QAFT03 Not tested – QA involvement not relevant: The incident resides in an area which is out of scope for QA functional testing; hence the scenario was not tested by the QA team.

QAFT04 Not Tested – Not Engaged: The QA team was not engaged to test the particular functionality; hence the scenario was not tested by the QA team.

QAFT05 Not Tested – Requirement not communicated to QA: A change in requirement, or the introduction of new requirements, was not communicated to QA through standard channels (e.g. Change Requests); hence the scenario was not tested by the QA team.

QAFT06 Not tested – Risk based decision: QA functional testing was carried out on a risk-based approach and the particular scenario was out of scope, as agreed with the business; hence the scenario was not tested by the QA team.

QAFT07 Tested – Existing/Known issue accepted by Business: Functionality implemented in live with an existing/known issue which the business accepted for the move to production. The functionality was tested by the QA team; the test scenario failed and a defect was raised.

QAFT08 Tested – Insufficient / Inadequate testing: Incomplete test coverage to test the functionality thoroughly; test coverage was insufficient to identify the issue.

QAFT09 Tested – Implementation Plan Incorrect: An incorrect sequence of events was followed during implementation.

QAFT10 Not tested – Business testing only: No QA impact; testing was carried out by the business team.

QAFT11 Tested – Functionality worked in test: Evidence exists with QA that the functionality worked as per requirements in the test environment; however, the functionality failed in the production environment.
Examples

e.g. Tested using stubs (a signed-off approach), or an environmental issue such as configuration.
e.g. Database rebalance issue in live, middleware change.
e.g. Firewall upgrade performed by a third party.
e.g. Testing performed around the changed functionality only for one of the brands.
e.g. Scenario tested with a single address for a customer, but the case where multiple addresses are present was missed.
e.g. Incorrect job runs during implementation, configuration issues.
e.g. Scenarios like testing RTDOM.
