
Writing Skills Evaluation

Look for all errors in this document and fix them, highlighting the changes

4TH SOURCE
1 Introduction
This document provides the Testing Strategy for the Logbooks application. It includes the following
components:

 Testing Strategy Overview


 Background
 Scope
 Task Overview
 Key Deliverables
 Acceptance Criteria
 Problem Management
 Critical Success Factors
 Risk and Contingency Plans
 Metrics

1.1 Document Purpose


The purpose of this document is to outline the high-level test strategy for the Logbooks application,
defining the preliminary test scope, high-level test activities, and organization, together with test
management for the project. The test strategy provides the framework for estimating the duration of
the testing effort at the required confidence level for the business case.

This test strategy is a planning tool that will provide the starting point for detailed test planning during
the Execute stage.

1.2 Objective
The objectives of this document are to define the test activities and to create the test artefacts that will
be used to manage those activities and to track issues.

The key objectives are as follows:


 Determine the significance and critical nature of the Logbooks application for the
business.
 Determine the types of test cases required by each testing task.
 Define the test artefacts to be used.
 Identify the need for converted data from legacy systems or other sources.

Page 1 of 17
 Determine the need for a systems integration test by identifying key system interfaces.
 Identify performance assurance requirements.

1.3 Document Intended Audience

The Logbook Testing Strategy is intended for the following audience:


 4th Source Development Team
 Regeneron Team members

1.4 Document Scope

This document provides an overall view of the “Logbooks” application; in this document you can find
information about the current process for requesting access, permissions and other activities.

This document can also be consulted to learn which development tools will be used in the creation of the
application according to the defined requirements. The analysis that determined the selection of these
tools is not covered in this document.

Additionally, the information regarding the testing process that will be performed on the application can
be consulted, principally the following information:

 Types of test to perform


 Testing tools to be used
 The environments where the tests must be performed

For more details about the testing process, consult the “Test Plan for Logbooks” document, which can be
found at the following path: <path>

2 Abbreviation and Terminology
Abbreviations / Acronyms | Definition / Terminology
TBD | To be defined
PM | Project Manager
DEV | Development
MS | Microsoft
IDE | Integrated Development Environment
QA | Quality Assurance
IT | Information Technology

3 Project Overview and Background

The Regeneron Logbook process is a very important part of the company’s procedures; it is used to
keep track of tasks that need to be recorded in a cyclic manner.
The current procedure is lengthy and costly, in both time and money, for the following reasons:
 It is a very labor-intensive activity, due to the manual manipulation that its storage, retrieval and
creation require.
 It is highly prone to human error, since validation, transcription, transposition and
timestamping are all manual.
 Access to it is controlled by brute force, using locks and keys.
 There are multiple non-value-added activities, such as corrections and reconciliation of records,
that need to be performed in order to achieve higher conformance control; even so, the
current effectiveness of the process is only around 50%.

4 Testing Strategy
4.1 Test Strategy
 Testing Methodology
This section outlines the testing methodology and the testing procedures for each testing type.
 Develop Test Plan
The Test plan is developed to describe and justify the test strategy in relation to technical requirements
and risk. The test plan brings visibility on the test design and execution based on the defined strategy.
The main purpose of a test plan is to:

 Highlight the test conditions/scenarios and the test cases


 Identify special requirements or entry criteria that must be met in order for the test
project to proceed, as well as exit criteria or the process for determining when to stop testing
 Support initiation and organization of the test project, including preparations, staffing,
responsibilities, facility acquisition, task planning and scheduling
 Support daily management and evaluation of the test project
 Identify and manage any risks or issues that may impact the project
 Specify the deliverables of the test project and the delivery process
 Include a Test Objective Matrix to control and manage any changes during the test
project

 Test Plan
The purpose of a Test Plan is to identify the testing to be conducted for specific releases in order to
perform the System Test. The Test Plan is the responsibility of the Test Managers and their Leads.

The following content must be included:

 Testing to be conducted; i.e. Test Releases & Regression Test Approach


 Test Type Coverage Matrix
 Milestone Schedule
 Work Plan
 Resources & Dependencies
 High Level Estimates
 Test Approach
 Roles and Responsibilities
 Measurements and Metrics
Page 5 of 17
 Progress Reporting
 Alignment Management
 Environment Requirements
 Defect Management Process
 Software and Testing Artefacts
 Test Cases and expected results
 Handover Strategy
 Data Collection
 Entry and Exit Criteria Risk Assessment processes
 Training Requirements
 Project Risks
 Deliverables

 Test Design & Preparation


The following test phases (in sequential order) can be distinguished, mapped onto the project plan and
corresponding phases:

Phase | Purpose | Dependencies
Preview cycle(s) | Ensure the test data and environment are prepared | Test environment readiness and availability
First test cycle(s) | The official tests (see QA entries in section x or scope) in the Staging environment | QA entry criteria met; test cases/scripts finalized/authorized
Regression test cycle(s) | Ensure bug fixes found in the first test cycle do not cause integration issues | All detected bugs must be fixed and set to “ready to test”
Release to UAT | Release for client verification | Regression test pass above X%

• Test Summary Report


The test summary report is a formal document that summarizes the results of all testing efforts for a
particular testing cycle of a project, module or sub-module. Generally, test leads prepare this
document at the end of the testing cycle or at the end of the project.

This report is covered by the test artefact “Status Template”, which summarizes all the performed
activities in a single file and can be found at the following path: <path>

4.2 Testing Types
• Unit / Component Testing
Unit testing is a level of software testing where individual units/components of software are tested. The
purpose is to validate that each unit of the software performs as designed.
In object-oriented programming, the smallest unit is a method, which may belong to a base/super class,
abstract class or derived/child class. (Some treat a module of an application as a unit. This is to be
discouraged, as there will probably be many individual units within that module.)

Unit testing frameworks, drivers, stubs, and mock/fake objects are used to assist in unit testing.
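As an illustration only: this strategy does not mandate an implementation language for Logbooks, so the sketch below uses Python’s standard `unittest` framework, and `create_entry` is a hypothetical logbook unit invented for the example, not part of the actual application.

```python
import unittest

# Hypothetical smallest testable unit of a logbook application.
def create_entry(task, timestamp):
    """Create a logbook entry; reject blank task names."""
    if not task or not task.strip():
        raise ValueError("task must not be empty")
    return {"task": task.strip(), "timestamp": timestamp}

class CreateEntryTest(unittest.TestCase):
    """One test method per behavior of the unit."""

    def test_valid_entry_is_normalized(self):
        entry = create_entry("  Clean room ", "2016-08-08T09:00")
        self.assertEqual(entry["task"], "Clean room")

    def test_blank_task_is_rejected(self):
        with self.assertRaises(ValueError):
            create_entry("   ", "2016-08-08T09:00")

# Run with: python -m unittest <module-name>
```

Each test validates the unit in isolation, which is exactly the scope described above; anything larger belongs to integration or system testing.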

• System Testing
System testing of software or hardware is testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. System testing falls within the scope
of black-box testing, and as such, should require no knowledge of the inner design of the code or logic.

As a rule, system testing takes, as its input, all of the "integrated" software components that have
passed integration testing and also the software system itself integrated with any applicable hardware
system(s). The purpose of integration testing is to detect any inconsistencies between the software units
that are integrated together (called assemblages) or between any of the assemblages and the hardware.
System testing is a more limited type of testing; it seeks to detect defects both within the "inter-
assemblages" and also within the system as a whole.
System testing is the functional and non-functional testing of the entire deliverable system, and the
interfaces between the various components.

• Regression Testing
Regression testing is the process of testing changes to computer programs to make sure that the older
features still work with the new changes. Regression testing is a normal part of the program
development process and, in larger companies, is done by code testing specialists. Automation
department coders develop code test scenarios and exercises that will test new units of code after they
have been written. These test cases form what becomes the test bucket. Before a new version of a
software product is released, the old test cases are run against the new version to make sure that all the
old capabilities still work. The reason they might not work is because changing or adding new code to a
program can easily introduce errors into code that is not intended to be changed.

• Integration Testing
Integration testing is a logical extension of unit testing. In its simplest form, two units that have already
been tested are combined into a component and the interface between them is tested. A component,
in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units
are combined into components, which are in turn aggregated into even larger parts of the program. The
idea is to test combinations of pieces and eventually expand the process to test your modules with
those of other groups. Eventually all the modules making up a process are tested together. Beyond that,
if the program is composed of more than one process, they should be tested in pairs rather than all at
once.

Integration testing identifies problems that occur when units are combined. By using a test plan that
requires you to test each unit and ensure the viability of each before combining units, you know that
any errors discovered when combining units are likely related to the interface between units. This
method reduces the number of possibilities to a far simpler level of analysis.
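A minimal sketch of the idea (the unit names are invented and the Logbooks implementation language is unspecified, so Python stands in): two units that would each pass their own unit tests are combined, and the interface between them is what the integration check exercises.

```python
# Unit 1 (hypothetical, already unit-tested): record validation.
def validate(record):
    """Reject records without a task name."""
    return bool(record.get("task"))

# Unit 2 (hypothetical, already unit-tested): in-memory record storage.
class LogbookStore:
    def __init__(self):
        self._records = []

    def add(self, record):
        # The interface under test: add() must only accept records
        # that validate() approves.
        if not validate(record):
            raise ValueError("invalid record")
        self._records.append(record)
        return len(self._records)

    def count(self):
        return len(self._records)

# Integration check: the combined component honors the interface.
store = LogbookStore()
assert store.add({"task": "Calibrate scale"}) == 1
try:
    store.add({"task": ""})  # must be refused at the interface
except ValueError:
    pass
assert store.count() == 1
```

Because each unit is trusted on its own, a failure here points to the interface between them, which is the narrowing of analysis the paragraph above describes.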

• Performance Testing
Performance testing is the process of determining the speed or effectiveness of a computer, network,
software program or device. This process can involve quantitative tests done in a special environment
similar to the production environment, such as measuring the response time or the number of MIPS
(millions of instructions per second) at which a system functions.

Qualitative attributes such as reliability, scalability and interoperability may also be evaluated.
Performance testing is often done in conjunction with stress testing.
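A minimal illustration of the quantitative side, assuming nothing about the Logbooks stack: the operation under test (`handler`, a stand-in) is called repeatedly and its average response time is measured, the kind of figure a performance test would compare against an agreed target.

```python
import time

def handler():
    """Stand-in for an operation under test (hypothetical)."""
    time.sleep(0.001)  # simulate ~1 ms of work

# Measure average response time over repeated calls.
N = 50
start = time.perf_counter()
for _ in range(N):
    handler()
avg = (time.perf_counter() - start) / N
print(f"average response time: {avg * 1000:.2f} ms")

# A real test would assert against the agreed target; the ceiling
# here is deliberately generous for a sketch.
assert avg < 0.5
```

In a stress-testing variant, the same measurement would be taken while many simulated users call the operation concurrently.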

• User Acceptance Testing


UAT is the opportunity for the business to test a functionally proven and technically robust system, in a
stable environment, against the business objectives.

5 Testing Approach
Test Type | Objectives | Time Frame
Functional testing | Verify that the application: meets the defined requirements; performs and functions accurately; correctly handles error conditions; interfaces function correctly; and data load is successful. | Functional testing will occur in an iterative and controlled manner, ensuring the solution matches the defined requirements.
Regression testing | Verify that features, functions and modules work correctly after new additions or updates to the application under test. |
Stress testing | Validate that the application supports several concurrent users, and that latency and response time remain acceptable regardless of the number of requests made by each user. |

6 Testing Environments
A testing environment is a setup of software and hardware on which the testing team is going to
perform the testing of the newly built software product. This setup consists of the physical setup, which
includes hardware, and the logical setup, which includes the server operating system, client operating
system, database server, front-end running environment, browser (if a web application), IIS (version on
the server side) or any other software components required to run the software product.
This testing setup is to be built on both ends, i.e. the server and the client.

At least one test environment is required; the principal features or rules it must have are:

 The latest code updates from the development team
 Data as close to reality as possible
 Must be configurable to “simulate” a production environment
 Access limited to the development/QA team
 The availability of this environment must parallel that of the development environment.

7 Testing Tools
The defined testing tools to be implemented in the testing process are:
 Test Cases Status Template for bugs management
 Test summary report
 Defect details template

8 Testing Controls & Procedures


This section documents the proposed process and governance of key processes.

8.1 Testing Success Criteria


This is the definition of when an item, and the application, will be considered as passing.

 At unit level: an item will pass when it has no unexpected behavior and it works according to what
is outlined as the intended course of action.

 At release level: QA sign-off for the release will be provided when the code passes 100% of the
test cases, with no defects pending.

Exceptions apply to any change request or backlog defect scheduled for a different release. Other
exceptions may apply if the PM and Business Owners provide written confirmation that a certain
module/requirement/item can be released even if it is not working as expected, and QA/Development
agree.

8.2 Defect Management


Ideally, defects are only raised and recorded when they are not going to be fixed immediately. In this
case, the conditions under which they occur and their severity need to be accurately recorded so that
the defect can be easily reproduced and then fixed.
 Defect Management Severity Definitions

Severity | Description
Critical | Defect causes critical loss of business functionality or a complete loss of service.
Major | Defect causes a major impact on business functionality and there is no interim workaround available.
Minor | Defect causes a minor impact on business functionality and there is an interim workaround available.
Trivial | Defect is cosmetic only and usability is not impacted.
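The severity definitions above can be encoded for use in defect-tracking tooling. The following Python sketch is purely illustrative: the enum, the `classify` helper and its parameters are assumptions for the example, not part of the defect-management process described here, and borderline combinations (e.g. major impact with a workaround) are a judgment call in practice.

```python
from enum import Enum

class Severity(Enum):
    CRITICAL = 1  # critical loss of functionality or complete loss of service
    MAJOR = 2     # major impact, no interim workaround available
    MINOR = 3     # minor impact, interim workaround available
    TRIVIAL = 4   # cosmetic only, usability not impacted

def classify(business_impact, workaround_available):
    """Map the severity table onto a Severity value.

    business_impact: 'critical', 'major', 'minor' or 'cosmetic'.
    """
    if business_impact == "critical":
        return Severity.CRITICAL
    if business_impact == "major" and not workaround_available:
        return Severity.MAJOR
    if business_impact == "cosmetic":
        return Severity.TRIVIAL
    # Remaining cases (minor impact, or major impact with a
    # workaround) are treated as Minor in this sketch.
    return Severity.MINOR
```

Encoding severities this way lets the “highest severity first” ordering in the issue- and risk-management sections be applied mechanically, e.g. by sorting defects on the enum value.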

8.3 Issues Management


The purpose of the issue resolution process is to provide a mechanism for organizing, maintaining, and
tracking the resolution of the issues that cannot be resolved before the end of the testing phase.

This process is covered by the test artefact “Test Cases Status Template”. The importance of the issues
is determined by their severity and by the customer’s needs. The determination of which issues are to
be fixed and tested is performed by the Team Leader, in accordance with the importance of the
functionalities blocked by them and their severity.

8.4 Risks Management


The generic process for Risk management involves 3 important stages:

• Risk identification

As it is said, the first step to solving a problem is identifying it. This stage involves making a list of
everything that might potentially come up and disrupt the normal flow of events.

• Risk assessment / Risk impact analysis

All the risks are quantified and prioritized in this step. Every risk’s probability (the chance of occurrence)
and impact (amount of loss that it would cause when this risk materializes) are determined
systematically.

High, medium and low values are assigned to both the probability and the impact of each risk. The risks
with “high” probability and “high” impact must be addressed first, and then the order follows.
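The prioritization step can be sketched numerically: High/Medium/Low are mapped to weights and each risk is scored as probability times impact, so high/high risks surface first. The weights and risk names below are illustrative assumptions, not values prescribed by this strategy.

```python
# Illustrative weights for the High/Medium/Low scale.
WEIGHT = {"low": 1, "medium": 2, "high": 3}

risks = [
    {"name": "Short timescale", "probability": "high", "impact": "high"},
    {"name": "Browser support", "probability": "medium", "impact": "high"},
    {"name": "Usability", "probability": "low", "impact": "medium"},
]

# Score each risk: probability weight x impact weight.
for r in risks:
    r["score"] = WEIGHT[r["probability"]] * WEIGHT[r["impact"]]

# Highest score is handled first, matching "high/high first, then
# the order follows".
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(r["name"], r["score"])
```

With these weights the scores are 9, 6 and 2, so “Short timescale” would be mitigated first.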

• Risk mitigation
The final step in this Risk Based Testing (RBT) process is to plan how to handle each one of these
situations.

8.5 Issue and Risk Escalation and Governance


The importance of issues is determined by their severity and by the customer’s needs. The
determination of which issues are to be fixed and tested is performed by the Team Leader, in accordance
with the importance of the functionalities blocked by them and their severity. A Product Owner may lead
the prioritization of these bugs to be fixed.

8.6 Progress Reporting


This process is covered by the test artefact “Test Cases Status Template”. At the end of each test phase
this report must be sent, if required by the customer or the project manager. All executed test cases
and their results will be displayed in this report.

8.7 Entry Criteria Risk Assessment


The purpose of the Entry Criteria Risk Assessment process is to evaluate the readiness of commencing a
testing Release.
 All bugs detected in the test phase have been fixed and approved by the QA team.
 The team leader approves the release, knowing that remaining issues are minimal and do not put
the release at risk.
 Any error that appears must be considered “low impact” if it still allows the operation of the
application.

8.8 Exit Criteria Risks Assessment
The purpose of the Exit Criteria Risk Assessment process is to evaluate the finalization and completeness
of a testing Release.

8.9 Testing Requirements Traceability Matrix


The purpose of the RTM is to help ensure the object of the requirements conforms to the requirements
by associating each requirement with the object via the traceability matrix.

A traceability matrix is used to verify that all stated and derived requirements are allocated to system
components and other deliverables (forward trace).

The matrix is also used to determine the source of requirements (backward trace).

Requirements traceability includes tracing to things other than software that satisfy the requirements
such as capabilities, design elements, manual operations, tests, etc.

The traceability matrix is also used to ensure all requirements are met and to locate affected system
components when there is a requirements change.

The ability to locate affected components allows the impact of requirements changes on the system to
be determined, facilitating cost, benefit, and schedule estimates.
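A minimal illustration of forward and backward tracing (the requirement and test-case IDs are invented for the example): the forward trace maps each requirement to the artefacts that cover it, and the backward trace is derived by inverting that mapping, which also exposes uncovered requirements and the components affected by a change.

```python
# Forward trace: requirement -> deliverables (here, test cases).
forward = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet -- flagged below
}

# Backward trace: test case -> originating requirement(s),
# derived by inverting the forward trace.
backward = {}
for req, cases in forward.items():
    for tc in cases:
        backward.setdefault(tc, []).append(req)

# Coverage check: requirements with no allocated deliverable.
uncovered = [req for req, cases in forward.items() if not cases]
print("uncovered requirements:", uncovered)

# Impact check for a requirements change: which test cases are
# affected when REQ-001 changes.
print("affected by REQ-001:", forward["REQ-001"])
print("TC-103 traces back to:", backward["TC-103"])
```

The same inversion scales to any deliverable type (design elements, manual operations, etc.), which is what makes the matrix useful for cost, benefit and schedule estimates when requirements change.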

8.10 Test Coverage Analysis


Metrics need to be established for each testing phase. The metrics and measurement process must be
agreed upon by the necessary stakeholders at the beginning of the project and will form the foundation
for progress reporting.
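As an example of the kind of metric the stakeholders might agree on (the specific formulas are an assumption for illustration, not mandated by this strategy): execution coverage and pass rate per phase, reported as percentages.

```python
def coverage(part, total):
    """Percentage of `part` over `total`; 0 when nothing is planned."""
    return 100.0 * part / total if total else 0.0

# Illustrative phase numbers.
planned, executed, passed = 40, 36, 33

print(f"executed: {coverage(executed, planned):.1f}%")   # 90.0%
print(f"pass rate: {coverage(passed, executed):.1f}%")   # 91.7%
```

Tracking these two figures per phase gives the progress-reporting process a concrete foundation, as called for above.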
9 Key Roles, Accountabilities and Responsibilities
9.1 Proposed Test Team Structure

1. Project Manager
Responsibilities:

 Initiate Testing Project


 Test Project Planning, Executing, Monitoring/Controlling, Reporting/Closing

2. Team Leader
Responsibilities:
 Analyze Business Process, Business Requirement, Functional Specification
 Participate in Planning Activities

3. Developer
Responsibilities:
 Develop system/application
 Business Analyst and Test Leader Interaction

4. Quality Assurance Team


Responsibilities:
 Managing The Testing Project and Resource Allocation
 Tracking and ensuring that the Test Team complies with the standard test process
 Highlighting non-compliance issues to the Test Management Team

5. QA Leader
Responsibilities:
 Analyzing Test Requirement
 Designing Test Strategy, and Test Methodology, RTM
 Designing Test Cases, Test Data
6. QA
Responsibilities:
 Test Preparation
 Test Execution
 Raising and Tracking Defect
 Triage Resolution

10 Milestones and Schedule

10.1 High Level Schedule for Testing


N. | Phase | Activity | Deadline
1 | Requirement analysis | Analyze the system functionality as well as scope | August 8, 2016 to August 9, 2016
2 | Test planning | Test strategy; test plan | August 9, 2016 to August 12, 2016
3 | Test development | Test scenarios and test cases creation | August 15, 2016 to September 2, 2016
4 | Test execution | QA validation process; regression testing | September 5, 2016 to September 30, 2016
5 | Test closure | QA signoff and UAT | October 3, 2016 to October 8, 2016

11 Risks
The test strategy is based on risk assessment. This means assessing the damage caused by the
consequences of defects, both those undetected prior to operation and those occurring during operation.
Risk assessment takes place on the basis of quality characteristics and subsystems: for instance, if the
system is insufficiently user-friendly, what will the negative consequences be?

Risk | Impact | Mitigation
1 Short timescale | This project is a pass/fail project, and as the Waterfall methodology is followed, risk increases at the integration point. | The first test cycle will be lengthy, to ensure all potential integration issues are uncovered.
2 Requirements incomplete | Formal requirements are not yet complete, which will possibly impact test coverage. | Testing will follow a generalized approach to test coverage (i.e. focusing on end-to-end common user activity).
3 System does not support different browsers | Users cannot operate the system. | Test a selection of higher-risk browsers (for example IE10); run front-end code through (XHTML Transitional) coding standards. This will mitigate browser compatibility issues.
4 Usability concerns | If the user does not have an obvious path to follow, this opens up more potential for human error. | A “close all the exits” approach, as well as identifying functional and UI redundancies; training/documentation.
5 Security | Risk of users seeing areas of the site and data that must be blocked for them. | Exploratory testing to find possible “back doors”; run basic front-end security testing.

