

• Acceptance Testing
• Objectives of acceptance testing.
• Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or
product. Objectives of acceptance testing include:
• Establishing confidence in the quality of the system as a whole.
• Validating that the system is complete and will work as expected.
• Verifying that functional and non-functional behaviors of the system are as specified.
• Acceptance testing may produce information to assess the system’s readiness for deployment and use by the
customer (end-user). Defects may be found during acceptance testing, but finding defects is often not an
objective, and finding a significant number of defects during acceptance testing may in some cases be considered
a major project risk. Acceptance testing may also satisfy legal or regulatory requirements or standards.
• Common forms of acceptance testing include the following:
• User acceptance testing.
• Operational acceptance testing.
• Contractual and regulatory acceptance testing.
• Alpha and beta testing.
• Each is described in the following four subsections.
• User acceptance testing (UAT).
• The acceptance testing of the system by users is typically focused on validating the fitness for use of the system by
intended users in a real or simulated operational environment. The main objective is building confidence that the
users can use the system to meet their needs, fulfill requirements, and perform business processes with minimum
difficulty, cost, and risk.

• Operational acceptance testing (OAT).
• The acceptance testing of the system by operations or systems administration staff is usually
performed in a (simulated) production environment. The tests focus on operational aspects, and
may include:
• Testing of backup and restore;
• Installing, uninstalling and upgrading;
• Disaster recovery;
• User management;
• Maintenance tasks;
• Data load and migration tasks;
• Checks for security vulnerabilities;
• Performance testing.
• The main objective of operational acceptance testing is building confidence that the operators or
system administrators can keep the system working properly for the users in the operational
environment, even under exceptional or difficult conditions.
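As a concrete illustration of the backup-and-restore item above, the sketch below verifies that data survives a backup/restore round trip by comparing checksums. It is a minimal stand-in that uses local file copies in place of real backup tooling; all file and function names are hypothetical.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup_restore(source: Path, backup_dir: Path, restore_dir: Path) -> bool:
    """Back up a file, restore it elsewhere, and confirm the data is intact."""
    backup = shutil.copy2(source, backup_dir / source.name)     # stand-in "backup"
    restored = shutil.copy2(backup, restore_dir / source.name)  # stand-in "restore"
    return checksum(source) == checksum(Path(restored))

# Exercise the check with throwaway directories.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for d in ("data", "backup", "restore"):
        (root / d).mkdir()
    src = root / "data" / "config.db"
    src.write_bytes(b"customer records")
    ok = verify_backup_restore(src, root / "backup", root / "restore")
```

A real OAT run would replace the file copies with the actual backup and restore procedures and would also test cross-site restores, but the verification idea — compare checksums before and after — is the same.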

• Contractual and regulatory acceptance testing.
• Contractual acceptance testing is performed against a contract’s
acceptance criteria for producing custom-developed software. Acceptance
criteria should be defined when the parties agree to the contract.
Contractual acceptance testing is often performed by users or by
independent testers.
• Regulatory acceptance testing is performed against any regulations that
must be adhered to, such as government, legal, or safety regulations.
Regulatory acceptance testing is often performed by users or by
independent testers, sometimes with the results being witnessed or
audited by regulatory agencies.
• The main objective of contractual and regulatory acceptance testing is
building confidence that contractual or regulatory compliance has been
achieved.

Contractual and regulatory acceptance testing

• In contractual acceptance testing, a system is tested against the acceptance criteria documented in a contract before the system is accepted. In regulatory acceptance testing, a system is tested to ensure that it meets governmental, legal, and safety standards.

• Operational Acceptance Testing (OAT) is a software testing technique that evaluates the operational readiness of a software application prior to release into production. The goal of operational acceptance testing is to ensure system and component compliance and smooth operation of the system in its Standard Operating Environment (SOE).

• Types of Operational Testing
• Installation Testing
• Load & Performance Test Operation
• Backup and Restore Testing
• Security Testing
• Code Analysis
• Failover Testing
• Recovery Testing
• End-to-End Test Environment Operational Testing
• Operational Documentation Review

• Example test cases for Operational Testing or OAT
• The following is a handy checklist for OAT:
• Backups taken at one site can be recovered to the same site
• Backups taken at one site can be recovered to another site
• Implementing any new features into the live production environment does not adversely affect the integrity of current production services
• The implementation process can be replicated using valid documentation
• Each component can be shut down and started successfully within the agreed time scale
• All critical alerts go to the TEC and reference the correct resolution document
• Alerts are in place and issued if agreed thresholds are exceeded
• Any recovery documentation produced or altered, including service diagrams, is valid; it should be handed over to the relevant support areas
• For any component affected by a failure, the recommended order of restart, time to complete, etc. is documented
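The "shut down and started within the agreed time scale" item in the checklist above can be verified mechanically. A minimal sketch, assuming the component's restart routine is available as a callable (`fake_restart` here is a hypothetical stand-in):

```python
import time

def restart_within_sla(restart, agreed_seconds: float) -> bool:
    """Run a component's restart routine and check it completes
    within the agreed time scale."""
    start = time.monotonic()
    restart()  # hypothetical shutdown-and-start routine for the component
    elapsed = time.monotonic() - start
    return elapsed <= agreed_seconds

# Stand-in component whose restart takes roughly 50 ms.
def fake_restart():
    time.sleep(0.05)

fast_enough = restart_within_sla(fake_restart, agreed_seconds=2.0)
```

In a real environment the callable would wrap the service-manager commands for the component, and the agreed time scale would come from the operational-level agreement.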

• Typical test objects.
• Typical test objects for any form of acceptance testing include:
• System under test;
• System configuration and configuration data;
• Business processes for a fully integrated system;
• Recovery systems and hot sites (for business continuity and disaster recovery testing);
• Operational and maintenance processes;
• Forms;
• Reports;
• Existing and converted production data.
• Typical defects and failures.
• Examples of typical defects for any form of acceptance testing include:
• System workflows do not meet business or user requirements;
• Business rules are not implemented correctly;
• System does not satisfy contractual or regulatory requirements;
• Non-functional failures such as security vulnerabilities, inadequate performance efficiency under
high loads, or improper operation on a supported platform.

• Configuration testing is a type of software testing that verifies the performance of the system under development against various combinations of software and hardware, in order to find the best configuration under which the system works without flaws while still meeting its functional requirements.
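A common way to organize configuration testing is to run the same functional suite over the cross product of the configuration axes. A minimal sketch — the axes and the `run_functional_suite` stand-in (with one simulated failure) are hypothetical:

```python
from itertools import product

# Hypothetical configuration axes; a real project would read these from a test matrix.
operating_systems = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
databases = ["PostgreSQL", "SQLite"]

def run_functional_suite(os_name: str, db_name: str) -> bool:
    """Stand-in for running the functional suite on one configuration;
    simulates a failure on one combination."""
    return not (os_name.startswith("macOS") and db_name == "SQLite")

# Execute the suite once per software/hardware combination.
results = {
    (os_name, db): run_functional_suite(os_name, db)
    for os_name, db in product(operating_systems, databases)
}
passing = [cfg for cfg, ok in results.items() if ok]
```

The passing set identifies the configurations under which the system works without issues, which is exactly the output configuration testing is after.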

• Storage testing is a type of software testing used to verify whether the software application under test stores the relevant data in the appropriate directories and whether it has enough space to prevent unexpected terminations due to insufficient disk space. It is also called storage performance testing.
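The "enough space" half of that definition can be probed with a pre-write headroom check. A minimal sketch; `safe_write` and the 2x safety margin are illustrative assumptions, not a prescribed technique:

```python
import shutil
import tempfile
from pathlib import Path

def has_headroom(directory: Path, required_bytes: int) -> bool:
    """Check that the directory's filesystem has enough free space."""
    return shutil.disk_usage(directory).free >= required_bytes

def safe_write(path: Path, data: bytes) -> bool:
    """Write only when enough space remains, so the application does not
    terminate unexpectedly mid-write on a full disk."""
    if not has_headroom(path.parent, len(data) * 2):  # 2x = margin for temp copies
        return False
    path.write_bytes(data)
    return True

with tempfile.TemporaryDirectory() as tmp:
    wrote = safe_write(Path(tmp) / "report.dat", b"x" * 1024)
```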

• Installation testing is performed to check whether the software has been correctly installed, with all its inherent features in place, and that the product works as expected. Also known as implementation testing, it is done in the last phase of testing, before the end user has his or her first interaction with the product.
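One basic installation check is comparing the installed directory against an expected file manifest. A minimal sketch — the manifest entries are hypothetical, and a partial install is simulated in a temporary directory:

```python
import tempfile
from pathlib import Path

# Hypothetical install manifest; a real one would ship with the installer.
REQUIRED_FILES = ["app.exe", "config.ini", "LICENSE"]

def verify_installation(install_dir: Path, manifest=REQUIRED_FILES) -> list[str]:
    """Return the manifest entries missing from the install directory."""
    return [name for name in manifest if not (install_dir / name).is_file()]

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ("app.exe", "config.ini"):  # simulate an incomplete install
        (root / name).write_text("stub")
    missing = verify_installation(root)
```

An empty `missing` list is a necessary (though not sufficient) condition for a correct install; functional smoke tests would follow.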

• Reliability testing is a software testing process that checks whether the software can perform failure-free operation for a specified period of time in a particular environment. Its purpose is to provide assurance that the software product is sufficiently free of defects and reliable for its intended purpose.
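The "failure-free operation for a specified time period" notion is often quantified with the standard exponential reliability model, which assumes a constant failure rate: R(t) = exp(-t / MTBF). A small sketch of that computation (the MTBF figure is illustrative):

```python
import math

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """Probability of failure-free operation for t hours, assuming a
    constant failure rate (exponential model): R(t) = exp(-t / MTBF)."""
    return math.exp(-t_hours / mtbf_hours)

# A system with a 1000-hour MTBF, run over a 24-hour test window:
r = reliability(24, 1000)  # roughly 0.976
```

Reliability testing then checks whether the observed failure behaviour over the test window is consistent with the required R(t).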

• Recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures, and other similar problems. It involves forcing the software to fail in a variety of ways and verifying that recovery is performed properly.
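The force-failure-then-verify loop can be sketched with a stand-in service that crashes a fixed number of times before its recovery procedure takes hold. All class and function names here are hypothetical:

```python
class FlakyService:
    """Stand-in service that crashes on its first N requests, then recovers."""
    def __init__(self, crashes: int):
        self.crashes = crashes
        self.state = "running"

    def handle(self) -> str:
        if self.crashes > 0:
            self.crashes -= 1
            self.state = "crashed"
            raise RuntimeError("simulated crash")
        return "ok"

    def recover(self):
        self.state = "running"

def exercise_recovery(service, attempts: int) -> bool:
    """Force failures and verify the service recovers and finally serves a request."""
    for _ in range(attempts):
        try:
            return service.handle() == "ok"
        except RuntimeError:
            service.recover()  # the recovery procedure under test
            if service.state != "running":
                return False
    return False

recovered = exercise_recovery(FlakyService(crashes=2), attempts=5)
```

A real recovery test would inject the failures externally (kill processes, pull network cables, fill disks) rather than inside the service, but the verification structure is the same.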

• Documentation testing is part of the non-functional testing of a product. It may be a type of black-box testing that ensures the documentation about how to use the system matches what the system actually does, providing proof that system changes and improvements have been documented.

• A test procedure is a formal specification of test cases to be applied to one or more target program modules. Test procedures are a deliverable of the software development process and are used both for initial checkout and for subsequent regression testing of modifications to the target program.

Automated testing

Content
• What is Automated Testing?
• Automated Testing Advantages
• Limitation of Automated Testing
• Automated testing vs. manual testing
• Automated Test Life-Cycle Methodology (ATLM)
• Automated Testing in CAR Multimedia IAS

18 / Ropota Andrei / Automated Testing


What is Automated testing?
• Automated testing: The management and performance of
test activities, to include the development and execution of
test scripts having as objective the verification of test
requirements, using an automated test tool.
• The automation of test activities reveals its greatest value in instances where test scripts are repeated or where test script subroutines are created and then invoked repeatedly by a number of test scripts.
• Given the continual changes and additions to requirements and software, automated tests serve as an important control mechanism to ensure the accuracy and stability of the software through each build.
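The "control mechanism through each build" idea amounts to a repeatable scripted suite that runs unattended against every build. A minimal sketch using Python's `unittest`; the `apply_discount` function under test is hypothetical:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Repeatable checks intended to run against every build."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

suite = unittest.TestLoader().loadTestsFromTestCase(DiscountRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the suite is a script, a build pipeline can run it identically after every change, which is what makes it an effective stability control.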


Automated testing advantages
• Costs and efficiency
– Detection of errors that reached the production phase (with regression tests)
– Simulation of multiple users
– Reuse of old scripts -> the effort to create new scripts is reduced
– Automatic execution of performance tests at the beginning of production -> lower costs for improving performance
• Time economy
– Quick analysis when environment parameters change
– Short duration of the testing cycles
– Better estimation for test planning
– A large number of tests can be executed overnight
– Quick generation of testing preconditions
• Quality increase
– Automatic comparison of results
– More consistent results due to repeated tests
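The "automatic comparison of results" advantage is often implemented as a diff of actual output against a stored baseline. A minimal sketch using the standard library's `difflib`; the baseline content is hypothetical:

```python
import difflib

def compare_results(expected: list[str], actual: list[str]) -> list[str]:
    """Automatically diff actual output against a stored baseline,
    returning only the changed lines."""
    return [line
            for line in difflib.unified_diff(expected, actual, lineterm="")
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

# Hypothetical baseline captured from a known-good run.
baseline = ["order accepted", "total: 150.00", "status: shipped"]
run_output = ["order accepted", "total: 151.00", "status: shipped"]
mismatches = compare_results(baseline, run_output)
```

An empty mismatch list means the run matched the baseline; any entries pinpoint exactly which output lines regressed, with no manual inspection needed.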

Limitations of Automated Testing
• Most of the time, an automated testing system can't tell whether something "looks good" on the screen, or notice when a pictogram or a window is not displayed properly
• Several problems can appear when trying to automate the testing process:
– Unrealistic expectations (e.g. the expectation that automated tests will find a lot of errors)
– Poor testing experience
– Maintenance of automated tests
• Automated testing will never completely replace manual testing
• Tests that should not be automated are:
– tests that are executed very rarely
– tests where the system is very unstable
– tests that can easily be verified manually but are hard to automate
– tests that need physical interaction
Automated testing vs. Manual testing
• Pros of Automated testing
– If a set of tests must be run repeatedly, automation is a huge win
– It offers the possibility to run automation against code that changes frequently, to catch regressions
– It offers the possibility to cover a large test matrix (e.g. different languages on different OS platforms)
– Automated tests can be run at the same time on different machines, whereas manual tests must be run sequentially
– It offers more time for the test engineer to invoke greater depth and breadth of testing, focus on problem analysis, and verify proper performance of software following modifications and fixes
– Combined with the opportunity to perform programming tasks, this flexibility promotes test engineer retention and improves morale
• Cons of Automated testing
– Costs: writing the test cases and writing or configuring the automation framework costs more initially than running the tests manually
– Some tests can't be automated, so manual tests are still needed
Automated testing vs. Manual testing (2)
• Pros of Manual testing
– If a test case runs only once or twice, it is most likely a manual test; this costs less than automating it
– It allows the tester to perform more ad-hoc (random) tests; experience has shown that more bugs are found via ad-hoc (experience-based) testing than via automated testing, and the more time testers spend playing with a feature, the greater the chances of finding real user problems
• Cons of Manual testing
– Running tests manually can be very time-consuming
– Manual tests require more people and more hardware
– Each time there is a new build, the tester must rerun all required tests, which after a while becomes very boring and tiresome
Automated Test Life-Cycle
Methodology (ATLM)
1. Decision to Automate Test
2. Test Tool Acquisition
3. Automated Testing Introduction Process
4. Test Planning, Design and Development
5. Execution and Management of Tests
6. Test Program Review and Assessment
Decision to Automate Test:
Overcoming False Expectations

Expectation -> Reality:
• Automatic test plan generation -> No tool can automatically create a comprehensive test plan
• Test tool fits all -> No single tool can support all operating system environments and programming languages
• Imminent test effort reduction -> Initial use of an automated test tool can actually increase the test effort
• Tool ease of use -> Using an automated tool requires new skills, so additional training is required
• Universal application of test automation -> Not all the tests required for a project can be automated (e.g. some tests are physically impossible, or time and cost make them impractical)
• 100% test coverage -> It is impossible to perform exhaustive testing of all the possible inputs (simple or combined) to a system
Decision to Automate Test:
Benefits of Automated Testing
• Production of a reliable system
– Improved requirements definition
– Improved performance/load/stress testing
– Improved partnership with the development team
– Improved system development life cycle
• Improvement of the quality of the test effort
– Improved build verification testing (smoke testing)
– Improved regression testing
– Improved multiplatform/software compatibility testing
– Improved execution of repetitive tests
– Improved focus on advanced test issues
– Execution of tests that manual testing can't accomplish
– Ability to reproduce software defects
– After-hours testing
• Reduction of test effort and minimization of schedule
Test Tool Acquisition
• Identify which of the various tool types suit the organization's system environment, considering:
– the group/department that will use the tool;
– the budget allocated for tool acquisition;
– the most/least important functions of the tool, etc.
• Choose the tool type according to the stage of the software testing life cycle:
– Business analysis phase: business modeling tools; configuration management tools; defect tracking tools
– Requirements definition phase: requirements management tools; requirements verifier tools
– Analysis and design phase: database design tools; application design tools
– Programming phase: syntax checkers/debuggers; memory leak and run-time error detection tools; source code or unit testing tools; static and dynamic analyzers
– Testing phase: test management tools; network testing tools; GUI testing tools (capture/playback); non-GUI test drivers; load/performance testing tools; environment testing tools
• Evaluate different tools from the selected tool category
• Perform hands-on tool evaluation: request a product demonstration (evaluation copy)
• Following the conclusion of the evaluation process, an evaluation report should be prepared
Automated Testing Introduction Process – Test Process Analysis
• Test process review
– Test process characteristics (goals, strategies, methodologies) have been defined and are compatible with automated testing
– Schedule and budget allow process implementation
– The test team is involved from the beginning of the SDLC
• Test goals
– Increase the probability that the application under test will behave correctly under all circumstances
– Increase the probability that the application meets all the defined requirements
– Execute a complete test of the application within a short time frame
• Test objectives
– Ensure that the system complies with defined client and server response times
– Ensure that the most critical end-user paths through the system perform correctly
– Incorporate the use of automated test tools whenever feasible
– Perform test activities that support both defect prevention and defect detection
– Incorporate the use of automated test design and development standards to create reusable and maintainable scripts
• Test strategies
– Defect prevention (early test involvement, use of process standards, inspections and walkthroughs)
– Defect detection (use of automated test tools; unit/integration/system/acceptance test phases)

Test Planning
• The test planning element of the ATLM incorporates the review of all activities required in the test program
• It ensures that testing processes, methodologies, techniques, people, tools, schedule, and equipment are organized and applied in an efficient way
• Key elements: planning associated with project milestone events, test program activities, and test program-related documentation
• The following must be accomplished:
– the technical approach for these elements is developed;
– personnel are assigned;
– performance timelines are specified in the test program schedule.
• Test planning is not a single event but a process. The resulting test plan is the document that guides test execution through to a conclusion, and it needs to be updated frequently to reflect any changes.
