Test Design Techniques
– Testing Activities
The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and
evaluation of products and related work products, to determine that they satisfy specified requirements, to determine
that they are fit for purpose, and to detect defects.
– Testing Activities
- During Dynamic Testing the program under test is exercised with some test data
Testing as a process
- Testing means not just test execution but also design, recording, and checking for completion
- We design the test process to ensure we do not miss critical steps and do things in the right order
– Objectives of testing
Requirement reviews – to review the specifications for completeness and correctness, and to ensure that they are
testable; identification of requirements defects -> reduces the level of risk of functionality defects
System Design – to improve the testability and usability of interfaces, and to increase each party's understanding of the
design -> reduces the risk of design defects
Programming – to review the code and assess structural flaws, and to increase each party's understanding of the
code and how to test it -> reduces the level of risk of code defects
Testing prior to release – to detect failures and support removal of defects (e.g. debugging) -> increases the
likelihood of meeting stakeholder needs and requirements.
■ ESA – Ariane 5
– Veered off its flight line and was lost
– Cause: reused code
■ NASA – Mars Climate Orbiter
– Lost in space
– Cause: data mistake, metric vs. English measures
An error that leads to a defect in a work product can lead to another error that can
lead to a defect in the code.
An Incident is any event occurring that requires investigation.
Why is Testing Necessary?
False POSITIVE:
- Reporting a defect that does not really exist
- Also sometimes called: false-fail of a Test Case
False NEGATIVE:
- Failing to identify a defect that is there
- Also sometimes called: false-pass of a Test Case
- This is the worse of the two
The root causes of defects are the earliest actions or conditions that contributed to creating the
defects.
Defects can be analyzed to identify their root causes, so as to reduce the occurrence of similar
defects in the future.
Root cause analysis can lead to process improvements that prevent a significant number of future
defects from being introduced.
– Consequences of Mistakes
Victims of the failure:
humans
an organization
the environment
Exhaustive testing is a test approach in which all possible data combinations are used. This includes implicit data
combinations present in the state of the software / data at the start of testing.
Furthermore, to test everything we would need to restart testing from the beginning after every small
modification
Testing and Quality
Many people and organizations are confused about the difference between quality assurance (QA), quality control
(QC), and testing
Quality Assurance: A set of activities designed to ensure the adherence to proper processes, in order to provide
confidence that appropriate levels of quality will be achieved.
Quality Control: A set of activities, including testing, designed to support the achievement of appropriate levels of
quality.
Quality Management: The concept that links QA and QC. Includes all activities that direct and control an
organization with regard to quality.
!Quality assurance supports proper testing!
!Testing contributes to the achievement of quality!
What is quality?
When can we say that our software has good quality?
Quality measurements:
- Adaptability -> how easily software can be modified to meet new requirements
- Maintainability -> how easy it is for the developers to maintain the application and how quickly maintenance
changes can be made
- Modularity -> how much of a system or computer program is composed of discrete components and a change
to one component has minimal impact on another component
- Correctness -> how accurately the software performs the functions defined in the specifications
- Reliability -> the time or transactions processed between failures in the software
- Efficiency -> how well a component performs its designated functions using minimal resources
- Usability -> the ease of use of the software by the intended users
- Reusability -> how easy it is to re-use elements of the solution in other similar solutions
- Testability -> how easy it is to test the application; clear, unambiguous requirements
- Legal requirements
Quality or features?
Testing efficiently and/or effectively
- Efficiency: attaining a given quality level using as few resources as possible
■ "Critical" software
Exhaustive Testing
- Risks and Priorities
Early Testing
Defect Clustering
- System complexity
- Volatile code
- Effects of change upon change
- Development staff experience or inexperience
Business domain
Required standards
The most visible activity is running one or more tests: Test execution
During the work we have to be sure that everything goes as planned: Monitoring and Control
Test implementation
- Developing, implementing and prioritizing Test Cases
- Developing and prioritizing Test Procedures and, potentially, creating automated test scripts.
- Creating Test Suites from the Test Procedures for effective test execution
Test suite: A set of several test cases for a component or system under test, where the postcondition of one test is
often used as the precondition for the next one (a minimal automated sketch follows this list).
- Building the environment and verifying that everything has been set up correctly
- Preparing test data and ensuring it is properly loaded in the test environment
- Verifying and updating bi-directional traceability between the test basis and the test work products
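To make these artifacts concrete, here is a minimal sketch of a scripted test case and a test suite using Python's unittest. The ShoppingCart test object and all names are invented for illustration; they are not part of the course material.

# Minimal, illustrative test implementation sketch using Python's unittest.
import unittest

class ShoppingCart:
    # Trivial stand-in for a real test object.
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class TestShoppingCart(unittest.TestCase):
    # Each test method is one scripted test case.
    def setUp(self):
        # Precondition shared by the test cases in this suite.
        self.cart = ShoppingCart()

    def test_empty_cart_total_is_zero(self):
        self.assertEqual(self.cart.total(), 0)

    def test_total_sums_item_prices(self):
        self.cart.add("book", 10)
        self.cart.add("pen", 2)
        self.assertEqual(self.cart.total(), 12)

if __name__ == "__main__":
    # Test suite: the loader groups the test cases for execution.
    suite = unittest.TestLoader().loadTestsFromTestCase(TestShoppingCart)
    unittest.TextTestRunner(verbosity=2).run(suite)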
Test execution
- Executing Test Procedures either manually or by using test execution tools, according to the planned sequence
- Recording the IDs and versions of the test items or test objects, test tools and testware
- Logging the outcome of test execution (pass/fail)
- Comparing actual results with expected results
- Reporting discrepancies as incidents and analyzing them in order to establish their root cause
- Repeating test activities as a result of actions taken for each discrepancy, or for part of the planned testing (e.g.
confirmation and/or regression testing)
- Verifying and updating bi-directional traceability
- Checking whether all defect reports are closed, entering change requests or product backlog items for any
unresolved defects
- Creating a test summary report
- Finalizing and archiving the test data, the test environment, the test infrastructure and other testware for later reuse
- Handing over the testware to the maintenance organization
- Analyzing lessons learned from the complete test activities to determine changes needed for future iterations,
releases and projects
- Using the gathered information for the improvement of test process maturity
Test planning
- Test plans, which include information about:
- Test basis
- Traceability
- Entry/Exit criteria
Test analysis
- Test conditions – defined, prioritized and traceable to the test basis it covers
- Test charters – specific to exploratory testing
- Defects found in the test basis
Test design
- Test cases and set of test cases (high level)
- Design and identification of:
- Test data needed
- Test environment
- Infrastructure and tools
Test implementation
- Test procedures and their sequence
- Test suites
- A test suite schedule
Test execution
- Status of individual test cases or test procedures (e.g. passed, failed, skipped)
- Defect reports
- Documentation about which test item, test object, test tools and testware were involved in testing
Test completion
Improving the understandability of test progress reports and test summary reports
Relating the technical aspects of testing to stakeholders in terms that they can understand
– Participants
Very different people can be involved in software testing
Developers
Professional testers
Specialists
Users
Developers vs. Testers
– Independence
– Finding a Defect
Tester: Proud
Developer: Nervous
Client: Nervous
– Defect Reporting
At the end of discussions, confirm that you have both understood and been understood
Check of understanding
Agenda
Testing the product
Waterfall model
V-model
A process is needed which assures quality throughout the development life cycle
At every stage a check should be made that the work product meets its objectives for that stage
V-model
Iterative development is the process of establishing requirements, designing, building and testing a system, done as a series
of shorter development cycles
This process is repeated until a fully working system is produced
The requirements do not need to be fully defined before coding can start
The lack of formal documentation makes it difficult to test
Developer changes are often not formally recorded
The resulting system produced by an iteration may be tested at several test levels during each iteration
Regression testing needed
Disadvantages
- The producer might build a system inadequate for overall organizational needs
- Users can get too involved, while the program may not be built to a high standard
- The structure of the system can be damaged, since many changes could be made
- The producer might get too attached to it (might cause legal involvement)
- Not suitable for large applications
RAD is an iterative
development model
Advantages
Disadvantages
Dependency on strong cohesive teams and individual commitment to the project
Success depends on disciplined developers and their exceptional technical skills
Decision making relies on the feature functionality team
Communal decision making process with lesser degree of centralized PM and engineering authority
The beneficial elements of traditional software engineering practices are taken to "extreme" levels, on the theory that if some is
good, more is better
• Not a single concrete prescriptive process, but rather an adaptable process framework intended to be tailored by the
development organizations and software project teams, which select the elements of the process that are
appropriate for their needs
Verification checks that the work-product meets the requirements set out for it.
• Did we build the product right?
• Testing helps to ensure that the work-products will meet the user needs (Validation)
• Did we build the right product?
Validation ensures that the behavior of the work-product matches the customer needs as defined for
the project.
• Characteristics of good testing across the development life cycle
• Each work-product is tested (V-model)
• Early test design (V-model)
• Testers are involved in reviewing requirements before they are released (V-model)
Code must have been written before testing of the code can start
Generally code is written in parts or units
The units are usually constructed in isolation for integration at a later stage
Also known as unit or module testing
Objectives
Reducing risk
Verifying the behavior of the component
Building confidence in the component’s quality
Finding defects in the component
Preventing defects in higher test levels
Test basis:
Component specifications
Detailed design
Code
Data model
Typical tests objects:
Components, units or modules
Code and data structure
Classes
Database modules
Incorrect functionalities
Data flow problems
Incorrect code and logic
Defects found and fixed during unit testing are often not recorded
One approach to component testing is to prepare and automate test cases before coding (TDD = test-driven development)
This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and
executing the component tests, correcting any issues and iterating until they pass
Usually performed by the developer who wrote the code
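A minimal sketch of the TDD cycle described above, in Python. The leap_year example and its tests are assumptions chosen for illustration: the test class would be written first and fail, and the function is then grown just enough to make it pass.

import unittest

def leap_year(year: int) -> bool:
    # Implementation grown iteratively until the tests below pass.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # These tests are written before the implementation exists.
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(1996))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()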
The purpose is to expose defects in the interfaces and in the interactions between integrated components or systems
Integration test types
- Component integration test
Component integration test focuses on the interactions between software components and is done
after component (unit) testing.
System integration test focuses on the interactions between different systems or between hardware
and software, and may be done after system testing of each individual system.
Objectives
Reducing risk
Verifying the behavior of the interfaces
Building confidence in the quality of the interfaces
Finding defects
Preventing defects in higher test levels
Integration strategies
Big-bang strategy – All units are linked at once resulting in a complete system
Incremental strategy – Based on system architecture (top-down or bottom-up), functional tasks, transaction sequences
or other aspects as well
In order to ease fault isolation and detect defects early, integration should normally be incremental rather than "big bang"
Test basis:
Software and system design
Sequence diagram
Interface and communication protocol specifications
Architecture at component or system level
Workflows
Use cases
External interface definitions
Top-down integration
- The system is built in stages, starting with components which call other components
- Components which call others are usually placed above those that are called
- Stubs are used for components not yet integrated.
Bottom-up integration
- The opposite of top-down integration.
- Components are integrated in a bottom-up order.
- These are then tested and added to the modules above them to form larger sub-systems, which are then tested.
- Bottom-up integration requires the heavy use of drivers instead of stubs.
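A minimal sketch of the difference between a stub and a driver, with invented names; the PriceServiceStub stands in for a component that is not yet integrated.

class PriceServiceStub:
    # Stub: replaces a called component that does not exist yet,
    # returning canned answers so the caller can be tested (top-down).
    def price_of(self, article_id):
        return {"A1": 10.0, "A2": 20.0}.get(article_id, 0.0)

def order_total(article_ids, price_service):
    # Component under test: calls downward into the price service.
    return sum(price_service.price_of(a) for a in article_ids)

def driver():
    # Driver: minimal calling code that exercises a lower-level
    # component before its real callers exist (bottom-up).
    total = order_total(["A1", "A2"], PriceServiceStub())
    assert total == 30.0, f"unexpected total: {total}"
    print("component integration check passed")

if __name__ == "__main__":
    driver()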
• System test focuses on the behavior of the whole system / product in a live environment.
• Independent testers typically carry out the system testing
Functional requirement specifies a function that a system or system component must perform.
Non-functional requirement details how the application will perform in use.
Objectives
Reducing risk
Verifying the behavior of the system
Validating that the system is complete and will work as expected
Building confidence in the quality of the system as a whole
Finding defects in the system
Preventing defects in higher test levels or production
Test basis:
Test objects:
Applications
Hardware/software systems
Operating systems
System under test(SUT)
System configuration and configuration data
Incorrect calculations
Incorrect or unexpected system behavior
Incorrect control and/or data flows
Failure to carry out end-to-end functional tasks
Failure to work properly in the production environment
Failure of the system to work as described in the system and user manuals
User Acceptance Test is executed by the customer to verify that the system meets their business needs.
Operational Acceptance Test: involves checking that the processes and procedures are in place to allow the
system to be used and maintained
- Back-up facilities
- Procedures for disaster recovery
- Training for end users
- Maintenance procedures
- Data load and migration tasks
- Security procedures
Contract and Regulation Acceptance Test
- Contract acceptance test
- Regulation acceptance test
Alpha and Beta tests
- Alpha testing takes place at the developer’s site (but not by the developing team)
- Beta testing takes place at the customer’s site in real world working conditions (field test)
Objectives
Test basis:
Business processes
User or business requirements
Regulations, legal contracts and standards
Use cases
System requirements
System or user documentation
Installation procedures
Risk analysis reports
Test objects:
Acceptance testing is often the responsibility of the customers, business users, product owners or operator of a system
Acceptance testing is often the last test level, but not always, for example:
Acceptance testing of a COTS software product may occur when it is installed or integrated
Acceptance testing of a new functional enhancement may occur before system testing
In iterative development, project teams can have different forms of acceptance testing during and at the end of each
iteration
Agenda
At different test levels, different types of testing are required to meet the overall test objectives. Software test types
are introduced as a means of clearly defining the objective of a certain level for a program or project.
A test type is a group of test activities based on specific test objectives, aimed at specific characteristics
of a component or system. A test type may also take place on one or more test levels or test phases.
Functional testing: Testing conducted to evaluate the compliance of a component or system with
functional requirements.
Functional tests focusing on (based on ISO 9126):
- Suitability
- Interoperability
- Security
- Accuracy
- Compliance
Non-functional testing focuses on the quality characteristics or non-functional attributes of the test object.
- We are testing something that we need to measure on a scale of measurements
- It is the testing of “how well” the system works
Non-functional testing: Testing conducted to evaluate the compliance of a component or system with
non-functional requirements.
Non-functional testing considers the external behavior of the software and in most cases uses black-box test design
techniques to accomplish that
Non-functional testing should be performed at all test levels and done as early as possible
Non-functional testing can be assessed by Non-functional coverage
Structural testing or White-box testing based on an analysis of the internal structure of the
component or system.
At different test levels, different kinds of structure are used to derive test cases
- Lower test levels (component and component integration tests): system architecture
- Higher test levels (system and acceptance tests): business model or menu structure
The test design techniques used for structural testing are structure-based or white-box techniques
When a defect is detected and fixed then the changed software should be retested to confirm that the problem has
been successfully removed
- Re-testing
The unchanged software areas should also be retested to ensure that no additional defects have been induced as a
result of changes to the software
- Regression testing
Regression test checks that there are no additional problems in previously tested software.
• Regression testing should also be carried out if the environment has changed
• Regression testing involves the creation of a set of tests which serve to demonstrate that the system works as expected.
[Diagram: an operational system in which changed functions trigger regression testing]
It is not necessary, for all software, to have every test type across every test level
It is important to run the applicable test types at each level, at the earliest level where the test type is applicable
Agenda
For many projects the system is eventually released into the live environment
During the deployment it may become necessary to change the system
Changes may be due to:
- Additional features being required
- The system being migrated to a new operating platform
- The system being retired
- New faults being found
- Configuration data changed
Testing which takes place on a system which is in operation in the live environment is called Maintenance Testing.
[Diagram: an operational system; a DB/HW change or an added or changed feature/function triggers a maintenance test]
Modification:
Planned enhancements
Corrective or emergency changes
Changes of operational environment
Upgrades of COTS
Patches for defects and vulnerabilities
Migration:
One platform to another
Data migration/ conversions
Retirement – application reaches end of life
For IoT systems, maintenance testing can be triggered by the introduction of completely new or modified things, such as software or hardware
devices, into the overall system. In this case there is a particular emphasis on integration testing at different levels.
Impact Analysis evaluates the changes that were made for a maintenance release to identify:
Consequences
Expected and possible side effects of a change
Areas in the system that will be affected by the change
Impact on existing tests
Impact Analysis can be done before a change is made to help decide if the change should be made
- This is a very important parameter as the system is subjected to changes throughout the software life cycle.
Static Testing = Testing of a software development artifact without execution of these artifacts, e.g., reviews or static
analysis.
Benefits of reviews:
Both static analysis and dynamic testing have the same objective – identifying defects.
They are complementary: static techniques find defects (the causes of failures) directly, rather than the failures themselves.
What to review?
- Requirements specifications
- Design
- Web pages
- User guides
- Code
- Test plans
- Test scripts
- Test cases
- Test specifications
- Any software work product
Objectives:
Finding defects
Gaining understanding
Generating discussions
Educating participants such as testers and new team members
Making decisions by consensus
Review process activities:
Planning -> Initiate review -> Individual review -> Issue communication and analysis -> Fixing and reporting
- Planning includes defining the entry and exit criteria (for more formal review types)
- Fixing and reporting includes the decision on the work product (e.g. reject)
Review roles:
- Author
- Reviewer/Checker/Inspector
- Scribe/Recorder
Review types, by increasing level of formality: Informal Review -> Walkthrough -> Technical review -> Inspection

Informal Review
Description:
• No formal process
• May take the form of pair programming or technical lead review
• Varies in usefulness depending on the reviewers
• The results may be documented, but it is not required
• Very commonly used in Agile
Roles:
• Author
• Reviewer

Walkthrough
Roles:
• Author
• Multiple reviewers
• Scribe (mandatory)
Main purpose:
• Find defects
• Improve the software product
• Consider alternative implementations
• Evaluate conformance to standards
• Gaining understanding, learning, achieving consensus

Technical review
Roles:
• Author
• Trained moderator (not the author)
• Scribe
• Multiple reviewers (technical peers or technical experts)

Inspection
Roles:
• Author
• Trained moderator (not the author)
• Scribe
• Multiple reviewers
• Reader (optional)
Ad hoc: A review technique carried out by independent reviewers informally, without a structured process.
Scenarios and dry runs: A review technique where the review is guided by determining the ability of the work
product to address specific scenarios.
Role-based: A review technique where reviewers evaluate a work product from the perspective of different
stakeholder roles.
Perspective-based: A review technique whereby reviewers evaluate the work product from different viewpoints.
Review techniques are applied that are suitable to the type and level of software work products and reviewers.
Management supports a good review process (e.g. by incorporating adequate time for review activities in project
schedules).
Participants avoid body language and behavior that might indicate boredom, exasperation or hostility
Training is given in review techniques, especially the more formal techniques, such as Inspection
Documents included:
• Test Plan
• Test Design Specification
• Test Case Specification
• Test Procedure Specification
• Test Item Transmittal Report
• Test Log
• Test Incident Report
• Test Summary Report
Related Standards
Test Conditions and Designing Test Cases
Test documentation levels

Organization Level:
• Test Policy – the testing philosophy of the company (why we test)
• Test Strategy – how we test

Project Level:
• Test Plan – based on the Test Strategy. Includes: test basis, requirements traceability, test items,
features to be tested, deliverables, risks and mitigation, testing strategy, training plan, way of reporting,
responsibilities, entry and exit criteria, schedule.

Test Phase Level:
• Test Design Specification – describes in detail, for each feature to be tested, the test conditions,
the test techniques used, the necessary test steps and the acceptance criteria.
• Test Case Specification – the test cases derived from the Test Design Specification.
• Test Procedure Specification – the sequence of steps for executing the test cases.
• Test Item Transmittal Report – describes the items being delivered for testing, where to find them
and what is new about them. Gives approval for their release for testing, provides testers the warranty
that the items are fit to be tested, and gives a clear mandate to start level testing.
• Test Log – a chronological record of relevant details about the execution of tests, including the test
execution objective. A log may contain time samples of multiple signals acquired with measurement and
data acquisition systems (e.g. Vector tools: CANalyzer, CANape, CANoe).
• Test Incident Report – describes incidents occurring at a certain release, with change request and
issue details and a summary description of the actual incident.
• Test Summary Report – summarizes the test results of all test centers in order to give a rating for the
release, with test execution statistics, incident statistics, test coverage statistics and a change
recommendation. Generated at the end of the test execution phase; presents an accurate and fair
assessment of the software based on the defined Evaluation Mission.
• Test Evaluation Report – the TM, in cooperation with the TE, collects, organizes and presents the
Test Results and key measures to enable objective quality evaluation and assessment.

TE responsibilities: setting up the test environment, running test sets, evaluating results and reviewing
the Test Results.
Test Case Specification – Document specifying a set of test cases (objective, inputs, test actions, expected
results and execution pre- and postconditions).
Test Procedure Specification –Document specifying a sequence of actions for the execution of a test.
Example contents of these documents (IEEE 829):
• Test Plan – test items (version level, reference to other relevant documents), features to be tested,
test approach, testing tasks, test deliverables, suspension criteria and resumption requirements
• Test Design Specification – approach refinements, features to be tested
• Test Case Specification – input specifications, ...
• Test Procedure Specification – purpose, steps for executing a set of test cases
• Test Incident Report – summary, incident description (inputs, expected results, actual results,
anomalies, date and time, ...)
• Test Summary Report – summary, variances, ...
Test Coverage
• Degree, expressed as a percentage, to which a specified coverage item has been exercised by a test
suite (a collection of test cases that are intended to be used to test a software).
• A quantitative measure of how effective the testing is and of what has been achieved (percentage).
• Part of the completion criteria defined in the Test Plan (first step of the Fundamental Test Process).
• Used to determine how much more testing needs to be done and when to stop testing (final step of the
Fundamental Test Process).
• Test coverage can be applied to any systematic technique.
• Coverage measures, e.g.: executed test cases vs. the full number of test cases; exercised requirements
vs. the full number of requirements.
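As a small illustration of the coverage measures above (the numbers are invented):

def coverage(exercised: int, total: int) -> float:
    # Degree, as a percentage, to which coverage items were exercised.
    return 100.0 * exercised / total

# E.g. 180 of 200 test cases executed, 45 of 50 requirements exercised.
print(f"test case coverage:   {coverage(180, 200):.1f}%")   # 90.0%
print(f"requirement coverage: {coverage(45, 50):.1f}%")     # 90.0%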
Experience-based techniques
• Deriving Test Cases from the tester's experience (or that of developers, users or other stakeholders) of similar
systems, and from general experience of testing.
• Knowledge about likely defects and their distribution
Introduction
Mistakes made by programmers cause defects that tend to cluster around boundaries
The maximum and minimum values of a partition are its boundary values
• Valid boundary value
• Invalid boundary value
Example: for a valid input range of -100 to 100, the invalid boundary values are -101 and 101
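A minimal boundary value test sketch, assuming the valid range -100..100 implied above; the accept function is hypothetical.

import unittest

def accept(value: int) -> bool:
    # Hypothetical function under test: accepts values in [-100, 100].
    return -100 <= value <= 100

class TestBoundaries(unittest.TestCase):
    def test_valid_boundary_values(self):
        for v in (-100, 100):
            self.assertTrue(accept(v), v)

    def test_invalid_boundary_values(self):
        for v in (-101, 101):
            self.assertFalse(accept(v), v)

if __name__ == "__main__":
    unittest.main()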
• Good to use for system requirements that contain logical conditions (business rules)
• Decision table contains all the
• Input conditions
• Actions that may arise from them
[Example decision table omitted: columns of input conditions (YES/NO) and the resulting actions.]

State transition example – a watch with a Mode button and a Set button:

Current state     Mode button                    Set button
Mode = Time       Change display to Altimeter    Change display to Set Hrs
Mode = Altimeter  Change display to Time         –
Mode = Set Hrs    Change display to Set Mins     Add 1 to Hrs
Mode = Set Mins   Change display to Time         Add 1 to Mins
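A minimal sketch of how the watch's transition table could be executed as a model; the table mirrors the reconstruction above and is an assumption. A 0-switch test set covers every valid transition at least once.

TRANSITIONS = {
    # (current state, event): (next state, action)
    ("Time", "Mode"):      ("Altimeter", "Change display to Altimeter"),
    ("Time", "Set"):       ("Set Hrs",   "Change display to Set Hrs"),
    ("Altimeter", "Mode"): ("Time",      "Change display to Time"),
    ("Set Hrs", "Mode"):   ("Set Mins",  "Change display to Set Mins"),
    ("Set Hrs", "Set"):    ("Set Hrs",   "Add 1 to Hrs"),
    ("Set Mins", "Mode"):  ("Time",      "Change display to Time"),
    ("Set Mins", "Set"):   ("Set Mins",  "Add 1 to Mins"),
}

def run(events, state="Time"):
    # Drive the model through a sequence of button presses.
    for event in events:
        state, action = TRANSITIONS[(state, event)]
    return state

# A 0-switch test set: together these two runs exercise every valid transition once.
assert run(["Mode", "Mode"]) == "Time"
assert run(["Set", "Set", "Mode", "Set", "Mode"]) == "Time"
print("all modelled transitions behaved as expected")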
Specification-based or Black-box techniques
Use Cases are descriptions of interactions between users (actors) and the system in a high level view of the
requirements
The main advantage is that we exercise real user processes or business scenarios
[Use case diagram: a user interacting with the system through a basic flow and alternate flows 1 and 2]
Code Design (inputs: A, B, C; output: Max)

A, B, C, Max: Integer
Begin
  Read (A);
  Read (B);
  Read (C);
  If (A > B)
  Then
    If (A > C)
    Then
      Max = A;
    Else
      Max = C;
    Endif
  Else
    If (B > C)
    Then
      Max = B;
    Else
      Max = C;
    Endif
  Endif
End
Structure-based or White-box techniques
[Control flow graph of the Max program: after Read(A), Read(B), Read(C), decision A>B branches (Y) to
decision A>C and (N) to decision B>C; the outcomes lead to Max = A, Max = C, Max = B, Max = C, then End]
Structure-based or White-box techniques
[Nesting structure of the Max program: If A > B (Y/N); on Y, If A > C; on N, If B > C]
Statement coverage = (number of statements exercised / total number of statements) x 100%
[Exercise: how many test cases are needed for 100% statement coverage? Answer options: 4, 2, 3]
• Aim: exercise program decisions, both the true and false exits of the condition (all possible decision outcomes)
Decision coverage = (number of decision outcomes exercised / total number of decision outcomes) x 100%
• Even a complete decision test does not guarantee revealing all mistakes in the code
Condition coverage (or Predicate coverage) (All conditions and All outcomes)
Multiple condition Coverage (Has every possible combination of Boolean sub-expressions executed ?)
Example : (a==b) && (c > d)
White-Box Techniques – (Modified) Condition/Decision Coverage (Example)

Decision: ((A or B) and C)

In order to ensure Condition coverage criteria for this example, A, B and C should each be evaluated at
least one time "true" and one time "false" during tests, which would be the case with the 2 following tests:
1. A = true / B = true / C = true
2. A = false / B = false / C = false

In order to ensure Decision coverage criteria, the decision ((A or B) and C) should also be evaluated at
least one time to "true" and one time to "false". Indeed, in our previous test cases:
1. A = true / B = true / C = true ---> the decision is evaluated as "true"
2. A = false / B = false / C = false ---> the decision is evaluated as "false"

However, these two tests do not ensure Modified condition/decision coverage, which implies that each
Boolean condition should be evaluated one time to "true" and one time to "false" while affecting the
decision's outcome (with 3 Boolean conditions we need 4 test cases):
1. A = false / B = false / C = true ---> decision is evaluated to "false"
2. A = false / B = true / C = true ---> decision is evaluated to "true"
3. A = false / B = true / C = false ---> decision is evaluated to "false"
4. A = true / B = false / C = true ---> decision is evaluated to "true"
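The four MC/DC test cases can be executed directly; the sketch below is illustrative, not part of the original slides.

import unittest

def decision(a: bool, b: bool, c: bool) -> bool:
    return (a or b) and c

class TestMCDC(unittest.TestCase):
    # Each case toggles exactly one condition relative to another case
    # and flips the decision outcome.
    CASES = [
        (False, False, True,  False),
        (False, True,  True,  True),
        (False, True,  False, False),
        (True,  False, True,  True),
    ]

    def test_mcdc_set(self):
        for a, b, c, expected in self.CASES:
            self.assertEqual(decision(a, b, c), expected, (a, b, c))

if __name__ == "__main__":
    unittest.main()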
Experience-based design: useful when testers do not have enough time to execute a full structured test set
Which test techniques to use depends on a number of factors, including internal and external factors
Testers generally use a combination of test techniques
• (Process , Rule , Data-driven techniques to ensure adequate coverage of the object under test)
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Defect Management
7. Check of understanding
Test Organization – structure and roles

Independent testers:
- Can act as 'the customer's voice'
- Bring more objectivity in evaluating the product quality issues
- Risk: developers can lose the 'quality ownership' attribute
Testing tasks may be done by people in specific testing role, or can be performed by someone in another role (e.g. Project
manager, quality assurance manager, development manager, etc)
Responsibilities of Test Leaders include:
- Plan and estimate test effort, time and cost; collaborate with the project manager
- Write and review the test strategy and test policy
- Monitor and control the execution of tests; check the status of exit criteria
- Schedule tests
Responsibilities of Testers include:
- Design, implement, execute and log the tests, evaluate the results and document problems found
- Use required tools
- Automate tests
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Defect Management
7. Check of understanding
Test planning is the first stage in the test process; it defines the scope of testing and the test
completion criteria which determine when to stop testing. It covers development,
implementation projects and maintenance activities, and is influenced by project
organization (a risk analysis is performed).
Test planning is a continual activity that spans the life of the test project; it takes place in all
life-cycle stages.
Liaison with the project manager and making sure that the testing activities have been included within the software life-
cycle activities such as:
Decide what needs to be tested, what roles are involved and who will perform the test activities, planning when and
how the test activities should be done, deciding how the test results will be evaluated, and defining when to stop testing
(exit criteria)
Finding and assigning resources for the different activities that have been defined
Deciding what the documentation for the test project will be, e.g. which plans, how the test cases will be documented,
level of detail, etc.
Defining the management information, including the metrics required and putting in place the processes to monitor and
control test preparation and execution, defect resolution and risk issues
Ensuring that the test documentation generates repeatable test assets, e.g. test cases
Entry criteria are used to determine when a given test activity can start. This could include the beginning of a level of testing,
or the point when test design and/or test execution is ready to start.
Exit criteria are used to determine when a given test activity has been completed or when it should stop. Exit criteria can
be defined for all of the test activities, such as planning, specification and execution as a whole, or to a specific test level
for test specification as well as execution.
Exit criteria should have been agreed as early as possible in the life cycle; however, they can be and often are subject
to controlled change as the detail of the project becomes better understood and therefore the ability to meet the criteria
is better understood by those responsible for delivery.
All high-risk areas have been fully tested, with only minor residual risks left
outstanding
The schedule has been achieved, e.g the release date has been reached and the
product has to go live
Test Strategy: A high-level description of the test levels to be performed and the testing within those levels
for an organization or programme (one or more projects).

Test Approach: The implementation of the test strategy for a specific project. It typically includes the
decisions made based on the (test) project's goal and the risk assessment carried out, starting points
regarding the test process, the test design techniques to be applied, exit criteria and test types to be
performed.
Analytical approaches – such as risk-based testing where testing is directed to areas of greatest risk.
Model-based approaches – such as stochastic testing using statistical information about failure rates (such
as reliability growth models) or usage (such as operational profiles).
Methodical approaches – such as failure-based (including error guessing and fault attacks), checklist-based
and quality-characteristic-based.
Standard-compliant approaches – as specified by industry-specific standards such as the Railway Signalling
standards (which define the levels of testing required) or MISRA (which defines how to design, build and test
reliable software for the motor industry).
Process-compliant approaches – these adhere to the processes developed for use with the various agile
methodologies or traditional waterfall approaches.
Dynamic and heuristic approaches – such as exploratory testing where testing is more reactive to events
than pre-planned, and where execution and evaluation are concurrent tasks.
Consultative approaches – such as those where test coverage is driven primarily by the advice and guidance
of technology and/or business domain experts outside or within the test team.
Regression-averse approaches – such as those that include reuse of existing test material, extensive
automation of functional regression tests, and standard test suites.
•Risks: Risk management is very important during testing, so consider the risks and the level of risk. For a well-established
application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense. For a new
application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy.
•Skills: Consider which skills your testers possess and lack because strategies must not only be chosen, they must also be
executed. A standard compliant strategy is a smart choice when you lack the time and skills in your team to create your own
approach.
•Objectives: Testing must satisfy the needs and requirements of stakeholders to be successful. If the objective is to find as
many defects as possible with a minimal amount of up-front time and effort invested – for example, at a typical independent
test lab – then a dynamic strategy makes sense.
•Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this case, you may need to plan a
methodical test strategy that satisfies these regulators that you have met all their requirements.
•Product: Some products, like weapons systems and contract-development software, tend to have well-specified
requirements. This leads to synergy with a requirements-based analytical strategy.
•Business: Business considerations and business continuity are often important. If you can use a legacy system as a
model for a new system, you can use a model-based strategy.
Test Execution Schedule - A schedule for the execution of test suites within a test cycle.
Ideally based on the prioritization -> Highest priority -> first to be run , but keep in mind all factors!
In case various sequences are possible, with different levels of efficiency -> trade-offs between efficiency and priority!
The following table shows 6 test procedures (P to U) that must now be entered into a test execution schedule.

Test Procedure ID | Business Priority (1 High, 2 Medium, 3 Low) | Dependencies on test procedures     | Other dependencies
P                 | 1                                           | Cannot start unless R has completed | None
Q                 | 1                                           | None                                | Regression testing only
R                 | 2                                           | None                                | None
S                 | 2                                           | None                                | None
T                 | 3                                           | None                                | Delivery of the code for this part of the system is running very late
U                 | 3                                           | None                                | None
Business severity is regarded as the most important element in determining the sequence of the test procedures, but other
dependencies must also be taken into consideration.
Regression testing can only be run once all other tests have completed.
Which of the following represents the MOST effective sequence for the test execution schedule (where the first entry in the
sequence is the first procedure to be run, the second entry is the second to be run and so on)?
A. Q, P, S, R, U, T.
B. R, S, U, P, Q, T.
C. R, P, S, U, T, Q.
D. P, Q, R, S, U, T
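As an illustration, the hard constraints from the table can be encoded and each answer option checked mechanically. This is a sketch only: business priority and T's late code delivery are soft trade-offs that it deliberately does not encode.

def satisfies_hard_constraints(sequence: str) -> bool:
    # P cannot start unless R has completed.
    if sequence.index("P") < sequence.index("R"):
        return False
    # Q is regression testing only, so it must run after all other tests.
    if sequence.index("Q") != len(sequence) - 1:
        return False
    return True

for option, sequence in [("A", "QPSRUT"), ("B", "RSUPQT"),
                         ("C", "RPSUTQ"), ("D", "PQRSUT")]:
    print(option, sequence, "->", satisfies_hard_constraints(sequence))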
Test Execution Schedule – Exercise 2
How would you structure the test execution schedule according to the
requirement dependencies?
This approach relies upon data collected from previous or similar projects. This kind of data might include:
The number of test conditions
The number of test cases written
The number of test cases executed
The time taken to develop test cases
The time taken to run test cases
The number of defects found
The number of environment outages and how long on average each one lasted
With this approach and data it is possible to estimate quite accurately what the cost and time required for a similar project
would be.
It is important that the actual costs and time for testing are accurately recorded. These can then be used to revalidate and
possibly update the metrics for use on the next similar project.
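A toy illustration of a metrics-based estimate; every number below is invented, not taken from a real project.

# Assumed historical averages: 0.5 h to write and 0.25 h to run a test case.
hours_to_write_case = 0.5
hours_to_run_case = 0.25
test_cases = 400
execution_cycles = 2  # first run + one confirmation/regression cycle

effort_hours = test_cases * (hours_to_write_case
                             + execution_cycles * hours_to_run_case)
print(f"estimated test effort: {effort_hours:.0f} hours")  # 400 hours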
This alternative approach to metrics is to use the experience of owners of the relevant tasks or experts to derive an estimate
(this is also known as the Wide Band Delphi approach). In this context ‘experts’ could be:
Business experts
Test process consultants
Developers
Technical architects
Analysts and designers
Anyone with knowledge of the application to be tested or the tasks involved in the process.
Many things affect the level of effort required to fulfill the test requirements of a project. These can be split into three main
categories, as shown below.
Product characteristics:
• Timescales
• Amount of budget available
• Skills of those involved in the testing and development activity (the lower the skill level in development, the more defects
could be introduced, and the lower the skill level in testing, the more detailed the test documentation needs to be)
• Which tools are being used across the life cycle (i.e. the amount of automated testing will affect the effort required)
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Defect Management
7. Check of understanding
Test monitoring can serve various purposes during the project, including the following:
Give the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and
improve the testing and the project.
Provide the project team with visibility about the test results.
Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test
work is done.
Information can be gathered manually for small projects, or automatically when working with large teams, distributed
projects and long-term test efforts.
The following metrics can be used for test monitoring:
- Test coverage metrics (requirements, user stories, acceptance criteria, risks or code)
- Percentage of planned work done in test environment setup and test case preparation
- Test execution metrics (number of test cases passed, failed, blocked, on hold)
- Defect metrics
Metrics should be collected during and at the end of a test level in order to assess:
Test reporting is concerned with summarizing information about the testing endeavor, including:
What happened during a period of testing, such as dates when exit criteria were met
Analyzed information and metrics to support recommendations and decisions about future actions, such as an
assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of
confidence in tested software. The outline of a test summary report is given in ‘Standard for Software Test
Documentation’ (IEEE 829)
Status of testing and product quality with respect to exit criteria / definition of done
Metrics of defects, test cases, test coverage, activity progress and resource consumption
Residual risks
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered
and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
Re-prioritize tests when an identified risk occurs (e.g. software delivered late).
Set an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them
into a build.
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Defect Management
7. Check of understanding
Configuration management establishes and maintains the integrity of the products (components, data and documentation)
of the software/system through the project and product life cycle.
Configuration management procedures have to be taken into account during the test planning process.
- Configuration identification: selecting the configuration items for a system and recording their functional
and physical characteristics in technical documentation
- Configuration control: evaluation, co-ordination, approval or disapproval, and implementation of changes
to configuration items after formal establishment of their configuration identification
- Status accounting: recording and reporting of information needed to manage a configuration effectively,
including: a listing of the approved configuration identification, the status of proposed changes to the
configuration, and the implementation status of the approved changes
- Configuration auditing: the function to check that the software product matches the configuration items
identified previously
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Defect Management
7. Check of understanding
In software testing Risks are the possible problems that might endanger the objectives of the project stakeholders.
It is the possibility of a negative or undesirable outcome. A risk is something that has not happened yet and it may never
happen; it is a potential problem.
In the future, a risk has some probability between 0% and 100%; it is a possibility, not a certainty.
The level of risk is determined by the likelihood of the risk becoming an outcome and the impact of its possible negative
consequences.
Product risk (factors relating to what is produced by the work, i.e. the thing we are testing)
Project risk (factors relating to the way the work is carried out, i.e. the test project )
Project issues Supplier issues Organizational factors Political issues Technical issues
Delays Failure of a third party Skill, training and staff Improper attitude Problems in defining
to deliver on time or at shortages toward or the right requirements
all expectations of testing
Inaccurate Contractual issues Personnel issues Follow up not done on Test environment not
estimations information found in ready in time
reviews
Late changes Conflicting business Communication Low quality of the
priorities issues design, code,
configuration data,
test data and tests
Weakness in the
process
Potential failure areas (adverse future events or hazards) in software are known as product risks, as they are a risk to the
quality of the product.
Risks are used to decide where to start testing in the software development life cycle and how much testing is needed.
Risk-based approach to testing provides proactive opportunities to reduce the levels of product risk starting in the initial
stages of a project. It involves the identification of product risks and how they are used to guide the test planning,
specification and execution.
Will determine the particular test levels and types of testing to be performed
Will determine any non-test activities that could be employed to reduce risk
To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach
To assess continuously what can go wrong (risks). Regular reviews of the existing and looking for any new product risks
should occur periodically throughout the life cycle
To determine what risks are important to deal with (probability x impact). As the project progresses, owing to the
mitigation activities, risks may reduce in importance or disappear altogether
Testing is a risk control activity that provides feedback about the residual risk in the product by measuring the effectiveness
of critical defect removal and by reviewing the effectiveness of contingency plans.
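A minimal sketch of risk-based prioritization using level of risk = probability x impact; the risk items and ratings below are invented for illustration.

risks = [
    # (area, probability 1-5, impact 1-5)
    ("payment processing", 4, 5),
    ("report layout",      3, 2),
    ("user preferences",   2, 1),
]

# Test the areas with the highest level of risk first.
for area, probability, impact in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"risk level {probability * impact:>2}: test '{area}' first")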
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Defect Management
7. Check of understanding
An incident is any unplanned event occurring that requires further investigation. In testing this translates into anything
where the actual result is different to the expected result.
An incident when investigated may be a defect, however, it may also be a change to a specification or an issue with the test
being run. It is important that a process exists to track all incidents through to closure.
The process of incident management ensures that incidents are tracked from recognition to correction, and finally through
retest and closure.
Incidents can be raised at any time throughout the software development life cycle, from reviews of the test basis
(requirements, specifications, etc.) to test specification and test execution.
To provide developers and other parties with feedback on the problem to enable
identification, isolation and correction as necessary
To provide test leaders with a means of tracking the quality of the system under test
and the progress of the testing
According to the structure standardized by IEEE 829-1998, an incident report should include:
Incident report id
Summary
Incident description with input
Expected results, anomaly
Date and time
Procedure
Environment
Attempts to repeat
Tester
Observer
Impact.
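For illustration only, the fields above could be captured in a simple record; the structure below is a sketch with invented example values, not a standardized schema.

from dataclasses import dataclass

@dataclass
class IncidentReport:
    report_id: str
    summary: str
    description: str        # incident description with input
    expected_result: str
    actual_result: str      # the anomaly observed
    date_time: str
    procedure: str
    environment: str
    attempts_to_repeat: int
    tester: str
    observer: str = ""
    impact: str = "unknown"

report = IncidentReport(
    report_id="IR-042",
    summary="Total price wrong for empty cart",
    description="Opened a new cart and requested the total",
    expected_result="0",
    actual_result="None",
    date_time="2017-05-10 14:32",
    procedure="TP-7, step 3",
    environment="test lab, build 1.4.2",
    attempts_to_repeat=3,
    tester="A. Tester",
    impact="blocks checkout regression suite",
)
print(report.report_id, "-", report.summary)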
1. Test Organization
2. Test Planning and Estimation
3. Test Progress Monitoring and Control
4. Configuration Management
5. Risk and Testing
6. Defect Management
7. Check of understanding
Test tool - A software product that supports one or more test activities, such as planning and control, specification,
building initial files and data, test execution and test analysis.
The main benefit of using test tools is similar to the main benefit of automating any process. That is, the amount of time and
effort spent performing routine, mundane, repetitive tasks is greatly reduced.
This time saved can be used to reduce the costs of testing or it can be used to allow testers to spend more time on the more
intellectual tasks
The various types of test tools according to the test process activities are:
- Tool support for test execution and logging (e.g. test execution tools)
- Tool support for performance measurement and monitoring (e.g. dynamic analysis tools)
- Tool support for specialized testing needs (e.g. data quality assessment)
[Diagram: a Test Management Tool integrated with a Requirement Management Tool, a Continuous
Integration Tool, a Configuration Management Tool and a Defect Management / ALM Tool]
To link the test object with version information in the configuration management tool
!When using an integrated tool, such as an ALM tool, keep in mind that the test management tool is only a
module and it acts as an interface with other tools and modules!
Data-driven testing - A scripting technique that stores test input and expected results in a table or spreadsheet, so
that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application
of test execution tools such as capture/playback tools.
Keyword-driven testing - A scripting technique that uses data files to contain not only test data and expected
results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts
In both approaches the expected results need to be compared with actual results either dynamically or stored for later
comparison
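A minimal data-driven sketch: one control script executes the same test logic for every row of a table of inputs and expected results. The validate function and the data are invented; here the table is kept inline as CSV for self-containment.

import csv
import io

TABLE = """value,expected
-101,rejected
-100,accepted
100,accepted
101,rejected
"""

def validate(value: int) -> str:
    # Hypothetical function under test.
    return "accepted" if -100 <= value <= 100 else "rejected"

# Control script: one loop drives all the tests in the table.
for row in csv.DictReader(io.StringIO(TABLE)):
    actual = validate(int(row["value"]))
    status = "PASS" if actual == row["expected"] else "FAIL"
    print(f"{status}: validate({row['value']}) -> {actual}")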
Model-based Testing (MBT) – A scripting technique that enables a functional specification to be captured in the form
of a model, such as an activity diagram.
The MBT tool interprets the model in order to create test case specifications.
There are many benefits that can be gained by using tools to support testing, but there are also risks:
People often make mistakes by underestimating the time, cost and effort for the initial introduction of a tool
Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the
need for changes in the testing process and continuous improvement of the way the tool is used)
Underestimating the effort required to maintain the test assets generated by the tool.
Lack of good understanding and experience with the issues of test automation
The vendor may provide poor response for support, upgrades and defect fixes
The following factors are important during tool selection:
- Assessment of the organization's maturity (e.g. readiness for change)
- Identification of the areas within the organization where tool support will help to improve testing processes
- Proof-of-concept, to see whether the product works as desired and meets the requirements and objectives
defined for it
- Evaluation of the vendor (training, support and other commercial aspects) or of the open-source network
of support
- Understanding the technologies used by the test object, in order to select a compatible tool
- The build and continuous integration tools already in use in the organization, in order to assure tool
compatibility and integration
Objectives of a pilot project for the tool:
To see how the tool would fit with existing processes or documentation, how those would need to change to work well with
the tool and how to use the tool to streamline existing processes;
To decide on standard ways of using the tool that will work for all potential users (e.g. naming conventions, creation of
libraries, defining modularity, where different elements will be stored, how they and the tool itself will be maintained);
To evaluate the pilot project against its objectives (have the benefits been achieved at reasonable cost?).
Understanding the metrics that you wish the tool to collect and report, and configuration of the tool in order to capture
these metrics
Success factors for the deployment of the tool include:
- Rolling out the tool to the rest of the organization incrementally
- Adapting and improving processes to fit in with the use of the tool
- Implementing a way to gather usage information from the actual use of the tool