Software failure can cause loss of time, money, company reputation, and can even cause injury and death.
Error/mistake -> defect/fault/bug -> failure
Not all defects turn into failures.
Defects can occur because human beings are fallible, and because of time pressure, complex infrastructure, complex code, changing technologies and many system interactions.
Per the syllabus, efficiency is also a non-functional characteristic.
The standard for software product quality is ISO 9126
Learning lessons from previous projects and incorporating them is an
aspect of quality assurance.
Development standards, training and defect analysis are all quality
assurance activities, alongside testing.
During operational testing, the main objective may be to assess system
characteristics such as reliability and availability.
Dynamic testing can show failures that are caused by defects.
Debugging is the development activity that finds, analyzes and removes the
cause of a defect.
Testing can have the following objectives:
Finding defects
Gaining confidence about the level of quality
Providing information regarding decision making
Preventing defects.
The thought process and activities involved in designing tests early in the
life cycle (verifying the test basis via test design) can help to prevent
defects from being introduced into the code.
Instead of exhaustive testing, risk analysis and priorities should be
used to focus testing efforts.
Early testing should have defined objectives.
Finding and fixing defects does not help if the system built is unusable
and does not fulfill the users' needs and expectations.
Fundamental test processes may overlap or take place
concurrently
Test planning is the activity of defining the objectives of testing and the
specification of test activities in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against
the test plan, and reporting status, including deviations from the plan.
Test analysis and design (read it from the syllabus/new book):
book + reviewing the software integrity level (risk level) + risk
analysis reports
Evaluate the testability of the test basis and test objects.
Identify and prioritize test conditions based on analysis of the test
items and the specification.
Design and prioritize high-level test cases.
Identify necessary test data to support the test conditions
and test cases.
Design the test environment setup and identify any required
infrastructure and/or tools.
Create bi-directional traceability between test cases and the
test basis.
Test implementation and execution:
Finalize, implement and prioritize test cases (including the
identification of test data).
Verify and update bi-directional traceability between the test
basis and test cases.
+ Book
Evaluating exit criteria and reporting should be done for each
test level.
Independent testing may be carried out at any level of testing.
Independence means avoiding author bias; it is not a replacement for
familiarity.
Looking for failures in a system requires curiosity, professional
pessimism, a critical eye, attention to detail, good communication
with development peers, and experience on which to base error
guessing.
Defect information can help developers improve their skills.
Testers and developers should collaborate rather than battle; remind
everyone of the common goal of better quality systems.
Code of ethics - page 20 (NEW):
PUBLIC, CLIENT AND EMPLOYER, PRODUCT, JUDGEMENT,
MANAGEMENT, PROFESSION, COLLEAGUES, SELF.
The software development model must be adapted to the context of
project and product characteristics.
Functional and structural testing can be carried out at any test level.
Indicators for maintenance testing: modification, migration
and retirement
COTS= commercial off the shelf
The V-model can have more or fewer than 4 levels.
CMMI = CAPABILITY MATURITY MODEL INTEGRATION
Software life cycle processes (IEEE/IEC 12207)
Regression testing is increasingly important in an iterative-incremental
development model
Characteristics of good testing:
For every development activity there is a corresponding testing
activity (V-model).
Each test level has test objectives specific to that level
The analysis and design of tests for a given test level should begin
during the corresponding development activity
Testers should be involved in reviewing documents as soon as drafts
are available in the development life cycle.
Test levels can be combined and reorganized
Read the test levels from the syllabus.
Read the test basis and test objectives for the different test levels from
the syllabus.
In component testing, stubs, drivers and simulators are used.
In component testing, test cases are derived from work products such
as a specification of the component, the software design or the data
model.
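Example (not from the syllabus): a minimal Python sketch of a stub in
component testing. OrderProcessor and PaymentGatewayStub are hypothetical
names; the test itself acts as the driver.

class PaymentGatewayStub:
    """Stands in for the real payment gateway so the component under
    test can be exercised in isolation."""
    def charge(self, amount):
        # Return a canned response instead of calling a real service.
        return {"status": "approved", "amount": amount}

class OrderProcessor:
    """The component under test; it receives its dependency via the
    constructor, which is what makes the stub easy to substitute."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

def test_checkout_with_stub():
    # The test case is derived from the component's specification:
    # a successful charge completes the checkout.
    processor = OrderProcessor(PaymentGatewayStub())
    assert processor.checkout(100) is True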
In system integration testing, the developing organization may
control only one side of the interface; this might be considered a
risk. Business processes implemented as workflows may involve a series
of systems. Cross-platform issues may be significant.
Both functional and non functional characteristics may be
included in integration testing.
Both functional and structural approaches can be used in
integration testing.
Testers should understand the architecture and influence integration
planning. If integration tests are planned before components or
systems are built, those components can be built in the order required
for the most efficient testing.
In system testing, the testing scope will have been determined during
the planning phase, and the test environment should mimic the production
environment to minimize environment-related failures.
System testing should investigate functional and non-functional as well
as data quality characteristics.
Both black box and white box testing are used during system
testing
Acceptance testing test basis and test objects:
Finding defects is not the main focus of acceptance testing.
Acceptance testing is not necessarily the final level of testing. A large-
scale system integration test may come after acceptance testing.
A COTS product may be acceptance tested when it is installed or integrated,
a component may be acceptance tested during component testing, and
new functionality can be acceptance tested.
Operational acceptance testing includes: bitmabs + data load and
migration tasks.
A test type is focused on a particular test objective, which could be:
functional testing, non-functional testing, structure or architecture
testing, confirmation and regression testing.
Examples of models used by different types of testing:
structural testing: a control flow model, a menu structure model
non-functional testing: a performance model, a usability model,
security threat modeling
functional testing: a process flow model, a state transition model
or a plain-language specification.
Functions that a system performs may be undocumented.
Functions that a system performs may be documented in a functional
specification, requirements specification or use cases.
Functional tests can be performed at all test levels.
Interoperability testing is also functional testing; it evaluates the
capability of a software product to interact with one or more specified
components/systems.
Non-functional testing may be performed at all test levels.
Non-functional tests can reference a quality model such as
Software Engineering - Software Product Quality (ISO 9126).
Non-functional testing uses black-box design techniques and
considers external behavior.
Structure-based (white-box) techniques are best used after
specification-based techniques.
In component and component integration testing, tools can be used to
measure the code coverage of elements.
Structure based testing can also be applied to system, system
integration and acceptance testing.
Debugging is NOT a testing activity.
Regression testing is also done when the software's environment is
changed.
The extent of regression testing depends on the risk of not finding
defects in software that was working previously
Repeatability is a characteristic of tests used for
regression/confirmation testing.
Regression testing can be performed at all test levels and includes
functional, non-functional and structural testing.
Maintenance testing is triggered by modification, migration or
retirement. A distinction should be made between planned
releases and hot fixes.
Migration testing (conversion testing) is needed for data
migration also.
Modification includes corrective/emergency changes, patches to
correct newly exposed/discovered vulnerabilities of the operating
system, planned enhancements, planned operating system/database
upgrades, and planned COTS upgrades.
Maintenance testing of a migration includes operational tests of the new
environment as well as of the changed software. It also includes
migration/conversion testing.
Maintenance testing for retirement may include testing of data
migration or archiving, if long data-retention periods are required.
Maintenance testing can be done for all test levels and all test
types
Maintenance testing can be difficult if specifications are out of
date or missing, or testers with domain knowledge are not
available.
Any software work product can be reviewed.
Benefits of reviews: DIRT + early defect detection and correction,
fewer defects.
Reviews, static analysis and dynamic testing all have the same
objective: identifying defects.
The way a review is conducted depends on the agreed objectives
of the review (e.g. finding defects, gaining understanding, educating
testers and new team members, or discussion and decision by
consensus).
Planning: book + defining review criteria + checking entry criteria (???)
An exit criterion is checked during follow-up. Do not forget it!!!
A single software product or related work product may be the subject
of more than one review.
Main purpose of informal review: inexpensive way to get some
benefit
Technical review may include peers and technical experts with
optional management participation
A technical review also includes a review report with a list of findings
and, when appropriate, recommendations related to the findings. (Not in books)
There is a role called optional reader in Inspection (the most formal
review).
Peer group means colleagues of same organizational level
When review is done as peer groups, it is called peer review
Success factors for review process: RAM TEC +
Testers are valued reviewers who contribute to the review and
also learn about the product, which enables them to prepare tests
earlier.
The review is conducted in an atmosphere of trust; the outcome
will not be used for the evaluation of the participants.
As with reviews, static analysis finds defects rather than failures.
Static analysis tools analyze program code (e.g. control flow and data
flow) as well as generated output such as HTML and XML
Types of defects found by static analysis: books + missing and misplaced
logic, potentially infinite loops, overly complicated constructs.
Static analysis tools are also used when checking code into
configuration management tools.
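Example (not from the syllabus): a toy illustration of static analysis in
Python, using the standard ast module to flag a potentially infinite loop
without executing the code. Real static analysis tools are far more capable;
this only shows the principle of analyzing code rather than running it.

import ast

SOURCE = """
while True:
    do_work()
"""

def find_potential_infinite_loops(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        # A "while True" whose body contains no break is potentially infinite.
        if (isinstance(node, ast.While)
                and isinstance(node.test, ast.Constant)
                and node.test.value is True):
            if not any(isinstance(n, ast.Break) for n in ast.walk(node)):
                findings.append(f"line {node.lineno}: potentially infinite loop")
    return findings

print(find_potential_infinite_loops(SOURCE))
# Prints: ['line 2: potentially infinite loop']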
The level of formality of the test development process depends on
maturity, time constraints, safety or regulatory requirements, and the
people involved (audit trail from chapter 3).
A test condition is defined as an item or event that could be verified by
one or more test cases; a test condition can be a function, transaction,
quality characteristic or structural element.
During test design, test cases and test data are created and
specified
A test case consists of input values, execution preconditions, expected
results and execution postconditions.
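Example (not from the syllabus): the four parts of a test case mapped onto a
small pytest test; withdraw() is a hypothetical test object.

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_reduces_balance():
    balance = 100                        # execution precondition: account holds 100
    new_balance = withdraw(balance, 30)  # input values
    assert new_balance == 70             # expected result
    assert new_balance >= 0              # execution postcondition: balance not negative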
The standard for software test documentation (IEEE STD 829-1998 )
describes the content of test design specification and test case
specification.
Expected results should ideally be defined prior to test execution.
The test procedure specification includes the test cases, which are
developed, prioritized, implemented and organized during test
implementation.
During test analysis, the test basis documentation is analyzed to find
test conditions (what to test).
During test analysis, the detailed test approach is implemented to
select the test design techniques. The actual execution sequence is then
developed during test implementation and execution.
The purpose of a test design technique is to identify test
conditions, test cases and test data.
Specification-based (black-box) testing includes both functional and
non-functional testing.
In specification based testing, formal/informal models are used
for the specification of the problem to be solved, the software
or its components. Test cases can be derived systematically
from these models.
Apart from the knowledge of users, developers, testers and other
stakeholders, knowledge about the software's usage and environment and
about likely defects and their distribution is another source of information.
Equivalence partitions can be found for both valid and
invalid data.
Partitions can also be identified for outputs, internal values, time
related values and for interface parameters
Equivalence partitioning can be used to achieve input and
output coverage goals.
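Example (not from the syllabus): equivalence partitioning in Python for a
hypothetical rule that accepts ages 18 to 65, with one representative value
per partition.

import pytest

def is_eligible(age):
    return 18 <= age <= 65

@pytest.mark.parametrize("age, expected", [
    (10, False),  # invalid partition: age < 18
    (40, True),   # valid partition: 18 <= age <= 65
    (80, False),  # invalid partition: age > 65
])
def test_eligibility_partitions(age, expected):
    assert is_eligible(age) == expected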
Behavior at the edge of a partition is more likely to be incorrect than
behavior within the partition.
Tests can be designed for both valid and invalid boundary
values
Boundary value analysis can be applied to all test levels.
BVA is relatively easy to apply and its defect finding capability is high.
Boundary value analysis is an extension of EP and other black-box
techniques; it can be applied to user input on screen, time ranges or
table ranges.
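Example (not from the syllabus): boundary value analysis extending the same
hypothetical 18-to-65 rule, testing the values on and just outside each
boundary.

import pytest

def is_eligible(age):
    return 18 <= age <= 65

@pytest.mark.parametrize("age, expected", [
    (17, False), (18, True),   # lower boundary and its invalid neighbor
    (65, True),  (66, False),  # upper boundary and its invalid neighbor
])
def test_eligibility_boundaries(age, expected):
    assert is_eligible(age) == expected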
Decision tables are a good way to capture system requirements that
contain logical conditions and to document internal system design. +
business rules
Conditions in a decision table can be either true or false.
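Example (not from the syllabus): a decision table for hypothetical discount
business rules, with each rule (one combination of condition values plus the
expected action) becoming one test.

import pytest

def discount(is_member, order_over_100):
    if is_member and order_over_100:
        return 20
    if is_member or order_over_100:
        return 10
    return 0

# Columns: is_member, order_over_100 -> expected discount (one row per rule).
DECISION_TABLE = [
    (True,  True,  20),
    (True,  False, 10),
    (False, True,  10),
    (False, False,  0),
]

@pytest.mark.parametrize("is_member, over_100, expected", DECISION_TABLE)
def test_discount_rules(is_member, over_100, expected):
    assert discount(is_member, over_100) == expected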
A state table shows the relationship between inputs and states, and
can highlight possible transitions that are invalid.
Tests can be designed to cover any and every kind of
transition/state.
State transition testing is heavily used within the embedded software
industry and technical automation in general, and for testing
screen-dialogue flows.
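Example (not from the syllabus): a hypothetical order state machine in
Python, with one test covering every valid transition and one showing that
an invalid transition is rejected.

import pytest

VALID_TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
}

def apply_event(state, event):
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} from {state}")

def test_every_valid_transition():
    state = "new"
    for event in ("pay", "ship", "deliver"):
        state = apply_event(state, event)
    assert state == "delivered"

def test_invalid_transition_is_rejected():
    with pytest.raises(ValueError):
        apply_event("new", "ship")  # cannot ship an unpaid order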
Business use cases (technology-free, at the business process level) are
abstract use cases. Use cases on the system functionality level are system
use cases.
Each use case has preconditions which need to be met for the
use case to work successfully. Each use case terminates with
post conditions which are the observable results and final
state of the system. A use case has a mainstream (most likely)
scenario and alternative scenarios.
Use cases are very useful for designing acceptance tests with
customer/user participation. They also help uncover
integration defects caused by the interaction and interfaces of
different components.
Structure-based testing can be applied at the component level (statements,
decisions, branches), the integration level (call tree) or the system level
(menu structure, business process or web page structure).
Decision coverage is stronger than statement coverage. 100%
decision coverage guarantees 100% statement coverage, but
not vice versa
Condition coverage is stronger than decision coverage
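Example (not from the syllabus): why decision coverage is stronger. The
single test below executes every statement of this hypothetical function
(100% statement coverage) but only the True outcome of the decision, so
decision coverage is 50%; a second test with is_member=False is needed.

def apply_discount(price, is_member):
    if is_member:             # decision: needs both True and False outcomes
        price = price * 0.9   # statement reached only when the decision is True
    return price

def test_member_path_only():
    assert apply_discount(100, True) == 90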
Tool support is useful for structural testing of code.
The concept of coverage can also be applied to other test
levels (e.g. integration testing)
Error guessing is one form of experience based techniques.
A systematic approach to error guessing is a fault attack: enumerate a
list of possible defects and design tests to attack these
defects.
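Example (not from the syllabus): a fault attack in Python, enumerating
likely defect-provoking inputs and attacking a hypothetical parse_quantity()
with each of them.

import pytest

def parse_quantity(text):
    value = int(text)
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Each entry targets one suspected defect class: empty input, non-numeric
# input, negative and zero values, whitespace, and a non-integer literal.
ATTACKS = ["", "abc", "-1", "0", "  ", "1e9999"]

@pytest.mark.parametrize("bad_input", ATTACKS)
def test_rejects_hostile_input(bad_input):
    with pytest.raises(ValueError):
        parse_quantity(bad_input)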
All the testing activities (test design, execution and logging) are
concurrent in exploratory testing, and they are time-boxed.
Exploratory testing can serve as a check on the test process, to help
ensure that the most serious defects are found.
Developers may participate in lower levels of testing, but their lack of
objectivity often limits their effectiveness.
An independent tester can verify assumptions people made during
specification and implementation.
Drawbacks of independent testing include: isolation from the
development team, developers may lose a sense of responsibility, and
independent testers may be seen as a bottleneck or blamed for delays.
Testing may be done by people in a specific testing role or by a project
manager, quality manager, developer, subject matter expert, or
infrastructure/IT operations staff.
The role of test leader may be performed by a project manager,
development manager, quality assurance manager, or the manager of
a test group.
In large groups, two positions may exist: test manager and test leader.
The test leader plans, monitors and controls the testing activities.
Testers at the component and integration levels would typically be
developers; testers at the acceptance test level would be business experts
and users; and testers for operational acceptance testing would be operators.
Test Planning activity: book + Defining the amount, level of detail,
structure and templates for the test documentation
TPA: book + Setting the level of detail for test procedures in order to
provide enough information to support reproducible test preparation
and execution.
Entry criteria: availability of the test environment, test tools, testable
code and test data.
Exit criteria: book + estimate of defect density or reliability measures,
residual risk, such as defects not fixed or lack of test coverage in
certain areas.
Testing effort -> Development process: books + stability of the
organization and test process.
The test approach is the implementation of the test strategy for a
specific project.
Test approach is used for selecting the test design techniques,
test types to be applied and for defining entry/exit criteria.
The selected approach depends on the context and may
consider risk, hazards and safety, available resources and
skills, the technology, the nature of the system, test objectives
and regulations.
Action taken during test control may cover any test activity and may
affect any other software life cycle activity or task.
Test control-> books + Making decisions based on information from
test monitoring.
The purpose of configuration management is to establish and maintain
the integrity of the products (components, data and documentation) of
the software or system through the project and product life cycle.
During test planning, the configuration management procedures and
infrastructure (tools) should be chosen, documented and
implemented.
The level of risk is determined by the likelihood of an
adverse event happening and the impact (the harm resulting
from that event).
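Example (not from the syllabus): a minimal sketch of computing risk levels
as likelihood x impact to order testing effort; the items and 1-to-5 scales
are hypothetical.

risks = [
    {"item": "payment processing", "likelihood": 4, "impact": 5},
    {"item": "report layout",      "likelihood": 3, "impact": 1},
    {"item": "data migration",     "likelihood": 2, "impact": 4},
]

for risk in risks:
    risk["level"] = risk["likelihood"] * risk["impact"]

# Test the highest-risk areas first.
for risk in sorted(risks, key=lambda r: r["level"], reverse=True):
    print(f"{risk['item']}: risk level {risk['level']}")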
Project risk -> organizational factors -> books + improper attitude
toward, or expectations of, testing (e.g. not appreciating the value of
finding defects during testing).
Project risk -> technical/specialist issues: book + test environment not
ready on time; late data conversion/migration planning and development;
late testing of data conversion/migration tools.
Project risk -> technical/specialist issues: low quality of the design,
code, configuration data, test data and tests.
Product risk: book + Poor data integrity and quality (e.g. data
migration issues, data conversion problems, data transport problems,
violation of data standard)
Testing as a risk-control activity provides feedback about the residual
risk by measuring the effectiveness of critical defect removal and of
contingency plans.
A risk based approach starts in the initial stages of a project.
Identified risks can be used to determine: book (test techniques + test
prioritization + non-testing activities) + the extent of testing to be
carried out.
An organization should establish an incident management process and
rules for classification.
Incidents may be raised during development, review, testing or
use of a software product (i.e., anywhere in the software life cycle).
Incidents can be raised against code and ANY kind of
documentation.
Items in incident management: books + scope, severity and priority of the
incident; the date the incident was discovered.
Tools can be used for four types of activity: execution, management,
exploration/monitoring, and others (e.g. spreadsheets, email, word
documents, etc.).
Purposes of test tools in testing: increase efficiency by automating
repetitive tasks; automate activities that require significant resources
(e.g. static testing); automate activities that cannot be done manually
(e.g. performance testing); increase reliability (e.g. large data
comparisons or simulating behavior).
Test framework can mean: reusable and extensible test libraries;
a type of design of test automation (data-driven/keyword-driven);
an overall process of execution of testing.
Some tools can be intrusive, affecting the actual outcome of the test
due to difference in actual timing or extra instructions.
The consequence of intrusive tools is called the probe effect.
Tools to support activities over the entire software life cycle:
test management tool, requirements management tool,
configuration management tool, incident management tool.
Test management tools provide interfaces for executing tests, tracking
defects and managing requirements, along with support for quantitative
analysis and reporting. They also support tracing test objects to
requirements and might have their own version control or an interface
to an external one.
Requirements management tools store requirements and their attributes and
trace requirements to individual tests; they may also help identify
inconsistent or missing requirements.
Tool support for static testing: review tools, static analysis tools,
modeling tools.
Review tools assist with the review process, checklists and guidelines,
and store review comments.
Static analysis tools: these tools help developers and testers
find defects prior to dynamic testing by providing support for
enforcing coding standards (including secure coding) and analysis
of structures and dependencies; they also help planning or risk
analysis by providing metrics of the code.
Tool support for test specification: test design tools + test data
preparation tools.
Test design tools are used to generate test inputs or executable tests
or test oracles from requirements, GUI, design models or code
Test data preparation tools manipulate data files, databases or data
transmissions to set up the test data to be used.
Test execution tools enable tests to be run automatically or
semi-automatically.
TET (test execution tools) usually provides a test log for each test run.
TETs support GUI-based configuration for parameterization of data.
Security tools evaluate the ability of the software to protect data
confidentiality, integrity, authentication, authorization, availability and
non-repudiation. (DIAANA)
Security tools are mostly focused on a particular technology, platform
and purpose.
Dynamic analysis tools find defects that are only evident when
software is executing
Dynamic analysis tools are typically used in component and
component integration testing.
Dynamic analysis tools are also used for testing middleware.
Test machines used in performance testing are known as load
generators.
Monitoring tools continuously analyze, verify and report on usage of
specific system resources and give warnings of possible service
problems
Data quality assessment tools are an example of tool support for specific
needs.
Potential benefits of using tools: EGOR -> repetitive work is
reduced; greater consistency and repeatability; objective assessment
(static measures, coverage); ease of access to information about tests
or testing.
Risks of using tools: read from the syllabus.
Scripts made by capturing tests by recording may be unstable when
unexpected events occur.
Data-driven tests take input data from a source separate from the
scripts (e.g. spreadsheets), or from an algorithm which generates input
automatically based on runtime parameters supplied to the
application.
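Example (not from the syllabus): a data-driven sketch in Python where the
script stays fixed and the rows come from a separate source. Here the CSV
lives in a string; a spreadsheet export to a file such as login_cases.csv
(hypothetical) works the same way.

import csv
import io
import pytest

CSV_DATA = """username,password,expected
alice,correct-pw,True
alice,wrong-pw,False
,correct-pw,False
"""

def login(username, password):
    # Hypothetical system under test.
    return username == "alice" and password == "correct-pw"

ROWS = list(csv.DictReader(io.StringIO(CSV_DATA)))

@pytest.mark.parametrize("row", ROWS)
def test_login_from_data(row):
    expected = row["expected"] == "True"
    assert login(row["username"], row["password"]) == expected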
Keyword-driven tests use keywords stored in a spreadsheet to decide
on actions and test data.
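Example (not from the syllabus): a keyword-driven sketch in Python. The test
is a table of (keyword, data) rows, as might be exported from a spreadsheet,
and a small interpreter maps each keyword to an action on a hypothetical
Calculator object.

class Calculator:
    def __init__(self):
        self.total = 0

def run_keyword_test(table):
    calc = Calculator()
    actions = {
        "add":      lambda v: setattr(calc, "total", calc.total + v),
        "subtract": lambda v: setattr(calc, "total", calc.total - v),
    }
    for keyword, value in table:
        if keyword == "check":
            assert calc.total == value  # expected-result row
        else:
            actions[keyword](value)

def test_keyword_table():
    run_keyword_test([
        ("add", 10),
        ("add", 5),
        ("subtract", 3),
        ("check", 12),
    ])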
For test execution tools, the expected results for each test need to be
stored for later comparison.
Test management tools need to interface with other test tools.
Success factors: read them from the syllabus.
