
Review Process

So how can we perform a review? It's not a random activity; we have a process for that.

Reviews can vary widely in their level of formality, where formality relates to the level of
structure and documentation associated with the activity.

Reviews vary from informal to formal.

Informal reviews are characterized by not following a defined process and not having

formally documented results.

Just like when you ask a colleague passing by to look at one of your documents.

So there are no written instructions for reviewers.

It is very informal. Formal reviews, on the other hand, are characterized by team participation, documented results of the review, and documented procedures for conducting the review.

There are factors that affect the decision on the appropriate level of formality.

These are organization-based factors that affect the level of formality of any review.

The level of formality is usually based on:

• the software development lifecycle model.

Waterfall might need a more formal process while Agile might be ok with informal ones.

• The maturity of the development process: the more mature the process is, the more formal reviews tend to be.

• The complexity of the work product to be reviewed: the more complex the work product, the more formal the review process should be.

• Legal or regulatory requirements.

For example, in the safety-critical software domain, there are regulatory or legal requirements that determine what kinds of review should take place.

• The need for an audit trail: the level of formality of the review types used can help raise the level of the audit trail, enabling tracing backward throughout the software development lifecycle.

The ISO standard ISO/IEC 20246 contains more in-depth descriptions of the review process for work products, including roles and review techniques.

Here's another standard number to remember.

3.2.1 Work Product Review Process

The different types of reviews vary in their formality, but before discussing the different types of reviews, let's first talk about the five groups of activities of the review process.

They are

1. Planning

2. Initiate review

3. Individual review (individual preparation)

4. Issue communication and analysis

5. Fixing and reporting
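Since the sequence of these activities is exam-relevant, here is a minimal Python sketch (the identifier names are my own shorthand, not syllabus terminology) that encodes the order as an ordered enum:

```python
from enum import IntEnum

class ReviewActivity(IntEnum):
    """The five groups of review activities, in the order they occur."""
    PLANNING = 1
    INITIATE_REVIEW = 2
    INDIVIDUAL_REVIEW = 3      # individual preparation
    ISSUE_COMMUNICATION = 4    # issue communication and analysis
    FIXING_AND_REPORTING = 5

def comes_before(a: ReviewActivity, b: ReviewActivity) -> bool:
    """True if activity a happens earlier in the process than b."""
    return a < b

# Individual review always precedes issue communication and analysis:
assert comes_before(ReviewActivity.INDIVIDUAL_REVIEW,
                    ReviewActivity.ISSUE_COMMUNICATION)
```

Because `IntEnum` members compare by their numeric value, the enum doubles as a memory aid for the sequence.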

Again, we need to know which activities happen during which group and also memorize the sequence of the activities, as this is a repeated question in the ISTQB exam.

Let’s talk about each of those activities in detail

Planning

Reviews are good, but we can't review every work product we get. So we should be:

• Defining the scope: deciding the purpose of the review, which documents or parts of documents to review, the quality characteristics to be evaluated, where to do it, and whether there are already any company processes, guidelines, or predefined checklists we could use in the review process

• Estimating effort and timeframe, to know when to do it and how long it should take

• Identifying review characteristics, such as the review type with roles, activities, and checklists

• Selecting the people to participate in the review and allocating roles

The reviewers should be skilled for the job and know how to dig for mistakes in the document.

They should also come from different backgrounds.

For example, someone with a design background, someone who is an expert in UI, someone with a performance background, another with standards knowledge, and so on.

The selected personnel will be assigned roles and responsibilities accordingly.

• Defining the entry and exit criteria for more formal review types (e.g., inspections)

Entry criteria define what should be fulfilled to start the review, such as making sure the document is spell checked before the review begins.

Exit criteria define what should be fulfilled to end the review, such as fixing the major defects found in the document.

• Checking that the entry criteria are met (if we have any), so reviewers won't waste time on a document that isn't ready
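As a toy illustration of an entry-criteria gate (the two criteria here are invented examples, not a prescribed set), the check could be sketched as:

```python
def entry_criteria_met(document: dict) -> bool:
    """Return True only if every entry criterion passes; the review
    should not start otherwise. Both criteria are example criteria."""
    criteria = [
        document.get("spell_checked", False),  # document was spell checked
        document.get("complete", False),       # no sections are missing
    ]
    return all(criteria)

# A spell-checked but incomplete document is not ready for review:
assert not entry_criteria_met({"spell_checked": True, "complete": False})
assert entry_criteria_met({"spell_checked": True, "complete": True})
```

The point is simply that the gate is a conjunction: every entry criterion must hold before reviewers spend any time on the document.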

Initiate review

Before the actual review, we need to make sure all the reviewers know exactly what's expected of them. To initiate the review, we will be:

• Distributing the work product (physically or by electronic means) and other material, such as issue log forms, checklists, and related work products the reviewers might use

• Explaining the scope, objectives, process, roles, and work products to the participants

• Answering any questions that participants may have about the review

Individual review (i.e., individual preparation)

• Each of the participants alone will review all or part of the work product

• Noting potential defects, recommendations, questions, and comments

This activity could be time-boxed (usually 2 to 4 hours).

Issue communication and analysis

• Now it's time for the participants to communicate the identified potential defects (this could be in a review meeting). Participants will go through a discussion regarding any defects found.

The discussion will usually lead to more defects being found.

• Analyzing potential defects, assigning ownership and status to them

Reviewers may suggest or recommend fixes, but there is no actual discussion of how to fix a defect; that will be done later by the author.

• Evaluating and documenting quality characteristics

• At the end of the meeting, a decision on the document under review has to be made by the participants.

Evaluating the review findings against the exit criteria to make a review decision (should we proceed with this document, drop it altogether, or will a simple follow-up meeting after fixing the defects found be enough?)

Fixing and reporting

• After the meeting, we should create defect reports for those findings that require changes

• The author will have a series of defects to investigate, answering the questions and suggestions raised in the review meeting, and fixing the defects found in the work product reviewed (fixing is typically done by the author)

• We might need to communicate defects to the appropriate person or team (when found in a work product related to the work product reviewed)

• Recording updated status of defects (in formal reviews), potentially including the

agreement of the comment originator

• Gathering metrics (for more formal review types), for example, how much time was spent on the review and how many defects were found

• Checking that exit criteria are met (for more formal review types)

• Accepting the work product when the exit criteria are reached

The results of a work product review vary, depending on the review type and formality.

Roles and responsibilities in a formal review


The participants in any formal review should have adequate knowledge of the review
process

and have been properly trained as reviewers when necessary.

A typical formal review will include the following roles:


– Author
– Management
– Facilitator (or moderator)
– Review leader
– Reviewers
– Scribe (or recorder)

Let’s talk about each role in little detail

Author
• Creates the work product under review

• Fixes defects in the work product under review (if necessary)

Management
• Is responsible for review planning

• Decides on the execution of reviews

• Assigns staff, budget, and allocates time in project schedules

• Monitors ongoing cost-effectiveness

• Executes control decisions in the event of inadequate outcomes

Facilitator (also often called the moderator)

• Ensures effective running of review meetings (when held)

• Is often the person upon whom the success of the review depends

• Is responsible for making sure no bug fixing is discussed in the review meeting, and also for making sure the reviewers discuss the work product objectively, not subjectively

• And last, mediates, if necessary, between the various points of view in the review meeting

Review leader
• Takes overall responsibility for the review

• Decides who will be involved and organizes when and where it will take place

Reviewers
• May be subject matter experts, persons working on the project, stakeholders with
interest in the work product,

and/or individuals with specific technical or business backgrounds

• Who, after the necessary preparation, identify potential defects in the work product under review

•And may represent different perspectives (e.g., tester, programmer, user, operator,
business analyst, usability expert, etc.)

Scribe (or recorder)
• Collects potential defects found during the individual review activity

• And records new potential defects, open points, and decisions from the review meeting
(when held)

Some might get confused over the difference between management, the review leader, and the moderator or facilitator.

Well, think of management as the management: time, cost, resources, very high-level decisions, with no technical involvement in the review needed.

The review leader is like the team leader in your project: he understands technically what's going on and will make sure everything is executed.

The facilitator, on the other hand, usually gets involved in the review meeting only; his job is helping others do their job right and making sure the meeting runs smoothly, without any interruptions or tension between the participants.

Also, the actions associated with each role may vary based on review type.

In addition, with the advent of tools to support the review process,

especially the logging of defects, open points, and decisions, there is often no need for a
scribe.

Notice that it's normal for one person to play more than one role, and for one role to be played by more than one person.

Again, more detailed roles are possible, as described in ISO/IEC 20246, the one standard for everything related to reviews.

Review Types
Think of a review as any event where someone needs to go through a document with someone else.

There could be multiple reasons why you need to do that.

The objectives of any review could be finding defects, gaining understanding, educating
participants

such as testers and new team members, or discussing and deciding by consensus

The focus of any review depends on the agreed objectives of the review

But no matter what the type of review is, finding defects is always welcome.

You won't find someone pointing out a defect in a document and being told, "No, this meeting is only to educate you about this document, so you are not allowed to find defects." So finding defects is always a purpose of any review type.

There are four types of reviews, varying in their formality.

Starting from the lowest to the highest formal review type, we have:

1. Informal

2. Walkthrough

3. Technical review

4. Inspection

There are different factors that help decide the review type; these are project-based factors that affect the type of review.

For example:

• the needs of the project,

• available resources,

• product type and risks,

• business domain,

• company culture,

• and other selection criteria.

Reviews can be classified according to various attributes.

The following lists the four most common types of reviews and their associated
attributes.

Questions in the exam are usually about differentiating between the different review
types,

so we will try to pinpoint some keywords to highlight the review type characteristics.

Informal review
The informal review is also known as a buddy check, pairing, or pair review.

The main purpose is to find defects quickly; it is an inexpensive way to achieve some limited benefit.

Possible additional purposes: generating new ideas or solutions, quickly solving minor
problems

It is the least formal review type; there is no formal process to run the review.

May not involve a review meeting and may be performed by a colleague of the author
(buddy check) or by more people

Findings of the review are not usually documented. The informal review varies in usefulness depending on the reviewers.

Use of checklists is optional

And last, very commonly used in Agile development

An example of the informal review is pair programming, a technique introduced by the agile Extreme Programming methodology, where two programmers work together writing the same code, so one programmer instantly reviews the code of the other.

The keywords here: no process, and quick.

Second is the walkthrough

Here the author has something to explain or show in his document to the participants. So the main purpose is for the participants to learn something from the document or gain more understanding of its content. A walkthrough can also be used to find defects in the document, improve the software product, consider alternative implementations, and evaluate conformance to standards and specifications.

Possible additional purposes: exchanging ideas about techniques or style variations,


training of participants, achieving consensus

In this type of review, the meeting is led by the author

Review sessions are open-ended and may vary in practice from quite informal to very
formal.

Appointment of a scribe (who is not the author) is mandatory.

Preparation by reviewers before the walkthrough meeting is optional,

Use of checklists is optional

A walkthrough of a work product may take the form of scenarios, dry runs, or simulations (we will talk about scenarios and dry runs in another video).

Defect logs and review reports may be produced, so they are also optional.

Keywords here: led by the author; main purpose is learning and gaining understanding; and most of the review process activities are optional.

Third is the technical review

• A technical review is a discussion meeting that focuses on achieving consensus about the technical content of a document.

Finding defects is a plus, as usual.

• Possible further purposes: evaluating quality and building confidence in the work
product, generating new ideas,

motivating and enabling authors to improve future work products, considering


alternative implementations

• Reviewers are usually experts in their field and can be technical peers of the author

• Most of the review process activities are executed:

• Individual preparation before the review meeting is required

• The review meeting is optional, ideally led by a trained facilitator (typically not the author)

• Scribe is mandatory, ideally not the author

• Use of checklists is optional

• Potential defect logs and review reports are typically produced

• Keywords here: led by a trained moderator; purpose is discussion, gaining consensus, taking decisions, and evaluating alternatives; and most of the activities in the review process are executed.

Last and most formal is the inspection

• Inspection main purposes: detecting potential defects, evaluating quality and building
confidence

in the work product, preventing future similar defects through author learning and root
cause analysis

• Possible further purposes: motivating and enabling authors to improve future work products

and the software development process, and achieving consensus

• Inspection follows a defined process with formally documented outputs, based on rules and checklists

• All the roles mentioned earlier are mandatory, and the inspection may also include a dedicated reader (who reads the work product aloud during the review meeting). Note that the reader was not mentioned among the roles earlier; this role appears only in inspections.

• Individual preparation before the review meeting is required

• Reviewers are usually peers of the author and should be experts in disciplines that are
relevant to the work product

• Specified entry and exit criteria are used

• Scribe is mandatory

• The review meeting is led by a trained facilitator again (not the author)

• The author cannot act as the review leader, reader, or scribe

• Potential defect logs and review report are produced

• Metrics are collected and used to improve the entire software development process,
including the inspection process

Keywords here: led by a trained moderator; the main purpose is finding defects; and all the activities in the review process are executed.

In reality, the fine lines between the review types often get blurred, and what is seen as a technical review in one company may be seen as an inspection in another.

The key for each company is to agree on the objectives and benefits of the reviews that
they plan to carry out.

Also, a single work product may be subject to more than one type of review: for example, an informal review may be carried out before the document is subjected to a technical review, or, if needed, a technical review or inspection may take place before a walkthrough with a customer.

The types of reviews described can be done as peer reviews, i.e., done by colleagues at the same or a similar organizational level.

The types of defects found in a review vary, depending mainly on the work product
being reviewed.

Applying Review Techniques

As I have said before, it's a skill to read a document and find defects in it.

I see it as a skill like that of movie critics.

Many might go to a movie and like it…but critics find it very bad.

Movie critics have trained eyes to find defects that others might not notice.

Well, in this video we will learn how to enhance this skill by learning a number of techniques that you can apply during the individual review (i.e., individual preparation) activity to uncover defects.
These techniques can be used across the review types described above.

The effectiveness of the techniques may differ depending on the type of review used.

And as I said it is a skill, so it needs practice to master those techniques.

We will talk about five techniques

1- Ad hoc

2- Checklist based

3- Scenarios and dry runs

4- Role-based

5- Perspective-based

Ad hoc

Ad hoc usually means no planning or little preparation.

In an ad hoc review, reviewers are provided with little or no guidance on how this task

should be performed.

Reviewers often read the work product sequentially, identifying and documenting issues
as they

encounter them.

This technique is highly dependent on reviewer skills and experience and may lead to
many

duplicate issues being reported by different reviewers.

Checklist-based

We will talk about checklist testing in detail in future videos.

But for now, imagine if I gave you a list of questions and asked you to answer them

according to the document (or test object) you have at hand.

This is simply checklist-based testing.

It is a systematic technique: reviewers detect issues based on checklists that are distributed at review initiation (e.g., by the facilitator).

They just answer the questions according to their point of view.

A review checklist consists of a set of questions based on potential defects, which may be derived from experience.

Checklists should be specific to the type of work product under review and should be

maintained regularly to cover issue types missed in previous reviews.

Questions like: does the non-functional requirements section exist?

Do we have a UML diagram for every use case?

Does every function in the source code have a detailed comment about its purpose?

The main advantage of the checklist-based technique is a systematic coverage of typical

defect types.

Care should be taken not to simply follow the checklist in individual reviewing, but

also to look for defects outside the checklist.
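The checklist mechanics above can be sketched in Python. The first two questions are adapted from the text; the third is an invented extra item for illustration:

```python
# Example checklist for a requirements document.
checklist = [
    "Does the non-functional requirements section exist?",
    "Do we have a UML diagram for every use case?",
    "Is every requirement uniquely identified?",  # invented extra item
]

def run_checklist(answers: dict) -> list:
    """Return the checklist questions answered 'no' (or left unanswered);
    each one is a potential issue to report. `answers` maps a question to
    the reviewer's True/False judgment."""
    return [q for q in checklist if not answers.get(q, False)]

issues = run_checklist({
    "Does the non-functional requirements section exist?": True,
    "Do we have a UML diagram for every use case?": False,
    "Is every requirement uniquely identified?": True,
})
assert issues == ["Do we have a UML diagram for every use case?"]
```

Notice the technique's strength and weakness in miniature: the checklist guarantees systematic coverage of its own questions, but finds nothing outside them.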

Scenarios and dry runs

In a scenario-based review, reviewers are

provided with structured guidelines on how to read through the work product.

These scenarios provide reviewers with better guidelines on how to identify specific
defect

types than simple checklist entries.

A dry run is a testing technique where you mimic a real-life situation as closely as possible, but in a safe, controlled setting.

For example, an aerospace company may conduct a “dry run” test of a jet’s new pilot

ejection seat while the jet is parked on the ground, rather than while it is in flight.

A scenario-based approach supports reviewers in performing “dry runs” on the work


product

based on the expected usage of the work product (if the work product is documented in
a suitable

format such as use cases).

Role-based

Consider software like Microsoft Word.

You may consider potential users of this software, like a student, a secretary, and a publishing company.

Now I want you to imagine how each one of those users will use the software.

A student wants everything to work using shortcuts and toolbar icons.

A secretary is a speed typist. She wants to use only the keyboard to finish any task.

A publishing company doesn’t mind going through detailed dialogs to set up the
printing

process very accurately.

This is role-based reviewing, in which the reviewers evaluate the work product from the perspective of individual stakeholder roles.

Typical roles include specific end-user types (experienced, inexperienced, senior, child,

etc.), and specific roles in the organization (user administrator, system administrator,

performance tester, etc.).

Perspective-based

I think of perspective-based reading as a mix of role-based, checklist-based, and scenario-based testing.
This technique acknowledges that there are multiple consumers of the documents
produced

during the requirements development phase.

PBR offers each of the reviewers a viewpoint or perspective specific to each type of
consumer,

similar to a role-based review.

But here the typical consumers or stakeholder viewpoints include end-user, marketing, designer, tester, or operations.

The technique instructs the reviewers on precisely what to search for, thus enabling them
to

find more defects in less time.

From each of these perspectives, the inspector is advised to apply a scenario-based


approach

to reading the document.

Each scenario consists of a set of questions and activities that guide the inspection
process

by relating the requirements to the regular work practices of a specific stakeholder.

What does this mean?

It means that perspective-based reading also requires the reviewers to attempt to use
the

work product under review to generate the product they would derive from it.

For example, a tester would attempt to generate draft acceptance tests if performing a
perspective-based

reading on a requirements specification to see if all the necessary information was


included.
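The tester's perspective described above can be sketched as a tiny Python exercise. The requirement fields (`precondition`, `action`, `expected_result`) are invented for this illustration, not a standard schema:

```python
# Perspective-based reading from the tester viewpoint: try to derive a draft
# acceptance test from a requirement; anything you cannot derive is a finding.

def draft_acceptance_test(requirement: dict):
    """Return (test, findings). A missing field is a defect: the
    requirement is not testable as written."""
    findings = []
    for field in ("precondition", "action", "expected_result"):
        if field not in requirement:
            findings.append(f"requirement lacks '{field}' -- not testable")
    test = None
    if not findings:
        test = (f"GIVEN {requirement['precondition']} "
                f"WHEN {requirement['action']} "
                f"THEN {requirement['expected_result']}")
    return test, findings

# A requirement with no expected result cannot yield an acceptance test:
test, findings = draft_acceptance_test({"precondition": "user is logged in",
                                        "action": "user clicks 'Export'"})
assert test is None
assert findings == ["requirement lacks 'expected_result' -- not testable"]
```

The failure to derive the work product (here, a Given/When/Then test) is itself the review finding, which is exactly the mechanism perspective-based reading relies on.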

Further, in perspective-based reading, checklists are expected to be used.

Using different stakeholder viewpoints leads to more depth in individual reviewing with

less duplication of issues across reviewers.

Empirical studies have shown perspective-based reading to be the most effective general technique for reviewing requirements and technical work products.

A key success factor is including and weighing different stakeholder viewpoints appropriately, based on risks.

Success Factors for Reviews

In order to have a successful review, the appropriate type of review and the techniques
used must be considered.

In addition, there are plenty of other factors that will affect the outcome of the review.

We can categorize those factors into organizational success factors and people-related success factors.

Organizational success factors for reviews include:

• Each review has clear objectives, defined during review planning, and used as
measurable exit criteria

• Review types are applied which are suitable to achieve the objectives and

are appropriate to the type and level of software work products and participants

• Any review techniques used, such as checklist-based

or role-based reviewing, are suitable for effective defect identification in the work
product to be reviewed

• Any checklists used address the main risks and are kept up to date

• Large documents are written and reviewed in small chunks, so that quality control is exercised by providing authors early and frequent feedback on defects

• Participants have adequate time to prepare

• Reviews are scheduled with adequate notice

• Management supports the review process (e.g., by incorporating adequate time for
review activities in project schedules)

People-related success factors for reviews include:

• The right people are involved to meet the review objectives; for example, people with different skill sets or perspectives who may use the document as a work input, which enables them to prepare more effective tests, and to prepare those tests earlier

• Participants dedicate adequate time and attention to detail

• Reviews are conducted on small chunks, so that reviewers do not lose concentration

during individual review and/or the review meeting (when held)

• Defects found are acknowledged, appreciated, and handled objectively

• The meeting is well-managed, so that participants consider it a valuable use of their


time

• The review is conducted in an atmosphere of trust; everyone knows that the main
objective is to increase the quality of the document

under review. The outcome will not be used for the evaluation of the participants.

I have seen companies that calculate the monthly bonus depending on the number of
bugs per developer … this is so unrealistic and unfair.

• Participants avoid body language and behaviors that might indicate boredom,
exasperation, or hostility to other participants

• Adequate training is provided, especially for more formal review types such as
inspections

• A culture of learning and process improvement is promoted: we should learn from our
mistakes

and we should use the metrics collected to improve the overall software process

Test Techniques

This section covers the very important and popular topic of test case design techniques.

This is where testers get their creativity to work and come up with ideas on how to test

the software.

As we mentioned before, exhaustive testing is impossible, which means we cannot test everything.

So we have to be very very selective on how to test the software.

So how to design the perfect test case is where the test design techniques come into

action.

The purpose of a test design technique is to identify test conditions, test cases, and

test data.

But why are there so many techniques to design a test case, you may ask?

Each test technique tackles a different situation.

Some are general, others are very specific.

Some are straightforward, others are difficult and complex to implement.

There are many excellent books published on software testing techniques, and new techniques pop up every year.

All of these help the tester do the job effectively and efficiently.

We have talked in the first section about effectiveness versus efficiency

From the testing point of view, effective testing means we find more faults and can focus attention on specific types of defects if needed.

In some situations, you might need to concentrate on calculations faults.

In other situations, you might need to focus on UI issues and so on

In addition, we want to make sure that we're testing the right thing.

Efficient testing on the other hand means that we find faults with the minimum effort,

with the minimum number of test cases, avoiding duplication to minimize cost and time.

Plus using techniques that are measurable.

In this section, we will learn the most important and famous ones.

However, as a tester, I encourage you to read more and more about test design
techniques

4.1 Categories of Test Techniques

In this syllabus, The test case design techniques we will look at are grouped into three
categories:

• black-box,

• white-box, and

• experience-based.

Black-box or specification-based techniques are those based on deriving


test cases directly

from a specification or a model of a system or proposed system.

They are called this way because we can’t know what’s inside the box, which is the

software.

It is also called behavioral or behavior-based techniques because we only know how it


should

behave.

We know how it should behave from other documents: requirements documents, user manuals, technical specifications, use cases, user stories, or business processes.

Or we might know how it should behave because we have a model or another system
that behaves

like ours.

If we have any information about how the system should behave, then we are using black-box testing or specification-based testing.

These techniques are applicable to both functional and nonfunctional testing.

Black-box test techniques concentrate on the inputs and outputs of the test object
without

reference to its internal structure.
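To make the idea concrete, here is a tiny sketch. The discount rule is a hypothetical specification invented for this example; the point is that the test cases are derived purely from the stated rule, never from reading the function body:

```python
def apply_discount(total_cents: int) -> int:
    """Implementation under test. Hypothetical spec: orders of $100.00
    (10000 cents) or more get a 10% discount. Prices are whole cents."""
    return total_cents * 90 // 100 if total_cents >= 10000 else total_cents

# Black-box test cases, derived from the specification alone:
assert apply_discount(9999) == 9999    # just below the boundary: no discount
assert apply_discount(10000) == 9000   # at the boundary: 10% off
assert apply_discount(20000) == 18000  # above the boundary: 10% off
```

The same three assertions would be written whether the function used an `if`, a lookup table, or a pricing service: only the inputs and expected outputs matter.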

• The 2nd category of test design techniques is those based on deriving test cases directly from the structure of a component or system, known as structure-based, structural, or white-box techniques.

It's called white-box because in this situation we know what's inside the box; we know how it's constructed. We might know the architecture, detailed design, internal structure, or the code of the test object, but, on the contrary, we might not know how it should behave.

In the ISTQB curriculum, we will concentrate on tests based on the code written to implement a component or system, but other aspects of structure, such as a DB structure for example, can be tested in a similar way.
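As a contrast to the black-box view, here is a white-box sketch (the function is an invented example): the tests are chosen by looking at the code's structure, so that every outcome of the decision is executed at least once.

```python
def classify(temperature: int) -> str:
    """Code under test: one decision with two outcomes."""
    if temperature > 30:
        return "hot"        # True branch
    return "not hot"        # False branch

# Structure-based tests: one per branch, giving full decision coverage.
assert classify(35) == "hot"        # exercises the True outcome
assert classify(30) == "not hot"    # boundary value exercises the False outcome
```

Here we needed to read the code to know that 30 itself takes the `not hot` path; a black-box tester working only from a spec might not know where that boundary sits in the implementation.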

• Lastly, experience-based techniques are test design techniques based on deriving


test

cases from stakeholders’ experience of similar systems and general experience of testing.

Stakeholders? What do you mean by stakeholders? Well, stakeholders could be the testers, developers, users, customers, subject matter experts…

Before we go into the techniques in each category, keep in mind that you can use as many techniques as you like while testing.

We’ll explain this more after we visit all the techniques in this curriculum

Now, what kind of questions can you get in this part? Simply, you need to know the differences between the categories of design techniques: specification-based (black-box) vs. structural (white-box) vs. experience-based techniques.

The international standard (ISO/IEC/IEEE 29119-4) contains descriptions of test


techniques and

their corresponding coverage measures.

So, ISO/IEC/IEEE 29119-1 covers testing concepts and definitions.

ISO/IEC/IEEE 29119-2 covers test processes.

ISO/IEC/IEEE 29119-3 covers test documentation, or test work products.

ISO/IEC/IEEE 29119-4 covers test techniques.

And we had ISO/IEC 20246, which covers work product reviews.

TEST MANAGEMENT
RISK AND TESTING
We have mentioned earlier how risk is an important factor in the testing activity.
We base our testing efforts upon the amount of risk in delivering the product too early.
If the risk is high, we need to spend more effort testing the software.
If the risk is low enough, we can deliver the software.

Definition of Risk
But what is “risk” after all?!!

There are two parts to the definition of risk. The first part:
• Risk involves the possibility of an event in the future which has negative consequences.
My friends who are PMP certified might not like this definition, because in PMP a risk may result in future negative or positive consequences.
But in ISTQB, a risk may result only in future negative consequences.
Risk is used to focus the effort required during testing.
It is used to decide where and when to start testing and to identify areas that need more attention.
attention.

Testing is used to reduce the probability of a negative event occurring, or to reduce the impact of a negative event.
So if we are worried that the client will get upset if there's any miscalculation in the report (a negative event that might happen), then we can add more testing of the report to make sure we don't miss any major defects.
This action will lower the probability or the impact of the negative event.

Risk-based testing draws on the collective knowledge and insight of the project stakeholders
to carry out product risk analysis.

To ensure that the likelihood of a product failure is minimized, risk management activities
provide a disciplined approach to:

• Analyze (and re-evaluate on a regular basis) what can go wrong (risks)
• Determine which risks are important to deal with
• Implement actions to mitigate those risks
• Make contingency plans to deal with the risks should they become actual events
In addition, testing may identify new risks, help to determine what risks should be mitigated,
and lower uncertainty about risks.

I’ll try here to give you a risk management course in 5 minutes. Let’s talk first about analyzing what can go wrong.

Product and Project Risks


One of the many different ways that a project team can identify the risks in their project
is to look at the different classifications of risks and wonder if any of those risks
could actually happen to them.

We can classify the risks into 2 categories, project risks and product risks.
What is the difference between project and product?
Easy, a product is the software itself; a project is the activities needed to create the product.
So product risk is any risk related to the software itself, while project risk is any
risk related to how we develop the software.

Now, one of the famous questions I have seen is to distinguish between the 2 types of risks.
I have even gotten this question in my advanced-level Test Manager exam.
Product risk involves the possibility that a work product (e.g., a specification, component,
system, or test) may fail to satisfy the legitimate needs of its users and/or stakeholders.

When the product risks are associated with specific quality characteristics of a product
(e.g., functional suitability, reliability, performance efficiency, usability, security,
compatibility, maintainability, and portability), product risks are also called quality risks.

Examples of product risks include:

• Software might not perform its intended functions according to the specification
• Software might not perform its intended functions according to user, customer, and/or stakeholder needs
• System architecture may not adequately support some non-functional requirement(s)
• A particular computation may be performed incorrectly in some circumstances
• A loop control structure may be coded incorrectly
• Response times may be inadequate for a high-performance transaction processing system
• User experience (UX) feedback might not meet product expectations

And for the 2nd type of risks, Project risks


Project risk involves situations that, should they occur, may have a negative effect on
a project's ability to achieve its objectives.

Examples of project risks include:

• Project issues:
• Delays may occur in delivery, task completion, or satisfaction of exit criteria or definition of done
• Inaccurate estimates, reallocation of funds to higher priority projects, or general cost-cutting across the organization may result in inadequate funding
• Late changes may result in substantial re-work

• Organizational issues:
• Skills, training, and staff may not be sufficient
• Personnel issues may cause conflict and problems
• Users, business staff, or subject matter experts may not be available due to conflicting business priorities

• Political issues:
• Testers may not communicate their needs and/or the test results adequately
• Developers and/or testers may fail to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
• There may be an improper attitude toward, or expectations of, testing (e.g., not appreciating the value of finding defects during testing)

• Technical issues:
• Requirements may not be defined well enough
• The requirements may not be met, given existing constraints
• The test environment may not be ready on time
• Data conversion, migration planning, and their tool support may be late
• Weaknesses in the development process may impact the consistency or quality of project
work products such as design, code, configuration, test data, and test cases

• Poor defect management and similar problems may result in accumulated defects and other
technical debt
• Supplier issues:
• A third party may fail to deliver a necessary product or service, or go bankrupt
• Contractual issues may cause problems to the project

Project risks may affect both development activities and test activities.
In some cases, project managers are responsible for handling all project risks, but it is
not unusual for test managers to have responsibility for test-related project risks.

Risk Analysis

Now, the 2nd part of the definition of risk:
• The level of risk is determined by the likelihood of the event and the impact (the harm) from that event.

Level of risk = probability of the risk × impact if it happens

So for example, if we have 2 risks:

The first is a risk of having a UI issue. The probability of this risk happening is 4 (on a scale of 1 to 5, 1 being low and 5 being high), but the impact if this risk happens is very low, only 1 (using the same scale). Then the level of risk, or risk score, for the first risk is 4 × 1 = 4.

A second risk might be a miscalculation in one of the reports. The probability of such a risk is very low, say 2, but the impact of such a defect would be high, as the customer would really get upset if he saw such a defect, so the impact might be 3.
So the level of risk in this case is 2 × 3 = 6.

So the level of risk for the miscalculation is higher than that of the UI issue.
This means that if we have very limited time for testing, then we would concentrate our efforts on testing the report, to lower the probability of a miscalculation.

We prioritize risks in an attempt to find the potentially critical areas of the software as early as possible.
As I said, there are many ways to identify risks; any identified risk should be analyzed and classified for better risk management.
So now we have a long list of possible risks; we should calculate the risk level for each risk and sort the risks accordingly.

That’s how we will know where to focus our testing attention.
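As a quick illustration (a hypothetical sketch of my own, not part of the ISTQB syllabus), the scoring and sorting described above can be expressed in a few lines of code, using the two example risks from before:

```python
# Hypothetical sketch: score each risk as probability x impact
# (both on a 1-to-5 scale) and sort so the riskiest items come first.

risks = [
    {"name": "UI issue",              "probability": 4, "impact": 1},
    {"name": "Report miscalculation", "probability": 2, "impact": 3},
]

# Level of risk = probability of the risk x impact if it happens
for risk in risks:
    risk["level"] = risk["probability"] * risk["impact"]

# Highest risk level first: this is where testing effort goes first.
risks.sort(key=lambda r: r["level"], reverse=True)

for risk in risks:
    print(f'{risk["name"]}: level {risk["level"]}')
# Report miscalculation: level 6
# UI issue: level 4
```

The sorted list is exactly the prioritized testing order: the report miscalculation (level 6) gets tested before the UI issue (level 4).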

Risk-based Testing and Product Quality


As we have said, risks are used to decide where to start testing, where to test more, how to make some testing decisions, and when to stop testing.
Therefore, Testing is used as a risk mitigation activity, to provide feedback about identified
risks, as well as providing feedback on residual (unresolved) risks.
A risk-based approach to testing provides proactive opportunities to reduce the levels
of product risk.
Proactive means that we will not wait until the risk happens to deal with it; rather, we will be ready for it and may even get rid of it before it happens.
To summarize, risk-based testing involves product risk analysis, which includes the
identification of product risks and the assessment of each risk’s likelihood and impact.
The resulting product risk information is used to guide test planning, the specification,
preparation and execution of test cases, and test monitoring and control.
Analyzing product risks early contributes to the success of a project.

In a risk-based approach, the results of product risk analysis are used to:
• Determine the test techniques to be employed
• Determine the particular levels and types of testing to be performed (e.g., security testing, accessibility testing)
• Determine the extent of testing to be carried out
• Prioritize testing in an attempt to find the critical defects as early as possible
• Determine whether any activities in addition to testing could be employed to reduce risk (e.g., providing training to inexperienced designers)

Now that we have analyzed and prioritized our risks, what do we need to do to manage those risks, handle them, and lower their risk levels?

There are four ways we can handle or respond to risks:

1. Avoid: do whatever makes the risk level 0, meaning either making the probability 0 or making the impact 0. Let’s imagine a risk: you have heard rumors that one of your team members, let’s call him Jack, might move to another company. To avoid such a risk, you would not assign Jack to your project in the first place and would get someone else.
2. Mitigate: mitigating means that you’ll lower the risk level, and you can achieve this by either lowering the probability or lowering the impact of the risk. Well, what should we do with Jack? You can lower the probability of him moving by giving him a promotion or a salary increase, or you can lower the impact by giving him minor tasks to work on.
3. Transfer: moving the risk from your side to another side. You might ask Jack’s manager to assure you that if Jack leaves the company, then he’d be responsible for finding you another resource with the same qualifications, or you might outsource the whole job to another company.
4. Accept: you can accept passively by simply waiting for the risk to happen and seeing what to do then, or you can accept it actively by putting in place a plan to be executed in case Jack leaves, like planning a 2-week handover from Jack to a new resource. This is called a contingency plan.

Wow, that was a tough risk management course in 5 minutes or so. Hope you liked it!

It’s clear that any project risk will later affect the product itself, so the objective
of all of our risk management efforts is to minimize product risk.

INDEPENDENT TESTING

At the beginning of this course we said that testing tasks can be done by anyone; they may be done by people in a specific testing role or by people in another role, for example customers.

The relationship between the tester and the test object has an effect on the testing itself. By the relationship we mean how psychologically attached the tester is to what he is testing; it represents how dependent or independent the tester is from the test object. A certain degree of independence often makes the tester more effective at finding defects, due to the differences between the author’s and the tester’s mental biases. We talked about mental biases when we discussed the psychology of testing in the first section of this course. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code.

In this lecture we will elaborate on how independence affects the test management of the software. The approaches to organizing a test team vary from one company to another and from one project to another. What we are trying to achieve here is to understand that testing independence should be taken into consideration when organizing a test team.

Degrees of independence in testing include the following, from a low level of independence to a high level:

• The developer, with low independence, who tests his own code. By the way, notice that low independence is equivalent to high dependence, so please take care when they mention dependence or independence.
• A little higher in independence: testers from within the development team, which could be developers testing their colleagues’ products.
• The independent test team inside the organization, reporting to project management or executive management.
• Independent testers from the business organization or user community, or with specializations in specific test types such as usability, security, performance, regulatory compliance, or portability.
• On the very other side, with very high independence: independent testers external to the organization, a third party or a contractor, working either on site (insourcing) or off site (outsourcing).

Independent testing is surely a good thing, but that doesn’t mean we should only consider highly independent testers. So let’s look at each type of tester from the independence point of view and see the pros and cons of adding this type of tester to the testing team.

First, the developer, the author of the code: should we allow him to test his own code even though he is highly dependent on it? The pros of using the developer for testing: he knows the code best, will find problems that the testers will miss, and can find and fix faults cheaply. The cons: it is difficult to destroy your own work (it’s his own baby after all), a tendency to see expected results rather than actual results, and subjective assessment.

Next, let’s consider a tester from the development team other than the developer. The pros: an independent view of the software (more independent than the developer), dedicated to testing (not coding and testing at the same time), and part of the team working toward the same goal, which is quality. The cons: lack of respect (he is a buddy), a lonely thankless task (he is the only tester on the project), corruptible (peer pressure), and a single view or opinion (again, he is the only tester aboard).

Then comes the independent test team inside the organization. The pros: a dedicated team just to do testing, specialist testing expertise, and testing that is more objective and more consistent. The cons: the “over the wall” syndrome, where there’s a wall between the developers and the testers (our department and your department), and there could be some political issues as well; it may be confrontational; and there is a risk of over-reliance on testers, where developers become lazy about testing, depending on the testers to do the job for them.

What about specialized testers, either from the user community or with a specialization in a specific test type (security, performance, and so on)? Sure, they are the top specialists in their field, but they need good people skills, and communication with developers could be very tough.

Last, with the highest independence (and lowest dependence) comes the third-party organization, where we outsource the testing of the software to another organization. The pros: highly specialist testing expertise (if outsourcing to a good organization, of course) and independence from internal politics. The cons: lack of product knowledge (they don’t know what they are testing and are not from the same industry), expertise gained goes outside the company, it could be expensive (actually, it is expensive), and confidential information could leak from inside the organization to the third party.

Therefore, the idea is to get as much as possible of the pros of independent testing and avoid as much as you can of the cons. For most types of projects, especially complex or safety-critical ones, it’s usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, so as to exercise control over the quality of their own work. We should consider asking the users to help with the testing, and also consider asking testing subject matter experts to test the critical parts of the application or software if needed, and so on.

In addition, the way in which independence of testing is implemented varies depending on the software development lifecycle. For example, in Agile development, testers may be part of a development team. In some organizations, usually those using Agile methods, these testers may be considered part of a larger independent test team as well. Also, in such organizations, product owners may perform acceptance testing to validate user stories at the end of each iteration.

To summarize, potential benefits of test independence include:
• Independent testers are likely to recognize different kinds of failures compared to developers, because of their different backgrounds, technical perspectives, and biases.
• An independent tester can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system. For example, if a developer assumed that a value should be in a specific range, the tester will verify this assumption and will not take it for granted.

Potential drawbacks of test independence include:
• Isolation from the development team (the more independence, the more isolation), leading to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with the development team.
• Developers may lose a sense of responsibility for quality. Many times I have heard developers say that they should not test their own code because it is the testers’ responsibility, which of course is not right at all (I’m saying that in the nicest possible way).
• Independent testers may be seen as a bottleneck or blamed for delays in release.
• Independent testers may lack some important information about the test object.

Many organizations are able to successfully achieve the benefits of test independence while avoiding the drawbacks, so let’s all hope we can do the same.

Tasks of a Test Manager and Tester

Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

The ISTQB curriculum talks in detail about only 2 roles: the test manager and the tester, though the same people may play both roles at various points during the project.

The activities and tasks performed by these two roles depend on the project and product

context, the skills of the people in the roles, and the organization.

Let's take a look at the work done in these roles, starting with the test manager.

The test manager is tasked with overall responsibility for the test process and
successful leadership

of the test activities.

Typical test manager tasks may include:


1. Test strategy: Write or review a test strategy for the project, and a test policy for the organization, if not already in place
2. Plan: Plan the test activities, considering the context and understanding the test objectives and risks, including selecting test approaches; estimating the time, effort, and cost of testing; acquiring resources; defining test levels and test cycles; planning defect management; and creating a high-level test schedule (notice I said high level)
3. Write and update the test plan(s)

4. Coordinate: Coordinate the test strategy and test plans with project managers, product owners, and others
5. Share the testing perspective with other project activities, such as integration planning
6. Initiate the analysis, design, implementation, and execution of tests, monitor test progress and results, and check the status of exit criteria (or definition of done)
7. Test reports: Prepare and deliver test progress reports and test summary reports based on the information gathered during testing
8. Adapt: Adapt planning based on test results and progress (sometimes documented in test progress reports, and/or in test summary reports for other testing already completed on the project) and take any actions necessary for test control
9. Support setting up the defect management system and adequate configuration management of testware (we will talk more about defect management and configuration management in future videos in this section)
10. Metrics: Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
11. Tools: Support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tool(s)
12. Test environment: Make sure the test environment is put into place before test execution and managed during test execution
13. Promote and advocate for the testers, the test team, and the test profession within the organization
14. Develop the skills and careers of testers (e.g., through training plans, performance evaluations, coaching, etc.)

The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager.

In larger projects or organizations, several test teams may report to a test manager, test

coach, or test coordinator, each team being headed by a test leader or lead tester.

The way in which the test manager role is carried out varies depending on the software

development lifecycle.

For example, in Agile development, some of the tasks mentioned above are handled by the whole Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team.

Some of the tasks that span multiple teams or the entire organization, or that have to do with personnel management, may be done by test managers outside of the development team, who are sometimes called test coaches.

Typical tester tasks may include:

1. Review and contribute to test plans
2. Testability: Analyze, review, and assess user requirements, specifications, and models for testability
3. Identify and document test conditions, and capture traceability between test cases, test conditions, and the test basis
4. Design, set up, and verify test environment(s), setting up the hardware and software needed for testing, often coordinating with system administration and network management
5. Design and implement test cases and test procedures
6. Test data: Prepare and acquire test data
7. Create the detailed test execution schedule (yes, we leave it up to the testers to create their own detailed test schedule around the high-level schedule created by the test manager)
8. Execute tests, evaluate the results, and document deviations from expected results
9. Tools: Use appropriate tools to facilitate the test process
10. Automate tests (may be supported by a developer or a test automation expert)
11. Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility, and portability
12. Review: Review and contribute to test plans, and review tests developed by others

Again, depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels.

For example, at the component testing level and the component integration testing level, the role of a tester is often done by developers.

At the acceptance test level, the role of a tester is often done by business analysts,

subject matter experts, and users.

At the system test level and the system integration test level, the role of a tester is often

done by an independent test team.

At the operational acceptance test level, the role of a tester is often done by operations

and/or systems administration staff.

People who work on test analysis, test design, specific test types, or test automation may

be specialists in these roles.

The questions in this part are usually to differentiate between the tasks of a tester and a test manager.
As you may have noticed, the test manager’s tasks are related to how to do things, while the tester’s tasks are the actual hands-on doing of the things.

5.2 Test Planning and Estimation

5.2.2 Test Strategy and Test Approach

A test strategy provides a generalized description of the test process, usually at the
product

or organizational level.

The test strategy describes, at a high level, independent of any specific project, the "how"

of testing for an organization.

The choice of the test strategy is one of the most powerful factors, if not the most powerful factor, in the success of the test effort and the accuracy of the test plans and estimates.

Let’s look at the major types of test strategies that are commonly found.

• Analytical: In analytical test approaches, our testing is based on an analysis of some factor, usually during the requirements and design stages of the project, to decide where to test first, where to test more, and when to stop testing.

For example, the risk-based strategy involves performing a risk analysis using project
documents

and stakeholder input.

Risk-based testing gives higher attention to areas of highest risk.

Another analytical test strategy is the requirements-based strategy, where an analysis of


the requirements

specification forms the basis for planning, estimating and designing tests.

• Model-based: in model-based approach, we create, design or benchmark some formal

or informal model that our system must follow.

The model will be based on some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic (e.g., reliability). For example, our software’s response time should be faster than that of the competitor’s.

We will keep on testing our software until the behavior of the system under test
conforms

to that predicted by the model.

Examples of such models include business process models, state models, and reliability
growth models.

• Methodical: This type of test strategy relies on making systematic use of some
predefined

set of tests or test conditions, such as a taxonomy of common or likely types of failures,

a list of important quality characteristics, or company-wide look-and-feel standards for

mobile apps or web pages.

Examples of Methodical approaches are failure-based (including error guessing and fault
attacks),

checklist based and quality-characteristic based.

• Process- or standard-compliant: in this approach, you might adopt an industry


standard

or a known process to test the system.

For example, you might adopt the IEEE 29119 standard for your testing.

Alternatively, you might adopt one of the agile methodologies such as Extreme
Programming.

Process- or standard-compliant strategies have in common reliance upon external rules

or standards.

• Reactive: In this type of test strategy, testing is reactive to the component or system

being tested, and the events occurring during test execution, rather than being pre-planned (as the preceding strategies are).

Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results.

Exploratory testing is a common technique employed in reactive strategies.

• Consultative or directed: This type of test strategy is driven primarily by the advice,

guidance, or instructions of stakeholders, business domain experts, or technology


experts,

who may be outside the test team or outside the organization itself. For example, you might ask the users or developers about the system to tell you what to test, or even rely on them to do the testing.

• Regression-averse: This type of test strategy is motivated by a desire to avoid


regression

of existing capabilities.

Regression-averse test strategies include reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites, so that whenever anything changes, you can re-run every test to ensure nothing has broken.

So which approach is the best?

Again, There is no one best approach.

An appropriate test strategy is often created by combining several of these types of test

strategies according to your situation.

For example, risk-based testing (an analytical strategy) can be combined with exploratory

testing (a reactive strategy); they complement each other and may achieve more
effective

testing when used together.

While the test strategy provides a generalized description of the test process, The test

approach is the implementation of the test strategy for a specific project or release.

So, when you assess the risks of your project, as we mentioned in a previous lecture, and refine your project objectives and your project test objectives, you can then decide on the test approach you will take to test your project, based on your organization’s test strategy.

The test approach will be reflected in your decisions in planning the testing effort.

It is the starting point for planning the test process, for selecting the test techniques,

test levels and test types to be applied, and for defining the entry and exit criteria

(or the definition of done).

The tailoring of the strategy is based on decisions made in relation to the complexity

and goals of the project, the type of product being developed, and product risk analysis.

Questions on this topic in the ISTQB exam usually describe a situation and ask you which approach is best to use.

For example, if you are using agile or waterfall.

Then what’s the approach you are using?

Process- or standard-compliant

If you are asking the user which areas to test.

Then what’s the approach you are using?

Directed

If you are using test cases from an old version of the software.

Then what’s the approach you are using?

Regression-averse, and so on.

How do you know which strategies to pick or blend for the best chance of success?

There are many factors to consider, but let us highlight a few of the most important:

• Risks: Testing is about risk management, so consider the risks and the level of risk.

For a well-established application that is evolving slowly, regression is an important

risk, so regression-averse strategies make sense.

For a new application, a risk analysis may reveal different risks if you pick a risk-based

analytical strategy.

• Available resources and skills: you have to consider which skills your testers possess and which they lack.

A standard-compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach.

• Objectives: Testing must satisfy the needs of stakeholders to be successful.

If the objective is to find as many defects as possible with a minimal amount of up-front

time and effort invested then a reactive strategy makes sense.

• Regulations: Sometimes you must satisfy not only stakeholders, but also some
regulations.

In this case, a methodical test strategy may satisfy those regulations.

• Product: The nature of the product and the business, e.g. a different approach is

required for testing mobile phone coverage than for testing an online banking
operation.

• Safety: safety considerations promote more formal strategies

• Technology
