
Created by: Farah Gul

Lecture # 7
Objective

 Best Practices in Industry for Quality Software

 Test Policy

 Test Strategies
 Analytical strategies, such as risk-based testing
 Model-based strategies
 Methodical strategies
 Process- or standard-compliant strategies
 Dynamic or heuristic strategies
 Consultative strategies
 Regression testing strategies
Best Practices in Industry for Quality Software

 Involve the test team from the beginning of the project.

 A requirements walkthrough should be conducted by the requirements collector, e.g. the BA (Business Analyst), Product Owner, or Scrum Master. Developers, testers, and customer support team members should be part of this review.

 Requirements should be clear, correct, and unambiguous. Acceptance criteria (approved by the customer) should be defined.

 Team size should be small (2-4 members): developer + tester, etc.
Best Practices in Industry for Quality Software (continued)

 Requirements/user stories should be kept in a centralized repository system (Jira, MS Test Manager, etc.).

 Test scenarios should be identified by the test team at requirements time and reviewed by an SME (Subject Matter Expert).

 Code should be checked in only after review by a peer developer (senior developer/development lead).

 Unit tests (manual/automated) should be run by the developer before transferring user stories to the QA team for testing.
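The developer-side unit-testing step above can be sketched in code. This is a hedged illustration only: the `apply_discount` function and its rules are hypothetical, not from the lecture, but the pattern (normal case, boundary cases, invalid input) is what a developer would cover before handing a story to QA.

```python
# Hypothetical example: a small function and the unit tests a developer
# would run before handing the user story over to QA.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject invalid inputs."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percent")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0   # normal case
    assert apply_discount(100.0, 0) == 100.0   # boundary: no discount
    assert apply_discount(100.0, 100) == 0.0   # boundary: full discount
    try:
        apply_discount(-1.0, 10)               # invalid input must fail
        assert False, "expected ValueError"
    except ValueError:
        pass

test_apply_discount()
print("all unit tests passed")
```

In practice these assertions would live in a test framework such as pytest or JUnit and run automatically on every check-in.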

 A smoke test (manual + automated) should be conducted whenever a new build is deployed to the QA environment.
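A minimal sketch of an automated smoke suite for that deployment step. The individual checks here are hypothetical stand-ins (real ones would hit the deployed QA environment's login page, database, etc.); the point is the pattern of running a short list of critical checks and gating the build on the result.

```python
# Hypothetical smoke-test runner: executes a short list of critical checks
# against a freshly deployed build and reports an overall pass/fail.

def check_login_page():      # stand-in checks; real versions would call
    return True              # the deployed QA environment

def check_database_ping():
    return True

def check_homepage_loads():
    return True

SMOKE_CHECKS = [
    ("login page reachable", check_login_page),
    ("database reachable", check_database_ping),
    ("homepage loads", check_homepage_loads),
]

def run_smoke_suite(checks):
    """Run every check; return (passed, failures) so CI can gate the build."""
    failures = [name for name, check in checks if not check()]
    return (len(failures) == 0, failures)

ok, failures = run_smoke_suite(SMOKE_CHECKS)
print("SMOKE PASSED" if ok else f"SMOKE FAILED: {failures}")
```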
Best Practices in Industry for Quality Software (continued)

 The QA environment (DB/application) should be separate from the development environment.

 There should be no conflict between developers and testers.

 Use a static analysis tool, e.g. Sonar, to catch syntax errors and code-quality issues early.

 Training should be conducted to enhance the knowledge of developers and testers, especially for new technologies, so they can deliver better-quality software.
Best Practices in Industry for Quality Software (continued)

 Code should be stored in a centralized repository, e.g. Git, GitHub, MS Team Server.

 Test cases and bugs should be managed in a centralized repository system, e.g. Jira, MS Test Manager, HP ALM, Mantis, etc.

 The bug backlog should be periodically reviewed by the development manager and the product manager.

 More authority should be given to the team for new, innovative work.

 A QA-passed build should be approved by all stakeholders.
Test Policy

 The test policy is the document that describes the organization's philosophy, objectives, and key metrics for testing, and possibly for quality assurance as well.

 It answers the question: why is testing required for the organization?
Test Strategies/Approach (How will testing be done within the organization?)

 The test strategy describes, at a high level and independent of any specific project, the "how" of testing for an organization.

 The test strategy is the organization's set of methods for testing. The test strategy document can cover the options the organization can use to manage product and project risks during testing. For example: What are the typical product risks that we try to address with testing, and how do we do so? How do those product risks relate to the high-level test objectives? What are the typical project risks that can affect testing, and how do we manage those risks?
Types of Test Strategies

 Analytical strategies, such as risk-based testing

 Model-based strategies

 Methodical strategies

 Process- or standard-compliant strategies

 Dynamic or heuristic strategies

 Consultative strategies

 Regression testing strategies


Analytical Strategies (requirement/risk-based testing)

 In analytical strategies, such as risk-based testing, the test team analyzes a test basis to identify the test conditions.

 The results of the analysis form the basis of the test effort.
Analytical test strategies tend to be thorough, excellent for
quality risk management, and good at finding bugs.

 Analytical test strategies are primarily preventive and require significant early-project time and effort.
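The core of risk-based testing can be sketched as a small computation. The risk items and the 1-5 scoring scale below are invented for illustration: each test condition is scored by likelihood × impact, and test effort is spent from the top of the ranking down.

```python
# Hypothetical risk-based prioritization: score each test condition by
# likelihood x impact, then test the riskiest conditions first.

test_conditions = [
    # (condition, likelihood 1-5, impact 1-5)
    ("payment processing", 4, 5),
    ("user profile editing", 2, 2),
    ("login/authentication", 3, 5),
    ("report export", 3, 2),
]

def prioritize(conditions):
    """Return conditions sorted by risk score (likelihood * impact), highest first."""
    return sorted(conditions, key=lambda c: c[1] * c[2], reverse=True)

for name, likelihood, impact in prioritize(test_conditions):
    print(f"risk={likelihood * impact:2d}  {name}")
```

Here payment processing (risk 20) would be tested first and most deeply, user profile editing (risk 4) last and most lightly.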
Model-based Strategies (a model of system behavior drives tests of the real system)

 In model-based strategies, such as those relying on operational profiles, you design, implement, and execute your tests based on models of system behavior.

 If the models capture the system's essential elements, your tests will be good. Model-based strategies therefore rely on the tester's ability to develop good models. These strategies fail when the models, or the testers building them, fail to capture the essential or risky aspects of the system.
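A minimal sketch of the idea, with a hypothetical login state machine as the model: the system is modeled as states and transitions, test cases are generated by enumerating the model's transitions, and each one is checked against the system under test.

```python
# Hypothetical model-based testing sketch: a tiny state machine models a
# login flow; walking the model yields one test per modeled transition.

MODEL = {
    # state: {event: next_state}
    "logged_out": {"enter_valid_creds": "logged_in",
                   "enter_bad_creds": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def generate_transition_tests(model):
    """One test per modeled transition: (start_state, event, expected_state)."""
    return [(state, event, nxt)
            for state, events in model.items()
            for event, nxt in events.items()]

def system_under_test(state, event):
    # Stand-in for the real system; here it simply agrees with the model.
    return MODEL[state][event]

for start, event, expected in generate_transition_tests(MODEL):
    actual = system_under_test(start, event)
    assert actual == expected, f"{start} --{event}--> {actual}, expected {expected}"
print("all modeled transitions behave as expected")
```

Note how the strategy's weakness shows up directly: any behavior missing from `MODEL` (say, account lockout after repeated bad logins) is simply never tested.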
Methodical Strategies (checklists of quality characteristics from a previous system, e.g. a bank)

 In methodical strategies, such as those that follow checklists of quality characteristics, you follow some standard catalog of test objectives.

 Methodical strategies are lightweight and effective for stable systems and for systems that are similar to ones you've tested previously.

 However, significant changes will make these strategies ineffective unless the catalog of test objectives is adjusted accordingly.
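A hedged sketch of such a catalog. The entries below are illustrative, loosely echoing ISO/IEC 25010 quality characteristics: a methodical strategy simply walks the fixed catalog and derives a concrete test objective for each item, which is why it is cheap for stable systems but blind to anything the catalog omits.

```python
# Hypothetical methodical-strategy sketch: a standard catalog of quality
# characteristics, each expanded into concrete test objectives.

QUALITY_CATALOG = {
    "functional suitability": ["every advertised feature works"],
    "performance efficiency": ["key pages respond within the agreed time"],
    "security": ["authentication and authorization are enforced"],
    "usability": ["core workflows are completable without help"],
}

def derive_test_objectives(catalog):
    """Flatten the catalog into a checklist of (characteristic, objective) pairs."""
    return [(char, obj) for char, objs in catalog.items() for obj in objs]

for characteristic, objective in derive_test_objectives(QUALITY_CATALOG):
    print(f"[{characteristic}] verify: {objective}")
```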
Process- or Standard-compliant Strategies (e.g. IEEE 829)
 In process- or standard-compliant strategies, such as those based on IEEE 829, you follow a process or standard promulgated by a standards committee or a group of experts. These strategies can save you the time and energy of devising your own methodologies. However, if your overall testing mission differs from the ones the creators of the standard had in mind, or if the test problems you are struggling with are not the ones the creators of the standard have solved, then you'll find that this borrowed approach fits about as well as a borrowed tuxedo.

 Example: verification of systems and software in compliance with rigorous, formal standards, including DO-178C, DO-254, and DO-278B.
Dynamic or heuristic strategies

 In dynamic or heuristic strategies, such as those based on bug taxonomies or software attacks, you use general rules about how software fails; lists of important software areas, features, and behaviors; or lists of common software data structures and operations to make educated guesses that focus the testing.

 These strategies minimize structure, maximize flexibility, and typically focus on finding bugs rather than building confidence or reducing risk. The lack of structure and documentation means they provide little coverage information and do not systematically reduce quality risks or incorporate preventive elements.

 These weaknesses aside, a purely dynamic strategy is still better than not testing at all. Better yet, you can blend dynamic strategies with analytical strategies, providing a good way to check for gaps in the analytically designed tests.
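A hedged sketch of this style of testing. The input list below is illustrative, in the spirit of a bug taxonomy or "software attack" catalog: a standard set of historically bug-prone inputs (empty, whitespace, very long, injection-shaped) is thrown at whatever field is under test, and any input the validator accepts is a potential bug.

```python
# Hypothetical heuristic-testing sketch: a catalog of inputs that commonly
# expose bugs, applied to a toy field validator as educated-guess test cases.

ATTACK_INPUTS = [
    "",                   # empty string
    " ",                  # whitespace only
    "a" * 10_000,         # very long input
    "0", "-1",            # numeric edge values as text
    "'; DROP TABLE x;--", # SQL-injection-shaped input
    "<script>alert(1)</script>",  # HTML/script-shaped input
]

def validate_username(value):
    """Toy validator under test: 3-20 visible characters, alphanumeric only."""
    v = value.strip()
    return 3 <= len(v) <= 20 and v.isalnum()

# Every attack input should be rejected by this validator.
survivors = [s for s in ATTACK_INPUTS if validate_username(s)]
print("bugs found!" if survivors else "validator rejected all attack inputs")
```

Note what the slide warns about: this tells you nothing systematic about coverage; it only reports which educated guesses happened to get through.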
Consultative Strategies (e.g. crowdsourced testing)
 In consultative strategies, such as those where users or
programmers determine what is to be tested, the test manager
trusts that some other group of people knows best what should be
tested. The test manager asks them for the test conditions to
cover and covers those conditions. The other group also acts as
the primary test oracle, in that we'll ask them to determine the
expected results. This strategy can be appropriate when the test
team is brought in primarily to serve as the hands and eyes of
another group, such as with some types of outsourced testing, but
generally, it is a poor substitute for professional testing.
Regression testing strategies

 In regression testing strategies, such as those that rely on extensive functional test automation, we focus on test repetition and smart test selection to try to minimize the risk of breaking something that already works. For stable systems that are changing only slowly, these strategies can make sense. However, there is always the risk that new features will be poorly tested and, should the rate of change accelerate, the test coverage for recently added features will be left behind. In addition, even given a good approach to test automation, a rapidly growing automated regression test set can overwhelm the test team's ability to run and analyze all the results.
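The "smart test selection" idea above can be sketched as follows. The module-to-test mapping is invented for illustration: given which modules changed in a build, only the automated regression tests covering those modules are selected, plus an always-run smoke set as a safety net, which keeps the growing suite from overwhelming the team.

```python
# Hypothetical regression test selection: map modules to the automated tests
# that cover them, then pick only tests affected by the current change set.

COVERAGE_MAP = {
    "payments": ["test_checkout", "test_refund"],
    "auth":     ["test_login", "test_password_reset"],
    "reports":  ["test_monthly_report"],
}
ALWAYS_RUN = ["test_smoke_suite"]  # safety net, runs on every build

def select_regression_tests(changed_modules):
    """Union of tests covering the changed modules plus the always-run set."""
    selected = set(ALWAYS_RUN)
    for module in changed_modules:
        selected.update(COVERAGE_MAP.get(module, []))
    return sorted(selected)

print(select_regression_tests(["auth"]))
# → ['test_login', 'test_password_reset', 'test_smoke_suite']
```

Real tools derive the coverage map from instrumentation or version-control history rather than maintaining it by hand, but the selection logic is the same.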
