
7.2 DESIGN CONCEPTS
Fundamental software design concepts provide the necessary framework for “getting it right.”
Abstraction –
Many levels of abstraction can be posed.
The highest level states the solution in broad terms using the language of the problem environment.
Lower levels give a more detailed description of the solution.

Procedural Abstraction refers to a sequence of instructions that has a specific and limited
function.
Data Abstraction is a named collection of data that describes a data object.
Architecture –
provides overall structure of the software and the ways in which the structure provides conceptual
integrity for a system. The goal of software design is to derive an architectural rendering of a system
which serves as a framework from which more detailed design activities can be conducted.
The Architectural design is represented by the following models
 Structural Model – Organized collection of program components.
 Framework Model – Increases level of design abstraction by identifying repeatable architectural
design frameworks that are encountered in similar types of applications.
 Dynamic Model – Addresses the behavioral aspects of the program architecture.
 Process Model – Focuses on design of business or technical process that system must
accommodate.
 Functional Model – Used to represent functional hierarchy of the system.

Patterns –
A pattern is a named nugget of insight which conveys the essence of a proven solution to a
recurring problem within a certain context amidst competing concerns. The design pattern provides
a description that enables a designer to determine
• Whether the pattern is applicable to the current work,
• Whether the pattern can be reused, and
• Whether the pattern can serve as a guide for developing a similar but functionally or structurally
different pattern.
Modularity –
the software is divided into separately named and addressable components called modules that are
integrated to satisfy problem requirements.

Information Hiding –
modules should be specified and designed so that information contained within a module is
inaccessible to other modules that have no need for such information. Information hiding
provides greatest benefits when modifications are required during testing and software
maintenance.
Functional Independence – is achieved by developing modules with single-minded function and
an aversion to excessive interaction with other modules. Independence is assessed using two
qualitative criteria:
Cohesion – is a natural extension of the information hiding concept; a cohesive module performs a
single task, requiring little interaction with other components in other parts of a program.
Coupling – is a measure of interconnection among modules in a software structure; it depends on
the interface complexity between modules, the point at which entry or reference is made to
a module, and what data pass across the interface.
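As a minimal sketch of these two criteria (the module and function names are hypothetical), a cohesive function performs one single-minded task and keeps coupling low by passing only simple data across its interface:

```python
# Hypothetical order-pricing module: one single-minded task (cohesion),
# with only simple data crossing the interface (low coupling).

def order_total(prices, tax_rate):
    """Return the order total including tax.

    Coupling stays low: callers pass a list of numbers and a rate;
    no global state or complex shared structures cross the interface.
    """
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)
```

For example, order_total([10.0, 5.0], 0.10) yields 16.5; a caller needs to know nothing about how the total is computed internally.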

Refinement –
is a process of elaboration. Refinement causes the designer to elaborate on the original statement, providing
more and more detail as each successive refinement occurs.

Refactoring –
important design activity suggested for agile methods, refactoring is a reorganization technique
that simplifies the design of a component without changing its function or behavior. “Refactoring is the
process of changing a software system in such a way that it does not alter the external behavior of
the code yet improves its internal structure” (Fowler).

Design Classes –
describe some element of the problem domain, focusing on aspects of the problem that are user or
customer visible. The software team must define a set of design classes, which fall into the following types:
▪ User Interface classes define all abstractions that are necessary for Human
Computer Interaction (HCI). In many cases, HCI occurs within the context of a metaphor
(e.g., a checkbook, an order form, a fax machine) and the design classes for the interface
may be visual representations of the elements of the metaphor.
▪ Business domain classes are often refinements of the analysis classes defined earlier. The
classes identify the attributes and services (methods) that are required to implement some
element of the business domain.
▪ Process classes implement lower-level business abstractions required to fully manage
the business domain classes.
▪ Persistent classes represent data stores (e.g., a database) that will persist beyond the
execution of the software.
▪ System classes implement software management and control functions that enable the
system to operate and communicate within its computing environment and with the outside
world.

The Design Model

Introduction to Design Modeling in Software Engineering

Design modeling in software engineering represents the features of the software that help engineers
develop it effectively: the architecture, the user interface, and the component-level detail. Design
modeling provides a variety of different views of the system, much like the architecture plan for a home or
building. Different methods, such as data-driven, pattern-driven, or object-oriented methods, are used for
constructing the design model. All these methods use a set of design principles for designing a model.
Working of Design Modeling in Software Engineering

Designing a model is an important phase and is a multistep process that represents the data structure,
program structure, interface characteristics, and procedural details. It is mainly classified into four
categories: data design, architectural design, interface design, and component-level design.
 Data design: It represents the data objects and their interrelationships in an entity-relationship
diagram. The entity-relationship diagram contains the information required for each entity or data object and
shows the relationships between these objects. It shows the structure of the data in terms of
tables, and it captures three types of relationship: one-to-one, one-to-many, and many-to-many. In a one-to-one
relation, one entity is connected to exactly one other entity. In a one-to-many relation, one entity is connected
to more than one entity. In a many-to-many relation, entities on each side can be connected to many
entities on the other side.
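The one-to-many case above can be sketched in code; the Customer and Order entities below are hypothetical examples, not part of any particular system:

```python
from dataclasses import dataclass, field

# Hypothetical entities illustrating relationship cardinality.
@dataclass
class Order:
    order_id: int

@dataclass
class Customer:
    name: str
    # One-to-many: one customer holds a list of many orders.
    orders: list = field(default_factory=list)

alice = Customer("Alice")
alice.orders.append(Order(1))
alice.orders.append(Order(2))
# A many-to-many relation (e.g. orders to products) is typically
# realized in tables with an association table linking the two entities.
```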
 Architectural design: It defines the relationship between major structural elements of the
software. It is about decomposing the system into interacting components. It is expressed as a block
diagram defining an overview of the system structure – features of the components and how these
components communicate with each other to share data. It defines the structure and properties of the
component that are involved in the system and also the inter-relationship among these components.
 User Interface design: It represents how the software communicates with the user, i.e. the
behavior of the system. It refers to the part of the product through which the user interacts with the
controls or displays of the product. For example, military vehicles, aircraft, audio equipment, and
computer peripherals are areas where user interface design is implemented. UI design becomes
efficient only after performing usability testing, which is done to test what works and what does
not work as expected. Only after making the repairs is the product said to have an optimized interface.
 Component level design: It transforms the structural elements of the software architecture into a
procedural description of software components. It is a perfect way to share a large amount of data.
Components need not be concerned with how data is managed at a centralized level i.e. components
need not worry about issues like backup and security of the data.
Principles of Design Model
 Design must be traceable to the analysis model:
The analysis model represents the information, functions, and behavior of the system. The design model
translates all these into an architecture: a set of subsystems that implement major functions and
a set of component-level designs that are the realization of analysis classes. This implies that the design
model must be traceable to the analysis model.
 Always consider architecture of the system to be built:
Software architecture is the skeleton of the system to be built. It affects interfaces, data structures,
behavior, program control flow, the manner in which testing is conducted, maintainability of the
resultant system, and much more.
 Focus on the design of the data:
Data design encompasses the manner in which the data objects are realized within the design. It helps
to simplify the program flow, makes the design and implementation of the software components
easier, and makes overall processing more efficient.
 User interfaces should consider the user first:
The user interface is the face of any software. No matter how good its internal functions are
or how well designed its architecture is, if the user interface is poor and end users don’t find
the software easy to handle, it leads to the opinion that the software is bad.

A Strategic approach to software testing

• Testing is a set of activities that can be planned in advance and conducted
systematically. For this reason a template for software testing—a set of steps into
which you can place specific test case design techniques and testing methods—
should be defined for the software process.

• A number of software testing strategies have been proposed in the literature.

• All provide you with a template for testing, and all have the following generic
characteristics:

• To perform effective testing, you should conduct effective technical reviews. By
doing this, many errors will be eliminated before testing commences.

• Testing begins at the component level and works “outward” toward the integration
of the entire computer-based system.

• Different testing techniques are appropriate for different software engineering
approaches and at different points in time.

• Testing is conducted by the developer of the software and (for large projects) an
independent test group.

• Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
Strategic issues

Strategic issues facing companies are the important questions, topics, and challenges
that firms must manage to stay competitive and relevant in the ever-changing business
environment. Strategic issues come in all colors, shapes, and sizes. Understanding and
managing your strategic issues is key to achieving long-term, sustained growth. Some
leaders only think of issues as adverse or harmful events or problems that must be
addressed. However, issues can also include opportunities that the enterprise should
consider in order to achieve its strategic goals and objectives.

Internal Issues
• Supply-chain disruptions
• Product lifecycle
• Workforce and talent
• Product or service offerings
• Target customers
• Internal operating systems

• New innovations in products and processes


External Issues
• Globalization trends
• Political and regulatory changes
• Economic conditions
• Industry consolidations
• Technology disruptions
• Competitor behaviours and rivalry
• Social trends
• Environmental patterns
• Population health and welfare
Periodically, executives need to refresh their perspective and assumptions about all of
the variables that could impact their business. When examining relevant issues, we
advise our clients to consider three timeframes:

1. Near-term issues that could affect the organization this year or next.
2. Mid-term issues that could affect the success of the enterprise over the next three to
five years.
3. Long-term issues that could affect the fortune of the business five years from now or
beyond.
Test strategies for conventional software

1 Unit Testing

The unit test focuses on the internal processing logic and data structures within the
boundaries of a component. This type of testing can be conducted in parallel for
multiple components.

Unit-test considerations:-

1. The module interface is tested to ensure that information properly flows into and out of the module.

2. Local data structures are examined to ensure that data stored temporarily maintains its integrity during execution.

3. All independent paths are exercised to ensure that all statements in a module have
been executed at least once.

4. Boundary conditions are tested to ensure that the module operates properly at
boundaries. Software often fails at its boundaries.

5. All error-handling paths are tested.

If data do not enter and exit properly, all other tests are moot. Among the
potential errors that should be tested when error handling is evaluated are:

(1) Error description is unintelligible,

(2) Error noted does not correspond to error encountered,

(3) Error condition causes system intervention prior to error handling,

(4) exception-condition processing is incorrect,

(5) Error description does not provide enough information to assist in the location of the
cause of the error.
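These considerations can be sketched with Python's unittest module; the divide component and its error message below are hypothetical stand-ins for a real module:

```python
import unittest

def divide(a, b):
    # Hypothetical component under test.
    if b == 0:
        raise ZeroDivisionError("cannot divide %r by zero" % a)
    return a / b

class DivideUnitTest(unittest.TestCase):
    def test_interface(self):
        # 1. Information flows properly into and out of the module.
        self.assertEqual(divide(10, 2), 5)

    def test_boundary(self):
        # 4. Software often fails at its boundaries: exercise b near zero.
        self.assertAlmostEqual(divide(1, 0.001), 1000.0)

    def test_error_handling(self):
        # 5. The error description must be intelligible and correspond
        # to the error actually encountered.
        with self.assertRaises(ZeroDivisionError) as ctx:
            divide(1, 0)
        self.assertIn("zero", str(ctx.exception))
```

Running `python -m unittest` against a file containing this code exercises the interface, a boundary condition, and an error-handling path.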

Unit-test procedures:-

The design of unit tests can occur before coding begins or after source code has been
generated. Because a component is not a stand-alone program, driver and/or stub
software must often be developed for each unit test.

A driver is nothing more than a “main program” that accepts test case data, passes such
data to the component (to be tested), and prints relevant results. Stubs serve to replace
modules that are subordinate to (invoked by) the component to be tested.

A stub may do minimal data manipulation, print verification of entry, and return
control to the module undergoing testing. Drivers and stubs represent testing
“overhead.” That is, both are software that must be written (formal design is not
commonly applied) but that is not delivered with the final software product.
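The roles of a driver and a stub can be sketched as follows; the component and module names are hypothetical:

```python
# Testing "overhead" for a hypothetical component compute_invoice,
# whose real subordinate pricing module is not yet available.

def lookup_price_stub(item_id):
    # STUB: replaces the subordinate pricing module. It prints a
    # verification of entry, does minimal data manipulation, and
    # returns control to the module undergoing testing.
    print("stub entered: lookup_price(%r)" % item_id)
    return 9.99

def compute_invoice(item_ids, price_fn):
    # Component under test: totals the price of each item.
    return sum(price_fn(i) for i in item_ids)

def driver():
    # DRIVER: a "main program" that accepts test case data, passes it
    # to the component, and prints relevant results.
    total = compute_invoice(["A1", "B2"], lookup_price_stub)
    print("total = %.2f" % total)

driver()
```

Neither the driver nor the stub ships with the product; both exist only so the component can be exercised in isolation.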

2 Integration Testing

Data can be lost across an interface; one component can have an inadvertent, adverse
effect on another; subfunctions, when combined, may not produce the desired major
function. The objective of Integration testing is to take unit-tested components and build
a program structure that has been dictated by design. The program is constructed and
tested in small increments, where errors are easier to isolate and correct. A number of
different incremental integration strategies are:-

a) Top-down integration testing

is an incremental approach to construction of the software architecture. Modules are
integrated by moving downward through the control hierarchy. Modules subordinate to
the main control module are incorporated into the structure in either a depth-first or
breadth-first manner.

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.

2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.

3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the real component.

5. Regression testing may be conducted to ensure that new errors have not been
introduced.

The top-down integration strategy verifies major control or decision points early in the
test process. Stubs replace low-level modules at the beginning of top-down testing.
Therefore, no significant data can flow upward in the program structure. As a tester, you
are left with three choices:

(1) Delay many tests until stubs are replaced with actual modules,

(2) Develop stubs that perform limited functions that simulate the actual module, or

(3) Integrate the software from the bottom of the hierarchy upward.

b) Bottom-up integration-
Begins construction and testing with components at the lowest levels in the program
structure. Because components are integrated from the bottom up, the functionality
provided by components subordinate to a given level is always available and the need
for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters (sometimes called builds) that
perform a specific software subfunction.

2. A driver (a control program for testing) is written to coordinate test case input and
output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined moving upward in the program
structure. Integration follows a pattern in which drivers (D) coordinate clusters of
modules (M); drivers are removed prior to the integration of modules at the next level.

Regression testing:-

Each time a new module is added as part of integration testing, the software changes.
New data flow paths are established, new I/O may occur, and new control logic is
invoked.

These changes may cause problems with functions that previously worked flawlessly.
Regression testing is the re-execution of some subset of tests that have already been
conducted to ensure that changes have not propagated unintended side effects.

Regression testing may be conducted manually or using automated capture/playback
tools. Capture/playback tools enable the software engineer to capture test cases and
results for subsequent playback and comparison.

The regression test suite contains three different classes of test cases:

• A representative sample of tests that will exercise all software functions.

• Additional tests that focus on software functions that are likely to be affected by the
change.

• Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large.

Smoke testing:-
It is an integration testing approach that is commonly used when product software is
developed. It is designed as a pacing mechanism for time-critical projects, allowing the
software team to assess the project on a frequent basis. In essence, the smoke-testing
approach encompasses the following activities:

1. Software components that have been translated into code are integrated into a
build. A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.

2. A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover “showstopper” errors
that have the highest likelihood of throwing the software project behind schedule.

3. The build is integrated with other builds, and the entire product is smoke tested
daily. The integration approach may be top down or bottom up. Smoke testing provides
a number of benefits when it is applied on complex, time critical software projects:

• Integration risk is minimized. Because smoke tests are conducted daily,
incompatibilities and other show-stopper errors are uncovered early.

• The quality of the end product is improved. Smoke testing is likely to uncover
functional errors as well as architectural and component-level design errors.

• Error diagnosis and correction are simplified. Errors uncovered during smoke
testing are likely to be associated with “new software increments”—that is, the software
that has just been added to the build(s) is a probable cause of a newly discovered error.

• Progress is easier to assess. With each passing day, more of the software has
been integrated and more has been demonstrated to work. This improves team morale
and gives managers a good indication that progress is being made.

Strategic options:- The major disadvantage of the top-down approach is the need for
stubs and the attendant testing difficulties that can be associated with them. The major
disadvantage of bottom-up integration is that “the program as an entity does not exist
until the last module is added.”
Selection of an integration strategy depends upon software characteristics and,
sometimes, project schedule. In general, a combined approach or sandwich testing may
be the best compromise. As integration testing is conducted, the tester should identify
critical modules. A critical module has one or more of the following characteristics:

(1) Addresses several software requirements,

(2) Has a high level of control,

(3) Is complex or error prone, or

(4) Has definite performance requirements.

Critical modules should be tested as early as is possible. In addition, regression tests
should focus on critical module function.

Integration test work products:-

Integration testing is documented in a Test Specification. This work product incorporates a test plan and a test
procedure and becomes part of the software configuration. Program builds (groups of
modules) are created to correspond to each phase.

The following criteria and corresponding tests are applied for all test phases:

1. Interface integrity. Internal and external interfaces are tested as each module (or
cluster) is incorporated into the structure.

2. Functional validity. Tests designed to uncover functional errors are conducted.

3. Information content. Tests designed to uncover errors associated with local or global
data structures are conducted.

4. Performance.

Tests designed to verify performance bounds established during software design are
conducted. A history of actual test results, problems, or peculiarities is recorded in a Test
Report that can be appended to the Test Specification.

Validation testing

 The process of evaluating software during the development process or at the end
of the development process to determine whether it satisfies specified business
requirements.

 Validation Testing ensures that the product actually meets the client's needs. It
can also be defined as demonstrating that the product fulfills its intended use
when deployed in an appropriate environment.

 It answers the question: Are we building the right product?

 Validation testing can be best demonstrated using the V-Model. The
software/product under test is evaluated during this type of testing.

Activities:

 Unit Testing

 Integration Testing

 System Testing

 User Acceptance Testing


Validation Testing Variations

 Component/Unit Testing – The aim of unit testing is to look for bugs in the
software component. At the same time, it also verifies the working of modules and
objects which can be tested separately.

 Integration testing – This is an important part of the software validation model,
where the interaction between the different interfaces of the components is
tested. Along with the interaction between the different parts of the system, the
interaction of the system with the computer operating system, file system,
hardware, and any other software system it might interact with is also tested.

System testing- System testing is carried out when the entire software system is ready.
The main concern of system testing is to verify the system against the specified
requirements. While carrying out the tests, the tester is not concerned with the internals
of the system but checks if the system behaves as per expectations.
System Testing

 System Testing (ST) is a black box testing technique performed to evaluate the
complete system's compliance with specified requirements. In
system testing, the functionalities of the system are tested from an end-to-end
perspective.

 System Testing is usually carried out by a team that is independent of the
development team in order to measure the quality of the system without bias. It
includes both functional and non-functional testing.


Types of System Testing

 Usability Testing – mainly focuses on the ease with which the user can use the application,
the flexibility in handling controls, and the ability of the system to meet its objectives.

 Load Testing – is necessary to know that a software solution will perform under
real-life loads.

 Regression Testing- involves testing done to make sure none of the changes
made over the course of the development process have caused new bugs. It also
makes sure no old bugs appear from the addition of new software modules over
time.

 Recovery testing – is done to demonstrate that a software solution is reliable,
trustworthy, and can successfully recoup from possible crashes.

 Migration testing- is done to ensure that the software can be moved from older
system infrastructures to current system infrastructures without any issues.

 Functional Testing – Also known as functional completeness testing, functional
testing involves trying to think of any possible missing functions. Testers might
make a list of additional functionalities that a product could have to improve it
during functional testing.
 Hardware/Software Testing - IBM refers to Hardware/Software testing as
"HW/SW Testing". This is when the tester focuses his/her attention on the
interactions between the hardware and software during system testing.

The art of debugging

Debugging is crucial to successful software development, but even many experienced
programmers find it challenging. Sophisticated debugging tools are available, yet it may
be difficult to determine which features are useful in which situations. The Art of
Debugging is your guide to making the debugging process more efficient and effective.
The Art of Debugging illustrates the use of three of the most popular debugging tools on
Linux/Unix platforms: GDB, DDD, and Eclipse. The text-command-based GDB (the
GNU Project Debugger) is included with most distributions. DDD is a popular GUI front
end for GDB, while Eclipse provides a complete integrated development environment.
In addition to offering specific advice for debugging with each tool, authors Norm
Matloff and Pete Salzman cover general strategies for improving the process of finding
and fixing coding errors, including how to:

• Inspect variables and data structures

• Understand segmentation faults and core dumps
• Know why your program crashes or throws exceptions
• Use features like catchpoints, convenience variables, and artificial arrays
• Avoid common debugging pitfalls

Real-world examples of coding errors help to clarify the authors’ guiding principles, and
coverage of complex topics like thread, client-server, GUI, and parallel programming
debugging will make you even more proficient. You'll also learn how to prevent errors in
the first place with text editors, compilers, error reporting, and static code checkers.

White box testing


• White-box testing is the detailed investigation of the internal logic and structure of
the code. White-box testing is also called glass box testing or open-box testing.

• In order to perform white-box testing on an application, a tester needs to know
the internal workings of the code.

• The tester needs to have a look inside the source code and find out which
unit/chunk of the code is behaving inappropriately.

• The box testing approach of software testing consists of black box testing and
white box testing.
• We are discussing here white box testing, which is also known as glass box
testing, structural testing, clear box testing, open box testing, and transparent box
testing.

• It tests the internal coding and infrastructure of a software product, focusing on
checking predefined inputs against expected and desired outputs.

• It is based on the inner workings of an application and revolves around internal
structure testing. In this type of testing, programming skills are required to design
test cases.

• The primary goal of white box testing is to focus on the flow of inputs and
outputs through the software and to strengthen the security of the software.

• The term 'white box' is used because of the internal perspective of the system.
The clear box, white box, and transparent box names denote the ability to see
through the software's outer shell into its inner workings.

• Developers do white box testing. In this, the developer will test every line of the
code of the program.

• The developers perform the white-box testing and then send the application or
the software to the testing team, where the testers perform the black box testing,
verify the application against the requirements, identify the bugs, and
send them to the developer.

• The developer fixes the bugs, does one more round of white box testing, and sends
it back to the testing team. Here, fixing the bugs implies that the bug is removed and
the particular feature is working fine in the application.

• Here, the test engineers are not involved in fixing the defects, for the following
reasons:

• Fixing the bug might interrupt the other features. Therefore, the test engineer
should always find the bugs, and the developers should do the bug fixes.

• If the test engineers spend most of the time fixing the defects, then they may be
unable to find the other bugs in the application.

• The white box testing contains various tests, which are as follows:

• Path testing

• Loop testing

• Condition testing
• Testing based on the memory perspective

• Test performance of the program

Path testing

• In path testing, we draw the flow graph and test all independent paths. The
flow graph represents the flow of the program and shows how each part of the
program connects with the others. To test all the independent paths, suppose
there is a path from main() to a function G: first set the parameters and test
whether the program is correct along that particular path, and in the same way
test all the other paths and fix the bugs.
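As a small sketch (the classify function is hypothetical), a function with three independent paths needs at least one test case per path so that every statement executes at least once:

```python
def classify(x):
    # Three independent paths through the flow graph of this function.
    if x < 0:
        return "negative"   # path 1
    if x == 0:
        return "zero"       # path 2
    return "positive"       # path 3

# Path testing: one test case per independent path.
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(3) == "positive"
```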

Loop testing

In loop testing, we test loops such as while, for, and do-while loops, and also
check whether the ending condition works correctly and whether the bounds of
the loop are adequate.

For example, suppose we have a program where the developers have written a
loop of about 50,000 cycles:

while (count < 50000)
……

……

We cannot test this program manually for all 50,000 loop cycles. So we write a small
test program that exercises all 50,000 cycles. Because this test program is written in
the same language as the source code, it is known as a unit test, and it is written by
the developers only.
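A runnable sketch of this idea (the function name and loop bound are illustrative): a small test written in the same language as the component exercises the ending condition and all 50,000 cycles automatically:

```python
def run_loop(n):
    # Hypothetical component containing the loop under test.
    count = 0
    while count < n:      # the ending condition being checked
        count += 1
    return count

# Loop testing: exercise the loop's boundaries rather than stepping
# through every cycle by hand.
assert run_loop(0) == 0          # loop body skipped entirely
assert run_loop(1) == 1          # exactly one pass
assert run_loop(50000) == 50000  # all 50,000 cycles, run automatically
```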

For parallel conditions, suppose there are various requirements such as 1, 2, 3, and 4.
The developer then writes the corresponding programs 1, 2, 3, and 4 for these parallel
conditions, and each is unit tested in the same way.
Here the application contains hundreds of lines of code.
Condition testing

In this, we will test all logical conditions for both true and false values; that is, we will
verify for both if and else condition.

For example:

if (condition) {    // true branch
…..
……
}
else {              // false branch
…..
……
}

The above program must work correctly for both conditions: when the condition
is true, the if branch executes, and when it is false, the else branch executes.
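A minimal runnable sketch of condition testing (the shipping_fee function and its threshold are hypothetical), verifying both the true and the false branch of a logical condition:

```python
def shipping_fee(total):
    # Hypothetical component with a single logical condition.
    if total >= 100:
        return 0.0     # true branch: free shipping
    else:
        return 4.99    # false branch: flat fee

# Condition testing: both outcomes of the condition are verified.
assert shipping_fee(150) == 0.0   # condition true
assert shipping_fee(20) == 4.99   # condition false
assert shipping_fee(100) == 0.0   # boundary value of the condition
```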

Testing based on the memory (size) perspective

The size of the code is increasing for the following reasons:

• Code is not reused: for example, suppose we have four programs in the same
application, and the first ten lines of each program are similar. We can write
these ten lines as a separate function that is accessible to all four programs.
Also, if there is a bug, we can modify the line of code in the function rather
than in every copy of the code.

• The developers use logic that might later be modified. If one programmer writes
code and the file size is up to 250 KB, then another programmer could write
similar code using different logic with a file size of up to 100 KB.

• The developer declares many functions and variables that might never be
used in any portion of the code, so the size of the program increases.

Test the performance (speed, response time) of the program

The application could be slow for the following reasons:

• When inefficient logic is used.

• When, for conditional cases, the OR and AND operators are not used adequately.

• When nested if statements are used instead of a switch case.
• Test cases for white box testing are derived from the design phase of the
software development lifecycle. Data flow testing, control flow testing, path
testing, branch testing, and statement and decision coverage are all techniques used
in white box testing as guidelines to create error-free software.

White Box Testing

• White box testing follows a set of working steps that make testing manageable and make it easy to understand what the next task is. The basic steps to perform white box testing are listed below.

Generic steps of white box testing

• Design all test scenarios and test cases, and prioritize them according to their priority numbers.

• This step involves studying the code at runtime to examine resource utilization, areas of the code that are never accessed, the time taken by various methods and operations, and so on.

• In this step, internal subroutines are tested: non-public methods, interfaces, and similar internals are checked to see whether they can handle all types of data appropriately.

• This step focuses on testing control statements, such as loops and conditional statements, to check their efficiency and accuracy for different data inputs.

• In the last step white box testing includes security testing to check all possible
security loopholes by looking at how the code handles security.

Reasons for white box testing

• It identifies internal security holes.

• To check the flow of inputs through the code.


• To check the functionality of conditional statements and loops.

• To test functions, objects, and statements at an individual level.

Advantages of White box testing

• White box testing optimizes code, allowing hidden errors to be identified.

• Test cases of white box testing can be easily automated.

• This testing is more thorough than other testing approaches as it covers all code
paths.

• It can be started early in the SDLC, even before a GUI is available.

Disadvantages of White box testing

• White box testing is very time-consuming when it comes to large-scale programming applications.

• White box testing is expensive and complex.

• It can let production errors slip through, because it tests only the code that exists and cannot reveal missing or unimplemented requirements.

• White box testing needs professional programmers who have detailed knowledge and understanding of the programming language and its implementation.

Techniques Used in White Box Testing

Seven different types of white-box testing:


 Unit Testing

 Static Analysis

 Dynamic Analysis

 Statement Coverage

 Branch Coverage

 Security Testing

 Mutation Testing
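Two of the techniques above, statement coverage and branch coverage, can be contrasted with a minimal Python sketch (the `absolute` function is hypothetical):

```python
# Hypothetical function under test.
def absolute(n):
    if n < 0:
        n = -n
    return n

# A single test with a negative input executes every statement
# (100% statement coverage)...
assert absolute(-3) == 3

# ...but branch coverage also requires the case where the condition
# is false, i.e. the if-body is skipped:
assert absolute(3) == 3
```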

Black box testing


• Black box testing is done by test engineers, who check the functionality of an application or the software according to the customer's/client's needs.

• In this testing, the code is not visible while performing the tests; that is why it is known as black box testing.

• Black box testing is a technique of software testing which examines the functionality of software without peering into its internal structure or coding. The primary source of black box testing is the specification of requirements stated by the customer.

• In this method, the tester selects a function, gives it input values to examine its functionality, and checks whether the function produces the expected output. If it does, the function passes the test; otherwise it fails. The test team reports the result to the development team and then tests the next function. After all functions are tested, if severe problems remain, the software is given back to the development team for correction.
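This expected-versus-actual check can be sketched in Python; the `total_price` function and the 10% tax rule are hypothetical, and from the tester's point of view only the requirement, not the implementation, is known.

```python
# Implementation hidden from the tester; the requirement says:
# "the function returns the total price including 10% tax".
def total_price(net):
    return round(net * 1.10, 2)

# The tester supplies an input and compares the actual output with
# the expected output taken from the requirement specification:
expected = 110.0
actual = total_price(100)
assert actual == expected  # the function passes this test case
```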
Generic steps of black box testing

 The black box test is based on the specification of requirements, so the specification is examined at the beginning.

 In the second step, the tester creates a positive test scenario and a negative test scenario by selecting valid and invalid input values, to check whether the software processes them correctly.

 In the third step, the tester develops various test cases using techniques such as decision tables, all-pairs testing, equivalence partitioning, error guessing, cause-effect graphing, etc.

 In the fourth step, all test cases are executed.

 In the fifth step, the tester compares the expected output against the actual
output.

 In the sixth and final step, if any flaw is found in the software, it is fixed and tested again.


Techniques Used in Black Box Testing

 Decision Table Technique

 Boundary Value Technique


 State Transition Technique

 All-pair testing Technique

 Cause-Effect Technique

 Equivalence partitioning Technique

 Error guessing Technique

 Use case Technique
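Two of the techniques above, equivalence partitioning and boundary value analysis, can be sketched together in Python for a hypothetical rule "valid ages are 18 to 60 inclusive":

```python
# Hypothetical function under test.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
assert not is_valid_age(10)  # invalid partition: below the range
assert is_valid_age(35)      # valid partition: inside the range
assert not is_valid_age(70)  # invalid partition: above the range

# Boundary value analysis: values at and just outside the edges.
assert not is_valid_age(17)
assert is_valid_age(18)
assert is_valid_age(60)
assert not is_valid_age(61)
```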

Types Of Black Box Testing


 Functional Testing
 Non-Functional Testing
 Regression Testing
 Requirement based testing
 Compatibility testing

Pros and cons of black box testing

Pros:

• Testers do not require technical knowledge, programming or IT skills.

• Testers do not need to learn implementation details of the system.

• Tests can be executed by crowdsourced or outsourced testers.

• Low chance of false positives.

• Tests have lower complexity, since they simply model common user behavior.

Cons:

• Difficult to automate.

• Requires prioritization; it is typically infeasible to test all user paths.

• Difficult to calculate test coverage.

• If a test fails, it can be difficult to understand the root cause of the issue.

• Tests may be conducted at low scale or on a non-production-like environment.