
Module 5

Integration and Component-Based Software Testing: Overview, Integration testing


strategies, Testing components and assemblies. System, Acceptance and Regression Testing:
Overview, System testing, Acceptance testing, Usability, Regression testing, Regression test
selection techniques, Test case prioritization and selective execution. Levels of Testing,
Integration Testing: Traditional view of testing levels, Alternative life-cycle models, The
SATM system, Separating integration and system testing, A closer look at the SATM system,
Decomposition-based, call graph-based, Path-based integrations.

Integration and Component-Based Software Testing: Overview

The traditional V model divides testing into four main levels of granularity: module,
integration, system, and acceptance testing.

Module or unit test checks module behavior against specifications or expectations;
integration test checks module compatibility; system and acceptance tests check the behavior
of the whole system.

Integration faults are ultimately caused by incomplete specifications or faulty
implementations of interfaces, resource usage, or required properties.

Mars Climate Orbiter

Timeline of travel

Date          Time (UTC)   Event
11 Dec 1998   18:45:51     Spacecraft launched
23 Sep 1999   08:41:00     Insertion begins. Orbiter stows solar array.
              08:50:00     Orbiter turns to correct orientation to begin main engine burn.
              08:56:00     Orbiter fires pyrotechnic devices which open valves to begin pressurizing the fuel and oxidizer tanks.
              09:00:46     Main engine burn starts; expected to fire for 16 minutes 23 seconds.
              09:04:52     Communication with spacecraft lost.
              09:06:00     Orbiter expected to enter Mars occultation, out of radio contact with Earth.
              09:27:00     Expected to exit Mars occultation.
25 Sep 1999                Mission declared a loss. Reason for loss known. No further attempts to contact.

Apache Config Parsing Errors


Error:
The apache plugin is not working; there may be problems with your existing configuration.
The error was: PluginError(('There has been an error in parsing the file (%s): %s',
u'/etc/apache2/mods-enabled/alias.conf', u'Syntax error'),)

Integration testing strategies

Integration testing proceeds incrementally with assembly of modules into successively larger
subsystems.

In addition, controlling and observing the behaviour of an integrated collection of modules
grows in complexity with the number of modules and the complexity of their interactions.

A strategy for integration testing of successive partial subsystems is driven by the order in
which modules are constructed (the build plan), which is an aspect of the system architecture.

Since incremental assemblies of modules are incomplete, one must often construct
scaffolding—drivers, stubs, and various kinds of instrumentation—to effectively test them.
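As a minimal illustration in Python (with hypothetical module and function names, not taken from any particular system), the sketch below shows all three kinds of scaffolding: a stub standing in for a not-yet-integrated collaborator, a driver exercising the partial assembly, and simple instrumentation recording the calls made across the interface.

# Hypothetical example: the real warehouse module is not yet integrated.

class WarehouseStub:
    """Stub: returns canned answers in place of the real warehouse service."""
    def __init__(self):
        self.calls = []                              # instrumentation: record observed interactions

    def stock_level(self, item_id):
        self.calls.append(("stock_level", item_id))
        return {"CHIP-001": 5}.get(item_id, 0)       # fixed responses chosen for this test

def check_availability(item_id, quantity, warehouse):
    """Unit under test: depends on a warehouse collaborator."""
    return warehouse.stock_level(item_id) >= quantity

def driver():
    """Driver: supplies inputs and checks expected outputs for the partial assembly."""
    stub = WarehouseStub()
    assert check_availability("CHIP-001", 3, stub) is True
    assert check_availability("CHIP-001", 9, stub) is False
    assert stub.calls == [("stock_level", "CHIP-001")] * 2   # observe how the interface was used

if __name__ == "__main__":
    driver()
    print("scaffolded integration checks passed")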

Big Bang Testing

One extreme approach is to avoid the cost of scaffolding by waiting until all modules are
integrated, and testing them together

In this big bang approach, neither stubs nor drivers need be constructed, nor must the
development be carefully planned to expose well-specified interfaces to each subsystem.

Requiring the whole system to be available before integration does not allow early test and
feedback, and so faults that are detected are much more costly to repair.

Big Bang Integration - WorkFlow Diagram


Big Bang Testing is represented by the following workflow diagram:

Disadvantages of Big-Bang Testing


 Defects present at the interfaces of components are identified at a very late stage, as all
components are integrated in one shot.
 It is very difficult to isolate the defects found.
 There is a high probability of missing some critical defects, which might pop up in the
production environment.

 It is very difficult to cover all the cases for integration testing without missing even a
single scenario.

Structural integration test strategy

In a structural approach, modules are constructed, assembled, and tested together in an order
based on hierarchical structure in the design. Structural approaches include bottom-up, top-
down, and a combination sometimes referred to as sandwich or backbone strategy.
Top-down and bottom-up Strategy
A top-down integration strategy begins at the top of the uses hierarchy, including the
interfaces exposed through a user interface or top-level application program interface (API).
The need for drivers is reduced or eliminated while descending the hierarchy

Bottom-up integration similarly reduces the need to develop stubs, except for breaking
circular relations.

Top-down and bottom-up approaches to integration testing can be applied early in the
development if paired with similar design strategies: If modules are delivered following the
hierarchy, either top-down or bottom-up, they can be integrated and tested as soon as they are
delivered, thus providing early feedback to the developers.

Sandwich or backbone strategy

Integration may combine elements of the two approaches, starting from both ends of the
hierarchy and proceeding toward the middle. An early top-down approach may result from
developing prototypes for early user feedback, while existing modules may be integrated
bottom-up. This is known as the sandwich or backbone strategy.

Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once all the
modules have been unit tested, integration testing is performed.

Integration test approaches –


There are four types of integration testing approaches. Those approaches are the following:

1. Big-Bang Integration Testing –

It is the simplest integration testing approach: all the modules are combined and the
functionality is verified after the completion of individual module testing. In simple
words, all the modules of the system are simply put together and tested. This approach is
practicable only for very small systems. Once an error is found during integration testing,
it is very difficult to localize, as it may potentially belong to any of the modules being
integrated. So, errors reported during big-bang integration testing are very expensive to fix.

Advantages:

 It is convenient for small systems.

Disadvantages:
 There will be quite a lot of delay because you would have to wait for all the modules to
be integrated.
 High-risk critical modules are not isolated and tested with priority, since all modules are
tested at once.

2. Bottom-Up Integration Testing –

In bottom-up testing, each module at the lower levels is tested with higher-level modules
until all modules are tested. The primary purpose of this integration testing is to test, for
each subsystem, the interfaces among the various modules making up that subsystem.
This integration testing uses test drivers to drive and pass appropriate data to the
lower-level modules (a minimal driver sketch follows the lists below).

Advantages:
 In bottom-up testing, no stubs are required.
 A principal advantage of this integration testing is that several disjoint subsystems can
be tested simultaneously.

Disadvantages:
 Driver modules must be produced.
 Testing becomes complex when the system is made up of a large number of small
subsystems.
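A minimal sketch of such a driver, assuming a hypothetical lower-level discount-calculation unit: the driver stands in for the higher-level module that is not yet available, feeding representative inputs to the lower-level unit and checking the expected outputs.

import unittest

# Hypothetical lower-level unit, already coded and unit tested.
def compute_discount(order_total, customer_tier):
    rates = {"gold": 0.10, "silver": 0.05}
    return round(order_total * rates.get(customer_tier, 0.0), 2)

class DiscountDriver(unittest.TestCase):
    """Driver: plays the role of the missing higher-level module by
    passing representative data down and checking the expected results."""

    def test_known_tiers(self):
        self.assertEqual(compute_discount(200.0, "gold"), 20.0)
        self.assertEqual(compute_discount(200.0, "silver"), 10.0)

    def test_unknown_tier_gets_no_discount(self):
        self.assertEqual(compute_discount(200.0, "bronze"), 0.0)

if __name__ == "__main__":
    unittest.main()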

3. Top-Down Integration Testing –

The top-down integration testing technique uses stubs to simulate the behaviour of the
lower-level modules that are not yet integrated. In this integration testing, testing takes
place from top to bottom: first, high-level modules are tested, then low-level modules, and
finally the low-level modules are integrated with the high-level ones to ensure the system
is working as intended.

Advantages:
 Each module is separately debugged.
 Few or no drivers needed.
 It is more stable and accurate at the aggregate level.

Disadvantages:
 Needs many stubs.
 Modules at lower levels are tested inadequately.

4. Mixed Integration Testing –

Mixed integration testing is also called sandwiched integration testing. It follows a
combination of the top-down and bottom-up testing approaches. In the top-down approach,
testing can start only after the top-level modules have been coded and unit tested. In the
bottom-up approach, testing can start only after the bottom-level modules are ready. The
sandwich or mixed approach overcomes these shortcomings of the top-down and bottom-up
approaches.

Advantages:
 The mixed approach is useful for very large projects having several subprojects.
 The sandwich approach overcomes the shortcomings of both the top-down and bottom-up
approaches.

Disadvantages:
 Mixed integration testing has a very high cost, because one part requires the top-down
approach while another part requires the bottom-up approach.
 This integration testing is not suitable for smaller systems with heavy interdependence
between the modules.

Testing components and assemblies.

Many software products are constructed, partly or wholly, from assemblies of prebuilt
software components.

A key characteristic of software components is that the organization that develops a
component is distinct from the (several) groups of developers who use it to construct systems.

Reusable components are often more dependable than software developed for a single
application.

The advantages of component reuse for quality are not automatic. They do not apply to code
that was developed for a single application and then scavenged for use in another.

In general, a software component is characterized by a contract or application program
interface (API) distinct from its implementation.

e.g., SQL for database access
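For instance, a test of a database component can be written purely against its published interface. The sketch below uses Python's built-in sqlite3 module as a stand-in component; the table and values are invented for illustration, and the test never touches the component's internals.

import sqlite3

def test_component_contract():
    # Exercise only the published interface: SQL plus the DB-API connection/cursor calls.
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
        conn.execute("INSERT INTO orders (total) VALUES (?)", (42.5,))
        (total,) = conn.execute("SELECT total FROM orders WHERE id = 1").fetchone()
        assert total == 42.5
    finally:
        conn.close()

test_component_contract()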

COTS - commercial off-the-shelf
Commercial off-the-shelf (COTS) describes software or hardware products that are
ready-made and available for sale to the general public. For example, Microsoft Office is a
COTS product that is a packaged software solution for businesses. COTS products are
designed to be implemented easily into existing systems without the need for
customization.

The main problem facing test designers in the organization that produces a component is lack
of information about the ways in which the component will be used.

A component may be reused in many different contexts, including applications for which its
functionality is an imperfect fit.

Test designers cannot anticipate all possible uses of a component under test, but they can
design test suites for classes of use in the form of scenarios. Test scenarios are closely related
to scenarios or use cases in requirements analysis and design.

System, Acceptance and Regression Testing: Overview

System, acceptance, and regression testing are all concerned with the behavior of a software
system as a whole, but they differ in purpose.

System testing

System testing is a check of consistency between the software system and its specification (it
is a verification activity).

The essential characteristics of system testing are that it is comprehensive, based on a
specification of observable behavior, and independent of design and implementation
decisions.

Acceptance testing

The purpose of acceptance testing is to guide a decision as to whether the product in its
current state should be released. The decision can be based on measures of the product or
process.

Although system and acceptance testing are closely tied in many organizations, fundamental
differences exist between searching for faults and measuring quality.
Operational profile

Statistical models of usage, or operational profiles, may be available from measurement of
actual use of prior, similar systems. For example, use of a current telephone handset may be a
reasonably good model of how a new handset will be used.
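A minimal sketch of driving test selection from an operational profile; the operation names and frequencies below are hypothetical, standing in for usage measured on a prior, similar system.

import random

# Hypothetical operational profile: relative frequencies of user operations.
OPERATIONAL_PROFILE = {
    "check_balance": 0.55,
    "withdraw":      0.30,
    "deposit":       0.15,
}

def draw_operations(n, seed=0):
    """Draw a test sequence whose mix of operations follows the profile."""
    rng = random.Random(seed)
    ops, weights = zip(*OPERATIONAL_PROFILE.items())
    return rng.choices(ops, weights=weights, k=n)

print(draw_operations(10))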

Sensitivity testing

One can perform sensitivity testing to determine which parameters are critical. Sensitivity
testing consists of repeating statistical tests while systematically varying parameters to note
the effect of each parameter on the output.
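A minimal sketch of the idea, assuming a hypothetical system whose failure behaviour depends on a load parameter; the same statistical test is repeated while only that parameter is varied, and the change in the observed output is noted.

import random

def run_statistical_test(load, trials=1000, seed=0):
    """Run `trials` randomly generated test cases at a given load; return the failure rate.
    The failure model here is invented purely for illustration."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(trials) if rng.random() < 0.001 * load)
    return failures / trials

# Sensitivity testing: repeat the statistical test while systematically varying one parameter.
for load in (1, 10, 50, 100):
    print(f"load={load:4d}  failure rate={run_statistical_test(load):.3f}")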

A second problem faced by statistical testing, particularly for reliability, is that it may take a
very great deal of testing to obtain evidence of a sufficient level of reliability.

A less formal, but frequently used approach to acceptance testing is testing with users. An
early version of the product is delivered to a sample of users who provide feedback on
failures and usability. Such tests are often called alpha and beta tests.

Usability

A usable product is quickly learned, allows users to work efficiently, and is pleasant to use.
Usability involves objective criteria such as the time and number of operations required to
perform tasks and the frequency of user error, in addition to the overall, subjective
satisfaction of users.

The process of verifying and validating usability includes the following main steps:

Inspecting specifications with usability checklists. Inspection provides early feedback on
usability.

Testing early prototypes with end users to explore their mental model (exploratory test),
evaluate alternatives (comparison test), and validate software usability. A prototype for early
assessment of usability may not include any functioning software; a cardboard prototype may
be as simple as a sequence of static images presented to users by the usability tester.

Testing incremental releases with both usability experts and end users to monitor progress
and anticipate usability problems.

System and acceptance testing includes expert-based inspection and testing, user-based
testing, comparison testing against competitors, and analysis and checks often done
automatically, such as a check of link connectivity and verification of browser compatibility.
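As a small example of such an automatic check, the sketch below probes a list of URLs for link connectivity using Python's standard urllib; the URLs are placeholders, and a real harness would first extract them from the pages under test.

import urllib.request
import urllib.error

def check_links(urls, timeout=5):
    """Return the links that do not answer with a successful HTTP status."""
    broken = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status >= 400:
                    broken.append((url, resp.status))
        except (urllib.error.URLError, TimeoutError) as exc:
            broken.append((url, str(exc)))
    return broken

# Placeholder URLs standing in for links gathered from the site under test.
print(check_links(["https://example.com/", "https://example.com/no-such-page"]))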

Exploratory testing

The purpose of exploratory testing is to investigate the mental model of end users. It consists
of asking users about their approach to interactions with the system. For example, during an
exploratory test for the Chipmunk Web presence, we may provide users with a generic
interface for choosing the model they would like to buy, in order to understand how users
will interact with the system.

Regression testing

When building a new version of a system (e.g., by removing faults, changing or adding
functionality, porting the system to a new platform, or extending interoperability), we may
also change existing functionality in unintended ways.

When a new version of software no longer correctly provides functionality that should be
preserved, we say that the new version regresses with respect to former versions. The
nonregression of new versions (i.e., preservation of functionality), is a basic quality
requirement.

Testing activities that focus on regression problems are called (non)regression testing.
Usually "non" is omitted and we commonly say regression testing.

A simple approach to regression testing consists of reexecuting all test cases designed for
previous versions. Even this simple retest all approach may present problems and costs.

Regression test selection techniques

Even when we can identify and eliminate obsolete test cases, the number of tests to be
reexecuted may be large, especially for legacy software. Executing all test cases for large
software products may require many hours or days of execution and may depend on scarce
resources such as an expensive hardware test harness.

The cost of reexecuting a test suite can be reduced by selecting a subset of test cases to be
reexecuted, omitting irrelevant test cases or prioritizing execution of subsets of the test suite
by their relation to changes.

Regression test selection techniques are based on either code or specifications.

Code-based selection techniques select a test case for execution if it exercises a portion of the
code that has been modified.
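A minimal sketch of code-based selection, assuming the test infrastructure has recorded which code units each test case exercised on the previous version (the coverage map and unit names below are hypothetical).

# Hypothetical coverage map: code units exercised by each test case on the old version.
COVERAGE = {
    "test_login":      {"auth.check_pin", "auth.lock_card"},
    "test_withdrawal": {"account.debit", "terminal.dispense"},
    "test_balance":    {"account.balance"},
}

def select_regression_tests(changed_units):
    """Code-based selection: rerun only the tests that exercise modified code."""
    return sorted(t for t, covered in COVERAGE.items() if covered & changed_units)

# Units reported as modified between the two versions.
print(select_regression_tests({"account.debit"}))   # -> ['test_withdrawal']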

Specification-based criteria select a test case for execution if it is relevant to a portion of the
specification that has been changed.

Control flow graph (CFG) regression techniques are based on the differences between the
CFGs of the new and old versions of the software.

CFG regression testing techniques compare the annotated control flow graphs of the two
program versions to identify a subset of test cases that traverse modified parts of the graphs.

The graph nodes are annotated with corresponding program statements, so that comparison of
the annotated CFGs detects not only new or missing nodes and arcs, but also nodes whose
changed annotations correspond to small, but possibly relevant, changes in statements.
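A much simplified sketch of this comparison: each CFG is reduced to a mapping from node identifiers to statement annotations, the changed nodes are those added, removed, or re-annotated, and a test case is selected if its trace recorded on the old version touches a changed node. The node names, annotations, and traces are invented for illustration.

# Hypothetical annotated CFGs for versions 1.0 and 2.0: node id -> statement annotation.
OLD = {"A": "c = *s", "B": "if c == '+'", "C": "*d = ' '", "D": "else branch"}
NEW = {"A": "c = *s", "B": "if c == '+'", "C": "*d = ' '", "D": "if c > 127"}

def modified_nodes(old, new):
    """Nodes added, removed, or whose annotation (statement) changed."""
    return {n for n in old.keys() | new.keys() if old.get(n) != new.get(n)}

# Node traces recorded while running each test case against version 1.0.
TRACES = {"t_plus": ["A", "B", "C"], "t_other": ["A", "B", "D"]}

changed = modified_nodes(OLD, NEW)                              # {'D'}
selected = sorted(t for t, trace in TRACES.items() if changed & set(trace))
print(selected)                                                 # ['t_other']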

Let us consider, for example, the C function cgi_decode from Chapter 12. Figure 22.1 shows
the original function as presented in Chapter 12, while Figure 22.2 shows a revision of the
program.

We refer to these two versions as 1.0 and 2.0, respectively.

Differences between
version 2.0 and 1.0 are indicated in gray. In the example, we have new nodes, arcs and paths.

Figure 22.2: Version 2.0 of the C function cgi_decode adds a control on hexadecimal escape
sequences to reveal incorrect escape sequences at the end of the input string and a new branch
to deal with non-ASCII characters.

Test case prioritization and selective execution

Regression testing criteria may select a large portion of a test suite. When a regression test
suite is too large, we must further reduce the set of test cases to be executed.

Priorities can be assigned in many ways. A simple priority scheme assigns priority according
to the execution history: Recently executed test cases are given low priority, while test cases
that have not been recently executed are given high priority. In the extreme, heavily
weighting execution history approximates round robin selection.

Other history-based priority schemes predict fault detection effectiveness. Test cases that
have revealed faults in recent versions are given high priority. Faults are not evenly
distributed, but tend to accumulate in particular parts of the code or around particular
functionality.

Structural coverage leads to a set of priority schemes based on the elements covered by a test
case. We can give high priority to test cases that exercise elements that have not recently
been exercised. Both the number of elements covered and the "age" of each element (time
since that element was covered by a test case) can contribute to the prioritization.
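A minimal sketch combining these schemes into a single priority score; the weights and the per-test history records are hypothetical and would in practice come from the test management tooling.

# Hypothetical per-test history: runs since last execution, faults revealed recently,
# and the number of covered elements not exercised by other recent runs.
HISTORY = {
    "test_login":      {"idle_runs": 6, "recent_faults": 0, "stale_elements": 12},
    "test_withdrawal": {"idle_runs": 1, "recent_faults": 2, "stale_elements": 3},
    "test_balance":    {"idle_runs": 9, "recent_faults": 0, "stale_elements": 20},
}

def priority(record, w_idle=1.0, w_fault=5.0, w_stale=0.5):
    """Higher score = run sooner: long-idle tests, recently fault-revealing tests,
    and tests covering long-unexercised elements are favoured."""
    return (w_idle * record["idle_runs"]
            + w_fault * record["recent_faults"]
            + w_stale * record["stale_elements"])

ordered = sorted(HISTORY, key=lambda t: priority(HISTORY[t]), reverse=True)
print(ordered)   # tests to execute first when the time budget is limited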

Levels of Testing

The traditional three levels of testing are unit, integration, and system testing.

Integration Testing: Traditional view of testing levels

Thus far, we have said nothing about one of the key concepts of testing—levels of
abstraction.

Levels of testing echo the levels of abstraction found in the waterfall model of the software
development life cycle. Although this model has its drawbacks, it is useful for testing as a
means of identifying distinct levels of testing and for clarifying the objectives that pertain to
each level.

A diagrammatic variation of the waterfall model is known as the V-Model.

Waterfall Testing
The waterfall model is closely associated with top–down development and design by
functional decomposition. The end result of preliminary design is a functional decomposition
of the entire system into a tree-like structure of functional components. With such a
decomposition, top–down integration would begin with the main program, checking the calls
to the next-level units, and so on until the leaves of the decomposition tree are reached. At
each point, lower-level units are replaced by stubs—throwaway code that replicates what the
lower-level units would do when called.

Bottom–up integration is the opposite sequence, starting with the leaf units and working up
toward the main program. In bottom–up integration, units at higher levels are replaced by
drivers (another form of throwaway code) that emulate the procedure calls.

The "big bang" approach simply puts all the units together at once, with no stubs or drivers.

Whichever approach is taken, the goal of traditional integration testing is to integrate
previously tested units with respect to the functional decomposition tree. Although this
describes integration testing as a process, discussions of this type offer little information
about the methods or techniques.

Pros and Cons of the Waterfall Model

In its history since the first publication in 1968, the waterfall model has been analyzed and
critiqued repeatedly. The earliest compendium was by Agresti (1986), which stands as a good
source.

Agresti observes that

◾The framework fits well with hierarchical management structures.

◾The phases have clearly defined end products (exit criteria), which in turn are convenient
for project management.

◾The detailed design phase marks the starting point where individuals responsible for units
can work in parallel, thereby shortening the overall project development interval.

More importantly, Agresti highlights major limitations of the waterfall model. We shall see
that these limitations are answered by the derived life cycle models. He observes that:

◾There is a very long feedback cycle between requirements specification and system testing,
in which the customer is absent.

◾The model emphasizes analysis to the near exclusion of synthesis, which first occurs at the
point of integration testing.

◾Massive parallel development at the unit level may not be sustainable with staffing
limitations.

◾Most important, "perfect foresight" is required because any faults or omissions at the
requirements level will penetrate through the remaining life cycle phases.

Alternative life-cycle models

Testing in Iterative Life Cycles

Waterfall Spin-Offs

There are three mainline derivatives of the waterfall model: incremental development,
evolutionary development, and the spiral model. Each of these involves a series of increments
or builds, as shown in Figure 11.3.

Evolutionary development is best summarized as client-based iteration. In this spin-off, a
small initial version of a product is given to users who then suggest additional features.

The initial version might capture a segment of the target market, and then that segment is
"locked in" to future evolutionary versions. When these customers have a sense that they are
"being heard," they tend to be more invested in the evolving product.

Barry Boehm's spiral model has some of the flavor of the evolutionary model. The biggest
difference is that the increments are determined more on the basis of risk than on client
suggestions.

The spiral is superimposed on an x–y coordinate plane, with the upper left quadrant referring
to determining objectives, the upper right to risk analysis, the lower right to development
(and test), and the lower left to planning the next iteration.

These four phases—determine objectives, analyze risk, develop and test, and next iteration
planning—are repeated in an evolutionary way. At each evolutionary step, the spiral enlarges.

Specification-Based Life Cycle Models

When systems are not fully understood (by either the customer or the developer), functional
decomposition is perilous at best.

Barry Boehm jokes when he describes the customer who says "I don't know what I want, but
I'll recognize it when I see it."

The rapid prototyping life cycle deals with this by providing the "look and feel" of a system.
Thus, in a sense, customers can recognize what they "see." In turn, this drastically reduces
the specification-to-customer feedback loop by producing very early synthesis. Rather than
build a final system, a "quick and dirty" prototype is built to obtain this early customer feedback.

Executable specifications (Figure 11.5) are an extension of the rapid prototyping concept.
With this approach, the requirements are specified in an executable format (such as finite
state machines, StateCharts, or Petri nets). The customer then executes the specification to
observe the intended system behavior and provides feedback as in the rapid prototyping
model.

The executable models are, or can be, quite complex.

The SATM system

Problem Statement

The SATM system communicates with bank customers via the 15 screens shown in Figure
2.4. Using a terminal with features as shown in Figure 2.3, SATM customers can select any
of three transaction types: deposits, withdrawals, and balance inquiries. For simplicity, these
transactions can only be done on a checking account.

When a bank customer arrives at an SATM station, screen 1 is displayed. The bank customer
accesses the SATM system with a plastic card encoded with a personal account number
(PAN), which is a key to an internal customer account file, containing, among other things,
the customer's name and account information.

If the customer's PAN matches the information in the customer account file, the system
presents screen 2 to the customer. If the customer's PAN is not found, screen 4 is displayed,
and the card is kept.

At screen 2, the customer is prompted to enter his or her personal identification number
(PIN).

If the PIN is correct (i.e., matches the information in the customer account file), the system
displays screen 5; otherwise, screen 3 is displayed. The customer has three chances to get the
PIN correct; after three failures, screen 4 is displayed, and the card is kept.
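A minimal sketch of this PIN-entry behaviour as a small state machine in Python; the screen numbers follow the problem statement above, while the class and method names (and the way the expected PIN is supplied) are invented for illustration.

class PinEntry:
    """PIN entry for the SATM system: three attempts, then the card is kept."""
    MAX_ATTEMPTS = 3

    def __init__(self, expected_pin):
        self.expected_pin = expected_pin   # would come from the customer account file
        self.attempts = 0
        self.screen = 2                    # screen 2: prompt for PIN

    def enter_pin(self, pin):
        self.attempts += 1
        if pin == self.expected_pin:
            self.screen = 5                # screen 5: transaction selection
        elif self.attempts < self.MAX_ATTEMPTS:
            self.screen = 3                # screen 3: wrong PIN, try again
        else:
            self.screen = 4                # screen 4: card is kept
        return self.screen

session = PinEntry(expected_pin="1234")
print([session.enter_pin(p) for p in ("1111", "2222", "1234")])   # -> [3, 3, 5]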

On entry to screen 5, the customer selects the desired transaction from the options shown on
screen.

If balance is requested, screen 14 is then displayed. If a deposit is requested, the status of the
deposit envelope slot is determined from a field in the terminal control file. If no problem is
known, the system displays screen 7 to get the transaction amount. If a problem occurs with
the deposit envelope slot, the

system displays screen 12. Once the deposit amount has been entered, the system displays
screen 13, accepts the deposit envelope, and processes the deposit.

The system then displays screen 14.

If a withdrawal is requested, the system checks the status (jammed or free) of the withdrawal
chute in the terminal control file. If jammed, screen 10 is displayed; otherwise, screen 7 is
displayed so the customer can enter the withdrawal amount. Once the withdrawal amount is
entered, the system checks the terminal status file to see if it has enough currency to dispense.

If it does not, screen 9 is displayed; otherwise, the withdrawal is processed. The system
checks the customer balance (as described in the balance request transaction); if the funds in
the account are insufficient, screen 8 is displayed. If the account balance is sufficient, screen
11 is displayed and the money is dispensed. The balance is printed on the transaction receipt
as it is for a balance request transaction.

After the cash has been removed, the system displays screen 14.

When the "No" button is pressed in screens 10, 12, or 14, the system presents screen 15 and
returns the customer's ATM card. Once the card is removed from the card slot, screen 1 is
displayed.

When the "Yes" button is pressed in screens 10, 12, or 14, the system presents screen 5 so the
customer can select additional transactions.

A closer look at the SATM system

The ports in the SATM system include the digit and cancel keys, the function keys, the
display screen, the deposit and withdrawal doors, the card and receipt slots, and several less
obvious devices, such as the rollers that move cards and deposit envelopes into the machine,
the cash dispenser, the receipt printer, and so on.

We will start with a hierarchy of state machines; the upper level is shown in Figure 14.4.

At this level, states correspond to stages of processing, and transitions are caused by abstract
logical (instead of port) events. The card entry "state," for example, would be decomposed
into lower levels that deal with details such as jammed cards, cards that are upside down, and
stuck card rollers.

The PIN entry state S2 is decomposed into the more detailed view in Figure 14.5. The
adjacent states are shown because they are sources and destinations of transitions from the
PIN entry state at the upper level. (This approach to decomposition is reminiscent of the old
data flow diagramming idea of balanced decomposition.)

At the S2 decomposition, we focus on the PIN retry mechanism; all of the output events are
true port events, but the input events are still logical events.

The transaction processing state S3 is decomposed into a more detailed view in Figure 14.6.
In that finite state machine, we still have abstract input events, but the output events are
actual port events. State 3.1 requires added information. Two steps are combined into this
state: choice of the account type and selection of the transaction type.
state: choice of the account type and selection of the transaction type.

The little "<" and ">" symbols are supposed to point to the function buttons adjacent to the
screen, as shown in Figure 14.1. As a side note, if this were split into two states, the system
would have to "remember" the account type choice in the first state. However, there can be
no memory in a finite state machine, hence the combined state.

Once again, we have abstract input events and true port output events.

Separating integration and system testing

System Testing:
While developing a software or application product, it is tested at the final stage as a whole
by combining all of the product's modules; this is called System Testing. The primary aim of
conducting this test is to check that the product fulfills the customer/user requirement
specification. It is also called an end-to-end test, as it is performed at the end of development.
This testing does not depend on the system implementation; in simple words, the system tester
does not know which technique (procedural or object-oriented) was used in the implementation.

This testing covers both the functional and non-functional requirements of the system.
Functional system testing is similar to black-box testing: it is based on specifications rather
than on the code and syntax of the programming language used. Non-functional system
testing, on the other hand, checks properties such as performance and reliability by
generating test cases in the corresponding programming language.

Integration Testing:
This testing exercises a collection of the modules of the software, in which the relationships
and the interfaces between the different components are also tested. It needs coordination
with the project-level activity of integrating the constituent components, a few at a time.

The integration and integration testing must adhere to a build plan, for defined, orderly
integration and for identification of bugs in the early stages. An integrator or integration
tester must have programming knowledge, unlike a system tester.

Difference between System Testing and Integration Testing :

Decomposition-based integrations

Approaches to Integration Testing

• Functional Decomposition (most commonly described in the literature)


– Top-down
– Bottom-up
– Sandwich
– "Big bang"

• Call graph
– Pairwise integration
– Neighborhood integration

• Paths
– MM-Paths
– Atomic System Functions

Basis of Integration Testing Strategies

• Functional Decomposition
applies best to procedural code

• Call Graph
applies to both procedural and object-oriented code

• MM-Paths
applies to both procedural and object-oriented code

• Calendar functions

– the date of the next day (our old friend, NextDate)
– the day of the week corresponding to the date
– the zodiac sign of the date
– the most recent year in which Memorial Day was celebrated on May 27
– the most recent Friday the Thirteenth

Calendar Program Units

Main Calendar
Function isLeap
Procedure weekDay
Procedure getDate
Function isValidDate
Function lastDayOfMonth
Procedure getDigits
Procedure memorialDay

Function isMonday
Procedure friday13th
Function isFriday
Procedure nextDate
Procedure dayNumToDate
Procedure zodiac

weekDayStub
Procedure weekDayStub(mm, dd, yyyy, dayName)

If ((mm = 10) AND (dd = 28) AND (yyyy = 2013))
Then dayName = "Monday"
EndIf
.
.
.
If ((mm = 10) AND (dd = 30) AND (yyyy = 2013))
Then dayName = "Wednesday"
EndIf

Top-Down Integration Mechanism

• Breadth-first traversal of the functional decomposition tree.

• First step: Check main program logic, with all called units replaced by stubs that always
return correct values.

• Move down one level
– replace one stub at a time with actual code.
– any fault must be in the newly integrated unit (a minimal sketch of one such session follows)
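A minimal Python sketch of one such session for the Calendar example: the top-level logic is first exercised with a stub playing the role of weekDay (returning the same canned answers as the weekDayStub above), and the same checks are then repeated with the real unit swapped in. The function names and the way the collaborator is passed in are illustrative only.

def weekDayStub(mm, dd, yyyy):
    """Stub: canned answers for the dates used in this integration session."""
    return {(10, 28, 2013): "Monday", (10, 30, 2013): "Wednesday"}[(mm, dd, yyyy)]

def weekDay(mm, dd, yyyy):
    """Real unit (here delegated to the standard library), integrated in a later session."""
    import datetime
    return datetime.date(yyyy, mm, dd).strftime("%A")

def calendar_main(mm, dd, yyyy, weekday_fn):
    """Top-level logic under test; the collaborator is passed in so it can be swapped."""
    return f"{mm:02d}/{dd:02d}/{yyyy} is a {weekday_fn(mm, dd, yyyy)}"

# Session 1: main logic with the stub.  Session 2: the same checks, real unit swapped in.
for fn in (weekDayStub, weekDay):
    assert calendar_main(10, 28, 2013, fn) == "10/28/2013 is a Monday"
print("top-down sessions passed")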

Bottom-Up Integration Mechanism

• Reverse of top-down integration

• Start at leaves of the functional decomposition tree.

• Driver units...
– call next level unit
– serve as a small test bed
– "drive" the unit with inputs
– drivers know expected outputs

• As with top-down integration, one driver unit at a time is replaced with actual code.

• Any fault is (most likely) in the newly integrated code.

Top-Down and Bottom-Up Integration

• Both depend on throwaway code.


– drivers are usually more complex than stubs

• Both test just the interface between two units at a time.

• In Bottom-Up integration, a driver might simply reuse unit-level tests for the "lower" unit.

• Fan-in and fan-out in the decomposition tree result in some redundancy.

Sandwich Integration

• Avoids some of the repetition in both top-down and bottom-up integration.

• Nicely understood as a depth-first traversal of the functional decomposition tree.
• A "sandwich" is one path from the root to a leaf of the functional decomposition tree.
• Avoids stub and driver development.
• More complex fault isolation.

"Big Bang" Integration

• No...
– stubs
– drivers
– strategy

• And very difficult fault isolation

• (Named after one of the theories of the origin of the Universe)

• This is the practice in an agile environment with a daily run of the project to that point.

Pros and Cons of Decomposition-Based Integration

• Pros

– intuitively clear
– "build" with proven components
– fault isolation varies with the number of units being integrated

• Cons

– based on lexicographic inclusion (a purely structural consideration)


– some branches in a functional decomposition may not correspond with actual interfaces.
– stub and driver development can be extensive

Call graph-based integrations

Call Graph-Based Integration

• Two strategies
– Pair-wise integration
– Neighborhood integration

• Degrees of nodes in the Call Graph indicate integration sessions


– isLeap and weekDay are each used by three units

• Possible strategies
– test high indegree nodes first, or at least,
– pay special attention to "popular" nodes

Pair-Wise Integration

• By definition, an edge in the Call Graph refers to an interface between the units that are the
endpoints of the edge.

• Every edge represents a pair of units to test.

• Still might need stubs and drivers

• Fault isolation is localized to the pair being integrated.

Neighborhood Integration
• The neighborhood (or radius 1) of a node in a graph is the set of nodes that are one edge
away from the given node.
• This can be extended to larger sets by choosing larger values for the radius.
• Stub and driver effort is reduced (a sketch of computing a radius-1 neighborhood follows).
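A minimal sketch of computing a radius-1 neighborhood from a call graph; the call graph below is a simplified, hypothetical fragment of the Calendar program, not the full graph from the slides.

# Hypothetical call graph fragment: caller -> called units.
CALLS = {
    "main":     {"getDate", "nextDate", "weekDay", "zodiac"},
    "getDate":  {"isValidDate", "getDigits"},
    "nextDate": {"isLeap", "lastDayOfMonth"},
    "weekDay":  {"isLeap"},
    "zodiac":   {"isValidDate"},
}

def neighborhood(unit, calls, radius=1):
    """Units within `radius` edges of `unit` (excluding `unit` itself), ignoring call direction."""
    adj = {}
    for caller, callees in calls.items():
        for callee in callees:
            adj.setdefault(caller, set()).add(callee)
            adj.setdefault(callee, set()).add(caller)
    frontier, seen = {unit}, {unit}
    for _ in range(radius):
        frontier = {n for f in frontier for n in adj.get(f, set())} - seen
        seen |= frontier
    return seen - {unit}

nbhd = neighborhood("isLeap", CALLS)
print(sorted(nbhd))                 # nodes one edge away from isLeap
print(sorted({"isLeap"} | nbhd))    # the units exercised together in one integration session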

Path-based integrations.

Wanted: an integration testing level construct similar to DD-Paths for unit testing...

– extend the symbiosis of spec-based and code-based testing to the integration level
– greater emphasis on behavioral threads
– shift emphasis from interface testing to interactions (cofunctions) among units

• Need some new definitions

– source and sink nodes in a program graph


– module (unit) execution path
– generalized message
– MM-Path

New and Extended Definitions

• A source node in a program is a statement fragment at which program execution begins or
resumes.

• A sink node in a unit is a statement fragment at which program execution terminates.

• A module execution path is a sequence of statements that begins with a source node and
ends with a sink node, with no intervening sink nodes.

• A message is a programming language mechanism by which one unit transfers control to


another unit, and acquires a response from the other unit.

MM-Path Definition and Example

• An MM-Path is an interleaved sequence of module execution paths and messages.

• An MM-Path across three units:

Details of the Example MM-Path

The node sequence in the example MM-Path is:


<a1, a2, a3, a4>

message msg1
<b1, b2>

message msg2
<c1, c2, c4, c5, c6, c8, c9>

msg2 return
<b3, b4, (b2, b3, b4)*, b5>

msg1 return
<a6, a7, a8>

Note: the (b2, b3, b4)* is the Kleene Star notation for repeated traversal of the loop.

About MM-Paths

• Message quiescence: in a non-trivial MM-Path, there is always at least one point at
which no further messages are sent.

• In the example MM-Path, unit C is the point of message quiescence.

• In a data-driven program (such as NextDate), MM-Paths begin (and end) in the main
program.

• In an event-driven program (such as the Saturn Windshield Wiper), MM-Paths begin with the
unit that senses an input event and end in the method that produces the corresponding output
event.

Some Sequences...

• A DD-Path is a sequence of source statements.

• A unit (module) execution path is a sequence of DD-Paths.

• An MM-Path is a sequence of unit (module) execution paths.

• A (system level) thread is a sequence of MM-Paths.

• Taken together, these sequences are a strong "unifying factor" across levels of testing. They
can lead to the possibility of trade-offs among levels of testing (not explored here).

