USLTG: Test Case Automatic Generation by Transforming Use Cases
Chu Thi Minh Hue†,§, Duc-Hanh Dang‡,¶, Nguyen Ngoc Binh‡,|| and
Anh-Hoang Truong*,**
*Department of Software Engineering,
VNU University of Engineering and Technology, Vietnam
†Hung Yen University of Technology and Education, Vietnam
‡Hosei University, Japan
§huectm.di12@vnu.edu.vn
¶hanhdd@vnu.edu.vn
||nnbinh@vnu.edu.vn
**hoangta@vnu.edu.vn
Keywords: Model-based testing; transformation; functional test case; use case; test generation;
USL; USLTG; TCSL.
1. Introduction
Software testing is an important activity to ensure the quality of software. It
accounts for a large part of the development effort. One way to reduce testing effort
while keeping testing effective is to generate functional test cases from
¶ Corresponding author.
use cases. The identification, maintenance, and execution of test cases require
significant effort. Automatically generating and executing test cases saves much
time and effort, and also reduces the number of errors and faults in the testing
activity.
Among current automated testing execution processes, the Model-Based Testing
Int. J. Soft. Eng. Knowl. Eng. 2019.29:1313-1345. Downloaded from www.worldscientific.com
(MBT) process allows generating and performing test cases automatically [1]. Of the
five main steps of this process, the three last steps (test script generation, testing
execution, and testing evaluation) are fully supported by automated testing
tools. However, the automatic test case generation step still faces many difficulties
and challenges. In particular, test case generation from use case specifications remains a
challenge for the research community because use case specifications are often
described in a natural language. In order to fully automate testing activities, generated
test cases need to contain the information necessary to automatically generate test
scripts. The basic idea is to express each test case as abstractly as possible while
making it precise enough to be executed or interpreted by an automated testing
tool [1]. The basic working principle of automated testing tools is to divide a test
case into four different parts: test steps, test objects of each test step, actions within
test objects, and test data [2, 3]. Hence, these tools can run tests directly from test
cases, and there is no need to update low-level test scripts as test cases change.
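The four-part structure described above can be sketched as a simple data model. The following Python sketch is illustrative only; the class and field names (TestStep, TestCase, object_name, and so on) are our own and not taken from any particular testing tool:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four-part test case structure: each step names
# a test object, an action on that object, and per-suite test data values.
@dataclass
class TestStep:
    num: int
    description: str
    object_name: str                          # the test object the step acts on
    action_name: str                          # the action performed on the object
    data: dict = field(default_factory=dict)  # data suite name -> value

@dataclass
class TestCase:
    name: str
    steps: list

tc = TestCase("Lend book / path 1", [
    TestStep(1, "Enter a book copy id", "txtBookCopyId", "enterText",
             {"Data 1": "bc01", "Data 2": "bc02"}),
    TestStep(2, "Click the Lend button", "btnLend", "click"),
])
print(len(tc.steps))  # 2
```

A tool holding test cases in this form can re-run them after a data change without touching any generated script, which is the point made above.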
A considerable number of works, including [4–10], have attempted to introduce
different approaches to automatically generate functional test cases from use cases.
These works proposed methods to generate functional test cases from use case
specifications represented by templates with restricted rules and keywords [4, 5],
use case models in UML activity and sequence diagrams [6, 7], UML diagrams with
contracts [8], or use case models in a Domain-Specific Language (DSL) [9, 10].
However, the works mentioned above often fail to capture relevant test case
information, including concurrent test steps, test object names, actions, and test data
values. These works also lack a test case specification language
(TCSL) capable of capturing reusable functional test cases, i.e., specifications that are
understandable to non-technical stakeholders and precise enough for automated
testing tools to create test scripts automatically.
In this paper, we first propose a TCSL to precisely specify the functional test cases
of a system, then propose a method named USLTG (Use case Specification Language
(USL)-based Test Generation) to generate test cases from use cases captured as USL
models. We specify use cases as models in USL [11]. For each use case, USLTG
generates test scenarios, a snapshot representing the internal system states before
executing the use case, and test data suites from these models. USLTG then transforms
the generated test scenarios, snapshot, and test data suites into a model in TCSL by
model transformations.
. We build three algorithms to transform use cases as USL models into a TCSL
model.
. We implement a support tool to realize the USLTG method. We conduct two case
studies with the tool to illustrate our method. We also evaluate the USLTG
method by comparing it to other related methods.
This work makes four significant improvements over our earlier conference
paper [12]. First, we propose the TCSL with its abstract syntax and OCL well-
formedness rules. Second, we improve the algorithm GenTestInputData to automatically
generate and update the internal states of the system. Third, we conduct an additional
case study in order to illustrate how to apply USLTG in practice. Last but not
least, we provide an evaluation of USLTG.
The rest of this paper is organized as follows. Section 2 introduces the background
and motivation for our work. Section 3 comments on related work. Section 4 overviews
our method. Section 5 briefly presents the USL language. Section 6 presents
the TCSL. Section 7 explains how to generate functional test cases from use cases.
Section 8 briefly discusses the tool support for USLTG, the case studies, and an
evaluation of USLTG. The paper ends with conclusions and future work.
Table 1. The use case description Lend book (adapted from [11]).
to verify compliance with a specific requirement [15]. According to Jim [16], the
creation of test cases from a use case includes three main steps. First, use case
scenarios are identified from the description of the use case; a use case scenario is an
instance of a use case, i.e., a complete path through a use case. Second, test scenarios
by UNIVERSITY OF NEW ENGLAND on 10/22/19. Re-use and distribution is strictly not permitted, except for Open Access articles.
are built from the corresponding use case scenarios. Third, test data suites of each
test scenario are identified. A test case results from combining a test scenario and a
test data suite. A test scenario describes the steps of a testing procedure and its
execution conditions. A test data suite includes values of test inputs and expected outputs. A
test step can be an action that is executed by a primary actor or a checkpoint (a
system action that compares the current value of a specified property with the expected
value of that property [3]). Moreover, a test step can also refer to the test
cases of another use case that is included in, or extends, the use case under
test. Such a test step is called an Invoking Step [10].
In the test design phase, test cases are usually specified in a natural language in a
CSV (Comma-Separated Values) or Excel file. The information structure of
these specifications depends on the automated testing tools [3]. For example, Table 2
gives two test cases to test an execution path of the Lend book use case. They are
combined from one test scenario and two test data suites and contain the necessary
information to be executed automatically by an automated testing tool. These test
cases are specified in a natural language in an Excel file. Lines 1 to 8 describe the test
steps of the test scenario and the two test data suites of the test cases. In particular, the
columns Num, Step, Object, Action, Data 1, and Data 2 present the order of each
test step, the description of each test step, the test object name of each test step, the
action name on the test object of each test step, and two test data suites, respectively.
The two test cases contain the four necessary information parts.
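A table with this layout can be produced mechanically. The following Python sketch (with invented step descriptions and object names) shows the CSV form with the columns Num, Step, Object, Action, Data 1, and Data 2 described above:

```python
import csv
import io

# Illustrative sketch: serializing one test scenario plus two test data
# suites into the CSV layout described above. The row contents are invented.
rows = [
    (1, "Enter a book copy id", "txtBookCopyId", "enterText", "bc01", "bc02"),
    (2, "Click the Lend button", "btnLend", "click", "", ""),
]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Num", "Step", "Object", "Action", "Data 1", "Data 2"])
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # Num,Step,Object,Action,Data 1,Data 2
```

Each data column yields one concrete test case when combined with the shared step, object, and action columns.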
among activities of activity diagrams. They mention activity paths and two
types of activity paths, namely (1) non-concurrent activity paths and (2) concurrent
activity paths.
First, a non-concurrent activity path is a sequence of non-concurrent activities
from the starting activity to the ending activity in an activity graph. In this path
type, each activity in the sequence has at most one occurrence, except for activities
that exist within a loop; each activity in a loop may have at most two occurrences in
the sequence. The non-concurrent activity path is used to cover faults in loops and
branch conditions. To detect a fault in a loop, this criterion tests the loop
executed zero and one time for the while-do loop structure, and one and two times
for the do-while loop structure. Second, a concurrent activity path is a special case of a
non-concurrent path that consists of both non-concurrent and concurrent activities
satisfying the precedence relations among them. To ensure that all precedence relations
among the activities in the activity diagram are satisfied, the authors propose to
select the concurrent activity path such that the sequence of all concurrent activities
encapsulated in that path corresponds to a breadth-first search traversal of them
in the activity graph. For more details, refer to [18].
In order to transform use cases into test cases that ensure quality and can
be executed automatically by an automated testing tool, we identify several
challenges as follows.
3. Related Work
We position our work in the automatic generation of functional test cases from use cases.
Within this context, test cases are often generated by MBT techniques. In order to
automatically generate functional test cases from use cases, several approaches [4–10]
have been proposed. These approaches can be divided into three groups as follows.
In the first group, functional test cases are automatically generated from
specifications in a natural language. For example, the methods of Wang et al. [4] and
Sarmiento et al. [5] follow this approach. In the approach of Wang et al., use cases are
described by RUCM specifications. RUCM is based on a template in natural
language with restricted rules and keywords. This work uses a Natural Language
Processing (NLP) technique to build Use Case Test Models (UCTMs) from RUCM
specifications and then extracts behavioral information and constraints from the
UCTMs in order to generate test scenarios and constraints. Moreover, extracted
In most of the works above, the generated test cases often do not include the information of
test steps, test objects of each test step, actions within test objects, and test data.
This information is necessary to automatically generate test scripts for automated
testing tools.
In this paper, we propose an approach to generate functional test cases from use
case models that are specified as USL models. The generated test cases contain the
necessary information (test steps, test objects of each test step, type of test action,
actions within test objects, and concrete test data) to automatically generate test
scripts by automated testing tools. Test cases containing concurrent actions and
include and extend relationships with other use cases are also generated.
Additionally, we propose a DSL to precisely specify the generated test cases.
Fig. 3. The USL model of the Lend book use case (adapted from [12]).
Fig. 4. A partial USL model of the Lend book use case in abstract syntax.
. UseCase expresses a use case of the system. A system includes one or more use
cases.
. TestScenario represents a test scenario. A use case includes one or more test
scenarios.
. Step denotes a step in a test scenario. The properties step and description
represent the order number and the description of the Step, respectively. This
concept is divided into three specific concepts: TestStep, CheckPoint, and
InvokeStep.
. TestStep expresses a test step executed by actors. The properties objectName and
actionName present the name of the test object and the name of the action that
the actor executes on the test object, respectively.
. CheckPoint defines a test step that is a checkpoint. The properties objectName
and checkPointType present the name of the test object and the type of the
checkpoint, respectively.
. InvokeStep expresses a test step that is an invoking step. The properties type and
ucName capture the type of the invoking action (include or extend) and the name of the
use case whose test cases are inserted into the test steps of this test case, respectively.
. DataCase represents values in the data suites of each test step. A Step includes one or
more DataCases, which belong to different data suites of a test scenario. A
DataCase can include one or more values of different properties, which are
specified by the concept DataType.
. StepConnection defines relations between Steps. Normally, Steps are executed
sequentially. If Steps are executed concurrently, the initial point and the final point of the
concurrent Steps are represented by a StepConnection of type fork and a
StepConnection of type join, respectively.
. SnapShot expresses the internal system states before executing a use case. A SnapShot
includes a set of MObjects, which are instances of the system's domain concepts
related to the use case, and a set of MLinks, which are links between MObjects.
. MObject represents an instance of a domain concept of the system. An MObject
includes a set of MAttributes.
Figure 6 shows a partial TCSL model in abstract syntax that specifies the test cases
generated from the Lend book use case. This model is an instance of the TCSL
metamodel and is shown in graphic form. The model is stored in an XML file containing
tags that define instances of the TCSL metaconcepts. This figure focuses on displaying
the test cases in Table 2. In particular, the System object captures the system under test.
The UseCase object specifies the use case of the defined test cases, and the SnapShot object
captures the system snapshot expressing the internal system states before executing the use
case. The TestScenario object captures a test scenario, which can include TestSteps,
CheckPoints, or InvokeSteps. The DataCase object specifies the input data values or
expected output data values of each TestStep or CheckPoint in the test data suites.
A TCSL model contains the sets of test scenarios generated from use cases. Each test
scenario contains one or more test data suites. A test case is a combination of a test
scenario and a test data suite.
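This combination rule can be stated in a few lines. The sketch below is illustrative; the function name combine and the suite contents are our own, not part of TCSL:

```python
# Sketch of the combination rule just described: each test case pairs one
# test scenario with one of its test data suites (names are illustrative).
def combine(scenario_name, suites):
    """Return one (scenario, suite) test case per data suite."""
    return [(scenario_name, suite) for suite in suites]

cases = combine("sc4", [{"bcid": "bc01", "bid1": ""},
                        {"bcid": "bc01", "bid1": "b99"}])
print(len(cases))  # 2 test cases from one scenario and two suites
```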
The purpose of USLTG is to identify USL paths and to solve the set of
guard conditions on each path in order to identify test input values for test cases. The
following subsections present the algorithms that generate test scenarios and test data
values.
On each generated path, guard conditions on a FlowEdge are taken as part of the
pre-condition of the corresponding next action. Each path is considered a use case
scenario. Then, each use case scenario is revisited to generate a test scenario.
Definition 3 (A stored USL path). A USL path is stored in a variable
ep = ⟨s0, s1, ..., sn, sf⟩. ep stores a sequence of objects in a USL model D, where s0 is
the InitialNode and sf is a FinalNode of D, and each si (∀i = 1, ..., n) is an Action, a
ForkNode, a JoinNode, or a sequence of Actions. A pair of a ForkNode and a
JoinNode defines the starting point and the finishing point of sequences of Actions
that are executed concurrently. Each Action has a pre-condition and a post-
condition. In particular, the pre-condition is a constraint of the Action that must be
satisfied to execute the Action, and the post-condition is a constraint that must be
satisfied after the Action finishes.
To identify test cases satisfying the test requirement from activity diagrams, the
work in [18] converts an activity diagram into an activity graph and then traverses
the activity graph to obtain activity paths that achieve the activity path coverage criterion.
However, in this paper, to avoid the complexity of graph transformation, we
directly obtain USL paths from USL models. We implemented a scenario generation
algorithm named GenScenarios to generate a set of USL paths from a USL model
through a Depth-First Traversal (DFT), as shown in Algorithm 1. The following
describes this algorithm.
GenScenarios takes as input a USL model D. Its output is a set of paths that
correspond to the constrained scenarios of the use case. The derived paths achieve
the USL path coverage criterion.
The procedure GenerateScenarios calls several functions: outGoing(x) returns the
set of outgoing FlowEdges from the USLNode x; targetNode(e) returns the target
USLNode of the FlowEdge e; defGuard(g, a) combines the Constraint g with the pre-
Constraint of the Action a and then moves the result into the pre-Constraint of the
Action a; and guardE(e) returns the guard Constraint of the FlowEdge e. Additionally,
the procedure VisitSubpath(allsubSteps, allsubPaths, subSteps, subPaths, target-
Node(e), jn) is called to generate the sets of all subpaths and all substeps starting from the
different USLNodes, i.e. the target USLNodes of the outgoing FlowEdges from the ForkNode x, to
a JoinNode jn.
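The depth-first idea behind GenScenarios can be illustrated with a much-simplified sketch. This is not the paper's Algorithm 1 (it ignores guards, fork/join nodes, and the USL-specific node types), but it shows path enumeration with the loop bound of at most two occurrences per node, mirroring the coverage criterion above:

```python
# Minimal sketch (not the paper's GenScenarios) of depth-first path
# enumeration over a control-flow graph, bounding loops by allowing each
# node at most two occurrences on any one path.
def enumerate_paths(edges, start, finals, max_occ=2):
    paths = []

    def dfs(node, path):
        if path.count(node) >= max_occ:
            return                      # loop bound reached; prune this branch
        path = path + [node]
        if node in finals:
            paths.append(path)          # reached a final node: record the path
            return
        for nxt in edges.get(node, []):
            dfs(nxt, path)

    dfs(start, [])
    return paths

# Tiny invented example: a1 -> d1; d1 branches to a2 (which loops back
# to d1) or to the final node f1.
edges = {"a1": ["d1"], "d1": ["a2", "f1"], "a2": ["d1"]}
for p in enumerate_paths(edges, "a1", {"f1"}):
    print(p)
```

The loop body is taken at most once per path here, so the traversal yields one path that skips the loop and one that goes around it once, analogous to the zero/one and one/two loop executions discussed in Sec. 2.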
Example 7.1. The algorithm GenScenarios identified five USL paths that
correspond to five constrained use case scenarios from the USL model of the Lend
book use case, as follows. We use the two symbols "<" and ">" to denote the relations
between an action and its pre-condition and post-condition, respectively. The use
case scenarios sc1 to sc4 contain two actions executed concurrently: {a12 > p2} and
{a13 > p3}:
— sc1 = {a1, a2, a3, a4, a5, a6, g1 < a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc2 = {a1, a2, a3, a4, a5, a6, g2 < a14 > p4, a5, a6, g1 < a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc3 = {a1, a2, a3, a4, a5, a6, g2 < a14 > p4, a15, a16, a5, a6, g1 < a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc4 = {a1, a2, a3, a4, a5, a6, g1 < a7, a8, g4 < a17, a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc5 = {a1, a2, a3, a4, a5, a6, g1 < a7, a8, g3 < a9, a10, g6 < a18 > p6, c8}.
When the Lend book use case is executed according to each use case scenario,
state transitions of the system occur. For example, when the Lend book use case is
executed according to scenario sc1, the state transitions of the system occur as
follows:
sc1 = s0 --[true|a1|true]--> a1 --[true|a2|true]--> a2 --[true|a3|true]--> a3
  --[true|a4|true]--> a4 --[true|a5|true]--> a5 --[true|a6|true]--> a6
  --[g1|a7|true]--> a7 --[true|a8|true]--> a8 --[g3|a9|true]--> a9
  --[true|a10|true]--> a10 --[g5|a11|p1]--> a11 --[true|c4|true]--> c4
  --[{true|a12|p2, true|a13|p3}]--> a12a13 --[true|c5|true]--> c5
  --[true|c7|true]--> c7,
where each transition is labeled pre-condition|action|post-condition.
the system; CONF is a configuration file that contains the value domains of the internal
states of the system and of the environment variables (value domains from the business).
The output of GenTestInputData is a list of pairs and a system snapshot named OMpi.
Each pair includes a constrained use case scenario scenario and objects of the class EV,
where each object of EV is a test input data suite of scenario.
The algorithm first generates a system snapshot named OMpi from the configuration
file CONF by calling the procedure GenerateSnapshot(CONF) at line 3. The
algorithm then browses the scenarios saved in paths with a foreach loop at line 4 to
generate test input data suites for each scenario.
the OCL constraints INVs on the class model CM, with the value domains identified in
the file CONF and the values identified in the object diagram OMp.
However, the use case scenario sc can contain actions that change the internal
states of the system. Hence, the values of the input variables that are entered after
these actions must be re-identified. These values depend on the new internal states of the
system and on the values of the input variables entered before these actions.
The while loop at Line 9 updates the internal states of the system and the
environment variables based on the post-condition of the action a that changes the
internal system states. The loop then combines the internal states of the system with the
values of the entered input variables and resolves the constraints in INVs. The loop finds
new values for the input variables that are entered after the action a. These new values
combine with the values of the input variables entered before the
action and the new internal states of the system to satisfy the constraints in INVs.
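The update-and-re-solve loop can be illustrated schematically. The sketch below is not Algorithm 2 (the post-condition application and the "solver" are toy stand-ins of our own), but it shows why later inputs must be chosen against the updated internal state:

```python
# Illustrative sketch of the update/re-solve idea described above: after a
# state-changing action, later input values are chosen against the NEW state.
state = {"balance": 100}  # toy internal system state

def apply_postcondition(state, delta):
    # Stand-in for applying a state-changing action's post-condition
    # to the system snapshot (here: a withdrawal reduces the balance).
    state["balance"] -= delta
    return state

def solve_next_input(state, candidates):
    # Stand-in for the constraint solver: pick a later input value that is
    # still valid under the updated state (amount must not exceed balance).
    return next(v for v in candidates if v <= state["balance"])

apply_postcondition(state, 70)            # the first action changes the state
amount2 = solve_next_input(state, [100, 50, 20])
print(amount2)  # 20: the only candidate valid under the new balance of 30
```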
Each object of EV in the object diagram OMf corresponds to a test input data suite of
the scenario. The objects of EV identified in the object diagram OMf and the use case
scenario sc are then pushed into a list named TCs.
Procedure processAction(a, EV) at Line 28 is used by Procedure processScenario(sc)
at Lines 22 and 25. This procedure identifies the input variables that an actor
action sends to the system and replaces the names of variables in the OCL constraint
expressions of actions.
Example 7.2. Figure 7 shows a class model CD in (a) and a system snapshot of the
class model CD in (b). This snapshot represents the internal states of the system
before executing the Lend book use case. It was generated by the procedure
GenerateSnapshot in Line 3 of Algorithm 2.
The following results are returned when the algorithm GenTestInputData generates
test input data suites for the use case scenario sc4:
. The created class EV is EV(bcid, bid1, bid2).
. The created file INVs includes the following constraints:
context EV
inv g1: BookCopy.allInstances()->exists(b:BookCopy | b.bcid = bcid)
inv g4: (bid1 = '') or (Borrower.allInstances()->exists(b:Borrower | b.bid = bid1) = false)
Fig. 7. The class diagram models domain concepts of the Library system.
The object diagram OMf returned from Algorithm 2 includes the objects in
Fig. 7(b) and the generated EV objects in Fig. 7(c). The objects in this object
diagram are identified so as to satisfy the invariants in INVs.
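The two invariants above have a direct operational reading. The following Python paraphrase of g1 and g4, evaluated against a toy in-memory snapshot (the data values are invented, not the paper's), illustrates what the OCL solver must satisfy:

```python
# Python paraphrase of the OCL invariants g1 and g4 above, evaluated
# against a toy in-memory snapshot (illustrative data only).
book_copies = [{"bcid": "bc01"}, {"bcid": "bc02"}]
borrowers = [{"bid": "b01"}]

def g1(ev):
    # BookCopy.allInstances()->exists(b | b.bcid = bcid)
    return any(b["bcid"] == ev["bcid"] for b in book_copies)

def g4(ev):
    # (bid1 = '') or no Borrower has that id
    return ev["bid1"] == "" or not any(b["bid"] == ev["bid1"] for b in borrowers)

ev = {"bcid": "bc01", "bid1": "b99"}  # a candidate test input suite
print(g1(ev) and g4(ev))  # True: this suite satisfies both invariants
```

A suite with an existing borrower id (e.g. bid1 = 'b01') would violate g4 and be rejected, which is exactly the filtering role the invariants play in the data generation step.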
has a post-condition is mapped to a CheckPoint. For each object of EV, the input
variables in the SystemAction's post-condition expression are replaced by their
values in the EV object; the post-condition expression is then mapped to a DataCase
of the CheckPoint as an expected output. Each pair of a ForkNode and a JoinNode is
mapped to a StepConnection of type fork and a StepConnection of type join,
respectively.
Example 7.3. Figure 6 depicts the part of the generated TCSL model that contains
two test cases corresponding to the scenario sc4 of the Lend book use case in the
Library system.
define the domain of system snapshots. The output of the USLTG tool is a TCSL
model. To develop the USLTG tool, we implemented Modules (1) and (2) and used
Module (3). Module (1) is an Acceleo program that reads the input models and
transforms the generated test cases into a TCSL model using procedure (3), which
implements the algorithm GenTCSLModel. Acceleo is a model-to-text (M2T)
transformation language of the Eclipse framework [21]. Module (2) is implemented as a Java
class. It contains two main procedures, (1) and (2), which
implement the algorithms GenScenarios and GenTestInputData, respectively.
Module (3) is an OCL solver that USLTG uses to solve OCL constraints. This
solver is the use case partial solution completion of the OCL Analysis tool; it is a
plugin of the USE tool and was developed by Gogolla et al. [20].
In particular, the Acceleo program first reads the USL model and the UML class
model and then passes these models to the Java module. Second, the algorithm
GenScenarios in Module (2) generates the constrained use case scenarios. Third, the
constrained use case scenarios generated by GenScenarios are passed to the
algorithm GenTestInputData to generate pairs ⟨scenario, EVObjects⟩. To generate the
EV objects, GenTestInputData reads the configuration file and calls
Module (3) to solve the OCL constraints. The results of GenTestInputData
in Module (2) are passed to the algorithm GenTCSLModel in
Module (1). Finally, GenTCSLModel in Module (1) transforms the
returned results of GenTestInputData into a TCSL model.
We use the results produced during the automatic test case generation for the
Withdraw use case [18] as the illustration in this section. Table 3 shows a description of the
Withdraw use case. In order to automatically build other software artifacts, including test
cases, from the use cases, this use case description is modeled by a USL model as
in Fig. 9.
This model includes one basic flow, 3 alternate flows (4a, 5a, 5b), 13 actions, 3
FinalNodes (f1, f2, f3), 4 DecisionNodes, 8 guard conditions (g1–g8), 6 action
post-conditions (p1–p6), one use case variable (cNum), and one pre-condition of the
use case (g0). In this example, the pre-condition of the use case is specified by one
constraint associated with the InitialNode c0 to constrain the variables of the use
case before executing the use case.
First, USLTG reads the USL model in Fig. 9 and identifies the constrained use
case scenarios as follows:
— sc1 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g3 < a7 > p2, a8 > p3, a9 > p4, f1};
— sc2 = {g0 < c0, a1, a2, a3, a4, g2 < a10 > p1, a3, a4, g1 < a5, a6, g3 < a7 > p2, a8 > p3, a9 > p4, f1};
— sc3 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g4 < a11, g5 < a12, g7 < a13 > p6, f2};
— sc4 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g4 < a11, g5 < a12, g8 < a7 > p2, a8 > p3, a9 > p4, f1};
— sc5 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g4 < a11, g6 < a14 > p5, f3}.
Second, USLTG reads the class diagram capturing the domain concepts of the
ATM system, as shown in Fig. 10(a), and a configuration file describing the value domains
of the system states to generate a snapshot of the class diagram and the test input data. In
particular, Procedure GenerateSnapshot in Line 3 of Algorithm 2 generates a system
snapshot of the class diagram, as shown in Fig. 10(b). The EV classes and the processed
OCL constraints INVs of the five scenarios are also generated, as shown in Table 4.
The first column shows the scenario name and the names of the inputs (each input is
represented by a property of the class EV), and the second column shows the OCL
constraints of INVs. The EV objects returned from Procedure OCLSolver for the five
scenarios are shown in Table 5. Table 6 shows the test cases resulting from the
combination of the use case scenarios and the test data.
Fig. 10. The domain class diagram of the ATM system and its system snapshot.
Finally, USLTG transforms the identified use case scenarios, snapshots, and test
input data into a TCSL test case specification model. Listing 1 shows the XML file of
the TCSL model that specifies the functional test cases of the Withdraw use case. Our
work applies the activity path coverage criterion [18] in order to generate the system test
scenarios. This criterion covers both loop testing and concurrency among the
activities of activity diagrams. Hence, it also ensures that the generated test
scenarios achieve the branch coverage criterion and the basic path coverage criterion
[4, 5, 7, 22]. However, only [5, 7] address use cases that contain concurrent
actions, as our work does. All concurrent actions are executed at least once in testing. In
the work of Straszak et al. [10], use case scenarios are manually identified and
specified in RSL [19] and then transformed into test scenarios. Thus, the coverage of the
test scenarios depends on the initial identification of the use case scenarios by
Table 4. Input variables and processed guard constraints of the use case scenarios of Withdraw.

Scenario   Object EV
sc1        ev1: cNum='91b126', amount=5
sc2        ev1: cNum='91b126', amount1=100, amount2=2
           ev2: cNum='91b126', amount1=1, amount2=2
sc3        ev1: cNum='91b124', amount=40
sc4        ev1: cNum='91b124', amount=30
sc5        ev1: cNum='91b125', amount=50
modelers. Table 7 shows the total number of test scenarios generated by applying
our approach and several proposed approaches to three use cases. Among these
three use cases, the Lend book use case contains concurrent actions.
We use the letter "N" to denote works that do not consider use cases
containing concurrent actions.
To automatically generate test cases for the three use cases, Lend book,
Withdraw [18], and IIOSS [4], we first create USL models to capture the use case
descriptions. USLTG then reads the USL models sequentially to generate test cases
for each use case. These test cases are specified by a TCSL model. It takes several
minutes to generate the test cases for each use case. However, most of the time needed to
automatically create functional test cases with the USLTG method is spent creating the USL
models. The effort required for creating USL models depends on domain knowledge and
expertise with USL and OCL. At the beginning of the project, we had to
spend about two hours to create a USL model, because understanding the
specifications and transforming steps, actions, control nodes, and OCL constraints into
USL models takes time. However, after a short time, it only takes 30 minutes on average to create a
USL model. As discussed in [11], we can build different generators to transform a
USL model into other software artifacts such as template-based use case descriptions,
class models, activity diagrams, sequence diagrams, and test cases. Hence, the initial
effort to create the USL models will be offset by the automated generators. In this
research, we focus on the test case generator. Furthermore, to increase
modeling performance, we could additionally provide a textual editor to create
USL models, as discussed in our previous research [11].
Through these examples, we applied functional test case generation to the following
cases: test cases that include concurrent actions (the Lend book use case), test
cases that include an action calling another use case (also the Lend book use case),
and test cases that use input data entered in another use case (the Withdraw use case
uses cNum, the card number entered in the preceding Insert card use case). These case
studies show that USLTG can handle such situations as they occur in practice.
We evaluate the USLTG method based on the ability to identify the information
of generated test cases and on the test case specification method. In Sec. 8.3, we
compare the USLTG method with other methods.
Test case information   [4]   [5]   [8]   [22]   [7]   [9]   [10]   USLTG
(c1) Test scenario       Y     Y     Y     Y      Y     Y     Y      Y
(c2) Test input          V     N     N     D      N     N     D      V
Criteria c1 and c2 are given to compare the ability to identify test scenarios and
test data. Normally, when generating functional test cases from use cases, test
scenarios are easier to generate than test data; accordingly, test scenarios are
generated in all of the works above. However, test data are often not identified, or
not fully identified, e.g. given as descriptions instead of specific values. Test input
data are identified as concrete values, and expected output data as OCL constraints,
in the work of Wang et al. [4] and in ours. In [10] and [22], the authors identified
test data as descriptions (conditions to be satisfied) instead of concrete values; in
other words, they only presented methods to generate test scenarios without test
data. In addition, the pre-conditions of test scenarios in [7, 9, 10] are described in
natural language. In our work, by contrast, if a pre-condition describes constraints
on variables of the use case (these are often input data entered before the use case
executes), those variables are identified as concrete values. For example, in the use
case Withdraw, the pre-condition of the use case is ATMCard.allInstances()->exists(a |
a.cNum = cNum); in the scenario sc1, the pre-condition of sc1 is cNum = '91b126'.
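To illustrate how such a pre-condition is concretized, the following sketch (with hypothetical names and balances; not the USLTG implementation) evaluates the exists() constraint over a stored system snapshot and binds cNum to a concrete value, as in scenario sc1 of Table 4:

```python
# A system snapshot: the ATMCard instances that exist before Withdraw
# runs. The instances and balances here are illustrative assumptions.
snapshot = {"ATMCard": [{"cNum": "91b126", "balance": 100},
                        {"cNum": "91b124", "balance": 40}]}

def bind_cnum(snapshot):
    """Bind cNum to a value satisfying the pre-condition
    ATMCard.allInstances()->exists(a | a.cNum = cNum):
    the card number of any existing instance will do."""
    cards = snapshot.get("ATMCard", [])
    return cards[0]["cNum"] if cards else None

print(bind_cnum(snapshot))  # 91b126, as in the pre-condition of sc1
```

Because the binding is drawn from the snapshot, the snapshot itself must be stored alongside the test case for the pre-condition to remain reproducible.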
Second, in order to execute test cases automatically with an automated testing tool,
the type of each test step (checkpoint or actor action) and the action name and object
name in each test step must be identified. Criteria c3 and c4 compare these
characteristics of the test steps identified in test scenarios. None of the other works
concretely identifies the action names and object names of test steps in test scenarios.
Similarly, the type of test steps is also not concretely identified in [4, 5, 8, 22].
Only [9, 10] identify the type of test steps; however, unlike our work, they do not
concretely identify the types of checkpoints (our work distinguishes 10 checkpoint types).
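As a sketch of the information that criteria c3 and c4 call for, a test step can be represented by its type, action name, and object name. The field and value names below are illustrative assumptions, not the actual TCSL metamodel:

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    kind: str         # "actorAction" or "checkpoint"
    action_name: str  # the concrete action to perform or to verify
    object_name: str  # the object the action targets

# An actor action followed by a checkpoint (hypothetical names):
steps = [TestStep("actorAction", "enterAmount", "txtAmount"),
         TestStep("checkpoint", "checkMessage", "lblStatus")]
print([s.kind for s in steps])  # ['actorAction', 'checkpoint']
```

With all three fields concretely identified, an automated testing tool can map each step to a script command without human interpretation.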
Third, use case scenarios can contain concurrent actions and relationships with
other use cases. Hence, Criteria c5 and c6 compare whether the works address test
cases containing concurrent actions and relationships with the test cases of other
use cases. Several works do not address test scenarios containing concurrent
actions [4, 8-10]. Similarly, test scenarios containing relationships with other use
cases, such as <<include>> and <<extend>>, are also not addressed in [4, 5, 8, 9, 22].
Finally, in order to capture and reuse test cases precisely, test cases need to be
specified in a language built with a metamodeling technique. Criterion c7 therefore
compares whether the works describe test cases in natural language or as a model
conforming to a metamodel (the metamodel of a DSL). Among the mentioned works,
only [9, 10] proposed metamodels to specify test cases. Moreover, the concrete
input data values of a test case are identified on the basis of a system snapshot, so
system snapshots also need to be identified and stored; only our work identifies
and stores these system snapshots.
9. Conclusion
In this paper, we proposed TCSL and introduced the USLTG method to auto-
matically generate executable functional test cases from use cases. The generated
functional test cases are precisely specified by a TCSL model and contain the
information necessary for automated testing tools to generate test scripts and run
the test cases directly. We also implemented a support tool to realize our method,
applied USLTG to several case studies, and compared the USLTG method with
other methods.
As future work, we will focus on providing a concrete syntax for TCSL and on
enriching its abstract syntax in order to provide better support for modelers.
Acknowledgments
This research is funded by Vietnam National Foundation for Science and Technology
Development (NAFOSTED) under Grant No. 102.03-2014.23.
References
1. M. Utting and B. Legeard, Practical Model-Based Testing: A Tools Approach (Morgan
Kaufmann, San Francisco, 2007).
2. R. P. Mg, Learning Selenium Testing Tools, 3rd edn. (Packt Publishing, Birmingham,
Mumbai, 2015).
3. Mercury, QuickTest Professional User's Guide 6.5 (Mercury Interactive Corporation,
Sunnyvale, 2003).
4. C. Wang, F. Pastore, A. Goknil, L. Briand and Z. Iqbal, Automatic generation of system
test cases from use case specifications, in Proc. Int. Symp. Software Testing and
Analysis, 2015, pp. 385-396.
5. E. Sarmiento, J. C. S. do Prado Leite, E. Almentero and G. S. Alzamora, Test scenario
generation from natural language requirements descriptions based on petri-nets, Electr.
Notes Theor. Comput. Sci. 329 (2016) 123–148.
6. J. J. Gutierrez, M. J. Escalona, M. Mejías and J. Torres, An approach to generate test
cases from use cases, in Proc. 6th Int. Conf. Web Engineering, 2006, pp. 113–114.
7. S. Tiwari and A. Gupta, An approach of generating test requirements for agile software
development, in Proc. 8th India Conf. Software Engineering, 2015, pp. 186–195.
8. C. Nebut, F. Fleurey, Y. Le Traon and J.-M. Jezequel, Automatic test generation: A use
case driven approach, IEEE Trans. Softw. Eng. 32(3) (2006) 140–155.
Federated Conf. Computer Science and Information Systems, 2014, pp. 1569–1574.
11. C. T. M. Hue, D. D. Hanh, N. N. Binh and L. M. Duc, USL: A domain-specific language
for precise specification of use cases and its transformations, Informatica 42(3) (2018)
325-343.
12. C. T. M. Hue, D. D. Hanh and N. N. Binh, A transformation-based method for test case
automatic generation from use cases, in Proc. 10th Int. Conf. Knowledge and Systems