
International Journal of Software Engineering and Knowledge Engineering
Vol. 29, No. 9 (2019) 1313–1345
© World Scientific Publishing Company
DOI: 10.1142/S0218194019500414

USLTG: Test Case Automatic Generation by Transforming Use Cases

Chu Thi Minh Hue†,§, Duc-Hanh Dang*,¶, Nguyen Ngoc Binh‡,‖ and Anh-Hoang Truong*,**

*Department of Software Engineering, VNU University of Engineering and Technology, Vietnam
†Hung Yen University of Technology and Education, Vietnam
‡Hosei University, Japan

§huectm.di12@vnu.edu.vn
¶hanhdd@vnu.edu.vn
‖nnbinh@vnu.edu.vn
**hoangta@vnu.edu.vn

¶Corresponding author.

Received 14 November 2018


Revised 4 January 2019
Accepted 14 February 2019

This paper proposes a transformation-based method, named USLTG (Use case Specification Language (USL)-based Test Generation), to automatically generate functional test cases from use cases. We first develop a modeling language named Test Case Specification Language (TCSL) to express test cases. Test cases in TCSL can contain detailed information including test steps, test objects within steps, actions of test objects, and test data. Such information is often ignored in currently available test case specifications. We then generate test cases as a TCSL model by transforming use cases that are represented by USL models. The USLTG transformation includes three main steps: generating (1) scenarios, (2) test data, and (3) a TCSL model. Within our transformation, an OCL solver is employed to build system snapshots as part of the test cases and to identify other test data. We applied our method to two case studies and evaluated it by comparing it with other recent works.

Keywords: Model-based testing; transformation; functional test case; use case; test generation;
USL; USLTG; TCSL.

1. Introduction
Software testing is an important activity to ensure the quality of software. It accounts for a large part of the development effort. A way to reduce the testing effort and ensure the effectiveness of testing is to generate functional test cases from

functional requirements during requirement engineering, an early phase of software development. However, software requirements often change during the development process, so the related functional test cases have to be rebuilt and re-executed. Moreover, functional requirements are often captured and represented by use cases. Hence, the identification, maintenance, and execution of test cases require significant effort. Automatically generating and executing test cases helps to save time and effort as well as to reduce the number of errors and faults in the testing activity.
Among current automated testing processes, the Model-Based Testing (MBT) process allows generating and executing test cases automatically [1]. Of the five main steps of this process, the three last steps (test script generation, test execution, and test evaluation) are fully supported by automated testing tools. However, the automatic test case generation step still faces many difficulties and challenges. In particular, test case generation from use case specifications remains a challenge for the research community because use case specifications are often described in a natural language. In order to fully automate testing activities, generated test cases need to contain the information necessary to automatically generate test scripts. The basic idea is to express each test case as abstractly as possible while making it precise enough to be executed or interpreted by an automated testing tool [1]. The basic working principle of automated testing tools is to divide a test case into four different parts: test steps, test objects of each test step, actions within test objects, and test data [2, 3]. Hence, these tools can run tests directly from test cases, and there is no need to update low-level test scripts as test cases change.
A considerable number of works, including [4–10], have attempted to introduce different approaches to automatically generate functional test cases from use cases. These works proposed methods to generate functional test cases from use case specifications represented using templates with restricted rules and keywords [4, 5], use case models in UML activity and sequence diagrams [6, 7], UML diagrams with contracts [8], or use case models in a Domain-Specific Language (DSL) [9, 10]. However, in the current works mentioned above, relevant information of test cases, including concurrent test steps, test object names, actions, and test data values, is often not captured. These works also lack a test case specification language capable of capturing reusable functional test cases whose specifications are understandable to non-technical stakeholders and precise enough for automated testing tools to create test scripts automatically.
In this paper, we first propose a Test Case Specification Language (TCSL) to precisely specify the functional test cases of a system, and then propose a method named USLTG (Use case Specification Language (USL)-based Test Generation) to generate test cases from use cases captured as USL models. We specify use cases as models in USL [11]. For each use case, USLTG generates test scenarios, a snapshot representing the internal system state before executing the use case, and test data suites from these models. USLTG then transforms the generated test scenarios, snapshot, and test data suites into a TCSL model by model transformations.

To summarize, the main contributions of this paper are as follows.

. We propose a language named TCSL to specify functional test cases.
. We introduce a method named USLTG to automatically generate functional test cases from use cases.
. We build three algorithms to transform use cases captured as USL models into a TCSL model.
. We implement a support tool to realize the USLTG method. We conduct two case studies with the tool to illustrate our method. We also evaluate the USLTG method by comparing it to other related methods.

This work makes four significant improvements over our earlier conference paper [12]. First, we propose the TCSL with its abstract syntax and OCL well-formedness rules. Second, we improve the algorithm GenTestInputData to automatically generate and update the internal states of the system. Third, we conduct an additional case study in order to illustrate how to apply USLTG in practice. Last but not least, we provide an evaluation of USLTG.
The rest of this paper is organized as follows. Section 2 introduces the background and motivation for our work. Section 3 comments on related work. Section 4 overviews our method. Section 5 briefly presents the USL language. Section 6 presents the TCSL. Section 7 explains how to generate functional test cases from use cases. Section 8 briefly discusses the tool support of USLTG, the case studies, and an evaluation of USLTG. The paper ends with conclusions and future work.

2. Motivation and Background


In this section, we first discuss the background knowledge that we use in this research. We then discuss our motivations.

2.1. Use case


Use cases are a software artifact commonly used for capturing and structuring functional requirements. A use case is defined as "the specification of sequences of actions, including variant sequences and error sequences, that a system, subsystem, or class can perform by interacting with outside objects to provide a service of value" [13]. As a requirement artifact, a use case model is commonly specified by a UML use case diagram and loosely structured textual descriptions [14]. Use case models are a central artifact in software development. They are used as an input to generate other software artifacts, such as class diagrams, activity diagrams, sequence diagrams, and functional test cases. Figure 1 shows a simplified use case diagram of a Library system, and Table 1 shows the description of the Lend book use case in the Library system. This use case is invoked when the librarian executes a book lending transaction. In this paper, we use the Lend book use case as an input to generate functional test cases and to illustrate this research.

Fig. 1. A simplified use case diagram of the Library system.



Table 1. The use case description Lend book (adapted from [11]).

Use case name: Lend Book
Brief description: The Librarian processes a book loan.
Actors: Librarian.
Pre-condition: The librarian has logged into the system successfully.
Post-condition: If the use case ends successfully, the book loan is saved and a completion message is shown. Otherwise, the system displays an error message.
Trigger: The Librarian requests a book-loan process.
Special requirement: There is no special requirement.
Basic flow
(1) The Librarian selects the Lend Book function.
(2) The system shows the Lend-book window, gets the current date and assigns it to the book-loan date.
(3) The Librarian enters a book copy id.
(4) The system checks the book copy id. If it is invalid, it goes to step (4a(1)).
(5) The Librarian enters a borrower id.
(6) The system validates the borrower id. If it is invalid, it goes to step (6a(1)).
(7) The Librarian clicks the save-book-loan button.
(8) The system validates the conditions to lend the book. If they are invalid, the system goes to step (8a(1)).
(9) The system saves the book loan record, then executes steps (10) and (11) concurrently.
(10) The system shows a completion message.
(11) The system prints the borrowing bill.
Alternate flows
(E1) Request searched book.
(1) The Librarian selects the search function after step (4a(1)).
(2) The system executes the extending use case Search book.
(4a) The book copy id is invalid.
(1) The system shows an error message, then goes to step (3).
(6a) The borrower id is invalid.
(1) The system shows an error message, then goes to step (5).
(8a) The lending condition is invalid.
(1) The system shows an error message.
(2) The system ends the use case.

2.2. Test case


A test case is defined as "a specification of inputs, execution conditions, testing procedure, and expected results". It defines a single test to be executed to achieve a particular software testing objective, such as to traverse a particular program path or to verify compliance with a specific requirement [15]. According to Jim [16], the creation of test cases from a use case includes three main steps: first, use case scenarios are identified from the description of the use case. A use case scenario is an instance of a use case, i.e. a complete path through the use case. Second, test scenarios are built from the corresponding use case scenarios. Third, test data suites for each test scenario are identified. A test case results from combining a test scenario and a test data suite. A test scenario describes the steps of a testing procedure and their execution conditions. A test data suite includes values of test inputs and expected outputs. A test step can be an action that is executed by a primary actor, or a checkpoint (a system action that compares the current value of a specified property with the expected value of that property [3]). Moreover, a test step can also refer to the test cases of another use case that is included in the use case or extends the use case. Such a test step is called an Invoking Step [10].
In the test design phase, test cases are usually specified in a natural language in a CSV (Comma-Separated Values) or Excel file. The information structure of these specifications depends on the automated testing tools [3]. For example, Table 2 gives two test cases to test an execution path of the Lend book use case. They are combined from one test scenario and two test data suites and contain the information necessary to be executed automatically by an automated testing tool. These test cases are specified in a natural language in an Excel file. Lines 1 to 8 describe the test steps of the test scenario and the two test data suites of the test cases. In particular, the columns Num, Step, Object, Action, Data 1, and Data 2 present the order of each test step, the description of each test step, the test object name of each test step, the action name on the test object of each test step, and the two test data suites, respectively. The two test cases thus contain the four necessary information parts; a minimal data-type rendering of this structure is sketched after Table 2.

Table 2. Two test cases of the Lend book use case.

Num | Step | Object | Action | Data 1 | Data 2
1 | Librarian selects the Lend-book function | lendbookF | select | |
2 | Librarian enters a book copy id | bcid | enter | "bc 01" | "bc 03"
3 | Librarian enters a borrower id | bid | enter | "b 03" |
4 | Librarian enters a borrower id | bid | enter | "b 02" | "b 02"
5 | Librarian clicks the save-book-loan button | SaveBLoan | click | |
6 | The system saves the book loan record | BookLoan | verify | A book loan is recorded |
7 | The system shows a complete message | message | verify | "Book loan save complete" |
8 | The system prints the borrowing bill | bBill | verify | "Book loan receipt is printed" |
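
To make this four-part structure concrete, the following minimal Java record (ours; the field names simply follow Table 2's columns and are not part of the paper's tooling) models one line of such a specification.

import java.util.List;

// One line of a tool-consumable test case specification: the four parts an
// automated testing tool needs (step, object, action, data) plus the row number.
public class TestStepRowExample {
    record TestStepRow(int num, String step, String object, String action,
                       String data1, String data2) {}

    public static void main(String[] args) {
        // Line 2 of Table 2 expressed as data:
        TestStepRow row = new TestStepRow(2, "Librarian enters a book copy id",
                "bcid", "enter", "bc 01", "bc 03");
        System.out.println(List.of(row));
    }
}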

2.3. Test coverage criteria


A test coverage criterion is a set of rules that guides deciding which elements must be covered to make test case design adequate [17]. In this section, we present the test coverage criterion applied to generate test cases in our method.
Kundu and Samanta [18] proposed a test coverage criterion named the activity path coverage criterion. This criterion addresses both loop testing and concurrency among the activities of activity diagrams. They defined activity paths and two types of activity paths, namely (1) non-concurrent activity paths and (2) concurrent activity paths.
First, a non-concurrent activity path is a sequence of non-concurrent activities from the starting activity to the ending activity in an activity graph. In this path type, each activity in the sequence has at most one occurrence, except those activities that exist within a loop. Each activity in a loop may have at most two occurrences in the sequence. Non-concurrent activity paths are used to cover faults in loops and branch conditions. To detect a fault in a loop, this criterion tests the loop executed zero and one time for the while-do loop structure, and one and two times for the do-while loop structure. Second, a concurrent activity path is a special case of a non-concurrent path, which consists of both non-concurrent and concurrent activities satisfying the precedence relations among them. To ensure that all precedence relations among the activities in the activity diagram are satisfied, the authors proposed to select the concurrent activity path such that the sequence of all concurrent activities encapsulated in that path corresponds to a breadth-first search traversal of them in the activity graph. The loop rule is illustrated by the small sketch below. For more details, refer to [18].
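
As a compact reading of the loop rule above, the following small Java sketch (ours; purely illustrative, not part of [18]) tabulates how many times a loop body must be exercised under this criterion for each loop structure.

import java.util.List;
import java.util.Map;

// Required loop-body executions under the activity path coverage criterion:
// a while-do body may be skipped, so test 0 and 1 iterations; a do-while body
// always runs at least once, so test 1 and 2 iterations.
public class LoopCoverageSketch {
    public static void main(String[] args) {
        Map<String, List<Integer>> requiredIterations = Map.of(
                "while-do", List.of(0, 1),
                "do-while", List.of(1, 2));
        requiredIterations.forEach((loop, counts) ->
                System.out.println(loop + " -> test with body executed " + counts + " time(s)"));
    }
}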
In order to transform use cases into test cases that ensure quality and that can be executed automatically by an automated testing tool, we identify several challenges as follows.

Test scenario generation: Generated test scenarios must contain complete information about test steps, such as execution conditions, descriptions of test steps, test objects within steps, actions on test objects, and the type of each test step. For example, in Table 6, the columns Step, Object, and Action show the descriptions of test steps, the test objects within steps, and the actions on test objects, respectively.
Test data generation: Generated test data suites must contain concrete values of inputs and expected outputs.
Precise test case specification: To reuse test cases or to automatically transform test cases into different forms, the generated test cases should be precisely specified by a specification language.

3. Related Work
We position our work in the automatic generation of functional test cases from use cases. Within this context, test cases are often generated by MBT techniques. In order to automatically generate functional test cases from use cases, several approaches [4–10] have been proposed. These approaches can be grouped into three groups as follows.
In the first group, functional test cases are automatically generated from specifications in a natural language. For example, the methods of Wang et al. [4] and Sarmiento et al. [5] follow this approach. In the approach of Wang et al., use cases are described by specifications in RUCM. RUCM is based on a template in natural

language with restricted rules and keywords. That work uses Natural Language Processing (NLP) techniques to build Use Case Test Models (UCTMs) from RUCM specifications and then extracts behavioral information and constraints from the UCTMs in order to generate test scenarios and constraints. However, the extracted constraints are represented in a natural language. Hence, to automatically generate test data, engineers have to re-express the extracted constraints in OCL. Another limitation of that work is that it does not handle test scenarios containing concurrent test steps. Similarly, Sarmiento et al. proposed an approach to generate test scenarios from a use case specification in a Restricted form of Natural Language (RNL). To automatically generate test scenarios, this approach automatically translates RNL specifications into Petri-Net models. These models are then used as an input for test scenario generation. In both works, use cases are described in natural languages that are enriched with restricted rules and keywords. However, specifications in these languages are often imprecise. Consequently, they require much effort to check their correctness before being automatically transformed into the intermediate models that are used to generate test cases.
In the second group, functional test cases are automatically generated from UML diagrams that are used to capture use cases, such as activity and sequence diagrams. For example, Gutierrez et al. [6] proposed a process to generate test cases from use cases for web applications. First, a use case is specified by an activity diagram to generate test scenarios. Because the generated test scenarios are described at a high level, it is hard to implement test scripts from them. Therefore, that work used the sequence diagrams proposed in the UML Testing Profile to specify each test scenario in detail. Tiwari et al. [7] proposed an approach to generate test cases from use case specifications using actor-oriented activity diagrams. This approach allows generating systematic, prioritized test sequences. Nebut et al. [8] enhanced UML diagrams with contracts to automatically generate functional test cases. However, the works [6–8] did not address test data generation for the generated test scenarios. Furthermore, the types of test steps in these works were not identified concretely.
In the third group, functional test cases are automatically generated from use cases that are expressed using DSLs, as in the works of Gutierrez et al. [9] and Straszak et al. [10]. Gutierrez et al. [9] presented a method that extends an NDT (Navigational Development Technique) model and model transformations to automatically generate functional test cases from the requirements of web applications. That work also proposed metamodels to specify the generated test scenarios and test data. However, it did not consider generating test cases that include concurrent actions or «include» and «extend» relations with other use cases. In addition, it only gave the model transformations and their relationships, and it did not present in detail how to generate test scripts and test data or how to transform the models. Straszak et al. [10] presented a Requirement-Driven Software Testing tool (ReDSeT). ReDSeT allows generating test cases for different requirement types. The generated test cases are specified by a Test Specification Language (TSL). The work of Straszak et al. presented a method to generate functional test cases from use case

specifications in the Requirement Specification Language (RSL) [19]. However, in this work, use case scenarios are created by engineers, and test data are considered as pre-condition and post-condition expressions of test scenarios. Moreover, this work did not consider test cases containing concurrent actions.
In most of the works above, the generated test cases often do not include the information of test steps, test objects of each test step, actions within test objects, and test data. This information is necessary to automatically generate test scripts for automated testing tools.
In this paper, we propose an approach to generate functional test cases from use case models that are specified as USL models. The generated test cases contain the information (test steps, test objects of each test step, types of test actions, actions within test objects, and concrete test data) necessary to automatically generate test scripts with automated testing tools. Test cases containing concurrent actions and «include» and «extend» relationships with other use cases are also generated. Additionally, we propose a DSL to precisely specify the generated test cases.

4. Overview of Our Approach


Figure 2 shows the main steps of our USLTG approach. USLTG aims to automatically generate test cases from use cases. Our goal is to address the challenges listed in Sec. 2. In our approach, we employ the USL language [11] to specify use cases. USLTG then reads a UML class model capturing the domain concepts of the system, the configuration file of the system, and USL models as inputs to automatically generate the functional test cases. These generated test cases are represented as a TCSL model. Here, we define TCSL as a language to specify functional test cases. This language is presented in Sec. 6. To automatically generate test cases from USL models, USLTG executes three main steps: first, for each USL model, USLTG traverses the model to generate use case scenarios and constraints. Second, for each use case, USLTG generates a snapshot of the class model. For each use case scenario, USLTG then employs the OCL constraint solver proposed in [20] to solve
Fig. 2. Overview of the USLTG approach.



the constraints and generate an object diagram. The object diagram contains objects corresponding to the test input data suites of the use case scenario. Finally, the generated snapshots, use case scenarios, constraints, and test input data suites are transformed into a TCSL model. A more detailed explanation of these steps is given in Sec. 7.

5. Representing Use Cases in USL


This section briefly describes the USL language used to specify use cases. This language was proposed in our previous work [11]. USL models can be used to automatically generate other software artifacts by transformations and are easily understood by non-technical stakeholders (see [11] for more details).
Figure 3 shows the USL model of the Lend book use case. This model captures the information of the Lend book use case as described in Table 1. In this model, the overview information of the use case is presented by the concept DetailInf in the upper left corner of the model. Steps in the basic flow are presented by ActorSteps (s1, s3, s5) and SystemSteps (s2, s4, s6, ..., s13). These Steps are connected together by BasicFlowEdges (b1, ..., b18), DecisionNodes, a ForkNode, and a JoinNode. Steps in each alternate flow are also presented by ActorSteps and SystemSteps. They are connected by DecisionNodes and AlternateFlowEdges, which are labeled with the name of the alternate flow, such as 4a, 6a, 8a, and E1. Furthermore, the actions in each step are specified in detail by different action types (a1, ..., a18). Moreover, the guard conditions of flows are specified by constraints associated with FlowEdges (BasicFlowEdge or AlternateFlowEdge) (g1, ..., g6), and the pre- and post-conditions of each action (if any) are specified by pre-constraints and post-constraints associated with the action (p1, ..., p6). The initial point and the final points of the use case are specified by the concepts InitialNode and FinalNode. Figure 4 shows a partial USL model of the Lend book use case in abstract syntax.

6. The Test Case Specification Language


We propose TCSL to precisely specify functional test cases. A TCSL model aims to capture the information of the functional test cases of a system. A system includes a set of use cases. Each use case includes one or more snapshots capturing the internal system states before executing the use case and a set of test cases, each of which corresponds to a test scenario with test data. In particular, the test data of a use case are identified on the basis of the internal system states. These system states are instances of the domain concepts of the system that relate to the use case. Figure 5 shows the metamodel of TCSL.
The TCSL metamodel includes the following metaconcepts:

. System represents the system against which the test cases are created.



Fig. 3. The USL model of the Lend book use case (adapted from [12]).

Fig. 4. A partial USL model of the Lend book use case in abstract syntax.

Fig. 5. The metamodel of the TCSL language.

. UseCase expresses a use case of the system. A system includes one or more use cases.
. TestScenario represents a test scenario. A use case includes one or more test scenarios.
. Step denotes a step in a test scenario. The properties step and description represent the order number and the description of the Step, respectively. This concept is specialized into three specific concepts: TestStep, CheckPoint, and InvokeStep.
. TestStep expresses a test step executed by actors. The properties objectName and actionName present the name of the test object and the name of the action that the actor executes on the test object, respectively.
. CheckPoint defines a test step that is a checkpoint. The properties objectName and checkPointType present the name of the test object and the type of the checkpoint, respectively.
. InvokeStep expresses a test step that invokes another use case. The properties type and ucName capture the type of the invoking action (include or extend) and the name of the use case whose test cases are inserted into the test steps of the current test cases, respectively.
. DataCase represents the values in the data suites of each test step. A Step includes one or more DataCases, which belong to different data suites of a test scenario. A DataCase can include one or more values of different properties, which are specified by the concept DataType.
. StepConnection defines relations between Steps. Normally, Steps are executed sequentially. If Steps are executed concurrently, the initial point and the final point of the concurrent Steps are presented by a StepConnection of type fork and a StepConnection of type join.
. SnapShot expresses the internal system states before executing a use case. A SnapShot includes a set of MObjects and a set of MLinks, which are instances of the system's domain concepts relating to the use case and links between the MObjects.
. MObject represents an instance of a domain concept of the system. An MObject includes a set of MAttributes.

A compact sketch of these metaconcepts as plain data types is given below.
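
As a compact illustration, the following Java sketch (ours, for illustration only; the paper defines the metamodel graphically in Fig. 5) renders the core TCSL metaconcepts and the three Step specializations as plain data types.

import java.util.List;

// A plain-data rendering of the TCSL metaconcepts described above; names follow
// the metamodel, but the field layout is an illustrative assumption.
public class TcslMetamodelSketch {
    record SystemUnderTest(String name, List<UseCase> useCases) {}
    record UseCase(String name, List<SnapShot> snapshots, List<TestScenario> scenarios) {}
    record TestScenario(String name, List<Step> steps) {}

    // Step is specialized into TestStep, CheckPoint, and InvokeStep.
    sealed interface Step permits TestStep, CheckPoint, InvokeStep {}
    record TestStep(int step, String description, String objectName, String actionName,
                    List<DataCase> dataCases) implements Step {}
    record CheckPoint(int step, String description, String objectName,
                      String checkPointType, List<DataCase> dataCases) implements Step {}
    record InvokeStep(int step, String description, String type, String ucName) implements Step {}

    record DataCase(int suite, String value) {}           // one value per data suite
    record SnapShot(List<MObject> objects) {}             // internal system state
    record MObject(String concept, List<String> attributes) {}

    public static void main(String[] args) {
        Step s = new InvokeStep(1, "Execute the extending use case", "extend", "Search book");
        System.out.println(s);
    }
}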

We additionally define OCL restrictions of the TCSL metamodel as well-formedness rules.

Figure 6 shows a partial TCSL model, in abstract syntax, that specifies the test cases generated from the Lend book use case. This model is an instance of the TCSL metamodel and is shown in graphic form.

Fig. 6. Test cases in Table 2 are captured by a TCSL model.



The model is stored in an XML file containing tags that define instances of the TCSL metaconcepts. The figure focuses on displaying the test cases in Table 2. In particular, the System object captures the system under test. The UseCase object specifies the use case of the defined test cases, and the SnapShot object captures the system snapshot expressing the internal system states before executing the use case. The TestScenario object captures a test scenario, which can include TestSteps, CheckPoints, or InvokeSteps. The DataCase objects specify the input data values or expected output data values of each TestStep or CheckPoint in the test data suites.
A TCSL model contains the sets of test scenarios generated from the use cases. Each test scenario is associated with one or more test data suites. A test case is a combination of a test scenario and a test data suite.

7. Transforming Use Cases into Test Cases


This section first explains the test coverage criteria used in the USLTG method. We then present the three main steps of USLTG, as shown in Fig. 2, to generate test cases from use cases: (1) identify use case scenarios and constraints, (2) generate a snapshot and test input data suites, and (3) create a TCSL model.

7.1. Selecting coverage criteria


A USL model captures the control flows of a use case by concepts adapted from the UML activity diagram. Hence, we use the activity path coverage criterion proposed in [18] for test generation from activity diagrams. A USL path is a path from the initial node to a final node of a USL model. This path corresponds to an activity path of an activity graph. It is also a combination of the basic flow with alternate flows of the use case. We call the activity path coverage criterion applied to a USL model the USL path coverage criterion.
Definition 1 (USL test case). Given a use case scenario of a USL model $D$ that consists of a sequence of actions $(a_1, \ldots, a_n)$, and a test input data suite satisfying all of the pre-conditions of the actions in the scenario, the execution of the scenario with this test input data suite is a test case execution. A test case execution is realized as a path in the LTS $L$ of $D$ that we defined in our previous work [11]:
$p = \sigma_0 \xrightarrow{t_1} \sigma_1 \xrightarrow{t_2} \cdots \xrightarrow{t_n} \sigma_n$, where $t_i = \sigma_{i-1} \xrightarrow{g_i|a_i|r_i} \sigma_i$ ($\forall i = 1, \ldots, n$), $\sigma_0 = L.init$, $\sigma_n \in L.F$, and $t_i \in L.T$.
Definition 2 (The USL path coverage criterion). Let $P_u$ be the set of USL paths of a use case $u$ from a USL model and $T$ be a set of test cases. $T$ is said to satisfy the USL path coverage criterion if $\forall p_i \in P_u$, $\exists t \in T$ such that the scenario corresponding to $t$ follows the USL path $p_i$.
When a test case $t \in T$ is executed, the execution result of this test case is "Pass" if every $r_i$ ($\forall i = 1, \ldots, n$) is satisfied; otherwise it is "Fail".

The purpose of USLTG is to identify the USL paths and to solve the set of guard conditions on each path in order to identify test input values for test cases. The following subsections present the algorithms to generate test scenarios and test data values.

7.2. Generation of use case scenarios and constraints


We aim to generate test scenarios that satisfy the test coverage and adequacy criteria. We first identify all USL paths, following the activity path coverage criterion proposed in [18], from the InitialNode to a FinalNode in a USL model. Note that in each generated path, the guard conditions on a FlowEdge are taken as part of the pre-condition of the corresponding next action. Each path is considered as a use case scenario. Each use case scenario is then revisited to generate a test scenario.
Definition 3 (A stored USL path). A USL path is stored in a variable $ep = \langle s_0, s_1, \ldots, s_n, s_f \rangle$. $ep$ stores a sequence of objects of a USL model $D$, where $s_0$ is the InitialNode and $s_f$ is a FinalNode of $D$, and each $s_i$ ($\forall i = 1, \ldots, n$) is an Action, a ForkNode, a JoinNode, or a sequence of Actions. A pair of a ForkNode and a JoinNode defines the starting point and the finishing point of sequences of Actions that are executed concurrently. Each Action has a pre-condition and a post-condition. In particular, the pre-condition is a constraint of the Action that must be satisfied to execute the Action, and the post-condition is a constraint that must be satisfied after the Action finishes.
To identify test cases satisfying the test requirement from activity diagrams, the work in [18] converts an activity diagram into an activity graph and then traverses the activity graph to get activity paths that achieve the activity path coverage criterion. In this paper, however, to avoid the complexity of graph transformation, we directly derive USL paths from USL models. We implemented a scenario generation algorithm named GenScenarios that generates a set of USL paths from a USL model through a Depth-First Traversal (DFT), as shown in Algorithm 1. The following describes this algorithm.
GenScenarios takes as input a USL model D. Its output is a set of paths that correspond to the constrained scenarios of the use case. The derived paths achieve the USL path coverage criterion.
The procedure GenerateScenarios calls several functions: outGoing(x) returns the set of outgoing FlowEdges from the USLNode x; targetNode(e) returns the target USLNode of the FlowEdge e; defGuard(g, a) combines the Constraint g with the pre-Constraint of the Action a and stores the result as the pre-Constraint of the Action a; guardE(e) returns the guard Constraint of the FlowEdge e. Additionally, the procedure VisitSubpath(allsubSteps, allsubPaths, subSteps, subPaths, targetNode(e), jn) is called to generate the sets of all subpaths and all substeps starting from the different USLNodes, i.e. the target USLNodes of the outgoing FlowEdges of the ForkNode x, down to a JoinNode jn. A sketch of the traversal idea is given below.
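
As a concrete illustration of the core idea of Algorithm 1, the following minimal Java sketch (ours, not the paper's pseudocode) performs a depth-first traversal that collects every path from the initial node to a final node, allowing each node at most two occurrences on a path so that loops are unrolled at most twice, as the activity path coverage criterion requires. The adjacency-map representation and the node names are illustrative assumptions.

import java.util.*;

// Depth-first enumeration of use-case flow paths with bounded loop unrolling.
public class GenScenariosSketch {
    private final Map<String, List<String>> outGoing;   // node -> target nodes
    private final Set<String> finalNodes;
    private final List<List<String>> paths = new ArrayList<>();

    public GenScenariosSketch(Map<String, List<String>> outGoing, Set<String> finalNodes) {
        this.outGoing = outGoing;
        this.finalNodes = finalNodes;
    }

    public List<List<String>> generate(String initialNode) {
        visit(initialNode, new ArrayList<>());
        return paths;
    }

    private void visit(String node, List<String> path) {
        // Bound loop unrolling: at most two occurrences of a node per path.
        if (Collections.frequency(path, node) >= 2) return;
        path.add(node);
        if (finalNodes.contains(node)) {
            paths.add(new ArrayList<>(path));            // one complete USL path
        } else {
            for (String next : outGoing.getOrDefault(node, List.of())) {
                visit(next, path);                       // depth-first traversal
            }
        }
        path.remove(path.size() - 1);                    // backtrack
    }

    public static void main(String[] args) {
        // A tiny flow with a loop (a3 -> a4 -> a3) and two final nodes.
        Map<String, List<String>> edges = Map.of(
                "init", List.of("a1"),
                "a1", List.of("a2"),
                "a2", List.of("a3"),
                "a3", List.of("a4", "f1"),
                "a4", List.of("a3", "f2"));
        new GenScenariosSketch(edges, Set.of("f1", "f2"))
                .generate("init")
                .forEach(System.out::println);
    }
}

Fork/join handling (the VisitSubpath procedure) is omitted in this sketch; the real algorithm additionally interleaves concurrent branches in breadth-first order, following [18].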

Example 7.1. The algorithm GenScenarios identified five USL paths that correspond to five constrained use case scenarios from the USL model of the Lend book use case, as follows. We use the two symbols "<" and ">" to denote the relations between an action and its pre-condition and post-condition, respectively. The use case scenarios sc1 to sc4 contain two actions executed concurrently: {a12 > p2} and {a13 > p3}:
— sc1 = {a1, a2, a3, a4, a5, a6, g1 < a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc2 = {a1, a2, a3, a4, a5, a6, g2 < a14 > p4, a5, a6, g1 < a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc3 = {a1, a2, a3, a4, a5, a6, g2 < a14 > p4, a15, a16, a5, a6, g1 < a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc4 = {a1, a2, a3, a4, a5, a6, g1 < a7, a8, g4 < a17, a7, a8, g3 < a9, a10, g5 < a11 > p1, c4, {a12 > p2}, {a13 > p3}, c5, c7};
— sc5 = {a1, a2, a3, a4, a5, a6, g1 < a7, a8, g3 < a9, a10, g6 < a18 > p6, c8}.
When the Lend book use case is executed according to a use case scenario, state transitions of the system occur. For example, when the Lend book use case is executed according to scenario sc1, the state transitions of the system occur as follows:
$sc1 = \sigma_0 \xrightarrow{true|a1|true} \sigma_{a1} \xrightarrow{true|a2|true} \sigma_{a2} \xrightarrow{true|a3|true} \sigma_{a3} \xrightarrow{true|a4|true} \sigma_{a4} \xrightarrow{true|a5|true} \sigma_{a5} \xrightarrow{true|a6|true} \sigma_{a6} \xrightarrow{g1|a7|true} \sigma_{a7} \xrightarrow{true|a8|true} \sigma_{a8} \xrightarrow{g3|a9|true} \sigma_{a9} \xrightarrow{true|a10|true} \sigma_{a10} \xrightarrow{g5|a11|p1} \sigma_{a11} \xrightarrow{true|c4|true} \sigma_{c4} \xrightarrow{\{true|a12|p2,\ true|a13|p3\}} \sigma_{a12a13} \xrightarrow{true|c5|true} \sigma_{c5} \xrightarrow{true|c7|true} \sigma_{c7}.$

7.3. Generation of test input data suites


The domain concepts of the system are captured by a class diagram named CD. Before executing a use case, the system has internal states that are captured by an object diagram. This object diagram is an instance of the class diagram CD and is called a system snapshot. The test input data of each test scenario of the use case are identified based on this system snapshot.
A test input data suite of a test scenario is a set of input variable values that actors send to the system. We model the input variables of a test scenario by properties of a class named EV. An action sending input data to the system can be executed many times. For different execution times, the input variables are modeled by different properties of EV consisting of two parts: a prefix and a suffix. The prefixes have the same value, namely the input name. The suffixes are the repeat numbers of the action. For example, in the use case scenario sc4, the class EV includes the following properties: bcid, bid1, and bid2.
In order to generate test input data for the test scenarios of a use case, first, a system snapshot (the state $\sigma_0$ of the state transitions in the scenarios of the use case) is generated. After that, the constraints of each test scenario are solved against the generated system snapshot to identify the test input data of the test scenario. We implemented a test input data generation algorithm named GenTestInputData, as depicted in Algorithm 2, to generate test input data for the test scenarios of a use case. The following describes GenTestInputData in detail.
The algorithm takes as input: paths, the set of constrained use case scenarios of a use case; CD, a UML class model that models the domain concepts of

the system; and CONF, a configuration file that contains the value domains of the internal states of the system and of the environment variables (value domains from the business). The output of GenTestInputData is a list of pairs and a system snapshot named OMpi. Each pair consists of a constrained use case scenario scenario and objects of EV, where each object of EV is a test input data suite of scenario.

The algorithm first generates a system snapshot named OMpi from the configuration file CONF by calling the procedure GenerateSnapshot(CONF) at line 3. It then iterates over the scenarios saved in paths with the foreach loop at line 4 to generate test input data suites for each scenario.
For each scenario, GenTestInputData first creates a class EV to specify the input variables and replaces the input variable names in the OCL constraint expressions of the scenario. Second, it combines EV with CD to generate a class model CM. Third, the algorithm derives and combines all pre-conditions of the actions to create an invariant file INVs. Finally, it calls the procedure OCLSolver(CM, CONF, INVs, OMp) to solve the OCL constraints INVs on the class model CM with the value domains identified in the file CONF and the values identified in the object diagram OMp.
However, a use case scenario sc can have actions that change the internal states of the system. Hence, the values of input variables that are entered after these actions must be re-identified. These values depend on the new internal states of the system and on the values of the input variables that were entered before these actions.
The while loop at line 9 updates the internal states of the system and the environment variables based on the post-condition of the action a that changes the internal system states. The loop then combines the internal states of the system with the values of the already entered input variables, and re-solves the constraints in INVs. The loop finds new values for the input variables that are entered after the action a. These new values of input variables, combined with the values of input variables entered before the action and the new internal states of the system, satisfy the constraints in INVs.
Each object of EV in the object diagram OMf corresponds to a test input data suite of the scenario. The objects of EV identified in the object diagram OMf and the use case scenario sc are then pushed into a list named TCs.
The procedure processAction(a, EV) at line 28 is used by the procedure processScenario(sc) at lines 22 and 25. This procedure identifies the input variables that an actor action sends to the system and replaces the names of the variables in the OCL constraint expressions of the actions. A toy rendering of this flow is given below.
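
As a concrete, runnable illustration of this flow, the following toy Java sketch (ours, not the paper's Algorithm 2) replaces the OCL solver with brute-force enumeration over a configured value domain and models the snapshot as a plain set of identifiers; BookCopy, bcid, and the domain values are illustrative assumptions.

import java.util.*;
import java.util.function.BiPredicate;

// Toy "solve constraints against a snapshot" step of GenTestInputData.
public class GenTestInputDataSketch {
    public static void main(String[] args) {
        // Snapshot: book copies existing before the use case runs (from CONF).
        Set<String> bookCopies = Set.of("bc 01", "bc 03");
        // Value domain of the input variable bcid (also from CONF).
        List<String> bcidDomain = List.of("bc 01", "bc 02", "bc 03");

        // Constraint g1 of the scenario: the entered book copy id must exist,
        // i.e. BookCopy.allInstances()->exists(b | b.bcid = bcid).
        BiPredicate<Set<String>, String> g1 = Set::contains;

        // "Solve" g1 against the snapshot: every satisfying value yields one
        // test input data suite (one EV object).
        List<Map<String, String>> evObjects = new ArrayList<>();
        for (String bcid : bcidDomain) {
            if (g1.test(bookCopies, bcid)) {
                evObjects.add(Map.of("bcid", bcid));
            }
        }
        System.out.println(evObjects); // [{bcid=bc 01}, {bcid=bc 03}]
    }
}

Each satisfying value plays the role of one EV object, i.e. one test input data suite; the real tool instead delegates this step to the OCL solver of [20].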
Example 7.2. Figure 7 shows a class model CD in (a) and a system snapshot of the class model CD in (b). This snapshot represents the internal states of the system before executing the Lend book use case. It was generated by the procedure GenerateSnapshot at line 3 of Algorithm 2.
The following results are returned when the algorithm GenTestInputData generates the test input data suites for the use case scenario sc4:
. The created class EV is EV(bcid, bid1, bid2).
. The created file INVs includes the following constraints:
context EV
inv g1: BookCopy.allInstances()->exists(b:BookCopy| b.bcid=bcid)
inv g4: (bid1='') or (Borrower.allInstances()->exists(b:Borrower| b.bid=bid1)=false)
Fig. 7. (a) The class diagram modeling the domain concepts of the Library system, (b) a system snapshot of the class model, and (c) the generated EV objects.

inv g3: Borrower.allInstances()->exists(b:Borrower| b.bid=bid2)
inv g5: BookLoan.allInstances()->exists(l:BookLoan| (l.bid=bid2) and (l.payed=0))=false

The object diagram OMf returned from Algorithm 2 includes the objects and links in Fig. 7(b) together with the generated EV objects in Fig. 7(c). The objects in this object diagram are identified so as to satisfy the invariants in INVs.

7.4. Generation of a TCSL model


The main purpose of the TCSL is to provide notation for reusable test cases that are understandable to non-technical stakeholders and precise enough for system testing. Algorithm 3 shows the algorithm GenTCSLModel, which transforms the generated use case scenarios and test data suites of each use case of a system into a TCSL model. GenTCSLModel takes a set of USL models lucs that is ordered according to the execution priorities of the use cases. The output of GenTCSLModel is a TCSL model capturing the test cases generated from the use cases of the system.
In particular, this algorithm creates a TCSL model to capture the test cases of the use cases of the system. Each use case is mapped to a use case in the TCSL model. Each generated snapshot of each use case is transformed to a snapshot in the corresponding use case of the TCSL model. Each use case scenario is mapped to a test scenario. The Constraints of the InitialNode and the FinalNode of the use case scenario are transformed to the pre-condition and post-condition of the test scenario, respectively. An ActorAction is mapped to TestSteps corresponding to the parameters of the ActorAction. The values of input variables that are entered by actor actions (ActorAction) in the EV objects are transformed to DataCases of the TestSteps. A SystemAction that is a SystemInclude or a SystemExtend is mapped to an InvokeStep. A SystemAction that is neither a SystemInclude nor a SystemExtend and

that has a post-condition is mapped to a CheckPoint. For each EV object, the input variables in the SystemAction's post-condition expression are replaced by their values in the EV object, and the post-condition expression is then mapped to a DataCase of the CheckPoint as an expected output. Each pair of a ForkNode and a JoinNode is mapped to a StepConnection of type fork and a StepConnection of type join, respectively. A toy rendering of these mapping rules is given below.
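
The following toy Java sketch (ours, not the paper's Algorithm 3) renders the three step-mapping rules above; the Action kinds and Step records are illustrative stand-ins for the USL and TCSL metaconcepts.

import java.util.*;

// ActorAction -> TestStep; SystemInclude/SystemExtend -> InvokeStep;
// other SystemAction with a post-condition -> CheckPoint.
public class GenTCSLModelSketch {

    enum Kind { ACTOR, SYSTEM_INCLUDE, SYSTEM_EXTEND, SYSTEM_WITH_POST }

    record Action(Kind kind, String name) {}
    record Step(String type, String source) {}

    static List<Step> mapScenario(List<Action> scenario) {
        List<Step> steps = new ArrayList<>();
        for (Action a : scenario) {
            switch (a.kind()) {
                case ACTOR -> steps.add(new Step("TestStep", a.name()));
                case SYSTEM_INCLUDE, SYSTEM_EXTEND -> steps.add(new Step("InvokeStep", a.name()));
                case SYSTEM_WITH_POST -> steps.add(new Step("CheckPoint", a.name()));
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        List<Action> sc = List.of(
                new Action(Kind.ACTOR, "a3 enter bcid"),
                new Action(Kind.SYSTEM_WITH_POST, "a11 save book loan"),
                new Action(Kind.SYSTEM_EXTEND, "Search book"));
        mapScenario(sc).forEach(System.out::println);
    }
}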
Example 7.3. Figure 6 depicts the part of the generated TCSL model that contains the two test cases corresponding to scenario sc4 of the Lend book use case in the Library system.

8. Results and Discussions


In this section, we first introduce the tool that we implemented to support the USLTG method. We then present a case study applying the USLTG method to generate test cases. A comparison with other works is also discussed to evaluate our method.

8.1. Tool support


Figure 8 shows the architecture of the USLTG tool that realizes our method. The input of the USLTG tool is a USL model, a conceptual class model, and a configuration file that defines the domains of the system snapshots. The output of the USLTG tool is a TCSL model. To develop the USLTG tool, we implemented Modules (1) and (2) and reused Module (3). Module (1) is implemented as an Acceleo program that reads the input models and transforms the generated test cases into a TCSL model with procedure (3), which implements the algorithm GenTCSLModel. Acceleo is a model-to-text (M2T) transformation language of the Eclipse framework [21]. Module (2) is implemented as a Java class. It contains two main procedures, (1) and (2), which implement the algorithms GenScenarios and GenTestInputData, respectively. Module (3) is the OCL solver that USLTG uses to solve OCL constraints. This solver is the use case partial-solution completion of the OCL Analysis tool. It is a plugin of the USE tool and was developed by Gogolla et al. [20].

Fig. 8. The architecture of USLTG.

In particular, first, the Acceleo program reads the USL model and the UML class model and passes these models to the Java module. Second, the algorithm GenScenarios built into Module (2) generates the constrained use case scenarios. Third, the constrained use case scenarios generated by GenScenarios are passed to the algorithm GenTestInputData to generate the pairs ⟨scenario, EVObjects⟩. To generate the EV objects, the algorithm GenTestInputData reads the configuration file and calls Module (3) to solve the OCL constraints. The results of GenTestInputData in Module (2) are passed to the algorithm GenTCSLModel, which is written in Module (1). Finally, the algorithm GenTCSLModel in Module (1) transforms the results returned from GenTestInputData into a TCSL model. A bird's-eye sketch of this pipeline follows.
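
As a bird's-eye illustration, the following minimal Java sketch (ours) mirrors this module chaining with String placeholders standing in for the real USL model, UML class model, configuration file, and TCSL output.

import java.util.List;

// Module (2): genScenarios + genTestInputData; Module (1): genTcslModel;
// Module (3), the OCL solver, is hidden inside genTestInputData here.
public class UsltgPipelineSketch {
    static List<String> genScenarios(String uslModel) { return List.of("sc1", "sc2"); }
    static List<String> genTestInputData(List<String> scenarios, String classModel, String conf) {
        return scenarios.stream().map(sc -> sc + " + EV objects").toList();
    }
    static String genTcslModel(List<String> pairs) { return "<tcsl>" + pairs + "</tcsl>"; }

    public static void main(String[] args) {
        List<String> scenarios = genScenarios("LendBook.usl");                  // Algorithm 1
        List<String> pairs = genTestInputData(scenarios, "CD.uml", "conf.xml"); // Algorithm 2
        System.out.println(genTcslModel(pairs));                               // Algorithm 3
    }
}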

8.2. Case study


In order to demonstrate the applicability of our method, we applied USLTG to automatically generate functional test cases for three use cases: Lend book, Withdraw [18], and Identify Initial Occupancy Status of a Seat [4], abbreviated as IIOSS. We use the results of the automatic test case generation for the Withdraw use case [18] to illustrate this section. Table 3 shows the description of the Withdraw use case. In order to automatically build other software artifacts, including test cases, from the use case, this use case description is modeled by a USL model as shown in Fig. 9.

Table 3. The use case description Withdraw.

Use case name: Withdraw
Brief description: The customer withdraws cash.
Actors: Customer.
Pre-condition: The Insert card use case was successful.
Post-condition: If the use case was successful, the system updates the balance, dispenses the cash, and prints a receipt for the user. If not, the system displays an error message.
Trigger: The user clicks the withdraw button.
Special requirement: There is no special requirement.
Basic flow
(1) The customer selects the withdraw transaction.
(2) The system shows the withdraw window.
(3) The customer enters the amount she/he wants to withdraw.
(4) The system checks whether the value entered is valid or not; if it is invalid, the system goes to step (4a(1)).
(5) The system retrieves the balance of the user and checks whether the balance is sufficient to withdraw from or not; if the balance is not sufficient, the system goes to step (5a(1)).
(6) The system updates the balance, dispenses the cash, and finally prints a receipt for the user.
Alternate flows
(4a) The customer enters an invalid amount
(1) The system displays an error message for the customer to enter an appropriate value, "Enter a valid amount", then returns to step (3).
(5a) The balance is not sufficient
(1) The system checks whether the customer has permission to overdraft. If the customer has no overdraft permission, go to (5b).
(2) The system compares the amount required with the maximum allowed overdraft limit; if the amount needed is within the limit, go to (6), else go to the next step.
(3) The system displays an error message "Amount is beyond limit".
(5b) The customer has no overdraft permission
(1) The system displays an error message "No permission granted".

This model includes a basic flow, 3 alternate flows (4a, 5a, 5b), 13 actions, 3 FinalNodes (f1, f2, f3), 4 DecisionNodes, 8 guard conditions (g1–g8), 6 action post-conditions (p1–p6), one use case variable (cNum), and one pre-condition of the use case (g0). In this example, the pre-condition of the use case is specified by a constraint associated with the InitialNode c0 to constrain the variables of the use case before executing the use case.

Fig. 9. The USL model of the use case Withdraw.

First, USLTG reads the USL model in Fig. 9 and identifies the constrained use case scenarios as follows:

— sc1 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g3 < a7 > p2, a8 > p3, a9 > p4, f1};
— sc2 = {g0 < c0, a1, a2, a3, a4, g2 < a10 > p1, a3, a4, g1 < a5, a6, g3 < a7 > p2, a8 > p3, a9 > p4, f1};
— sc3 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g4 < a11, g5 < a12, g7 < a13 > p6, f2};
— sc4 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g4 < a11, g5 < a12, g8 < a7 > p2, a8 > p3, a9 > p4, f1};
— sc5 = {g0 < c0, a1, a2, a3, a4, g1 < a5, a6, g4 < a11, g6 < a14 > p5, f3}.

Second, USLTG reads the class diagram capturing the domain concepts of the ATM system, as shown in Fig. 10(a), and a configuration file describing the value domains of the system states to generate a snapshot of the class diagram and the test input data. In particular, the procedure GenerateSnapshot at line 3 of Algorithm 2 generates a system snapshot of the class diagram, as shown in Fig. 10(b). The EV classes and the processed OCL constraints INVs of the five scenarios are also generated, as shown in Table 4. The first column shows the scenario name and the names of the inputs (each input is represented by a property of the class EV), and the second column shows the OCL constraints of INVs. The EV objects returned from the procedure OCLSolver for the five scenarios are shown in Table 5. Table 6 shows the test cases resulting from the combination of the use case scenarios and the test data.

Fig. 10. (a) The domain class diagram of the ATM system and (b) its system snapshot.

Finally, USLTG transforms the identified use case scenarios, snapshots, and test input data into a TCSL test case specification model. Listing 1 shows the XML file of the TCSL model that specifies the functional test cases of the Withdraw use case. Our work applies the activity path coverage criterion [18] in order to generate system test scenarios. This criterion addresses both loop testing and concurrency among the activities of activity diagrams. Hence, it also ensures that the generated test scenarios achieve the branch coverage criterion and the basic path coverage criterion [4, 5, 7, 22]. However, only [5, 7] address use cases that contain concurrent actions, as our work does. All concurrent actions are executed at least once in testing. In the work of Straszak et al. [10], use case scenarios are manually identified and specified in RSL [19] and then transformed to test scenarios. Thus, the coverage of the test scenarios depends on the initial identification of the use case scenarios of

Table 4. Input variables and processed guard constraints of the Withdraw use case scenarios.

Scenario & Inputs: sc1: cNum, amount
context EV
inv g0: ATMCard.allInstances()->exists(a| a.cNum = cNum)
inv g1: (amount >= 2) and (amount <= 50)
inv g3: ATMCard.allInstances()->exists(a| (a.balance >= amount) and (a.cNum = cNum))

Scenario & Inputs: sc2: cNum, amount1, amount2
context EV
inv g0: ATMCard.allInstances()->exists(a| a.cNum = cNum)
inv g2: (amount1 < 2) or (amount1 > 50)
inv g1: (amount2 >= 2) and (amount2 <= 50)
inv g3: ATMCard.allInstances()->exists(a| (a.balance >= amount2) and (a.cNum = cNum))

Scenario & Inputs: sc3: cNum, amount
context EV
inv g0: ATMCard.allInstances()->exists(a| a.cNum = cNum)
inv g1: (amount >= 2) and (amount <= 50)
inv g4: ATMCard.allInstances()->exists(a| (a.balance < amount) and (a.cNum = cNum))
inv g5: Permision.allInstances()->exists(a| a.cNum = cNum)
inv g7: ATMCard.allInstances()->exists(a| (a.cNum = cNum) and (a.balance + Permision.allInstances()->select(p| p.cNum = cNum)->asSequence()->first().limit < amount))

Scenario & Inputs: sc4: cNum, amount
context EV
inv g0: ATMCard.allInstances()->exists(a| a.cNum = cNum)
inv g1: (amount >= 2) and (amount <= 50)
inv g4: ATMCard.allInstances()->exists(a| (a.balance < amount) and (a.cNum = cNum))
inv g5: Permision.allInstances()->exists(a| a.cNum = cNum)
inv g8: ATMCard.allInstances()->select(a| a.cNum = cNum)->asSequence()->first().balance + Permision.allInstances()->select(p| p.cNum = cNum)->asSequence()->first().limit >= amount

Scenario & Inputs: sc5: cNum, amount
context EV
inv g0: ATMCard.allInstances()->exists(a| a.cNum = cNum)
inv g1: (amount >= 2) and (amount <= 50)
inv g4: ATMCard.allInstances()->exists(a| (a.balance < amount) and (a.cNum = cNum))
inv g6: Permision.allInstances()->exists(a:Permision| a.cNum = cNum) = false

Table 5. The generated EV objects of the use case scenarios of Withdraw.

Scenario   Object EV
sc1        ev1: cNum = '91b126', amount = 5
sc2        ev1: cNum = '91b126', amount1 = 100, amount2 = 2
           ev2: cNum = '91b126', amount1 = 1, amount2 = 2
sc3        ev1: cNum = '91b124', amount = 40
sc4        ev1: cNum = '91b124', amount = 30
sc5        ev1: cNum = '91b125', amount = 50

Table 6. The generated test cases of the Withdraw use case.

sc1 (Pre: cNum = '91b126')
  (1) The customer selects butWithdraw. Object: butWithdraw; Action: select
  (2) The customer enters amount. Object: amount; Action: enter; Data 1: 5
  (3) The system updates balance. Object: ATMCard; Action: verify; Data 1:
      ATMCard.allInstances()->exists(a | (a.cNum = '91b126') and (a.balance = a.balance@pre - 5))
  (4) The system dispenses cash. Object: Cashier; Action: verify; Data 1: The cash is dispensed
  (5) The system prints receipt. Object: Printer; Action: verify; Data 1: A Receipt is printed

sc2 (Pre: cNum = '91b126')
  (1) The customer selects butWithdraw. Object: butWithdraw; Action: select
  (2) The customer enters amount. Object: amount; Action: enter; Data 1: 100; Data 2: 1
  (3) The system shows error message. Object: MessageBox; Action: show; Data 1: Enter a valid amount
  (4) The customer enters amount. Object: amount; Action: enter; Data 1: 2; Data 2: 2
  (5) The system updates balance. Object: ATMCard; Action: verify; Data 1 and Data 2:
      ATMCard.allInstances()->exists(a | (a.cNum = '91b126') and (a.balance = a.balance@pre - 2))
  (6) The system dispenses cash. Object: Cashier; Action: verify; Data 1 and Data 2: The cash is dispensed
  (7) The system prints receipt. Object: Printer; Action: verify; Data 1 and Data 2: A Receipt is printed

sc3 (Pre: cNum = '91b124')
  (1) The customer selects butWithdraw. Object: butWithdraw; Action: select
  (2) The customer enters amount. Object: amount; Action: enter; Data 1: 40
  (3) The system shows error message. Object: MessageBox; Action: show; Data 1: Amount is beyond limit

sc4 (Pre: cNum = '91b124')
  (1) The customer selects butWithdraw. Object: butWithdraw; Action: select
  (2) The customer enters amount. Object: amount; Action: enter; Data 1: 30
  (3) The system updates balance. Object: ATMCard; Action: verify; Data 1:
      ATMCard.allInstances()->exists(a | (a.cNum = '91b124') and (a.balance = a.balance@pre - 30))
  (4) The system dispenses cash. Object: Cashier; Action: verify; Data 1: The cash is dispensed
  (5) The system prints receipt. Object: Printer; Action: verify; Data 1: A Receipt is printed

sc5 (Pre: cNum = '91b125')
  (1) The customer selects butWithdraw. Object: butWithdraw; Action: select
  (2) The customer enters amount. Object: amount; Action: enter; Data 1: 50
  (3) The system shows error message. Object: MessageBox; Action: show; Data 1: No permission granted

Table 7. The generated test scenarios in several works.

Use case     USLTG   [4]   [5]   [22]   [7]
Lend book    5       N     5     N      5
Withdraw     5       5     5     5      5
IIOSS        5       5     5     5      5

Table 7 shows the total number of test scenarios generated by applying our approach and several proposed approaches to three use cases. Among these three use cases, the Lend book use case contains concurrent actions. Hence, we use the letter "N" to denote works that do not consider use cases containing concurrent actions.

To automatically generate test cases for the three use cases Lend book, Withdraw [18], and IIOSS [4], we first create USL models that capture the use case descriptions. USLTG then reads the USL models sequentially to generate test cases for each use case; these test cases are specified by a TCSL model. Generating the test cases for each use case takes several minutes. However, most of the time needed to create functional test cases with the USLTG method is spent creating the USL models. The effort required to create a USL model depends on domain knowledge and on expertise with USL and OCL. At the beginning of the project, creating a USL model took us about two hours, because it takes time to understand the specifications and to transform steps, actions, control nodes, and OCL constraints into a USL model. After this initial period, however, creating a USL model takes only 30 minutes on average. As discussed in [11], we can build different generators to transform a USL model into other software artifacts such as template-based use case descriptions, class models, activity diagrams, sequence diagrams, and test cases. Hence, the initial effort to create the USL models is offset by these automated generators. In this research, we focus on the test case generator. Furthermore, to improve modeling performance, we can additionally provide a textual editor for creating USL models, as discussed in our previous research [11].
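As a rough sketch of such a generator, the Acceleo-style template [21] below walks a TCSL model and emits one XML file per test case. The metamodel URI and the type and property names (TestCase, TestStep, steps, and so on) are illustrative assumptions rather than the actual TCSL metamodel:

  [comment encoding = UTF-8 /]
  [module generateTCSL('http://example.org/tcsl')]

  [comment emit one XML file per test case of the TCSL model /]
  [template public generate(tc : TestCase)]
  [file (tc.name.concat('.xml'), false, 'UTF-8')]
  <testCase name="[tc.name/]" precondition="[tc.precondition/]">
  [for (s : TestStep | tc.steps)]
    <step no="[s.no/]" type="[s.type/]" object="[s.object/]" action="[s.action/]" data="[s.data/]"/>
  [/for]
  </testCase>
  [/file]
  [/template]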
Through these examples, we applied functional test case generation to the following situations: test cases that include concurrent actions (the Lend book use case), test cases that include an action calling another use case (also the Lend book use case), and test cases that use input data entered in another use case (the Withdraw use case uses the cNum card number entered earlier in the Insert card use case). These case studies indicate that USLTG can fully address such situations as they arise in practice. We evaluate the USLTG method based on its ability to identify the information of generated test cases and on its test case specification method. In Sec. 8.3, we compare the USLTG method with other methods.

8.3. Criteria comparison


This section provides a criteria-based comparison of functional test case generation from use cases. In order to compare our work with others, we define seven comparison criteria based on the generated test case information and the TCSL. Table 8 lists the comparison results for these criteria. In this table, we use the six letters "Y", "N", "V", "D", "NL", and "M" to denote the information elements of the generated test cases and the proposed TCSL in each method: "Y" denotes that the information is generated; "N" that the information is not identified or discussed; "V" that the identified information consists of concrete values; "D" that the identified information consists of descriptions, such as descriptions of conditions; "NL" denotes natural language; and "M" denotes a specification language built with the metamodeling technique. The details of these criteria are discussed as follows.
First, in order to identify concrete test cases, test scenarios and test data (including test inputs and expected outputs) must be identified. Criteria c1 and c2 therefore compare the ability to identify test scenarios and test data.

Table 8. Comparison of identified information and TCSL of test generation methods.

Test case information                    [4]   [5]   [8]   [22]  [7]   [9]   [10]  USLTG
(c1) Test scenario                       Y     Y     Y     Y     Y     Y     Y     Y
(c2) Test input                          V     N     N     D     N     N     D     V
(c2) Expected output                     V     N     N     D     N     N     D     V
(c3) Type of test steps                  N     N     N     N     N     Y     Y     Y
(c4) Action name and object name         N     N     N     N     N     N     N     Y
(c5) Concurrent actions                  N     Y     N     N     Y     N     N     Y
(c6) Relations with other test cases     N     N     N     N     Y     N     Y     Y
(c7) TCSL                                NL    NL    NL    NL    NL    M     M     M

Normally, when generating functional test cases from use cases, test scenarios are easier to generate than test data; accordingly, test scenarios are generated in all of the works above. However, test data are often not identified, or only partially identified, for example as descriptions instead of specific values. Test input data are identified as concrete values, and expected output data as OCL constraints, in the work of Wang et al. [4] and in ours. In [10] and [22], the authors identified the test data as descriptions (conditions to be satisfied) instead of concrete values; in other words, they only presented methods to generate test scenarios without test data. In addition, the pre-conditions of test scenarios in [7, 9, 10] are described in natural language. In our work, however, if a pre-condition describes constraints on variables of the use case (these are often input data entered before the use case is executed), these variables are resolved to concrete values. For example, the pre-condition of the Withdraw use case is "ATMCard.allInstances()->exists(a | a.cNum = cNum)"; in the scenario sc1, this pre-condition is instantiated as cNum = '91b126'.
Second, in order to execute test cases automatically with an automated testing tool, the type of each test step (checkpoint or actor action), as well as the action name and object name in each test step, must be identified. Criteria c3 and c4 compare these characteristics of the identified test steps. None of the other works concretely identifies the action names and object names of test steps in test scenarios. Similarly, the type of test steps is not concretely identified in [4, 5, 8, 22]. Only [9, 10] identified the type of test steps; however, these works do not identify the types of checkpoints as concretely as ours, which distinguishes 10 checkpoint types.
Third, use case scenarios can contain concurrent actions and relationships with other use cases. Hence, Criteria c5 and c6 compare whether the works consider test cases containing concurrent actions and relationships with the test cases of other use cases. Several works do not consider test scenarios containing concurrent actions [4, 8-10]. Similarly, test scenarios involving relationships with other use cases, such as <<include>> and <<extend>>, are not considered in [4, 5, 8, 9, 22].
Finally, in order to capture and reuse test cases precisely, test cases need to be specified in a language built with a metamodeling technique. Criterion c7 therefore compares whether the works describe test cases in natural language or as a model conforming to a metamodel (the metamodel of a DSL). Among all of the mentioned works, only [9, 10] proposed metamodels to specify test cases. Moreover, the concrete input data values of a test case are identified on the basis of a system snapshot, so system snapshots also need to be identified and stored. However, only our work identifies and stores these system snapshots.

9. Conclusion
In this paper, we proposed TCSL and introduced the USLTG method to automatically generate executable functional test cases from use cases. The generated functional test cases are precisely specified by a TCSL model and contain the information necessary for automated testing tools to generate test scripts and run the test cases directly. We also implemented a support tool realizing our method, applied USLTG to several case studies, and evaluated the USLTG method against other methods.
As future work, we will focus on providing a concrete syntax for TCSL. In addition, we would like to enrich the abstract syntax and enhance the concrete syntax of TCSL in order to provide better support for modelers.

Acknowledgments
This research is funded by Vietnam National Foundation for Science and Technology
Development (NAFOSTED) under Grant No. 102.03-2014.23.

References
1. M. Utting and B. Legeard, Practical Model-Based Testing: A Tools Approach (Morgan Kaufmann, San Francisco, 2007).
2. R. P. Mg, Learning Selenium Testing Tools, 3rd edn. (Packt Publishing, Birmingham, Mumbai, 2015).
3. Mercury, QuickTest Professional User's Guide 6.5 (Mercury Interactive Corporation, Sunnyvale, 2003).
4. C. Wang, F. Pastore, A. Goknil, L. Briand and Z. Iqbal, Automatic generation of system test cases from use case specifications, in Proc. Int. Symp. Software Testing and Analysis, 2015, pp. 385–396.
5. E. Sarmiento, J. C. S. do Prado Leite, E. Almentero and G. S. Alzamora, Test scenario generation from natural language requirements descriptions based on Petri-nets, Electr. Notes Theor. Comput. Sci. 329 (2016) 123–148.
6. J. J. Gutierrez, M. J. Escalona, M. Mejías and J. Torres, An approach to generate test cases from use cases, in Proc. 6th Int. Conf. Web Engineering, 2006, pp. 113–114.
7. S. Tiwari and A. Gupta, An approach of generating test requirements for agile software development, in Proc. 8th India Conf. Software Engineering, 2015, pp. 186–195.
8. C. Nebut, F. Fleurey, Y. Le Traon and J.-M. Jezequel, Automatic test generation: A use case driven approach, IEEE Trans. Softw. Eng. 32(3) (2006) 140–155.
9. J. Gutierrez, G. Aragón, M. Mejías, F. J. Domínguez Mayo and C. M. Ruiz Cutilla, Automatic test case generation from functional requirements in NDT, in Current Trends in Web Engineering, Lecture Notes in Computer Science, eds. M. Grossniklaus and M. Wimmer (Springer, Berlin, 2012), pp. 176–185.
10. T. Straszak and M. Śmialek, Automating acceptance testing with tool support, in Proc. Federated Conf. Computer Science and Information Systems, 2014, pp. 1569–1574.
11. C. T. M. Hue, D. D. Hanh, N. N. Binh and L. M. Duc, USL: A domain-specific language for precise specification of use cases and its transformations, Informatica 42(3) (2018) 325–343.
12. C. T. M. Hue, D. D. Hanh and N. N. Binh, A transformation-based method for test case automatic generation from use cases, in Proc. 10th Int. Conf. Knowledge and Systems Engineering, 2018, pp. 252–257.
13. I. Jacobson, Object-Oriented Software Engineering: A Use Case Driven Approach (Addison-Wesley Longman, Redwood City, 2004).
14. OMG, OMG Unified Modeling Language Version 2.5, http://www.omg.org/spec/UML/2.5 (May, 2005).
15. International Organization for Standardization, ISO/IEC/IEEE International Standard – Systems and software engineering – Vocabulary, ISO/IEC/IEEE 24765:2010(E), 2010, pp. 1–418.
16. J. Heumann, Generating test cases from use cases, Technical report, Rational Software (2001).
17. H. Zhu, P. A. V. Hall and J. H. R. May, Software unit test coverage and adequacy, ACM Comput. Surv. 29 (1997) 366–427.
18. D. Kundu and D. Samanta, A novel approach to generate test cases from UML activity diagrams, J. Object Technol. 8 (2009) 65–83.
19. M. Smialek and W. Nowakowski, From Requirements to Java in a Snap: Model-Driven Requirements Engineering in Practice (Springer, Switzerland, 2015).
20. M. Gogolla and F. Hilken, Model validation and verification options in a contemporary UML and OCL analysis tool, in Proc. Modellierung, 2016, pp. 205–220.
21. Obeo, Acceleo/User Guide – Eclipsepedia, https://wiki.eclipse.org/Acceleo/User_Guide.
22. P. Boghdady, N. Badr, M. Hashem and M. Tolba, A proposed test case generation technique based on activity diagrams, Int. J. Eng. Technol. 11(1) (2011) 37–57.
