KEYWORDS
Model-Based Testing; Test Case Creation; Unified Modeling Language (UML)
Figure 3: Textual vs. model-based test case creation.
…interferes with my work. …to five with MBTCC is very useful (Tbl. 1).

On the basis of a mean (x̄) of 3.29, a standard deviation (s) of 0.98, and a sample size (n) of 55, we conclude a generally positive expectation towards MBTCC. The sample size (n = 55) differs from the 69 valid questionnaires because of invalid or undeclared indications. Regarding the impact of the test engineers' backgrounds (R1), we assumed that their opinion about the "benefit" of model-based test case creation is independent of their background. For statistical verification we use the Pearson correlation, with correlation coefficient (r), p-value, and sample size (n) [21] (Tbl. 1).

Table 1: Correlation between background and expectation on model-based testing.

Background                            r       p       n
What is your main function?          -0.05    0.371   53
What did you study?                   0.03    0.421   55
Experience in automotive industry     0.19    0.088   55
Experience in test case creation      0.10    0.240   55

…cases are 81.6%. Software in the Loop (SiL) with about 5% and Model in the Loop (MiL) with 1.7% are less targeted with automatically performed tests. We also considered the sources for test case creation (Fig. 5).

Figure 5: Sources used for test case creation.
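The correlation check summarized in Tbl. 1 is a plain Pearson computation. A minimal, self-contained sketch is shown below; the sample data is purely illustrative and not taken from the survey:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient r between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))   # scaled std. deviation of xs
    sy = sqrt(sum((y - my) ** 2 for y in ys))   # scaled std. deviation of ys
    return cov / (sx * sy)

# Illustrative data only: years of experience vs. a 1-5 expectation score.
experience = [1, 2, 3, 5, 8, 10, 12, 15]
expectation = [3, 4, 3, 4, 3, 4, 4, 5]
r = pearson_r(experience, expectation)
```

A value of r near 0, as in Tbl. 1, indicates no linear association between background and expectation; the p-values additionally quantify whether such an r could arise by chance.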
"
%
2
!'
6 7 %
$
& A / %) 7;
"
/ %
/'
/&,-
3
%
(
) 7; C
!
/'
, , ,3 3,> >, ,: :,8 8, , ,
6 7 %
$
& %
9 4 $
#2
"
#2
Figure 8: Overview of test case derivation.
Figure 7: Estimated current and possible test coverage.
176
Table 3: Exemplary action block of a derived test case.

Action    Parameter           Expectation
Set       headlightsSwitch    on
Check     headlights          on
Check     lightsIndicator     active

Figure 9: Excerpt of an AD describing requirements on the activation of headlights controls.

Since test cases correspond to valid paths through the AD, the order of test steps is defined by the occurrences of the information names on the respective path. Timing information (e.g., between test steps 1 and 2) is added later in the process, as it depends on the implementation. Thus, the generated test cases remain implementation-independent.
177
languages for various domains, including automotive [23], cloud systems [18], and robotics [2].

We export the diagrams modeled using the SysML modeling tool used at BMW to XML and transform these into UML/P [22] ADs. Based on the ADs, test cases are created and afterwards exported to a file format that is used as input by the tool responsible for executing the tests. The corresponding tool chain comprises three components: (1) a converter transforms the ADs exported from the SysML modeling tool to UML/P ADs; (2) a test case creator transforms the ADs into an intermediate test case representation. To this end, the configuration (cf. Sect. 4.4) is processed, influencing the resulting test cases; and (3) an exporter transforms the resulting test cases to input formats specific to the tool used with product and test bed.

Figure 10: Generating test cases from ADs modeled with the SysML modeling tool used at BMW.

The overall tool chain is illustrated in Fig. 10. The component ADConverter processes the XML ADs and creates UML/P ADs. The TCCreator component processes the UML/P ADs together with a configuration model for the test case generation, as described in Sect. 4. The TCExporter component is used to export the derived test cases. Finally, the exported test cases are used as input for the actual testing tool. The modular infrastructure enables employing MBTCC with changing input and output tools, i.e., if the requirements ADs are no longer specified with the current modeling tool, exchanging the ADConverter suffices to adjust the tool chain accordingly. A similar argument holds for changing the test tool we produce the outputs for.

The generation of actual test cases from the exported XML document as performed by the Generator component consists of four steps:

1.) The set of input and output information is determined by investigating the input AD's interface. In addition, intermediate variables are taken into account by inspecting the guards and assignments of the AD.

2.) All paths through the AD are calculated. To this end, the ADs are normalized, i.e., hierarchies and parallel control flows of an AD are resolved using model transformations. Resolving parallel control flows is done by transforming parallelism into a sequential control flow in a defined order. This transformation can be performed under the assumption that the parallel flows are independent of each other. For each path, expected output values as well as values of intermediate results can be determined by taking into account the assignments of the given path.
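Step 2.), enumerating all paths through a normalized AD, amounts to a depth-first traversal of a directed acyclic graph. A minimal sketch over an assumed adjacency-list encoding of the diagram (the node names are illustrative):

```python
def all_paths(graph, node, sink):
    """Enumerate every path from node to sink in an acyclic graph."""
    if node == sink:
        return [[node]]
    return [[node] + rest
            for succ in graph.get(node, [])
            for rest in all_paths(graph, succ, sink)]

# Toy normalized AD: initial -> decision -> (lights_on | lights_off) -> final
ad = {
    "initial": ["decision"],
    "decision": ["lights_on", "lights_off"],
    "lights_on": ["final"],
    "lights_off": ["final"],
}
paths = all_paths(ad, "initial", "final")   # one path per decision branch
```

Because normalization removes hierarchy and parallelism beforehand, plain path enumeration like this suffices; on a diagram with unresolved concurrency it would not.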
3.) Each calculated path is transformed into a logical formula. The expression is created corresponding to the guards and assignments that are part of the respective path. Under the assumption that the values of information flows do not change during activity execution, the conjunction of all guards and assignments must hold on each valid path. For the left path of the exemplary activity diagram shown in Fig. 9, the calculated formula is:

headlightsSwitch = on ∧ headlights = on ∧ lightsIndicator = active

For paths where all information within guards and assignments is distinct, input values can easily be determined without calculating a logical formula. However, our approach is also capable of finding input values, or of declaring a path invalid, in more complex situations.

4.) Finally, the logic expression for each path is solved using the Z3 Theorem Prover [4], which can be used to check the satisfiability of logical formulas. For valid paths, Z3 creates a witness for a solution of the formula. This witness can be transformed back into value assignments for input information fulfilling the given path of the diagram. If no witness exists, the path is invalid (due to contradictory guard conditions) and is discarded.

In the end, the generation process as shown in Fig. 10 comprises the final step of exporting the actual test cases. For each validated path, the test actions are exported into the format supported by the tool responsible for the test execution. The implemented generator also allows for an easy extension with further coverage criteria. As of now, only structured path coverage is available. Additionally, this criterion can be adapted by the configuration as presented earlier. The implementation of other coverage criteria is planned for the future.
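Conceptually, what Z3 does in step 4.) is find a satisfying assignment (the witness) for the path formula, or report that none exists. For the finite signal domains of the Fig. 9 example this can be sketched as a brute-force search; the domains below are our own assumption, not part of the original model:

```python
from itertools import product

# Assumed discrete domains for the signals of the Fig. 9 example.
domains = {
    "headlightsSwitch": ["on", "off"],
    "headlights": ["on", "off"],
    "lightsIndicator": ["active", "inactive"],
}

def find_witness(domains, constraint):
    """Return a satisfying assignment (witness), or None if the path is invalid."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        env = dict(zip(names, values))
        if constraint(env):
            return env
    return None

# Conjunction of guards/assignments on the left path of Fig. 9.
path_formula = lambda env: (env["headlightsSwitch"] == "on"
                            and env["headlights"] == "on"
                            and env["lightsIndicator"] == "active")
witness = find_witness(domains, path_formula)

# A contradictory path constraint yields no witness and would be discarded.
contradiction = lambda env: (env["headlights"] == "on"
                             and env["headlights"] == "off")
```

Unlike this exponential enumeration, Z3 solves such conjunctions symbolically, which is what makes the approach practical for larger diagrams.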
6 DISCUSSION

The survey results are subject to various threats typical to empirical research. This section reports on these threats and highlights lessons learned from deploying our method at BMW.

6.1 Threats to study validity

We cannot eliminate duplicated submissions and wrong interpretations. Threats to the internal validity arise from the distribution of the survey by email and forwarding. We carefully selected our participants by their specialization in the context of test case creation. With the forwarding, we intended to additionally reach external test engineers involved in the process of test case creation. We can neither ensure to have included all external companies nor to have interrogated all external test engineers, because they were not all invited directly or might not have had the permissions needed to participate in the survey.

For instance, the portions of the produced result (cf. Tbl. 2) could be distributed differently with all the external test engineers included. In particular, the automation of test cases is assigned to external service providers. Moreover, test engineers are often from external companies. Especially the estimation of the current and the possible test case/requirement coverage (cf. Fig. 7) could be influenced if more internal test engineers existed. Although we added the imaginary circumstance "all resources and time you need" for the possible coverage, it could be perceived as a performance review and lead to adjusted estimations.
Threats to construct validity arise from employing an online survey. The style of question and answer options might have influenced the actual answers. For instance, the question about the focus on quality assurance has no alternative for a neutral statement, except "don't know" (cf. Fig. 6).

Besides the survey execution, there are threats to statistical validity. Based on the total sample of 70 participants in the mentioned survey (see Sect. 3.3), we are 95% confident that the margin of error for our sampling procedure and its results is no more than ±20 participants. Furthermore, we mentioned that our results are statistically insignificant, i.e., the p-values of Tbl. 1 indicate such results. Because of the margin of error, the possibility of not every participant responding to every question, and the results being statistically insignificant, we can only assume trends.

In addition to the feedback from the survey, we learned from experience in the current project.

6.2 Lessons learned from applying MBTCC at BMW

Successfully introducing new methodologies requires identifying and convincing all involved stakeholders. Therefore, different interests and necessary organizational changes need to be more than accepted: they have to be understood by all affected persons.

Despite the benefits of MBTCC being understood by all partners, we underestimated the effort required to convince all involved stakeholders. The term automated test case creation was the main issue, caused by wrong communication. This term could be interpreted as full automation, thereby classifying the testers' work as replaceable by a more or less simple algorithm.

In consequence, instead of achieving a cooperation between specifier and tester, which would decrease the workload of both and improve the overall model quality, the new approach was misinterpreted as an additional workload for the specifier, while the created test cases could not replace manually created tests, as the intellectual input of the tester cannot be replaced.

The first perception of the test engineers was that the test cases are created without their knowledge. On the contrary, the defined approach suggests including the test engineers' expertise in the model. Once this general misunderstanding was clarified, it was possible to convince the team members of MBTCC based on SMArDT.

During a pilot phase, test engineers adapted six functions related to e-drive and created over 272 test cases with an additional effort of only 55 hours, as shown in Fig. 11.

Figure 11: First results on MBTCC for six e-drive functions.

Although learning MBTCC requires additional effort, all involved parties agreed that these efforts will be reduced significantly with further tasks and will ultimately pay off for all parties. Moreover, additional effort was necessary to adapt already existing models; conducting MBTCC from scratch would reduce this additional effort. The amount of created test cases relates to the amount of decision logic defined in the model. Function A already includes diagnosis-related functionality, which results in more complex decision logic. In consequence, this function has a significantly higher potential for automated test case creation. The measured effort is lower in comparison to the current manual approach, especially considering further iterations of each function.

Altogether, based on the current experience, the feedback is very positive. Based on first statements, the cooperation between tester and specifier seems to improve the model quality, and the overall effort for test case creation could be reduced. Further feedback has been collected to identify additional potential for improving test case creation regarding coverage criteria and use-case- or requirements-related test case creation.

7 RELATED WORK

The presented work relates to surveys on model-based testing in automotive, as well as to other MBTCC approaches.

While there are, for instance, surveys on employing formal methods to automate design and specification in industry (cf. [28]), reports from automotive software engineering are rare. Where these surveys have been reported, such as in [3], they focus on tool use during requirements engineering and testing instead of the potential benefits of MBTCC. With this focus, the findings presented in [3] did not address our research questions.

Model-based test case creation is subject to active research [5]. Several approaches to (partially or fully) automated derivation of executable test cases from various input modeling languages have been brought forth. The most prominent approaches focus on UML or SysML. These approaches generally can be distinguished based on their support of concurrency in the input models. This includes various approaches to MBTCC that employ UML Statecharts or sequence diagrams as input models [12, 24, 26]. Approaches employing ADs as input prohibit concurrency in the models (cf. [10, 14, 15]). Moreover, various tools to support and facilitate model-based testing have been presented as well. These include, for instance, MBTsuite [1] and UMLTest [19]. MBTsuite translates ADs into test cases and can export these into a variety of different outputs, but does not support concurrency in its input models. UMLTest processes Statecharts, yet it is subject to the same restrictions. UMLTGF [11] processes ADs and, similar to our approach, supports ADs featuring concurrency, but cannot be extended to produce the output format required at BMW.

With the presented extension to SMArDT addressing the transformation from originally natural-language requirements to executable function tests, our work relates to approaches translating natural-language requirements to test cases directly. Such transformations are, for instance, supported by the aToucan4Test framework presented in [29]. With aToucan4Test, input is restricted to a subset of structured English that does not support the precision and complexity achieved through manual function test modeling by domain experts.
Despite applying MBTCC to various domains, such as web services [13], automated test case creation in automotive software engineering with its established development and testing processes is rarely documented. Where it is, it focuses on specific tooling or proposes general, abstract overviews only [25, 30].

8 CONCLUSION

Shortened development processes and frequently modified requirements affect the balance between time to market and high quality standards. To maintain high quality standards, comprehensive testing is crucial to automotive software engineering, yet test case creation is still largely manual. We have conducted a survey to investigate if test engineers in the automotive sector could reap the benefits claimed by model-based testing. The study was sent to 196 participants at BMW, of whom 69 valid questionnaires were returned. The study has shown that the participants expect to benefit from automated test case creation independent of their backgrounds, that they expect the test quality to increase, and that MBTCC is considered most useful for system, subsystem, and component tests. Based on this survey and the experiences from working with the automotive industry, we conceived a method to extend an existing specification method for requirements and testing.

We created an adjustable tool chain that can transform functional requirements modeled as ADs from various input formats into executable test cases for various output formats. Overall, after underestimating the persuasion efforts required to implement our methodology due to ambiguous terminology on our side, the feedback is very positive. Our methodology seems to improve the model quality and reduce the overall effort for test case creation, as hinted by the survey. In addition, the method improves the cooperation between specifier and tester and reduces redundancies, as function and test model can be combined.

REFERENCES
[1] 2017. MBTsuite - the Testing Framework. http://www.mbtsuite.com/ Accessed: 2017-06-11.
[2] Kai Adam, Katrin Hölldobler, Bernhard Rumpe, and Andreas Wortmann. 2017. Engineering Robotics Software Architectures with Exchangeable Model Transformations. In International Conference on Robotic Computing (IRC'17). IEEE, 172–179.
[3] Harald Altinger, Franz Wotawa, and Markus Schurius. 2014. Testing methods used in the automotive industry: Results from a survey. In Proceedings of the 2014 Workshop on Joining AcadeMiA and Industry Contributions to Test Automation and Model-Based Testing (JAMAICA 2014), Christof Budnik, Gabriella Carrozza, David Faragó, Baris Güldali, Barath Kumar, Vittorio Manetti, Roberto Pietrantuono, and Stephan Weißleder (Eds.). ACM Press, New York, NY, USA, 1–6.
[4] Leonardo De Moura and Nikolaj Bjørner. 2008. Z3: An efficient SMT solver. Tools and Algorithms for the Construction and Analysis of Systems (2008), 337–340.
[5] Arilo C. Dias Neto, Rajesh Subramanyan, Marlon Vieira, and Guilherme H. Travassos. 2007. A survey on model-based testing approaches. In Proceedings of the 1st ACM International Workshop on Empirical Assessment of Software Engineering Languages and Technologies, held in conjunction with the 22nd IEEE/ACM International Conference on Automated Software Engineering (ASE) 2007, Eileen Kraemer and Jonathan I. Maletic (Eds.). ACM, New York, NY, 31–36.
[6] Christof Ebert and John Favaro. 2017. Automotive Software. IEEE Software 34, 3 (2017), 33–39.
[7] Robert France and Bernhard Rumpe. 2007. Model-driven Development of Complex Software: A Research Roadmap. Future of Software Engineering (FOSE '07) (May 2007), 37–54.
[8] Object Management Group. 2006. SysML Specification Version 1.0 (2006-05-03). (Aug. 2006).
[9] Arne Haber, Markus Look, Pedram Mir Seyed Nazari, Antonio Navarro Perez, Bernhard Rumpe, Steven Völkel, and Andreas Wortmann. 2015. Integration of Heterogeneous Modeling Languages via Extensible and Composable Language Components. In Model-Driven Engineering and Software Development Conference (MODELSWARD'15). SciTePress, 19–31.
[10] Abdelkamel Hettab, Elhillali Kerkouche, and Allaoua Chaoui. 2015. A Graph Transformation Approach for Automatic Test Cases Generation from UML Activity Diagrams. In Proceedings of the Eighth International C* Conference on Computer Science & Software Engineering. ACM, 88–97.
[11] Yuan Jiesong, Wang Linzhang, Li Xuandong, and Zheng Guoliang. 2006. UMLTGF: A Tool for Generating Test Cases from UML Activity Diagrams Based on Grey-Box Method. Journal of Computer Research and Development 1 (2006), 008.
[12] Supaporn Kansomkeat and Wanchai Rivepiboon. 2003. Automated-generating Test Case Using UML Statechart Diagrams. In Proceedings of the 2003 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on Enablement Through Technology (SAICSIT '03). South African Institute for Computer Scientists and Information Technologists, Republic of South Africa, 296–300.
[13] ChangSup Keum, Sungwon Kang, In-Young Ko, Jongmoon Baik, and Young-Il Choi. 2006. Generating test cases for web services using extended finite state machine. In IFIP International Conference on Testing of Communicating Systems. Springer, 103–117.
[14] Monalisha Khandai, Arup Abhinna Acharya, and Durga Prasad Mohapatra. 2011. Test Case Generation for Concurrent System using UML Combinational Diagram. International Journal of Computer Science and Information Technologies (IJCSIT) 2 (2011).
[15] Jonathan Lasalle, Fabien Peureux, and Jérôme Guillet. 2011. Automatic test concretization to supply end-to-end MBT for automotive mechatronic systems. In Proceedings of the First International Workshop on End-to-End Test Script Engineering. ACM, 16–23.
[16] Peter Liggesmeyer. 2009. Software-Qualität: Testen, Analysieren und Verifizieren von Software. Springer Science & Business Media.
[17] Detlev Mohr, N. Müller, A. Krieg, P. Gao, H.W. Kaas, A. Krieger, and R. Hensley. 2013. The road to 2020 and beyond: What's driving the global automotive industry? (2013).
[18] Antonio Navarro Pérez and Bernhard Rumpe. 2013. Modeling Cloud Architectures as Interactive Systems. In MDHPCL@MoDELS. 15–24.
[19] Jeff Offutt and Aynur Abdurazik. 1999. Generating Tests from UML Specifications. Springer Berlin Heidelberg, Berlin, Heidelberg, 416–429.
[20] Biren Prasad. 1997. Analysis of pricing strategies for new product introduction. Pricing Strategy and Practice 5, 4 (1997), 132–141.
[21] Irene Rößler and Albrecht Ungerer. 2008. Statistik für Wirtschaftswissenschaftler. Physica-Verlag, Heidelberg.
[22] Bernhard Rumpe. 2016. Modeling with UML: Language, Concepts, Methods. Springer International. http://www.se-rwth.de/mbse/
[23] Bernhard Rumpe, Christoph Schulze, Michael von Wenckstern, Jan Oliver Ringert, and Peter Manhart. 2015. Behavioral Compatibility of Simulink Models for Product Line Maintenance and Evolution. In Software Product Line Conference (SPLC'15). ACM, 141–150.
[24] Mahesh Shirole, Amit Suthar, and Rajeev Kumar. 2011. Generation of improved test cases from UML state diagram using genetic algorithm. In Proceedings of the 4th India Software Engineering Conference. ACM, 125–134.
[25] Mark Utting and Bruno Legeard. 2010. Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann.
[26] Thi-Dao Vu, Pham Ngoc Hung, and Viet-Ha Nguyen. 2015. A method for automated test data generation from sequence diagrams and object constraint language. In Proceedings of the Sixth International Symposium on Information and Communication Technology. ACM, 335–341.
[27] Jon Whittle, John Hutchinson, and Mark Rouncefield. 2014. The State of Practice in Model-Driven Engineering. IEEE Software 31, 3 (2014), 79–85.
[28] Jim Woodcock, Peter Gorm Larsen, Juan Bicarregui, and John Fitzgerald. 2009. Formal methods. Comput. Surveys 41, 4 (2009), 1–36.
[29] Tao Yue, Shaukat Ali, and Man Zhang. 2015. RTCM: A natural language based, automated, and practical test case generation framework. In Proceedings of the 2015 International Symposium on Software Testing and Analysis. ACM, 397–408.
[30] Justyna Zander, Ina Schieferdecker, and Pieter J. Mosterman. 2011. Model-Based Testing for Embedded Systems. CRC Press.