Article  in  Journal of Object Technology · January 2019


DOI: 10.5381/jot.2019.18.2.a14



Journal of Object Technology
Published by AITO — Association Internationale pour les Technologies Objets
http://www.jot.fm/

Capella Based Interference Analysis


Amin Oueslati (a), Julien Deantoni (b), Philippe Cuenot (a,c)
a. IRT Saint Exupery, Sophia Antipolis, France
http://www.irt-saintexupery.com/
b. Université Cote d’Azur, I3S/INRIA Kairos, Sophia Antipolis, France
https://team.inria.fr/kairos
c. Seconded from Continental Automotive France

Abstract In embedded systems, the emergence of Systems on Chip (SoC)
offers low-cost, flexible and powerful computing architectures. These new
COTS capabilities enable new applications in the aerospace domain, with
more integration of avionic functionalities on the same hardware. The main
drawback of such integration is the difficulty of mastering the deployment
of the application on the SoC architecture while understanding miscellaneous
emerging behaviors. Model Based Engineering techniques have been introduced
to help in the analysis of systems at the early stages of the development
process. For instance, Capella [?] is a tooled language to support the
design of system architectures (http://polarsys.org/capella). Capella
helps in providing a consistent view of the system architecture. However,
Capella does not help to understand emerging behaviors. For instance, it
does not help to understand how the deployment of different tasks (and
their parameters) on different computing resources impacts the conflicts
(interferences) on the interconnect between the computational resources
and the memory. This problem is increasingly important with the integration
of various functionalities.
We propose to address this problem at different levels. First, we
equipped Capella models with two kinds of reasoning capabilities. The
first one is based on a worst-case analytic evaluation of the interconnect
interferences of a specific deployment (easy to compute but pessimistic).
The second one is based on (exhaustive) simulation and allows obtaining
accurate interconnect interferences (more computationally intensive than
the analytic method but accurate). These reasoning capabilities significantly
help the designer, who nevertheless still has to explore several potential
solutions by hand. To help in this task, we propose a small DSL to express
the exploration space, from which the former reasoning can be performed
automatically.
We experimented with these techniques in the context of the ATIPPIC collabo-
rative project, based on the modeling of simple but representative models
in Capella.
Keywords Models, Operational Semantics, Interference Analysis

Amin Oueslati, Julien Deantoni, Philippe Cuenot. Capella Based Interference Analysis. Licensed under .
In Journal of Object Technology, vol. 0, 2019, pages 0:1–11. Available at

1 Introduction
The aerospace domain has a long tradition of dedicated hardware and software, tailored
to the conditions in space and designed to be more resistant to SEU (Single Event
Upset). Such designs reduced the capabilities of embedded hardware and forced, in many
cases, some of its capabilities to be disabled (for instance, processor caches were disabled
because they are very sensitive to SEU). Nowadays, better fault management
techniques open the door to off-the-shelf hardware for satellites. This
makes it possible to define new computing architectures based on existing hardware and
to integrate various avionic functionalities on the same hardware. These
new architectures are essential for new low-cost satellites. However, beyond the
study of fault tolerance mechanisms (important but not the focus of this paper),
the computing power introduced by the use of COTS (Components Off The Shelf)
and the need to regroup various functionalities on a same hardware board make
it more difficult to master the deployment of the application on the architecture. For
instance, it becomes more difficult to understand miscellaneous emerging behaviors
like the emergence of a temporary high load on a bus due to unexpected
synchronizations between different tasks.
Model Based Engineering (MBE) [?] has been introduced to help in the analysis of
systems at the early stages of the development process. MBE is more and more
used and is nowadays a common practice in many software related disciplines [?].
For instance, Capella [?] is a tooled open source language to support the design of
system architectures (http://polarsys.org/capella), introduced by Thales1 and
nowadays used in many other companies. Capella is of great help in providing a
consistent view of the system architecture, which can be reviewed, shared, etc.
Despite these interesting features, Capella is not yet fully equipped to help system
designers with the understanding of emerging behaviors. This is mainly due to the
generality of Capella, encompassing various disciplines, which forbids the definition
of a full operational semantics from which simulation and behavioral analysis could
be conducted. For instance, during the definition of new software and hardware
architectures for satellite-based systems, it remains difficult to understand the impact
of architectural choices at the first steps of the development process. This prevents
an early exploration of different deployment solutions and parameterizations. In
other words, despite the use of Capella models (i.e., MBE), it is nowadays difficult
to tame the adequacy between the application and the architecture at an early stage
of the development process. This is a fortiori the case when new computational
resources are amenable to the integration of new functionalities on a same hardware
platform. Note that in this case the problem is not a classical scheduling problem
(even if that is part of the problem) but rather a communication scheduling problem, since the
interconnect between the different computational resources and the memory becomes a
potential bottleneck that must be used wisely. To do so, it is important (1) to adjust the
deployment of the different tasks of the system on the right computational resources and
(2) to schedule their communications so as to avoid interference on the interconnect,
i.e., to avoid communications initiated by different tasks using an interconnect at
the same time, for instance by delaying the start of some communications to specific
points in time. Of course, such decisions should be compliant with more traditional
scheduling analysis, i.e., with respect to periods and deadlines.
What we report in this paper is the use of Capella models allowing the exploration
1 https://www.thalesgroup.com/fr


of different architectures, in terms of deployment and parameterization, with respect
to interference in the interconnect. The models we used are appropriate for the
definition of both hardware and software models, as required in the context of the
ATIPPIC project (a collaborative industrial project). We developed two different but
complementary approaches to compute the level of interference on the interconnects of
a system. The first approach uses an analytic method from which bounds
on the latency in an interconnect can be obtained. While pessimistic, this gives a
coarse-grain idea of the possible interference on the interconnect at low cost, i.e.,
without expensive computation. The second approach equips Capella with an
operational semantics. Based on this semantics it is possible to run, possibly exhaustive,
simulations. These simulations allow computing the interconnect usage as well as the
latencies in task communications due to interference. While it requires simulations,
this gives a fine-grain understanding of the interference in the interconnects.
Finally, based on each of these methods, we provide a small DSL with which
it is possible to specify the domain of the parameters we want to explore. Then,
we automatically generate the different models for these domains, simulate them
and provide a representation of the results to help the designer choose the
appropriate parameters.
TODO: outline of the paper

2 Background
(2 pages max)

2.1 Modeling Technologies


2.1.1 Capella
Capella is an open source Model Based System Engineering (MBSE) solution hosted
at the PolarSys working group of the Eclipse Foundation2. Capella provides formalisms
and toolsets that implement the ARCADIA method developed by Thales [?, ?]. The
method defines a workflow of four phases: operational analysis and system analysis to
identify the needs at operational and system levels, and logical and physical architectures to
identify the components that fulfill these needs. For each phase of the workflow, Capella
provides a set of diagrams and analysis methods such as functional dataflows to describe
functions and their exchanges, functional chains to identify the functions involved
in performing a given requirement, and scenarios to describe a sequence of messages
exchanged over time.
In this paper we only focus on the Physical Architecture phase and more precisely
on the Physical Architecture Blank (PAB) diagram. Indeed, the PAB provides a suitable
syntax for hardware and software co-modeling. However, as we target low-level
details of the hardware architecture, we still need to complement the model with
microarchitecture-specific information that is not covered by standalone Capella
models.
2 https://www.polarsys.org/


2.1.2 KitAlpha
Kitalpha3 is a set of Eclipse plugins, based on Capella, which allows extending Capella
models with domain-specific information. Also provided by PolarSys, it enables the
customization of the Capella syntax for a specific viewpoint. Developing a viewpoint
allows describing specific concerns on top of Capella's generic ones. For instance, this
mechanism was used to allow the specification of fault tolerance mechanisms directly in
Capella4.

2.1.3 Gemoc Studio


The GEMOC Studio is a set of Eclipse plugins that provides generic components,
through Eclipse technologies, for the development, integration, and use of heterogeneous
executable modeling languages5.

2.2 Modeling Interferences


PC:
The targeted multicore SoC architecture provides a priori enough computing power
for an aerospace domain application. The performance and deadlines of the embedded
functionalities are mainly affected by the communication scheduling of the application,
and in particular by possible interference on accesses to shared hardware resources. We
focus our approach on data memory transactions because they are the major factor in
the communication scheduling of an application, instruction memory transactions being
often optimized and independent in the SoC architecture. Interference produces latency
on the bus communication interface. At the micro-architecture level of the SoC, the
on-chip component communications are managed by memory-mapped standardized
bus communication, such as, for example, the ARM AMBA6 standard. The on-chip
memory communication is controlled by an interconnect component managing the arbitration
of memory transactions. Target components, in particular the memory and interconnect
components, can be a bottleneck in the communication scheme of the application.
So it becomes important to master the bus performance of the on-chip components to
detect and minimize local interference. The main bus characteristics we propose to
measure on a port are the bus throughput (in MB/s) and the latency on the port (in µs).
The hardware architecture selected for the ATIPPIC project is a SoC from the
Xilinx Zynq7000 family offering FPGA capabilities. As visible in Figure 1, the SoC is
composed of a Processing System (PS) including a dual Cortex-A9 processor as
the Application Processor, a central interconnect connected to a set of I/O peripherals,
and a memory interconnect allowing access to an external DDR memory device. Note that
DDR access is also possible directly from the L2 cache controller of the dual Cortex-A9
or from the central interconnect. In addition, the SoC includes a Programmable Logic
(PL) area providing a user-configurable area that allows the integration of hardware IP
components. The PL is connected to the PS via the central interconnect with AXI General
Purpose ports (GPx) and via the memory interconnect with AXI High Performance
ports (HPx) to perform accesses to DDR.
3 https://www.polarsys.org/kitalpha/
4 https://youtu.be/MXdZdCRDMH4
5 http://gemoc.org/studio
6 https://www.arm.com/products/silicon-ip-system/embedded-system-design/amba-specifications/



Figure 1 – Zynq7000 SoC overview

The main challenge for the application architecture is to master the communi-
cation exchanges with the DDR memory, the OCM memory being not used since it is
sensitive to radiation and has inefficient protection mechanisms. Depending on the commu-
nication scheduling of the application, accesses to DDR can be subject to interference.
Interference comes from concurrent access requests to DDR from software task executions or
from hardware IPs of the PL area. The access path to DDR using the AXI bus
causes potential congestion on the central interconnect, the memory interconnect or the DDR
controller. So it is important to assess, early in the design, the AXI bus throughput and
latency characteristics on the component port interfaces and to compare alternative
solutions. Alternative solutions may vary in the parameterized scheduling of software tasks,
in the AXI bus port allocation for hardware IPs, or in the allocation of a function to a software
task or a hardware IP component. The case study in Section 5 elaborates the
analysis performed in the context of the ATIPPIC project.

3 Running example
In the rest of this paper, we rely on the simplified example of Figure 2 to illustrate the
two approaches that we propose for interference analysis. Since it is not representative
of real-world software and hardware architectures, a more concrete use case will be
introduced later. The example was made with the Capella tool, using the PAB
extended with a Kitalpha viewpoint. The hardware architecture is composed of four
Physical Components: two of them represent CPUs, the third one an interconnect and
the last one a memory. These Physical Components are connected through Physical
Links, which are considered as bus connections in our case study. On top of this
hardware architecture, we allocate two Physical Component Behaviors to represent
the software tasks, one on each CPU. The data dependency between the two tasks is
abstracted by the Component Exchange Link that directly connects the Output Flow
Port of Task1 to the Input Flow Port of Task2. These output and input ports
are then allocated on the component executions' Physical Ports and the data dependency
is explicitly translated into two transactions carried by two Physical Paths. The first
Physical Path, in red, is composed of two Physical Links and connects CPU1 to the
memory, passing through the interconnect. It represents the transaction path for writing
Task1 data to memory. The second Physical Path, in blue, is also composed of two
Physical Links and connects CPU2 to the memory, passing through the interconnect. It
represents the transaction path for reading Task2 data from memory.
More generally, without considering all the Capella formalisms, Task1 and Task2
are both periodic. Task1 executes on CPU1 for a certain time interval, then writes
data to Memory using the cpu1_to_interconnect (red link) and interconnect_to_memory
(black link) buses, while Task2, allocated on CPU2, reads data from Memory through the
interconnect_to_memory (black link) and cpu2_to_interconnect (blue link) buses and
then executes for a certain time interval. The described execution scenario is similar
to the producer-consumer problem, with the shared variable replaced by a shared resource,
represented in this example by the interconnect_to_memory bus (black link).
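To make the producer-consumer contention concrete, the following sketch computes the time windows during which each task occupies the shared interconnect_to_memory bus and reports the overlaps. The periods, offsets and transfer durations here are hypothetical illustrations, not values taken from the Capella model.

```python
# Illustrative sketch: detect overlapping use of the shared
# interconnect_to_memory bus by Task1 (write) and Task2 (read).
# All numbers are hypothetical, not taken from the Capella model.

def transfer_windows(offset, period, duration, horizon):
    """Time windows [start, end) during which a periodic task
    occupies the shared bus, up to the given horizon."""
    windows = []
    t = offset
    while t < horizon:
        windows.append((t, t + duration))
        t += period
    return windows

def interferences(win_a, win_b):
    """Pairs of windows from the two tasks that overlap in time,
    i.e. instants where both transactions contend for the bus."""
    return [(a, b) for a in win_a for b in win_b
            if a[0] < b[1] and b[0] < a[1]]

# Task1 writes for 2 time units every 10, starting at t=0;
# Task2 reads for 3 time units every 10, starting at t=1.
w1 = transfer_windows(offset=0, period=10, duration=2, horizon=30)
w2 = transfer_windows(offset=1, period=10, duration=3, horizon=30)
print(interferences(w1, w2))  # each pair is one contention on the bus
```

Shifting Task2's offset so that the windows no longer overlap is exactly the kind of communication-scheduling adjustment discussed above.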



Figure 2 – Capella PAB running example

In the following, and for the sake of readability, we avoid as much as possible using
Capella naming. We only refer to diagram elements with the name of the concepts
they represent (e.g., bus instead of Physical Link, task or component execution instead
of Physical Component Behavior, etc.).

4 Proposition
The proposed solutions to estimate the bus throughput and latency of the on-chip
component communications require 1) identifying the components involved in
the communication scheduling and 2) defining the component parameters that affect the
communication scheduling (performance and latency).
As the estimation targets providing results in the early stages of the application
design, we do not need to consider, in a first step, optimization features of the components
(e.g., internal buffers or pipelines in the interconnect, in the computing resource
load/store interface, or in the memory controller). Moreover, local memory transactions
such as data cache line accesses and control from the processor are not considered: we
overestimate the high-level communication scheduling by considering accesses through
bus interfaces.
Then, we need to identify in Capella the architecture layer and diagram where to
start our approach. Obviously, we select the Physical Architecture with the Physical
Architecture Blank (PAB) diagram. The PAB diagram represents the allocation of
functions to behavioral components mapped to hardware execution components. This
depicts the mapping of software component behaviors to general purpose processors and
of hardware IP behaviors to the FPGA as execution support. It describes the hardware
and software architecture. However, the expressivity of the diagram is not rich enough
to describe all the domain-specific properties we need to represent in our model.
First, we need to identify the elements, as well as their properties, that affect bus
performance and communication latencies. For the scope of our SoC, we assume that
transactions are initiated by tasks (software or hardware) allocated on computation
resources (general purpose processor or FPGA), and by DMAs (Direct Memory Access)
for fast data flow transfers. We need to distinguish two kinds of DMA: the dedicated
hardware DMA present in the PS, and hardware-implemented DMAs (FPGA IPs)
present in the PL. As we focus on bus contention analysis, we only consider external
memory transactions that involve buses and interconnects. In the following we list all
the elements involved in bus communications:

• Component executions (tasks): software or hardware implementations of
tasks, responsible for data traffic generation. We consider the behavioral part of
a DMA as a task since it implements a logic to transfer data.
• Computation resources: general purpose processors and FPGA, on which
tasks are allocated. They provide computational capabilities and communication
interfaces for tasks. DMA support is also classified in this category.


• Communication resources: buses, interconnects and IO interfaces, responsible
for routing and transferring data.
• Memory resources: represent the source or destination of some transactions;
data storage and data exchange zones for tasks.
• Sensors/actuators: first data providers and last data consumers.

Second, we need to select, for each of these components, the set of properties and
parameters that are relevant for bus performance analysis. Conflicts on the interconnect
occur when two or more data memory transactions address the same target at the
same time. Thus, the timing properties have a significant impact on generated latency.
We need to consider timing properties related to the support and initiation of transactions,
but also transaction support and size information to scale transaction timings. We
propose the following properties, organized by the categories defined above:

• Component executions (tasks): the scheduling of a task can be event-driven
(triggered by a data port event) or periodic (defined by an execution rate and
an offset). The execution time characterizes the task and the data size its
associated transaction. The task allocation identifies the initiator of the
transaction.
• Computation resources: the transaction initiator, as a master interface, is scaled
by its bus interface size operating at a given frequency. For FPGA support, an
additional latency can be added on the interface.
• Communication resources: the communication interconnections are con-
trolled by their number of master/slave interfaces, with associated bus inter-
face sizes operating at given frequencies. Moreover, the internal arbitration scheme
and the master/slave path relationships must be defined. DMA to be clarified.
• Memory resources: the transaction target, as a slave interface, is scaled by
its data bus interface size running at a given frequency, with a possible read/write
latency.
• Sensors: initiators of transactions; they hold similar properties to component
executions.
• Actuators: targets of transactions, similar to memory resources.

Additionally, the transaction path from an initiator to a target must be recorded
to identify the data memory transaction path in the architecture. Note that this information
can be derived from what is captured in Capella using Physical Paths.

Figure 2 depicts the running example used to document the analytic and
operational solutions.
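For illustration, the elements and properties listed above could be captured by a small data model such as the one below. The class and field names, and the units, are assumptions made for this sketch, not the actual Kitalpha viewpoint metamodel.

```python
# Illustrative data model of the elements and properties listed above.
# Field names and units are assumptions for this sketch, not the
# actual Kitalpha viewpoint definition.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:                          # component execution
    name: str
    period_us: Optional[float]       # None when event-driven (data port event)
    offset_us: float
    execution_time_us: float
    data_size_kb: float              # size of the associated transaction
    allocation: str                  # computation resource initiating it

@dataclass
class BusPort:                       # master/slave interface of a resource
    name: str
    frequency_mhz: float
    interface_size_bytes: int
    latency_us: float = 0.0          # optional extra latency (FPGA, memory)

@dataclass
class TransactionPath:               # derived from a Capella Physical Path
    initiator: str
    target: str
    ports: List[BusPort] = field(default_factory=list)

# The running example's write path, with illustrative values:
task1 = Task("Task1", period_us=100.0, offset_us=0.0,
             execution_time_us=20.0, data_size_kb=5.0, allocation="CPU1")
path = TransactionPath("CPU1", "Memory",
                       [BusPort("cpu1_to_interconnect", 100.0, 4),
                        BusPort("interconnect_to_memory", 100.0, 4)])
```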

4.1 Extending the Capella model


As mentioned before, the Capella PAB aims to model the physical architecture
of the system, including both the hardware and software architectures. It is thus the
most adapted diagram for our study. However, as we address a set of specific
components and properties, we still need to lower the abstraction level of the PAB in


Figure 3 – Component execution (task) properties

order to include the missing information. This was done by using the Kitalpha tool to
generate a viewpoint extension.
In this viewpoint, we define all the elements needed for our analysis as extensions
of existing PAB components. On the running example of Figure 2, we distinguish three
different kinds of Physical Components: computation resources for CPU 1 and 2 (in
blue), a bus controller for the interconnect (in orange) and a memory resource (in red).
The same applies to Physical Links for buses and Physical Component Behaviors for
component executions. Figure 3 shows the tab view generated by Kitalpha for setting
the properties of components.
In addition to the properties, Kitalpha also provides the possibility to extend the
model by defining operations that describe the actions realized by each element. These
operations are needed in the operational solution.

4.2 Analytic Solution


PC:
(3 pages max)
The first approach we propose is based on an analytic evaluation of the bus characteristics
of the architecture, in order to measure the interference effect on the component port
interfaces in terms of throughput and latency. Only static context properties are calculated
on port interfaces, meaning the following assumptions on memory transactions: 1)
all transactions are atomic, 2) the interconnect arbitration is not considered, 3) all
transactions crossing a bus port interface are concurrent. As a consequence, the derived
bus throughput and latency values are computed under worst conditions and bounded
by worst-case values.
The calculation of bus port throughput and latency is evaluated as follows:

1. Bandwidth_bus (MB/s) for all Bus Ports of all Components:

   Bandwidth_bus = Frequency_bus × InterfaceSize_bus

   Note that as the AXI protocol supports separate read/write channels, a factor
   of two needs to be applied on the interconnect. Additionally, the duration of
   the write response channel is ignored.
2. maxBlockedTime_task (µs): the maximum time a task can be blocked by other tasks
   using the same ports.

   maxBlockedTime_task =
     let concernedTasks_port = { t ∈ allTasks | t.allocation.contains(port) } in
     let allConcernedTaskSet_task = { concernedTasks_p | p ∈ task.allocation } in
     ( Σ_{ct ∈ allConcernedTaskSet_task} Σ_{t ∈ ct} maxTransfertTime_t ) − maxTransfertTime_task
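A direct transcription of this formula can help check it on small configurations. Here tasks are described by their port allocations and worst-case transfer times; the dictionary encoding and the numeric values are illustrative, not part of the tooling.

```python
# Direct transcription of the maxBlockedTime formula above.
# Tasks are described by their port allocations and worst-case
# transfer times; the encoding and values are illustrative.

def max_blocked_time(task, all_tasks):
    """Worst-case time `task` can be blocked on its ports by
    transfers of tasks sharing those ports (pessimistic: a task
    sharing several ports is counted once per shared port)."""
    concerned_sets = []
    for port in task["allocation"]:
        concerned_sets.append(
            [t for t in all_tasks if port in t["allocation"]])
    total = sum(t["max_transfert_time"]
                for ct in concerned_sets for t in ct)
    return total - task["max_transfert_time"]

tasks = [
    {"name": "Task1", "allocation": {"interconnect_to_memory"},
     "max_transfert_time": 12.5},
    {"name": "Task2", "allocation": {"interconnect_to_memory"},
     "max_transfert_time": 12.5},
]
print(max_blocked_time(tasks[0], tasks))  # 12.5
```

With the two tasks of the running example sharing one bus, each can be blocked by at most the other's worst-case transfer time, as expected.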


3. Calculation of the Bus Port Throughput (MB/s) for all Bus Ports:

   BusPort.Throughput = ( Σ_{i=0..N} Task_i.DataSize ) ÷ (BusPort.ResponseTime + BusPort.Latency)

   where N is the number of Tasks allocated to the BusPort component.

The calculation has been performed with the definition of the parameters in Kitalpha
and Java code computing the formulas. For more information on the Kitalpha viewpoint,
see Section 4.3.

The method applied to the running example of Figure 2 aims to evaluate the through-
put and latency of accesses to the memory component. First, we calculate a Bandwidth of
400 MB/s on all component port interfaces. The ResponseTime computed from initiator
to target component gives, for the CPU2 port, Task2.DataSize (5) / Bandwidth (400)
= 12.5 µs, and for the CPU1 port, Task1.DataSize (5) / Bandwidth (400) = 12.5 µs. On
the Interconnect master port it is (5+5) / 400 = 25 µs, and the same value holds for
the Memory port. The Latency on the Memory port is Max(Task1.DataSize, Task2.DataSize)
(5) / Bandwidth (400) = 12.5 µs. It is the same value on the Interconnect master port,
as there is only one latency value computed for this port. So, the Latency for the CPU1
and CPU2 ports is identically 12.5 µs. Finally, the throughput on the Memory port is
computed by (Task1.DataSize + Task2.DataSize) (10) / (ResponseTime + Latency) (37.5)
= 266.6 MB/s.
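The worked example above can be replayed numerically. In this sketch, data sizes are taken in KB and bandwidth in MB/s so that DataSize/Bandwidth comes out in µs; the decomposition of the 400 MB/s bandwidth into a 100 MHz frequency and a 4-byte interface is an assumption for illustration, only the product is stated in the text.

```python
# Replays the worked example above. Data sizes in KB, bandwidth in
# MB/s, so DataSize / Bandwidth yields microseconds. The frequency /
# interface-size split (100 MHz x 4 bytes) is an assumed decomposition
# of the 400 MB/s bandwidth stated in the text.

def bandwidth_mbps(frequency_mhz, interface_size_bytes):
    # Bandwidth_bus = Frequency_bus * InterfaceSize_bus
    return frequency_mhz * interface_size_bytes

def response_time_us(data_size_kb, bw_mbps):
    # 400 MB/s = 0.4 KB/us, hence the factor 1000
    return data_size_kb * 1000 / bw_mbps

bw = bandwidth_mbps(100, 4)                           # 400 MB/s on every port
t1_kb, t2_kb = 5, 5                                   # Task1 / Task2 data sizes

resp_cpu1 = response_time_us(t1_kb, bw)               # 12.5 us
resp_cpu2 = response_time_us(t2_kb, bw)               # 12.5 us
resp_memory = response_time_us(t1_kb + t2_kb, bw)     # 25 us
lat_memory = response_time_us(max(t1_kb, t2_kb), bw)  # 12.5 us

# Throughput on the Memory port, in MB/s
throughput = (t1_kb + t2_kb) * 1000 / (resp_memory + lat_memory)
print(round(throughput, 1))                           # ~266.7 MB/s
```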

4.3 Operational Solutions


AO:
(4 pages max)
The second approach we propose is based on model-level simulation. As we want
to reason at a high level of abstraction, we use the Capella model as a support for our
simulation, which allows us to easily correlate the obtained results with the parameters
of the model. In order to do this, we equip Capella with an operational semantics that
allows us to simulate the model.
In our model, communications are initiated by tasks and DMAs. In most cases, a
task reads data from memory, executes on them and writes back the results to memory.
Some tasks are scheduled periodically and others are triggered as soon as their input
data are available. For instance, DMA data transfers are considered as data-triggered
tasks. Also, we do not consider a fixed execution time for tasks; instead, the
execution time is randomly chosen between a best-case and a worst-case execution time.
However, in some cases, when trying to read or write data, the associated bus may be
occupied by another communication. The task must then wait for the bus until the
current transaction is over, which generates latency. Indeed, a communication bus
can handle only one transaction at a time: it is considered busy until the end
of the transaction.
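A minimal sketch of this semantics is given below, assuming a first-come-first-served bus arbitration; the arbitration policy, the task parameters and the numbers are illustrations, not the Gemoc implementation.

```python
import random

# Minimal sketch of the simulation semantics described above: each
# periodic task executes for a time drawn between its BCET and WCET,
# then requests the shared bus; the bus serves one transaction at a
# time, so a task finding it busy waits, accumulating latency.
# First-come-first-served arbitration is an assumption of this sketch.

def simulate(tasks, horizon, seed=0):
    rng = random.Random(seed)
    requests = []  # (bus request time, task name, transfer duration)
    for t in tasks:
        release = t["offset"]
        while release < horizon:
            exec_time = rng.uniform(t["bcet"], t["wcet"])
            requests.append((release + exec_time, t["name"], t["transfer"]))
            release += t["period"]
    requests.sort()                          # serve requests in arrival order
    bus_free_at = 0.0
    latencies = {t["name"]: 0.0 for t in tasks}
    for req_time, name, transfer in requests:
        start = max(req_time, bus_free_at)   # wait while the bus is busy
        latencies[name] += start - req_time  # interference-induced latency
        bus_free_at = start + transfer
    return latencies

tasks = [
    {"name": "Task1", "offset": 0, "period": 10, "bcet": 1, "wcet": 2, "transfer": 3},
    {"name": "Task2", "offset": 0, "period": 10, "bcet": 1, "wcet": 2, "transfer": 3},
]
print(simulate(tasks, horizon=30))
```

Running such a simulation for each sampled execution time is what makes the approach more computationally intensive, but also more accurate, than the analytic bounds of the previous section.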

4.3.1 Encoding the behavior of the system under Gemoc


In the Kitalpha viewpoint, we first need to define the set of operations that describe
how the system behaves. In the case of a component execution (task), we define the
following operations: start(), stop(), execute(), read(), write() and wait(). Once these
operations are defined, we can define the actions we want to perform when each


operation is executed and the dynamic information needed to monitor the evolution
of the system during execution. To illustrate this, let us consider the case when a task
is waiting for a bus: the associated wait() operation is executed and it updates
the values of the bus latency and bandwidth. In Gemoc Studio, the dynamic information,
also called Runtime Data (RTD), and the execution functions form the Domain Specific
Actions (DSA) and are implemented in Kermeta (ref). A DSA implements the data
semantics of the system.
Once the DSA are defined, the second step is the implementation of the control
flow semantics. In order to do this, we first define the Domain Specific Events (DSE)
that trigger the execution functions. For instance, in the context of a task, we define a
start, stop, execute, read, write and wait DSE. As we may have several instances of task,

4.4 Design Space Exploration


JD:
(3 pages max)

5 Case study
AO:
(4 pages max)

6 Related Works
(2 pages max)

7 Conclusion
(0.5 page max)

About the authors


Amin Oueslati is an engineer at the IRT Saint Exupery, Sophia
Antipolis, France.
TODO: short bio and picture
Contact him at amin.oueslati@irt-saintexupery.fr or
TODO.


Julien Deantoni is an associate professor in computer sciences


at the University Cote d’Azur. After studies in electronics and
micro informatics, he obtained a PhD focused on the modeling and
analysis of control systems, and had a post doc position at INRIA
in France. He is currently a member of the I3S/Inria Kairos team.
His research focuses on the joint use of Model Driven Engineering
and Formal Methods for System Engineering. He is particularly
interested in understanding how the explicit modeling of the op-
erational semantics of languages can be used for heterogeneous
simulation and reasoning.
Contact him at julien.deantoni@univ-cotedazur.fr or http://www.i3s.unice.fr/~deantoni/.

Philippe Cuenot is Research Engineer at IRT Saint Exupéry,


Toulouse, France (seconded from Continental Automotive France).
He graduated in 1989 from ISTG Polytech Grenoble, a French
university (engineering diploma in Industrial Computer Science
and Instrumentation). He joined Continental Automotive France
(formerly Siemens Automotive France) for the development of engine
management system real-time software. In 2005 he moved to the
Electronic Advanced Development team as an innovation project leader on system and
software methods. He led internal research on critical embedded system
architecture description and hardware simulation. Since 2014, he has been seconded to
the IRT Saint Exupery in the System Engineering department. Contact him at
philippe.cuenot@irt-saintexupery.fr.

Acknowledgments

TODO: thanks ANR, ATIPPIC and GLOSE

