Theodor Borangiu · Damien Trentesaux · Paulo Leitão · Olivier Cardin · Samir Lamouri (Editors)
Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future
Proceedings of SOHOMA 2020
Studies in Computational Intelligence
Volume 952
Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new develop-
ments and advances in the various areas of computational intelligence—quickly and
with a high quality. The intent is to cover the theory, applications, and design
methods of computational intelligence, as embedded in the fields of engineering,
computer science, physics and life sciences, as well as the methodologies behind
them. The series contains monographs, lecture notes and edited volumes in
computational intelligence spanning the areas of neural networks, connectionist
systems, genetic algorithms, evolutionary computation, artificial intelligence,
cellular automata, self-organizing systems, soft computing, fuzzy systems, and
hybrid intelligent systems. Of particular value to both the contributors and the
readership are the short publication timeframe and the world-wide distribution,
which enable both wide and rapid dissemination of research output.
Indexed by SCOPUS, DBLP, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of
Science.
Editors
Theodor Borangiu
Faculty of Automatic Control and Computer Science
University Politehnica of Bucharest
Bucharest, Romania

Damien Trentesaux
Université Polytechnique Hauts-de-France
Le Mont Houy, Valenciennes, France

Paulo Leitão
Research Centre in Digitalization and Intelligent Robotics (CeDRI)
Instituto Politécnico de Bragança
Campus de Santa Apolónia, Bragança, Portugal

Olivier Cardin
Department of Génie Mécanique et Productique
Université de Nantes
Carquefou, France

Samir Lamouri
LAMIH, Arts et Métiers ParisTech
Paris, France
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Foreword
I would like to thank the SOHOMA Steering Committee for offering me the
opportunity to share my views and ideas with the SOHOMA community and the
manufacturing control and systems domain researchers at large. I have worked in
manufacturing control for more than twenty years and have witnessed the research
progress of the domain from the first distributed architectures, including first
holonic manufacturing systems models, to the myriad models for digital transfor-
mation of manufacturing through service orientation, and to the distributed intel-
ligence models that are employed in what was recently called “manufacturing as a
service”, or MaaS for short. The SOHOMA workshop, now at its tenth anniversary
edition, has always kept up with the times, even going through a few name changes
to better capture the evolving nature of our work as a community. Nevertheless,
it always welcomed submissions from around the world covering cutting-edge
manufacturing control modelling, promoted transformative research and moved
forward the knowledge frontier. As confirmation, the last SOHOMA workshop held
in October featured the overarching theme “manufacturing as a service—virtual-
izing and encapsulating manufacturing resources and controls into cloud networked
services” and, to name a few, included articles covering MaaS aspects such as
cloud-based manufacturing control, digital twins in manufacturing, holonic and
multi-agent process control, ethics and social automation, human factors integra-
tion, and physical Internet and logistics.
This foreword attempts to capture the readers’ attention by highlighting
current MaaS developments, as well as to outline potential areas of research for the
SOHOMA community and beyond. Manufacturing as a service comprises local and
potentially geographically distributed, service-oriented, knowledge-based smart
manufacturing models that provide customized design and product solutions to
individual or group-based customer types. It leverages technologies such as big data
analytics, cloud, edge and fog computing, digital twins, artificial intelligence/ma-
chine learning (AI/ML), including deep learning, 3D printing, 5G broadband and
SDN networks, and the Internet of things, all within constraints such as high
efficiency, safety of operations, cybersecurity of digital transactions, ethics,
human–machine interaction, low energy consumption and a reduced logistics footprint.
I hope our readers will enjoy and find valuable the high-quality articles included
in this anniversary volume. I am confident that previous SOHOMA authors will
continue to contribute to the advancement of manufacturing research, and I invite
other academic and industry practitioners reading this volume to submit their work
to the future editions of the SOHOMA workshop. Together we will build the
manufacturing systems for the industry of the future.
Preface
This volume gathers the peer-reviewed papers presented at the tenth edition of the
international workshop on Service Oriented, Holonic and Multi-Agent
Manufacturing Systems for Industry of the Future—SOHOMA’20 organized on
1–2 October 2020 by Arts et Métiers ParisTech in collaboration with University
Politehnica of Bucharest (the CIMR Research Centre in Computer Integrated
Manufacturing and Robotics), Université Polytechnique Hauts-de-France (the
LAMIH Laboratory of Industrial and Human Automation Control, Mechanical
Engineering and Computer Science) and Polytechnic Institute of Bragança (the
CeDRI Research Centre in Digitalization and Intelligent Robotics).
The main objective of SOHOMA workshops is to foster innovation in smart and
sustainable manufacturing and logistics systems by promoting concepts, methods
and solutions addressing trends in service orientation of agent-based control tech-
nologies with distributed intelligence.
The book is structured in eight parts that correspond to the technical sessions
of the workshop’s program and include papers describing results of the research
addressing the development and application of key enabling technologies (KET:
production-, digital- and cyber-physical technologies) for the industry of the future.
In concurrence with this vision of future manufacturing, the eight sections of the
book address control and organization problems in the manufacturing value chain
and offer smart solutions for smart factories networked in the cloud, implemented in
cyber-physical systems with all resources integrated, sharing information and
infrastructures, collaborating, adapting to reality and self-configuring at runtime for
efficiency, agility and safety.
These subjects are treated in the book’s Part 1: Cloud Networked Models of
Knowledge-based Intelligent Control; Part 2: Digital Twins in Manufacturing and
Beyond; Part 3: Holonic and Multi-Agent Process Control; Part 4: Ethics and
Social Automation in Industry 4.0; Part 5: New Organizations based on Human
Factors Integration in Industry 4.0; Part 6: Intelligent Products and Smart
Processes; Part 7: Physical Internet and Logistics; Part 8: Optimal Production and
Supply Chain Planning.
Over the nine previous annual workshop editions, the SOHOMA scientific
community introduced and developed new concepts, methods and solutions that
aligned with the worldwide effort of modernizing and digitalizing manufacturing in
the twenty-first century’s context of highly dynamic market globalization,
product-centric control, direct digital manufacturing, customer- and
service-oriented manufacturing, enterprise networking and cloud-based
infrastructure sharing. The ten-
year anniversary edition in 2020 hosted presentations of the most important
research contributions of SOHOMA groups, which are also included in this book.
These papers reflect continuity in approach and demonstrate the impact of the
community’s research reported on specific evolution lines in manufacturing systems
towards ‘industry of the future’ (IoF).
SOHOMA research has identified and consistently addressed the technological
and computational enablers that underpin the IoF characteristics: global
optimization and intelligence distribution in manufacturing execution systems (MES);
decoupling supervision from control; extended digital modelling of processes,
products and resources; pervasive instrumenting of shop floor entities and edge
computing in the industrial Internet of things (IIoT) framework; strongly coupling
systems of systems of autonomous and cooperative elements in cyber-physical
systems (CPS) according to the holonic paradigm; on-demand sharing of technol-
ogy and computing resources through cloud-type services; predictive production
control and resource maintenance based on artificial intelligence (AI, machine
learning-ML) techniques. Concerning these enablers, orchestrating technologies
are essential to coordinate and synchronize the two classes of IoF enablers towards
implementation and deployment. There are three such technologies that have been
systematically developed, applied and improved during the last decade; they rep-
resent the triple brand of the SOHOMA scientific community: service orientation,
holonic manufacturing and multi-agent systems (MAS) in the industrial
environment.
Service orientation in the manufacturing domain was never limited to Web
services or to technology and technical infrastructure; rather, the
service-oriented architectures (SOA) that were developed reflect a new way of thinking about
processes, resources, orders and their information counterparts (the service-oriented
agents) reinforcing the value of commoditization, reuse, semantics and information,
and creating business value for the factory. A complete manufacturing service
(MService) theory and implementation model have been established.
The holonic paradigm has been used to develop smart, distributed manufacturing
control architectures (for mixed batch planning and scheduling, resource allocation,
material flow and environment conditioning), based on the definition of a set of
abstract entities: resources (technology, humans—reflecting the producer’s profile,
capabilities, skills), orders (reflecting the business solutions) and products
(reflecting the customers’ needs, value propositions). These entities are represented
by autonomous holons communicating and collaborating in holarchies to reach a
common production-related goal. The holonic paradigm provides the attributes of
flexibility, agility and optimality by means of a completely decentralized manu-
facturing control architecture based on a social organization of intelligent entities
Preface xi
called holons with specific behaviours and goals. From the control perspective, in
the dynamic organizations of holons (the holarchies), decision-making character-
istics (e.g. scheduling, negotiating, allocating, reconfiguring) are combined with
reality-reflecting features (robustness, fault-tolerance, agility) provided by holons.
Holarchies allow for object-oriented aggregation, while the specialization incor-
porated in control architectures provides support for abstraction; in this way, the
holonic control paradigm has been increasingly transposed in control models of
diverse types of industrial processes.
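As an illustration, the holon/holarchy organization described above can be sketched in a few lines of Python; all class names, holon names and goals below are hypothetical and do not come from any particular SOHOMA implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """Autonomous entity combining a physical part and an information part."""
    name: str
    goal: str

@dataclass
class Holarchy:
    """Dynamic aggregation of holons cooperating towards a shared goal."""
    goal: str
    members: list = field(default_factory=list)

    def add(self, holon: Holon):
        self.members.append(holon)

    def negotiate(self) -> str:
        # Simplified consensus: every member holon adopts the holarchy's goal
        for h in self.members:
            h.goal = self.goal
        return f"{len(self.members)} holons aligned on goal '{self.goal}'"

# Resource, order and product holons join a holarchy for one production goal
order = Holon("Order-42", goal="pending")
resource = Holon("Robot-1", goal="idle")
product = Holon("Product-A", goal="design")
line = Holarchy(goal="complete batch B7")
for h in (order, resource, product):
    line.add(h)
print(line.negotiate())  # 3 holons aligned on goal 'complete batch B7'
```

In a real holonic control architecture the "negotiation" would of course involve contract-net-style message exchange rather than direct goal assignment.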
The shop floor control scheme is scalable and decoupled from the decision-making
(supervision) MES layer, which assures the adaptability and reconfigurability of
global production control and keeps it free of induced constraints and
limitations such as myopia or the inability to react to unexpected events.
In the context of holonic manufacturing, the strongly coupled networks of
software agents—information counterparts of holons’ physical parts—cooperate to
solve global production problems. These are multi-agent systems that constitute the
implementing frameworks for holonic manufacturing control and reengineering of
shop floor resource coalitions. MAS distribute intelligence in the MES and can
control production systems in a decentralized (heterarchical) mode.
Mixed approaches were developed, e.g. patterns of delegate MAS (D-MAS) are
mandated by the holons representing structural production elements to undertake
tasks reconfiguring operation scheduling and resource allocation in case of dis-
turbances such as resource breakdowns. Bio-inspired MAS for manufacturing
control with social behaviour and short-term forecasting of resource availability
through ant colony engineering or recurrent neural networks are AI-based tech-
niques for heterarchical control with MAS.
Because reality awareness and robustness of control systems represent priorities
of the industry, semi-heterarchical models of holonic manufacturing control were
developed to offer a dual behaviour that combines optimized system scheduling
with agile, reactive scheduling that is done in real time by D-MAS. The semi-
heterarchical manufacturing control architecture deals rapidly with unexpected
events affecting orders in current execution, while computing in parallel (possibly
in the cloud) new optimized schedules for the rest of orders waiting to be processed;
this operating mode reduces the myopia of the system at global batch level and
preserves the system’s agility.
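A minimal sketch of this dual behaviour follows; the round-robin "optimizer", the resource and order names, and the first-available reaction rule are placeholder assumptions standing in for the cloud-hosted scheduler and the D-MAS reaction described above:

```python
def optimized_schedule(orders, resources):
    # Placeholder for the cloud-hosted optimizer: round-robin assignment
    return {o: resources[i % len(resources)] for i, o in enumerate(orders)}

def reactive_assign(order, resources, failed):
    # D-MAS-style immediate reaction: first resource unaffected by the failure
    return next(r for r in resources if r != failed)

orders = ["O1", "O2", "O3", "O4"]
resources = ["R1", "R2"]
schedule = optimized_schedule(orders, resources)

failed = "R2"  # disturbance: resource breakdown at run time
# Orders in current execution are re-routed immediately (agile, reactive mode)...
schedule = {o: (reactive_assign(o, resources, failed) if r == failed else r)
            for o, r in schedule.items()}
# ...while a new optimized schedule is computed for the waiting orders
waiting = ["O3", "O4"]
schedule.update(optimized_schedule(waiting, [r for r in resources if r != failed]))
print(schedule)
```

The real architecture runs the two modes in parallel; here they are sequential only to keep the sketch short.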
In the SOHOMA research, MAS were often used as the implementing framework
for holonic semi-heterarchical control in SOA. The three orchestrating technologies
form the basis of manufacturing CPS design and implementation; book chapters
in Parts 3 and 8 describe research carried out on these three topics.
During the past ten years, the SOHOMA community has carried out a large body of
research in the domain of intelligent products. Intelligent products
(IP) are created temporarily in the production stage by embedding intelligence on
the physical order or product that is linked to information and rules governing the
way it is intended to be made (with recipe, resources), routed, inspected and stored;
this enables the product to support and/or influence these operations. IP
virtualization moves processing from the intelligence embedded in the product to a
virtual machine in the cloud, using a thin hypervisor on the product carrier and a
Wi-Fi connection, either in a dedicated or in a shared workload, to make
decisions relevant to the product’s own destiny. The research contributions can be grouped into
three areas: 1) product-driven systems, 2) product lifecycle information systems and
3) physical Internet.
Product-driven systems (PDS) were defined as a way to optimize the whole
product lifecycle by dealing with products whose informational content is
permanently bound to their virtual or material contents, and which are thus able
to influence decisions made about them, participating actively in the different
control processes in which they are involved throughout their lifecycle.
Designing a PDS is a challenge that involves three fundamental aspects:
functions, architecture and interactions.
Several bio-inspired approaches have been proposed by SOHOMA authors such as
ant colony optimization, the firefly algorithm and a mechanism inspired by
stigmergy using the notion of volatile knowledge.
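The stigmergy mechanism with volatile knowledge can be hinted at with a toy pheromone blackboard; the evaporation factor, threshold and route names are illustrative assumptions, not values from any cited work:

```python
class VolatileKnowledge:
    """Stigmergic blackboard: deposited hints evaporate over time."""
    def __init__(self, evaporation=0.5):
        self.trails = {}              # route -> pheromone intensity
        self.evaporation = evaporation

    def deposit(self, route, amount=1.0):
        self.trails[route] = self.trails.get(route, 0.0) + amount

    def tick(self):
        # Knowledge is volatile: intensities decay and weak trails vanish
        self.trails = {r: v * self.evaporation
                       for r, v in self.trails.items()
                       if v * self.evaporation > 0.01}

    def best_route(self):
        return max(self.trails, key=self.trails.get) if self.trails else None

board = VolatileKnowledge()
board.deposit("conveyor-A")
board.deposit("conveyor-A")
board.deposit("conveyor-B")
board.tick()
print(board.best_route())  # conveyor-A still dominates after evaporation
```

Evaporation is what keeps the knowledge "volatile": routes that are no longer reinforced by passing entities fade from the blackboard.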
An important facet of the intelligent product is related to data. Two levels of
product intelligence (PI) have been defined: 1) Level 1 (information-oriented)—
PI is related to the (customer) needs linked to the production order, e.g. goods
required, quality, timing, cost agreed; PI allows communicating with the local
organization (and with the customer for the order); PI monitors/tracks the progress
of the order through the industrial supply chain; 2) Level 2 (decision-oriented)—PI
influences the choice between different options affecting the order when such a
choice needs to be made; PI adapts the order management depending on real
production conditions. The management of product information along the product’s
lifecycle was treated in the community’s research work by means of distributed
Product Lifecycle Information Management (PLIM) Systems. Different PLIM
architectures, messaging protocols and formats have been proposed. The EPCIS
architecture is one such distributed data management architecture, specially adapted
to product tracking in the supply chain [32]. DIALOG is another architecture
proposed by SOHOMA members, based on a multi-agent system distributed in
every actor of a given supply chain. In this architecture, a specific messaging
protocol initially called product messaging interface (PMI) and further named
quantum lifecycle management (QLM) is used.
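The two PI levels described above can be illustrated with a small class hierarchy; the class names, order data and queue-based routing rule are hypothetical, chosen only to contrast information-oriented and decision-oriented intelligence:

```python
class InfoOrientedProduct:
    """Level 1 PI: holds the order requirements and tracks progress."""
    def __init__(self, order_id, requirements):
        self.order_id = order_id
        self.requirements = requirements   # e.g. quality, timing, agreed cost
        self.history = []

    def track(self, station):
        # Monitoring the order's progress through the supply chain
        self.history.append(station)

class DecisionOrientedProduct(InfoOrientedProduct):
    """Level 2 PI: additionally chooses between routing options."""
    def choose_route(self, options):
        # Adapt to real production conditions: pick the shortest queue
        return min(options, key=lambda opt: opt["queue"])

p = DecisionOrientedProduct("ORD-7", {"quality": "A", "due": "Friday"})
p.track("milling")
route = p.choose_route([{"name": "cell-1", "queue": 4},
                        {"name": "cell-2", "queue": 1}])
print(route["name"])  # cell-2
```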
The physical Internet (PI) concept has been proposed and formally defined as an
open global logistics system leveraging interconnected supply networks through a
standard set of modular containers, collaborative protocols and interfaces for
increased efficiency and sustainability. The concepts of physical Internet and
intelligent product were merged in SOHOMA works, the main idea being to realize
the notion of PI-container (a smart container used in the physical Internet paradigm)
by applying the activeness concept to a normal container. Concepts from the
PDS area have also been applied to the physical Internet, e.g. the PROSIS
architecture was first applied in an intra-logistics context that uses wireless
holon networks constituted by mobile holons (shuttles, moving products) and fixed holons
(workstations).
Parts 6 and 7 of the book include descriptions of SOHOMA research in the areas
of intelligent product and physical Internet.
In the vision of the SOHOMA community, cloud networked manufacturing control
(CC-CMfg) services orchestrate a dual operational technology (OT, control) and
information technology (IT, computing) model that:
1) transposes pools of shop floor resources (robots, CNC machines), products
(recipes, client specifications) and orders (work plans, task sequences) into
on-demand making services;
2) enables pervasive, on-demand network access to a shared pool of configurable
HPC resources (servers, storage, applications) that can be rapidly provisioned
and released as services to various high-level MES tasks with minimal management
effort or interaction. Hence, CC-CMfg may use cloud computing facilities.
CC models (public and private) and techniques were proposed for the integration
of an infrastructure-as-a-service (IaaS) cloud system with a manufacturing
system based on the virtualization of multiple shop floor resources (robots,
machines). Major contributions were made to the virtualization of shop floor
entities (resource, intelligent product) and MES workloads in the cloud. The
solutions use a combination of virtual machines (VM) deployed in cloud before
production start (offering the static part of services) and of containers executed on
the VMs which run the dynamic part of services because they are deployed much
faster than VMs. High availability (HA) methods and software-defined networking
(SDN) mechanisms for interoperability, resilience and cybersecurity in the dual
CC-CMfg architecture were also developed and are considered major achievements.
The full dual CC-CMfg model was adopted for production planning and control
of manufacturing systems with multiple resources and products with embedded
intelligence. CC features were taken over by operational technologies (control,
supervision, dynamic reconfiguring): i) the product-making services are provi-
sioned automatically by MES optimal resource allocation programs; ii) the CC
component offers network access to HPC services through distributed message
platforms such as the manufacturing service bus (MSB); iii) the shop floor
resources are placed in clusters with known location relative to the material flow
and dynamically assigned at batch run time; this location is one input parameter
weighting the optimal resource allocation; iv) the CC services can scale rapidly in
order to sustain the variable real-time computing demand for order rescheduling
and anomaly detection, the resources being assigned or released elastically;
v) the assigned CMfg resources are monitored and controlled and both the MES
(service consumer) and the cloud (service provider) are notified about the usage
within the smart control application; the cost model ‘pay as you go’ is used to
establish the cost offers for client orders in service-level agreements. Such cloud
models and services were developed for optimized, energy-aware production at
batch level with resource sharing in semi-heterarchical control topology.
The SOHOMA community worked out ML-based approaches for reality
awareness and efficiency in cloud manufacturing and proposed applications of
machine learning algorithms for global optimization of manufacturing at batch
level, robust behaviour at disturbances and safe utilization of manufacturing
resources. The focus was put on the prediction of key performance indicators (KPI)
like instant power consumption of resources, energy consumption and/or execution
time for product operations, to provide more accurate input for the cloud-based
system scheduler.
Abstract. This paper describes a 10-year scientific journey in the area of Cloud-
based manufacturing in the SOHOMA research community. The tour started in
Paris on June 20, 2011 at École Nationale Supérieure d’Arts et Métiers, Paris and
returns here on 1st October 2020 after annual stops in Bucharest, Valenciennes,
Nancy, Cambridge, Lisbon, Nantes, Bergamo and Valencia. Several stages in the
evolution of Cloud manufacturing research are recalled in their historical order:
vertical enterprise integration and networking; resource and product virtualization
and cloud infrastructure design; batch optimization with cloud services;
real-time streaming of big shop floor data; machine learning in the cloud for
predictive production control, resource health monitoring and predictive
maintenance. Major contributions of SOHOMA authors are evoked: extending the
cloud computing model to on-demand shop floor resource sharing, infrastructure
sharing in cloud networked enterprises, MES workload virtualization, deploying
cloud services in real time with virtual machines and containers,
high-availability solutions and software-defined networking, and machine
learning for predictive manufacturing.
networked product development models in which clients are enabled to configure, select
and use customized product making resources and services, ranging from computer-
aided design and engineering software to reconfigurable manufacturing systems [1].
Several generic applications using Cloud platforms have been reported in the first years
after 2000, for hosting and exposing services related to manufacturing such as customer
order management, adaptive capacity planning, collaborative product design, networked
supplier relationship management, etc.
Historically, the relationship between computer science and manufacturing control
started in the ’70s with the initial idea of “digital manufacturing” and “numeric interpo-
lation” for CNC machines. Since then, advances in computer science have given birth to
the Cloud Computing (CC) paradigm, where computing resources are seen as a service
offered to end-users. CC has been used to improve first the IT infrastructure of the enter-
prise’s business management and capacity planning (ERP) layers and its connectivity
in the Manufacturing Value Chain (MVC), and then to increase the High Performance
Computing (HPC) of the manufacturing control infrastructure; its principles have also
inspired the new CMfg paradigm with the perspective of benefits for both manufacturers
and their customers.
The benefits of Cloud for manufacturing enterprises are numerous; Cloud as a pro-
curement model delivers undisputed cost efficiencies and flexibility, while increasing
reliability, elasticity, usability, scalability and disaster recovery. The key difference
between Cloud Computing and CMfg is that resources involved in CC are primarily
computational (e.g., server, storage, network, software), while in CMfg resources and
abilities involved in the whole life cycle of product making are virtualized and encapsu-
lated in different service models [2] where different product stakeholders can search and
invoke the qualified services according to their needs, and assemble them into a virtual
environment or solution to complete their orders and manufacturing tasks [15].
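The search/invoke/assemble pattern just described can be sketched as a toy service registry; the service records, capabilities and cheapest-first composition rule are illustrative assumptions, not an actual CMfg platform API:

```python
# Hypothetical registry of virtualized manufacturing services
registry = [
    {"name": "drilling", "capability": "drill", "cost": 3},
    {"name": "milling-hi", "capability": "mill", "cost": 8},
    {"name": "milling-lo", "capability": "mill", "cost": 5},
]

def search(capability):
    """Return the qualified services for one needed capability, cheapest first."""
    return sorted((s for s in registry if s["capability"] == capability),
                  key=lambda s: s["cost"])

def compose(needs):
    """Assemble one service per need into a virtual manufacturing solution."""
    return [search(n)[0] for n in needs]

solution = compose(["mill", "drill"])
print([s["name"] for s in solution])  # ['milling-lo', 'drilling']
```

A real registry would match on richer semantic descriptions and quality-of-service attributes rather than a single capability tag.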
Cloud manufacturing is a research topic consistently addressed by the SOHOMA
scientific community, in accordance with its general evolution and relationship
with advances in Information, Communication and Control Technologies (IC2T)
applied to the manufacturing industry.
The first contributions related to CMfg addressed the vertical integration of
manufacturing enterprises that had already adopted cloud computing on the higher
layers of business and operations management processes for supply, ERP and
digital marketing, but had not yet integrated it with the production and
logistics layers [3]. The integration
along the vertical enterprise axis: business management, ERP, high level production con-
trol (Manufacturing Execution System - MES), shop floor distributed control is based
on the Service Oriented Architecture (SOA) concept and marks a shift from the agent-
centric architecture to SOA. The application of SOA principles in the factory automation
domain consisted in encapsulating the functionality and business logic of components
in the production environment (legacy software and devices) by means of Web services
[4]. Cloud-based enterprise networking in the MVC was also part of the ‘enterprise
integration’ theme in this early CMfg stage.
The design of cloud models and infrastructures for manufacturing was a topic present
since the 2014 edition. Cloud services in MES are based on the virtualization of shop
floor devices and a new control and computing mode that operates in the global Cloud
Manufacturing model (CC-CMfg), with progressive solutions towards real time. In the
vision of the SOHOMA community, CC-CMfg services orchestrate a dual OT (operation
technology control) and IT (computing) model that:
a. Transposes pools of shop floor resources (robots, CNC machines), products (recipes,
client spec.), orders (work plans, task sequences) into on-demand making services;
b. Enables pervasive, on-demand network access to a shared pool of configurable HPC
resources (servers, storage, applications) that can be rapidly provisioned and released
as services to various high level MES tasks with minimal management effort or
interaction [5]. Hence, CC-CMfg may use cloud computing facilities.
CC models (public, private) and techniques were proposed for the integration of
an Infrastructure as a Service (IaaS) cloud system with a manufacturing system
based on the virtualization of multiple shop floor resources (robots, machines).
Major contributions were made to the virtualization of shop floor entities
(resource, intelligent product) and MES workloads in the cloud. One solution is a combination of virtual
machines (VM) deployed in cloud before production start (offering the static part of ser-
vices), and of containers executed on the VMs which run the dynamic part of services
because they are deployed much faster than VMs [6]. High availability (HA) methods
and Software-Defined Networking (SDN) mechanisms for interoperability, resilience
and cyber-security in the interconnected CC-CMfg architecture were developed [7].
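The VM-plus-container combination of [6] can be illustrated with a toy deployment model; the startup costs and service names are assumed values for illustration only, not measurements from the cited work:

```python
class Deployment:
    """Toy timing model: VMs are slow to start, containers are fast."""
    VM_START = 1.0          # assumed VM startup cost (seconds)
    CONTAINER_START = 0.05  # assumed container startup cost (seconds)

    def __init__(self):
        self.vms, self.containers = [], []

    def deploy_static_services(self, services):
        # Static part of the services: provisioned BEFORE production starts
        for s in services:
            self.vms.append(s)
        return len(services) * self.VM_START

    def deploy_dynamic_service(self, service):
        # Dynamic part: launched on an existing VM at run time, on demand
        self.containers.append(service)
        return self.CONTAINER_START

d = Deployment()
offline = d.deploy_static_services(["MES-core", "scheduler"])
online = d.deploy_dynamic_service("rescheduler")
print(f"offline setup {offline:.2f}s, runtime reaction {online:.2f}s")
```

The point of the hybrid is visible in the two numbers: the expensive VM startups are paid before the batch begins, so only the cheap container launches sit on the real-time reaction path.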
However, despite the usefulness of CC for CMfg (option b. of the Cloud
manufacturing model), the SOHOMA’17 authors of [8] advocate that considering CC
as a core enabling technology for Cloud manufacturing - as is often put forth in
the literature - reflects only the early stage of CMfg history and should be
reconsidered. A new core-enabling vision
toward Cloud manufacturing, called Cloud Anything (CA) is exemplified by option a.
of the CMfg model previously defined. CA is based on the idea of abstracting low-level
resources, beyond computing resources, into a set of core control building blocks pro-
viding the grounds on top of which any domain could be “cloudified”. This vision leads
finally to the more general sharing concept of Manufacturing as a Service (MaaS) which
is based on the “cloud networked manufacturing” paradigm.
The full dual CC-CMfg model was adopted by the SOHOMA research for pro-
duction planning and control of manufacturing systems with multiple resources and
products with embedded intelligence. CC features were taken over by operational tech-
nologies (control, supervision, dynamic reconfiguring): i) the product-making services
are provisioned automatically by a MES optimal resource allocation program; ii) the
cloud computing component offers network access to HPC services through distributed
message platform / manufacturing service bus (MSB); iii) the shop floor resources of
the CMfg component are placed in clusters with known location relative to the material
flow, and dynamically assigned at batch run time; this location is one input parameter
weighting the optimal resource allocation; iv) the CC services can scale rapidly in order
to sustain the variable real-time computing demand for order rescheduling and
anomaly detection, the resources being assigned or released elastically; v) the assigned
CMfg resources are monitored and controlled and both the MES (service consumer)
and the Cloud (service provider) are notified about the usage within the smart control
application; the cost model ‘pay as you go’ is used to establish the cost offers for client
orders in the service level agreements. Such Cloud models and services were devel-
oped for optimized, energy-aware production at batch level with resource sharing in
semi-heterarchical control topology [9, 10].
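The location-weighted resource allocation of point iii) can be sketched as follows; the composite scoring function, weights and cell data are illustrative assumptions rather than the actual MES allocation program:

```python
def allocate(operation, resources, w_cost=1.0, w_dist=0.5):
    """Score each clustered resource by usage cost and by its distance to
    the material flow; the lowest composite score wins the allocation."""
    def score(r):
        return w_cost * r["cost"] + w_dist * r["distance"]
    return min((r for r in resources if operation in r["skills"]), key=score)

# Hypothetical shop floor clusters with known locations
cells = [
    {"name": "cell-1", "skills": {"weld"}, "cost": 4.0, "distance": 2.0},
    {"name": "cell-2", "skills": {"weld", "paint"}, "cost": 3.5, "distance": 6.0},
]
print(allocate("weld", cells)["name"])  # cell-1: 4.0+1.0 beats 3.5+3.0
```

Here cell-1 wins the welding operation despite its higher usage cost, because its position relative to the material flow lowers the composite score, which is exactly the role of the location parameter described above.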
In the context of recent changes in the economic environment, manufacturing
firms need to shift their focus from linearly improving efficiency towards
real-time learning from big data and contextual decision making. This approach
reduces uncertainty by allowing accurate predictions of relevant key performance
indicators based on historical production data. Data, and the way they can be
processed in real time, thus become differentiating success factors in these
companies. The digitalization processes of large
manufacturing enterprises and the integration of increasingly smart shop floor devices
and control software caused an explosion in the data points available at shop floor and
MES layers. The degree to which enterprises can capture value from processing
these data and extract useful insights from them represents a differentiating
factor in the short- and medium-term development of the processes that optimize
production.
Machine learning (ML) and Big Data technologies have gained increased traction
by being adopted, as more computation power became available, in some critical areas
of planning and control. Cloud manufacturing provides a robust platform for developing
these solutions, lowering the cost of experimentation and solution implementation.
In this context, the SOHOMA community developed ML-based approaches for
reality awareness and efficiency in cloud manufacturing and proposed applications of
machine learning algorithms for global optimization of manufacturing at batch level,
robust behaviour under disturbances and safe utilisation of manufacturing resources. The
focus was put on the prediction of Key Performance Indicators (KPI), such as the instant power
consumption of resources or the execution time of product operations, to provide more
accurate input for the Cloud-based System Scheduler (SS) - an optimization engine
for mixed product and operation scheduling plus resource allocation: instead of static
(historical records) or current (last measured) energy consumption values, short-term fore-
casted values are used as input for SS optimization at the batch execution horizon. Three
new technologies were used for this type of smart Cloud-based manufacturing control:
• Big Data (BD) streaming for shop floor data processing: aggregating at the right
logical levels when data originates from multiple sources; aligning data streams in
normalized time intervals; extracting insights from real time data streams.
• Digital Twins (DT) of production assets (resources, products, orders) and system
(control, maintenance and tracking) to record and maintain a complete view of past
behaviours and KPIs of resources, processes and outcomes, and to forecast their future
evolutions. Recording historical data is needed to train ML patterns.
• ML workload virtualization in the Cloud using HPC and fast deployment techniques
for: i) updating DTs: deep learning of patterns and measurement variations as a basis
for predictions; classification, in which the DT finds classes for feature vectors; clustering,
which searches for and identifies similarities in non-tagged, multi-dimensional data and
tags each feature vector suitably; ii) embedding DTs: making intelligent decisions for
smart production control in two roles: smooth reconfiguration of CMfg resources for
global batch optimization based on the predicted cost of their usage; resource health
management and predictive maintenance [11].
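The stream-processing tasks listed above (aggregating data from multiple sources and aligning the streams in normalized time intervals) can be sketched as follows. The interval length, source names and mean-aggregation rule are illustrative assumptions, not the project's actual pipeline:

```python
from collections import defaultdict

def align_streams(events, interval_s=10):
    """Group (timestamp, source, value) events from multiple shop floor
    sources into normalized time buckets, then aggregate per source."""
    buckets = defaultdict(lambda: defaultdict(list))
    for ts, source, value in events:
        bucket = int(ts // interval_s) * interval_s  # normalize the timestamp
        buckets[bucket][source].append(value)
    # reduce each bucket to a mean per source (one simple aggregation rule)
    return {
        bucket: {src: sum(v) / len(v) for src, v in per_src.items()}
        for bucket, per_src in buckets.items()
    }

events = [
    (1.2, "robot1", 40.0), (4.8, "robot1", 44.0),   # same 10 s bucket
    (11.0, "robot1", 50.0), (12.5, "cnc2", 7.0),
]
aligned = align_streams(events)
```

Each bucket of the result is then a time-aligned feature vector ready for insight extraction.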
Cloud Networked Models of Knowledge-Based Intelligent Control
This research topic focuses on the use of Artificial Intelligence (AI) methods in the
Cloud-based smart manufacturing control vision of the ‘Factory of the Future’ (FoF),
based on the concepts of digitalization and interconnection of distributed manufactur-
ing entities in a ‘system of systems’ approach: i) new types of production resources will
be strongly coupled and self-organizing in the entire value chain, while products will
decide upon their own making systems; ii) new types of decision-making support will be
available from real time production data collected from resources and products [5, 12].
National initiatives such as Industry 4.0 (Germany) and Advanced Manufacturing (US)
address common FoF topics, among which is CMfg. Industry 4.0 focuses on Cyber-
Physical Systems (CPS) for manufacturing, which will provide intelligent services and
interoperable interfaces in order to support flexible and networked production environments.
Smart embedded devices will work together seamlessly via the Industrial IoT, and
the centralized system controls will be transferred to networks of distributed intelligence
from machine-to-machine (M2M) to factory-to-factory (F2F) connectivity.
The vision of the future of Cloud manufacturing research has been expressed since the
7th edition of the SOHOMA event, held in Nantes, France, and has been developed since then. It
relies on the concept of ‘Cloud Anything’ extended beyond the production phases and
facilities. This vision integrates the technologies and tools available for industry such as:
PLM, PLC, MES, ERP and the frameworks under development (CPS for production -
CPPS and Industrial IoT - IIoT) with the dual Cloud model CC-CMfg on top of these
infrastructures to create a product-service centric closed loop collaboration.
From the product lifecycle perspective, both the virtual part of the product (Design and Engineering) and
its physical part (Making) are assisted and tracked, respectively, in the Cloud.
In fact, products conceived and designed to embed computational power and
intelligence, and so to be "smart" in both the production (product-driven automation) and
utilization phases, are able to exchange information both within and beyond the limits of
the factory. These smart objects are connected in the Cloud with assets and enterprises in
the supply networks and can provide a new type of cooperation, enabling collaborative
demand and supply planning, traceability, and execution [13].
CPS take advantage of the integration of Cloud-based and Service-Oriented Archi-
tectures to deploy end-to-end support along both the product lifecycle (including after-sales
services, etc.) and the factory lifecycle. From a factory lifecycle perspective, CPS are able to
interact with all the hierarchical layers of the automation pyramid - from field level to
ERP - and to empower the exchange of information across all the process and service
stages, resulting in a better product-service development. This will foster the value net-
work alignment with its customers’ changing needs and optimization against different
perspectives (quality, time to market, costs, sustainability goals, etc.).
The architecture of Cloud-based CPS in manufacturing is organized on 7 hierarchical
layers: Physical Product; Sensors/Actuators; IoT node; Fog; Middleware; Cloud; Cloud
analytics. Information is transferred to and from the digital world through sensors and actuators
connected to IoT gateways and embedded aggregation nodes able to pre-process and
store data. A first level of data analysis, cleaning and decision making is the Fog computing layer,
which aligns data in time, aggregates it and issues ad-hoc, urgent decisions, reducing the
amount of data transmitted to and managed by the cloud. Data is streamed via middleware to
the cloud, where insights are extracted, time series are updated and patterns are created.
The next chapters present details of the scientific work performed in the Cloud
manufacturing domain and some of the main contributions of SOHOMA research groups.
This topic represents the early SOHOMA research line in Cloud-based manufacturing.
Morariu et al. analyse in [15] the integration of job shop activities in business processes at
enterprise level; they report the design and implementation of a web service abstraction
layer for holonic manufacturing systems that allows business process orchestration of
the Customer Order Management (COM) module. The COM module interacts with the
MES layer using real time events handled by the BPEL process implementation in the
execution stage.
Closely related to the IT infrastructures of Web Services (WS), the Service Oriented Archi-
tecture was considered a technical architecture and enterprise integration approach based
on defining production processes as workflows, decomposing them into production tasks
carried out through proper invocations of WSs and coordinated through orchestration and
choreography mechanisms. In this context, SOA was also accepted as a natural technol-
ogy for creating collaborative environments linking levels 3 and 4 of ISA 95-type manufac-
turing enterprises, and as an implementation means for Multi-Agent System (MAS) frameworks used
to distribute intelligence across hierarchical management and control levels. Business
and process information systems integration and interoperability at enterprise level are
feasible by considering the customized product as “active controller” of the enterprise
resources, thus providing consistency between the material and informational flows.
Gerber et al. describe in [16] a flexible communication architecture approach for the
vertical integration of production process-relevant data, for closing the gap between the
business (strategic) and technical (operations) levels. The approach enables the transfer
of information in form of key performance indicators which support decision-making
processes in manufacturing companies.
In the first SOHOMA edition, a study reported in [16] focused on an active search
mechanism that creates a bridge between the enterprise use cases and intelligent man-
ufacturing systems. The research presents a framework for intelligent search of web
services that expose the offer request management functionality for intelligent manu-
facturing systems. A novel concept of volunteer-based search was introduced, in which
the search criteria are passed to the manufacturing system for self-assessment. The integration
of Holonic Manufacturing Systems (HMS) in SOA/BPEL processes offers a
great advantage to enterprises, allowing simple process modification and reconfiguration
using standard tools. Also, integrated enterprise architectures allow a better tracking and
auditing of business process executions, providing valuable information based on which
the processes can be optimised and improved.
An early SOHOMA research work about the vertical enterprise integration of the
Manufacturing Operations Management layer (MES production control) with the Busi-
ness Logistics layer is described in [17]; the authors propose a conceptual model for Man-
ufacturing Systems Performance Monitoring (MSPM) derived from Gartner’s Research
Application Performance Monitoring (APM) conceptual framework [18]. The shop floor
monitoring solution is based on a distributed MAS architecture capable of real time
resource-, product- and service-monitoring, and of analytics/reporting. For each metric
collected by the target monitoring agents, the data is stored in Cloud database tables
using two strategies: short-term storage (staging) consists of a rolling table containing
the "last N time intervals" of that particular metric and is used to display real time data
in the web application; long-term storage consists of a table containing averaged data
for each metric and is used for system tuning and long-term reporting. Thus, SOA
governance assures the capability for dynamic composition of services at runtime with-
out human intervention, allowing the manufacturing system to automatically align itself
to the business drivers.
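The two storage strategies can be sketched with an in-memory SQLite database; the table layouts, the window size N and the metric name are hypothetical, since the cited work does not publish its schema:

```python
import sqlite3

N_KEEP = 5  # rolling window: "last N time intervals" kept for real-time display

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE staging (metric TEXT, ts INTEGER, value REAL)")
db.execute("CREATE TABLE longterm (metric TEXT, period INTEGER, avg_value REAL)")

def record(metric, ts, value):
    """Short-term strategy: insert, then trim the rolling table to N rows."""
    db.execute("INSERT INTO staging VALUES (?, ?, ?)", (metric, ts, value))
    db.execute(
        "DELETE FROM staging WHERE metric = ? AND ts NOT IN "
        "(SELECT ts FROM staging WHERE metric = ? ORDER BY ts DESC LIMIT ?)",
        (metric, metric, N_KEEP),
    )

def roll_up(metric, period):
    """Long-term strategy: store the averaged value for tuning and reporting."""
    (avg,) = db.execute(
        "SELECT AVG(value) FROM staging WHERE metric = ?", (metric,)
    ).fetchone()
    db.execute("INSERT INTO longterm VALUES (?, ?, ?)", (metric, period, avg))
    return avg

for t in range(8):
    record("power.robot1", t, 10.0 + t)
```

After eight samples, only the latest five remain in staging, and the roll-up averages them for the long-term table.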
The early stage of SOHOMA applied research for vertical enterprise integration from
shop floor layer up to ERP layer has been strongly influenced by IBM’s Manufactur-
ing Integration Framework (MIF) [19], initially developed together with manufacturing
enterprises from the automotive domain in order to assure vertical integration from MES
up to the business layer and external partners. MIF is a solution enablement workbench
built on open standards and on SOA technology and should be understood as a framework
rather than a complete application. Figure 1 illustrates the MIF architecture, consisting
of a workbench application and the actual MIF runtime.
using the BPEL engine APIs; BPEL Engine is the IBM WebSphere Process Server that
implements a BPEL runtime platform [20].
Inspired by this technology, the SOHOMA research reported by C. Morariu et al. in
[21] concerns a framework for manufacturing integration which matches plant floor solu-
tions with business systems and suppliers. This solution focuses on achieving flexibility
by enabling a low coupling design of the entire enterprise system through leveraging
SOA and Manufacturing Service Bus (MSB) as best practices. The article presents the
integration between an upper layer Enterprise Service Bus (ESB)-based business sys-
tem with a distributed Holonic MES (HMES) based on an MSB, built using the JADE
multi-agent platform, event-triggered communication and dynamic business rules. At
architectural level, ESB provides a uniform and centralized information flow across all
business components; at technical level, the ESB assures message mediation and data
transformation, offering a uniform messaging platform.
At shop floor layer the horizontal data flow is enabled by using a manufacturing
adaptation of the ESB concept – the Manufacturing Service Bus. The MSB integration
model developed by the authors is an adaptation of ESB for manufacturing enterprises
and introduces the concept of bus communication for the manufacturing systems. The
MSB acts as an intermediary for the data flows, assuring loose coupling between modules
at shop floor level. The proposed MIF Integration framework with MSB-based HMES
is illustrated in Fig. 2.
The lower level MSB integrates the shop floor components, while MIF is used to
integrate the business level components of the manufacturing enterprise. The two buses
are linked together by a Mediation Agent which is plugged into both buses and contains a
set of rules for message passing between them. The authors demonstrate experimentally
that this architecture is superior to a single-bus architecture for two reasons: first, it
provides a loosely coupled architecture at both the MIF and HMES layers, based on open
standards, which assures flexibility and scalability for the whole system; second, the
MSB implementation shields the enterprise-wide ESB from the large volume of messages
produced and consumed at the HMES layer.
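The Mediation Agent's rule-based message passing between the two buses might be sketched as follows; the topic names and forwarding rules are invented for illustration and are not taken from the cited implementation:

```python
# Hypothetical rule set: which MSB messages may cross to the enterprise ESB;
# everything else stays local, shielding the ESB from HMES message volume.
RULES = {
    "order.completed": "esb",
    "resource.failure": "esb",
    "pallet.position": None,  # high-frequency telemetry: never forwarded
}

class MediationAgent:
    """Plugged into both buses; forwards messages according to RULES."""

    def __init__(self):
        self.esb, self.msb = [], []  # stand-ins for the two bus endpoints

    def on_msb_message(self, topic, payload):
        if RULES.get(topic) == "esb":
            self.esb.append((topic, payload))   # escalate to business level
        else:
            self.msb.append((topic, payload))   # keep on the shop floor bus
```

The rule table makes the filtering policy explicit and editable without touching either bus.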
The developments for service-oriented vertical integration of manufacturing enter-
prises with hierarchical ISA 95 organization have been validated in industry scenarios.
As mentioned in the introductory part, the dual CC-CMfg model was adopted as a generic
solution for high performance computing (CC) and shop floor resource sharing in large
scale, cloud-oriented manufacturing infrastructures and applications.
Several MES specifications for the migration of workloads into the cloud have been
proposed, starting with the SOHOMA 2014 edition. Strategies for cloud adoption were defined,
resulting in robust, highly available architectures in which the information flow can be
synchronized with the material flow and which are flexible enough to cope with dynamic
reconfigurations of shop floor devices through exposed APIs (Application Program Interfaces)
and SOA choreography [27]. Cloud adoption in manufacturing enterprises with ISA-95
organization benefited from a 2-layer public-private cloud-based software architecture
with MES workload virtualization in a private cloud platform delivering services in the
IaaS model and having connectivity with external organizations and clients for high-level
business processes, as shown in Fig. 3.
Fig. 3. Dual cloud adoption strategy for manufacturing enterprises and MES virtualization with
programmable infrastructure
The private cloud platform implements in the IaaS model the centralized part of
the MES layer by provisioning computing resources (CPU, storage, I/O) and global
applications. One of these applications, the System Scheduler (SS), uses the HPC capabilities
of the cloud for: resource team configuration, batch planning, product scheduling,
resource allocation, and cell and production monitoring. The cloud MES communicates with
its decentralized part in which intelligence is distributed among agentified and virtualized
shop floor resources and intelligent products [28]; the delegate MAS pattern (dMAS)
was used for this decentralized part. The emerging concept of programmable infrastruc-
ture (PI) strongly impacted the virtualized MES design; PI provides a series of APIs to
the cloud software stack, including hypervisor, operating system and application layers
for accurate identification, monitoring, real time (re)configuration and control. At sys-
tem level, redundancy mechanisms that detect and correct failures were implemented.
Morariu et al. describe in [29] a mechanism based on workload monitoring that is able to
detect failures and unexpected events in real time and to process them based on rules in
order to assure smooth execution of the manufacturing operations. The implementation
of such a mechanism requires prior definition of the metadata documents: workload
redundancy profile, event definitions and recovery rules; this redundancy was evaluated
for virtualized CoBASA-type resource team reconfiguration in the vMES implementation.
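A minimal sketch of such a rule-driven recovery mechanism, with the metadata documents (redundancy profile, event definitions, recovery rules) reduced to plain dictionaries; all names and rules here are hypothetical:

```python
# Hypothetical metadata documents, reduced to dicts for illustration.
REDUNDANCY_PROFILE = {"vmes.scheduler": {"standby": "vmes.scheduler.backup"}}
RECOVERY_RULES = {
    "heartbeat.missed": "failover",
    "load.high": "scale_out",
}

def handle_event(workload, event, actions):
    """Map a monitored event to a recovery action in real time, so that
    manufacturing operations keep executing smoothly."""
    rule = RECOVERY_RULES.get(event)
    if rule == "failover":
        standby = REDUNDANCY_PROFILE[workload]["standby"]
        actions.append(("activate", standby))   # switch to the standby copy
    elif rule == "scale_out":
        actions.append(("clone", workload))     # add capacity for the workload
    return actions
```

Unknown events fall through without action, which keeps the mechanism safe against unclassified noise.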
Virtualization of shop floor devices, i.e., the creation of a virtualized layer (the
vMES), involves the migration of MES workloads that were traditionally executed on
physical machines to the private cloud infrastructure as virtual workloads. The idea is
to run all the control software in a virtualized environment and keep only the physical
resources (robots, machines, conveyors, etc.) with their dedicated real time controllers on
the shop floor. From a virtualization perspective, two types of workloads have been considered
in the SOHOMA CMfg developments:
• Shop floor resources: robots, CNC machines, conveyors etc. Their control architec-
ture varies depending on the manufacturer and technology used, but in general the
resource is controlled by a PC-based workstation. The communication between the
control workstation and the physical resource can be either standard TCP/IP based
(the workload is directly virtualized and a virtual network interface, used to control
the resource, is mapped to it) or a proprietary wire protocol (the virtualization process
needs a local controller on the shop floor that provides the physical interface). This
physical interface is virtualized and mapped through a specific driver to the virtualized
workload over the network.
• Intelligent products (IP) that are created temporarily in the production stage by embed-
ding intelligence on the physical order or product carrier (pallet) that is linked to
information and rules governing the way the product will be made (order of oper-
ations, resources assigned to operations, transfer routes, storages). IP virtualization
moves the processing from the intelligence embedded in the product to a virtual
machine (VM) in the cloud, using a thin hypervisor on the product carrier and a Wi-Fi
connection, in either a dedicated or a shared workload, to make decisions relevant
to its own destiny (Fig. 4).
Fig. 4. Intelligent Product virtualization. Left: a) Intelligence embedded on product carrier, i.e.
on the product during its making; b) IP virtualization mechanism. Right: IP based on mobile
technology and standardized OS (e.g. Arduino, Raspberry PI).
The binding between workload templates and virtualized resources is done using
shop floor profiles which, in the authors' view, are XML files containing a partial or
complete definition of the manufacturing system's virtual layout and mappings [5, 30].
Shop floor profiles are workload-centric and contain a list of workload definitions. The
JADE agents are implemented using Spring Session (https://spring.io/), which offers an
easy way to replace an HTTP session in an application container and also supports clustered
sessions. Docker containers are clustered and managed in Swarm mode, with load
balancing. Further research on CMfg infrastructure design was reported.
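A workload-centric shop floor profile of this kind might be parsed as follows; since the actual schema is not given in the text, the XML element names, attributes and addresses are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Hypothetical shop floor profile: workload-centric XML mapping virtualized
# workloads to physical resources (element and attribute names are invented).
PROFILE = """
<shopFloorProfile cell="pilot">
  <workload name="robot1-ctrl" template="vm-rt-small">
    <resource type="robot" address="192.168.1.21" protocol="tcp"/>
  </workload>
  <workload name="cnc2-ctrl" template="vm-rt-medium">
    <resource type="cnc" address="192.168.1.35" protocol="proprietary"/>
  </workload>
</shopFloorProfile>
"""

def load_profile(xml_text):
    """Extract the workload-to-resource bindings from a profile document."""
    root = ET.fromstring(xml_text)
    return {
        w.get("name"): {
            "template": w.get("template"),
            "protocol": w.find("resource").get("protocol"),
        }
        for w in root.findall("workload")
    }

bindings = load_profile(PROFILE)
```

The `protocol` attribute mirrors the two cases discussed earlier: directly virtualizable TCP/IP control versus a proprietary wire protocol that needs a local controller.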
Based on this principle, Răileanu et al. developed a 2-layer CC-CMfg semi-
heterarchical cloud architecture for manufacturing control tasks: i) resource team con-
figuration, mixed batch planning and product scheduling, resource allocation, and cell and
production monitoring on the upper SS level running in the cloud IaaS platform, and ii) exe-
cution and rescheduling of orders running on the lower decentralized, agent-based dMES
level. The choice of centralized batch optimization with decentralized production
control (which may override the centralized optimization) is justified by the HPC avail-
ability for running online optimization programs at batch horizon in the cloud while reacting
quickly to unexpected events; this implies decentralizing the control structure on the
shop floor layer, distributing intelligence to products, and agentification [34], Fig. 5.
Fig. 5. Semi-heterarchical CC-CMfg control: optimization engine and database in the Cloud; supervisor agent with interface to the optimization engine, web GUI for production monitoring, control strategy selection and order dispatching based on the chosen control strategy, energy consumption and resource status, and updating of resource and WIP status.
In order to solve the high availability (HA) problem, extended to real time CMfg monitoring and control
applications with real time updates of the cloud database fed with shop floor data, an
HA cluster is foreseen for database and application availability. The proposed solution
involves the following VMs: 1) two load balancers (VMs) running in a cluster; the Load
Balancer VM is publicly available and is accessed from the Internet to receive requests for two
types of services, secure HTTPS requests and Java agent communication, and to forward
these requests to the HA cluster in the internal network; 2) the HA cluster, composed
of two nodes (VMs), offers availability for three services: i) web interface access; ii)
Java agent communication; iii) database access; 3) the MySQL cluster uses four VMs
grouped in two Node Groups, among which the 10-table Cloud database is distributed.
Recent SOHOMA research is devoted to data collection and aggregation from shop
floor resources and products with embedded intelligence on the Edge Computing layer
of Industrial IoT (IIoT) frameworks, and data transfer in real time to a database located
in the private Cloud. In order to support multiple communication protocols and adapt the
information generated by the sensors/devices to centralized CC tasks, the IoT gateway
principle was extended to the aggregation node concept (Fig. 6) as defined in [35].
Fig. 6. Edge computing architecture for: (a) continuous; (b) operation-based data collection for
database storage and centralised MES tasks in the Cloud
One problem for which solutions were obtained in SOHOMA research is the inte-
gration of an IaaS cloud system (the CC component) with the manufacturing system
sharing multiple resources (the CMfg component) in the dual CC-CMfg control topol-
ogy. The cloud services offered in the IaaS infrastructure can be accessed in two ways:
a) deploying the services inside the cloud system before the production is started, or b)
deploying the services on-demand whenever they are needed.
The first approach provisions the services just before production starts, so the man-
ufacturing system can begin production without any delay. The problem is that the
resources must be pre-provisioned and the load of the services must be predicted;
online reconfiguration or deployment of the services adds, in some cases, downtime
due to the restart of the virtual machines involved in the process. This approach also
uses more resources than needed and is less flexible. The second approach
consumes resources efficiently; however, in an IaaS system, service
provisioning takes time, which translates into production delays.
The solution proposed by Anton et al. in [36] is to use a combination of virtual
machines deployed in the cloud before production starts and services run in containers
executed on those virtual machines. Thus, the virtual machines accessed as
services in the IaaS cloud offer the static part of the services, while the containers,
which are deployed much faster than virtual machines, cover the dynamic services.
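The hybrid deployment decision can be sketched as below; the service names and startup times are illustrative assumptions, not measurements from the cited work:

```python
# Static services run on pre-provisioned VMs; dynamic services go to
# containers on those VMs, which start much faster than a full VM.
STATIC_SERVICES = {"scheduler", "database"}
VM_STARTUP_S, CONTAINER_STARTUP_S = 120, 3  # illustrative startup times

def deploy(service, production_started):
    """Decide where a service runs and the startup delay it incurs."""
    if service in STATIC_SERVICES and not production_started:
        return ("vm", VM_STARTUP_S)            # provisioned before production
    return ("container", CONTAINER_STARTUP_S)  # on-demand, near-zero delay
```

Before production, static services absorb the slow VM startup; once production is running, everything new lands in a fast container, avoiding production delays.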
In order to integrate the manufacturing system with the cloud services a middle-
ware application was created; it acts as a communication bridge and protocol translator
between the shop floor devices shared in the CMfg system and the CC management
platform. In order to have an elastic solution that can scale rapidly, a special template
was added to the cloud offerings. The VM template is based on Red Hat Enterprise
Linux, and is configured to use OpenShift container platform and Ansible automation
for managing the containers.
This section describes the research and scientific contributions of SOHOMA members to
the development of manufacturing applications that use the dual CC-CMfg infrastructure
and control model. This research mainly concerns Cloud-based optimization of
production planning and control, and energy awareness in allocating shop floor resources
(CNC machines, robots, AGVs, conveyors) to product-making tasks.
The CMfg on-demand resource sharing concept was referred to throughout all nine
SOHOMA editions, which demonstrates the interest in applying Cloud Computing princi-
ples in the operational technology domain; of course, large scale manufacturing systems
benefit most from transposing CC principles to the industrial field. The Cloud
Computing component was introduced after the initial research stage of vertical enterprise
integration adopted for ISA 95 hierarchical organizations, the main reason being the
utilisation of its HPC capability for intensive computational tasks such as: optimization
of mixed product planning, operation scheduling and resource allocation at the farthest
(batch) horizon; balancing resource usage; minimization of energy consumption at shop
floor level; product traceability, etc. All these tasks are typical for the high-level, cen-
tralized MES of production planning and control in semi-heterarchical topology and,
being assigned to the CC platform, are executed better, faster and in larger numbers.
Because the semi-heterarchical control topology was permanently correlated with the
holonic manufacturing paradigm and reference architectures (PROSA-1998, CoBASA-
2006, ADACOR-2006, HAPBA-2012, ARTI-2018), the main role of the IaaS CC system
was to implement the System Scheduler with its complex processing tasks: optimiza-
tion engine, resource state and quality of service (QoS) update, multiple correlation of
signals measured from resources (consumed energy, temperature of motors, vibrations
of mechanical elements, etc.), decision support for control mode switching.
Progressively, the efforts of SOHOMA authors were directed towards the real-time
integration of high level MES workloads in private Cloud IaaS platforms for:
• Analysis of Big Data streams collected at run time from multiple shop floor entry
points (resources, processes, products), aligned in time, grouped using covariance
rules and map-reduced at the edge of the distributed MES layer to be transferred in
the Cloud; CC tasks update the state and QoS of resources to evaluate the necessity
of reconfiguring production plans and resource teams.
• Real-time update of optimized production schedules (operation sequencing and
resource allocation) at pre-planned timing moments (operation or product completion),
and dynamic reconfiguration of controls and resource teams based on process and
resource data collected during batch production run time.
• Extending CC tasks to predictive control of manufacturing systems.
Fig. 7. Centralized scheduling update mechanism: a minor change updates the optimization model and resource usage costs; a major change triggers the optimization engine for centralized rescheduling, run only once just before inserting a new order, producing an updated schedule for the orders to be inserted (Operation 1, Operation 2, ..., Operation n).
• Update of resource execution time and energy consumption: the resource agents
receive a request for the execution of an operation. This request is forwarded to
the resource controller which starts a counter and reads from a sensor the current
energy consumption of the resource; at operation completion, the controller reads the
execution time from the counter and the energy consumption by subtracting the initial
energy value from the final one. These values update the Cloud database through Java
Database Connectivity. This information is stored for the centralized optimization
model and operations traceability to build up the product execution’s history.
• Adjusting production planning and resource scheduling: by collecting the resources’
state and usage cost at operation / product completion time, the current QoS and cost
of resources are used to update their weights for participation in the optimized task
assignment process. The events that can alter the optimally computed schedule fall into
two categories: hard changes and soft changes. Hard changes are events which alter
production in a major way (resource/operation failures which cannot be overcome
without batch rescheduling in order to eliminate unavailable resources). Soft changes
consist of small variations of the cost parameters which alter the computed schedule
but do not make it unfeasible. Only when the increase in operation execution time or
energy consumption exceeds a predefined threshold is the schedule optimization
run in the Cloud, and the newly computed schedule replaces the current one only if it
produces significantly better results (e.g., lower energy consumption).
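The soft-change handling described above can be sketched as a simple threshold rule; the 15% drift threshold and 5% improvement margin are illustrative values, not those used in the cited work:

```python
THRESHOLD = 0.15  # rerun optimization when a cost parameter drifts > 15 %

def on_cost_update(resource, planned, measured, current_cost, optimize):
    """Soft change: rerun the Cloud optimization only when the measured
    energy/time exceeds the planned value by more than THRESHOLD, and keep
    the new schedule only if it is significantly better."""
    drift = abs(measured - planned) / planned
    if drift <= THRESHOLD:
        return current_cost, False            # keep the current schedule
    new_cost = optimize()                     # SS optimization in the Cloud
    if new_cost < 0.95 * current_cost:        # significantly better only
        return new_cost, True
    return current_cost, False
```

Hard changes (resource failures) would bypass this rule entirely and force a batch rescheduling.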
Optimization engines such as ILOG CPLEX were used in the Cloud SS to construct a
solution (allocation of operations on resources) sequentially and continuously improve
it. The advantage of this operating mode is that the time interval in which a feasible
solution for a given search space (shop floor layout, batch size, order complexity) is
computed can be evaluated. This time interval is then used, in conjunction with the
moment when an order is completed, to determine when the optimization model
will be verified against the cost parameters updated from the resources.
The conclusion about the real-time capability of the cloud infrastructure and software
to acquire and process multiple data sources for global MES tasks is that data can be
processed both locally, on the embedded device (instantaneous power monitoring), and in
the Cloud (energy consumption per operation), since the latency permits this option. New
MES designs with private cloud platforms can act in real time on predicted data.
In their complex analysis of emerging IC2T for smart, safe and sustainable industrial
systems, Trentesaux, Borangiu and Thomas argue that CMfg and MES virtualization will
be included in future "Smart" developments as a networked and service-oriented manu-
facturing model focusing on new opportunities in the field of networked manufacturing
(NM), enabled by the emergence of hybrid CC platforms [45].
The accuracy of ML depends on the real time aspect of the data used in training. The results of these
algorithms are often accurate over a small time horizon in the future, so only a real time
scheduling engine can benefit from these approaches. Two use cases for Cloud MES
were defined, optimizing scheduling from predicted KPIs and outlier detection:
• Batch planning (operations scheduling and resource allocation) based on the predic-
tion of energy consumptions for the optimization of global energy cost, and
• Safe resource usage (predictive maintenance and team reconfiguration) based on
resource health monitoring, detecting anomalies and predicting unexpected faults.
The ability to accurately predict, in real time, the instant power consumed by a resource
during any operation is the core functionality used as a building block for predictive planning
and maintenance, and for real time fault detection. The prediction functionality is designed
at the time horizon of individual operations for each shop floor resource.
Processing shop floor data for these two applications involves three tasks: i) aggre-
gating at the right logical levels when data originates from multiple sources; ii) aligning
the data streams in normalized time intervals; iii) extracting insights from real time data
streams. The designed messaging system architecture allows handling large amounts
of real time data originating from various sources and having different formats. The
messaging platform was divided into two separate parameter domains (or topics): one for shop
floor resource messages (including instant energy consumption data) and one for intelligent
product messages (including energy consumption and execution time per operation).
The overall layout of the information flow for the proposed solution is shown in Fig. 8.
The messages are ordered in time, guaranteed by the messaging system for each topic.
The initial resource stream and intelligent product stream are considered as raw streams
of information, being then joined in application specific streams. Once the information
required for a given application is merged in a joined stream, the next operation is a
map-reduce type operation, also working at micro-batch level and processing message
sets logically to create a reduced information stream aligned in time.
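The join and map-reduce steps over the two raw topics can be sketched as follows, assuming each message carries a timestamp, a resource id and a single numeric value (a simplification of the actual message formats):

```python
def join_and_reduce(resource_msgs, product_msgs, window_s=5):
    """Join the two raw topics on resource id per micro-batch window, then
    map-reduce to one aggregated feature pair per (window, resource):
    mean instant power and total operation time."""
    joined = {}
    for ts, rid, power in resource_msgs:
        key = (int(ts // window_s), rid)
        joined.setdefault(key, {"power": [], "ops": []})["power"].append(power)
    for ts, rid, op_time in product_msgs:
        key = (int(ts // window_s), rid)
        if key in joined:                       # join only where both topics meet
            joined[key]["ops"].append(op_time)
    return {
        key: (sum(v["power"]) / len(v["power"]), sum(v["ops"]))
        for key, v in joined.items()
    }
```

Each entry of the result is one element of the reduced, time-aligned information stream.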
The aggregated data streams can be considered as an endless stream of feature vectors
that can be used to extract insights in real time. The research developed three types of
machine learning applications that can be used for intelligent manufacturing control:
1. Prediction: the insights obtained can be used directly in the business layers for
decision making (optimal planning) or in fault detection systems. Specifically, the
prediction problem is interpreted as the possibility to use a deep learning neural
network to learn patterns and variations in numerical measurements, i.e. energy con-
sumption, and then use the neural network to make forecasts for that measurement.
This is especially useful when the data has temporal patterns that can be learned.
2. Classification: the system tries to determine a class for every feature vector received.
3. Clustering: a set of algorithms try to find similarities in non-labelled, multidimen-
sional data and tag each feature vector accordingly. Examples of applications of the
latter are quality assurance, pattern recognition for part picking, etc.
Fig. 8. Information flow in real-time Big Data-driven prediction scheme with machine learning sequence

The LSTM model was used to learn the pattern of a single parameter - consumed energy - and to predict its values one or more steps ahead in time. Two machine learning techniques were used in the pilot implementation:
• Resource-based predictors: bound to each shop floor resource and distinct for each
operation the resource is performing. They are implemented using sklearn LSTM
and are running centralized in clusters of cloud VMs;
• Product-based classifiers: bound to each (distinct) product type, the classifiers rate
the overall efficiency of each product completed using a multivariate feature vector
against a pre-trained model.
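The one-step-ahead forecasting above rests on turning the energy time series into supervised (window, next value) pairs, which is the input shape an LSTM trainer consumes. A minimal sketch of that windowing, with hypothetical per-operation readings (the actual network training is omitted):

```python
def make_supervised(series, lookback=3):
    """Turn an energy-consumption series into (window -> next value)
    training pairs, as fed to a one-step-ahead LSTM forecaster."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])   # past `lookback` measurements
        y.append(series[i + lookback])     # value to predict one step ahead
    return X, y

energy = [10.1, 10.3, 10.2, 10.6, 10.5, 10.9]  # hypothetical kWh readings
X, y = make_supervised(energy, lookback=3)
# X[0] == [10.1, 10.3, 10.2] and y[0] == 10.6
```

Multi-step forecasts are obtained the same way by sliding the predicted value back into the input window.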
Cloud Networked Models of Knowledge-Based Intelligent Control 27
The scheduling algorithm was implemented in two stages. In the first stage, scheduling and allocation are computed based on historical data and initial estimations; in the second stage, an LSTM is trained on each resource/operation pair during live execution of the scheduled operations. The LSTM models are then used as input for real-time rescheduling and/or reallocation whenever the initial estimation starts to differ significantly from the actual predictions learned on each resource. In this architecture, the ML prediction module and the optimization module work in parallel, interconnected by a multi-dimensional buffer that is updated by the prediction module each time the execution of an operation is recorded, and read by the optimization module each time a new schedule needs to be computed. The multi-dimensional buffer represents the forecast of parameters over a set of time intervals, operations and resources (Fig. 9).
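The multi-dimensional buffer interconnecting the two modules can be sketched as a store keyed by (resource, operation, time interval); the class name, key shape and stored values below are illustrative assumptions, not the published implementation:

```python
class ForecastBuffer:
    """Shared buffer keyed by (resource, operation, time_interval):
    written by the ML prediction module after each recorded execution,
    read by the optimization module when a new schedule is computed."""
    def __init__(self):
        self._data = {}

    def update(self, resource, operation, interval, predicted_energy, predicted_time):
        # Called by the prediction module after each operation execution.
        self._data[(resource, operation, interval)] = (predicted_energy, predicted_time)

    def read(self, resource, operation, interval, default=(None, None)):
        # Called by the optimization module when (re)computing a schedule.
        return self._data.get((resource, operation, interval), default)

buf = ForecastBuffer()
buf.update("robot1", "drill", interval=4, predicted_energy=0.42, predicted_time=12.5)
energy, time_s = buf.read("robot1", "drill", 4)
```

Because reads and writes are decoupled through the buffer, the optimizer never blocks on model training.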
The prediction-based solution described provides better results than the traditional
production planning based either on static, historical data or on current measurements
from resources, because it offers a look-ahead global view at batch horizon, continuously
improved by predictions of enhanced accuracy.
Fig. 9. Cloud architecture interconnecting the ML-based prediction and optimization modules
The solution uses a private cloud infrastructure that offers high processing power
(needed for big shop floor data streaming, machine learning and optimization in real
time), scalability and fault tolerance. The execution time for predictive production rescheduling ranges from 0.38 s for a 100-product batch to 4.1 s for a 1000-product batch, on a 2-core cloud machine running at 2.6 GHz with 4 GB of RAM.
Another important benefit derived from applying AI techniques to manufacturing
control is the system’s increased reality awareness, which is obtained by: a) mirroring
the shop floor reality through the prediction of behaviours and properties of highly
abstracted entities (operations, products, resources) based on realistic ML models, and
b) predicting unexpected events through classifying, clustering and anomaly detection.
28 T. Borangiu et al.
The last three years of SOHOMA research acknowledged the digital twin (DT) concept - i.e., the cyber representation of the physical twin - as a key enabler for the advances promised by manufacturing CPS. In their paper [52], Redelinghuys et al. reiterate Oracle's point of view that a virtual (or digital) twin is a representation in the cloud of a physical asset or device, whose presence in the cloud persists even when its physical counterpart is not always connected. It is important for backend software to be able to
interrogate the last known status or to control the operating parameters even when the
physical twin is not online/connected. The cloud also provides a convenient mechanism
for sharing a structured database with other devices of the shop floor in a global context.
Further, many apps are becoming available to extract value from data in the cloud [53].
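The persistence property described above - a cloud record that remains queryable while the device is offline - can be sketched as follows; the class and field names are assumptions for illustration:

```python
import time

class CloudTwinRecord:
    """Cloud-side record for a digital twin: keeps the last known state
    so backend software can interrogate it even while the physical
    counterpart is disconnected."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.last_state = {}
        self.last_seen = None

    def push_state(self, state):
        # Called while the physical twin is connected.
        self.last_state = dict(state)
        self.last_seen = time.time()

    def query(self):
        # Works whether or not the device is currently online.
        return {"asset": self.asset_id, "state": self.last_state,
                "last_seen": self.last_seen}

twin = CloudTwinRecord("cell-3/robot-1")
twin.push_state({"mode": "run", "power_w": 210})
snapshot = twin.query()   # still answerable after a disconnect
```

Sharing such records through a structured cloud database is what enables the global, shop-floor-wide view mentioned above.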
In the manufacturing context, a digital twin is a set of computer models that provide
the means to design, validate and optimize a part, a product, a shop floor resource, the
production facility (the shop floor) or the batch production system in the cyber space.
The authors of [52] present a 6-layer architecture for a manufacturing cell; the DT
allows exchanging data and information between a remote simulation and the cell, for:
validation and optimization of the system’s operation; batch control; remote monitoring
of the cell; simulating future behaviour and predictive analytics (Fig. 10).
Fig. 10. 6-layer architecture of the cloud digital twin of a manufacturing cell (Redelinghuys, [52])
The same authors extend the Six-Layer Architecture for Digital Twins (SLADT)
described in [52] to a reference architecture with aggregation (SLADTA) [54]; this
architecture makes maximum use of vendor-neutral off-the-shelf software, as well as
secure and open protocols for twin-to-twin communication (Fig. 11). The idea of a
digital twin of twins through the aggregation of DTs is therefore considered.
The essential role of Digital Twins in the intelligent manufacturing control of Indus-
try of the Future (IoF) was emphasized at the SOHOMA’18 edition in Bergamo, Italy
by Paul Valckenaers in his paper [55] and presentation about the evolution from the
reference architecture PROSA for Holonic Manufacturing Systems (HMS) to the new
ARTI reference architecture extended to industry and service applications beyond man-
ufacturing. The PROSA+dMAS design is now divided into a reality-reflecting part and
a decision-making part. The importance of the former is maximised, and so the concept of an intelligent being was introduced.

Fig. 11. Connection architecture of SLADTA aggregating digital twins of SLADT type [54]

In the light of the theory of flexibility, intelligent beings are protected by their corresponding reality; this property makes intelligent beings the right candidates for building an IC2T infrastructure that is knowledgeable about
an application domain, without adding restrictions to this real world.
The digital twins constitute the blue cubes of the smart ARTI control model. They are
connected to their physical counterparts (resources, processes, products) in a direct, one-
to-one relation. The digital twins - the intelligent beings - have to become the partners of
their corresponding reality. The move to ARTI assures in-depth interoperability, access
to the world-of-interest through digital twins that safeguard real-world interoperability
when connecting to their control systems. ARTI concepts could represent a basis for the
Enterprise Integration and Interoperability strategy [56].
Inspired by the new ARTI vision, several research projects have addressed Cloud-
based DT models for particular manufacturing applications.
In this context, Cardin et al. introduce in [57] a generic framework of an energy-
aware digital twin of an industrial asset. This framework enables the coupling between
multi-physical and behavioural models for both real-time virtualization of the asset and
look-ahead behaviour forecasts for decision making. The energy-awareness relies on
the evaluation of the energetic performance of the resource, which is related both to
the physical process running and to the parameters describing the environment of the
resource. The behavioural model integrates the actual data sensed from the physical twin
and the data of the control system in order to synchronize the activity of the DT and
the physical resource. Several multi-physical models can be used all along the operation
of the asset, because the behaviour of the physics can be completely different from
one product handled by the asset to another (e.g., due to the environment variability,
influencing the asset’s parameters). The models must be switched dynamically in the
DT. These ideas are exemplified in a case study on injection moulding machines.
Anton et al. [58] developed a framework of a reality-aware digital twin for industrial
robots integrated in intelligent manufacturing. The DT framework uses cloud services
performed on two layers: i) a layer distributed among fog computing elements, and
ii) a centralized cloud IaaS platform. It enables robot virtualization, health monitoring
and anomaly detection, and the coupling between behavioural robot models and multi-
physical processes for real time predictive robot maintenance and optimal allocation
in production tasks. The main functionalities of this digital twin are: monitoring the current status and quality of services performed by robots working on the shop floor, early detection of anomalies and unexpected events to prevent robot breakdowns and production stops, and forecasting robot performance and energy consumption. Machine learning
techniques are applied in the cloud layer of the virtual twin for predictive, customized
maintenance and optimized robot allocation in production tasks. Figure 12 shows the
proposed data stream processing and analysis layer of the robot's digital twin.
The architecture design and implementation solution for the digital twin of a shop floor transportation system embedded in the global manufacturing scheduling and control system are presented by Răileanu et al. in [59].
Fig. 12. Interconnections of the robot DT aggregation node for continuous and operation-based
monitoring of QoS and anomaly detection
The main functionalities of the DT are: mirroring the current stage of the physical
pallet transportation process and the state of the physical conveyor components, pre-
dicting the values of the pallet’s transportation times (tt) along the conveyor’s segments
between any two workstations, applying these values for optimized product scheduling
and resource allocation, and detecting anomalies in the conveyor equipment behaviour
(Fig. 13). Two processes are depicted: the updating of the transportation times and the monitoring of the current pallet, each of them computed for each conveyor section.
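The per-section prediction of transportation times (tt) and the anomaly check can be sketched with a simple moving-average predictor; the window size, tolerance and class name are illustrative assumptions, not the method of [59]:

```python
from collections import deque

class SectionMonitor:
    """Per conveyor-section monitor: keeps recent transportation times (tt),
    predicts the next tt as a moving average, and flags an anomaly when an
    observed tt deviates too far from the prediction."""
    def __init__(self, window=5, tolerance=0.25):
        self.history = deque(maxlen=window)
        self.tolerance = tolerance          # allowed relative deviation

    def predict_tt(self):
        return sum(self.history) / len(self.history)

    def record(self, tt):
        anomaly = False
        if self.history:
            expected = self.predict_tt()
            anomaly = abs(tt - expected) > self.tolerance * expected
        self.history.append(tt)
        return anomaly

s12 = SectionMonitor()            # section between workstations 1 and 2
s12.record(8.0)                   # first observation, nothing to compare
s12.record(8.2)
s12.record(7.9)
slow = s12.record(12.5)           # pallet stuck -> flagged as anomaly
```

The predicted tt values feed the product scheduler, while the anomaly flags feed the conveyor equipment monitoring.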
Fig. 13. Real-time conveyor monitoring, prediction of pallet travel time (tt) and anomaly detection
at conveyor section level
Fig. 14. Infrastructure for production data transfer from the conveyor monitored with DT
1. Big data: smart sensors pervasively instrument resources, products, processes and
orders as ‘plug-and-produce’ modules;
2. Platform: hardware aggregation nodes and middleware aligning data streams in
normalized time intervals and transferring map/reduce data in the cloud;
Figure 15 shows the architecture of the proposed 6-layer digital twin embedded in
the smart control tasks - batch optimization and resource health maintenance. The tasks
are performed using HPC cloud services.
Raising reality awareness for shop floor resources requires collecting online data from their physical parts and the served process (layer I), aggregating and analysing the data streams (layer II), and classifying states and predicting QoS and usage costs per product operation (layer III). Forecasting unexpected events and operating anomalies, and deciding on customized predictive maintenance, are performed in layer IV. These layers are replicated for all active resources; the forecasted KPIs are transferred to layer V for predictions on the product topic, used for the real-time batch optimization computed in layer VI.
Fig. 15. 6-layer digital twin embedded in the decision-making process of context-aware,
predictive resource allocation in batch orders and maintenance
Recent SOHOMA’19 papers [41, 62] address the new Fog, Mist and Edge technologies and the related distributed hardware/software devices acting as smart gateways that interface the IIoT network with the Cloud in large-scale industrial applications. A Mist-Edge Gateway architecture is proposed in [41]. It brings the computing capabilities of a system with distributed intelligence (Industrial IoT, industrial CPS) as close as possible to the IoT network; it is able to perform data streaming, time alignment and aggregation, on which a small amount of analytics is computed and ad-hoc urgent decisions may be taken. The rest of the big data is transferred to the Cloud, where it is stored for future long-term thorough analyses and AI-based predictive decisions.
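The split between light edge analytics with ad-hoc urgent decisions and deferred cloud processing can be sketched as follows; the threshold, payload shape and function name are illustrative assumptions:

```python
def edge_process(readings, limit_w=500.0):
    """Mist/Edge gateway sketch: compute a small amount of analytics
    locally, take an ad-hoc urgent decision at the edge (no cloud
    round-trip), and forward the full batch to the cloud for long-term
    storage and AI-based predictive analysis."""
    peak = max(readings)
    mean = sum(readings) / len(readings)
    urgent = peak > limit_w          # decided locally, immediately
    to_cloud = {"batch": readings, "mean_w": mean, "peak_w": peak}
    return urgent, to_cloud

urgent, payload = edge_process([420.0, 431.0, 902.0, 428.0])
# urgent is True: the 902 W peak exceeds the local limit
```

Only the urgent flag is acted on at the edge; everything else travels to the cloud unchanged.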
In the perspective of Industry 4.0, the cloud-based service delivery model for the manu-
facturing industry includes product design, batch planning, product scheduling, real-time
manufacturing control, testing, management and all other stages of a product life cycle.
Nowadays manufacturing processes go beyond the production phases and factory limits.
A real impact can be achieved only via the integration of process and product lifecycles.
In the EU vision of the ‘Industry of the Future’, Cyber-Physical Systems are a breakthrough research area for IC2T in manufacturing and represent the new innovation frontier for accomplishing the EU2020 “smart everywhere” vision. Cloud and Cloud analytics are defined as the highest-level layers of CPS in manufacturing, being referred to in 9 of the 14 research priorities for cyber-physical manufacturing [13].
SOHOMA research aligns to the CPS orientation in manufacturing by addressing the
following key challenges: (i) increasing autonomy and intelligence of existing machin-
ery and robots providing them with sensing and reasoning capabilities to recognize their
environment, identify components of material flows, detect unforeseen events and gain
flexibility in their assigned tasks; (ii) adaptation through context awareness and reason-
ing, aiming at making machines and robots aware of their workplace environment so
that they can perceive and obtain information on the unexpected and not programmed
conditions and events, and adapt their behaviour in order to better handle them while
taking into account safety aspects.
The history and present of SOHOMA research include examples of multi-layered, decentralized manufacturing control architectures enabling shop floor assets - intelligent products, orders, smart resources - to take autonomous decisions. Various applications involving big data analysis and cloud computing have been studied and proposed, including real-time monitoring, decentralized intelligence and smart object networking with the interaction of the real and virtual worlds, representing a crucial new aspect of manufacturing production processes.
Time to market is a key success factor for competing at the global level, while multiple stakeholder participation in the design, engineering and distribution process is a reality to cope with, associated with the need to adopt new business models for selling and utilizing products (e.g. servitisation). The availability of flexible production technologies (e.g. additive manufacturing, robotics, 2D and 3D artificial vision, nanotechnologies) provides new opportunities for engineering and manufacturing products and designing innovative processes. IoT technologies, providing full knowledge of the status and behaviour of assets and products, as well as strong asset and activity coupling in manufacturing CPS, make available a new possibility to monitor and control the reality inside and outside the plant environment.
The adoption of IoT and CPS as enablers of product servitisation allows tracking products and services along the whole lifecycle and consequently enhancing customers’ experiences and satisfaction.
This landscape has more recently required a new approach to defining the very concept of manufacturing; we consider that this approach is represented by ‘Manufacturing as a Service’ (MaaS). MaaS stands for new models of service-oriented, knowledge-based smart manufacturing systems, optimized and reality-aware, with high efficiency and low energy consumption, that deliver value to customers and manufacturers via Big Data analytics, Internet of Things communications, Machine Learning and Digital Twins embedded in Cyber-Physical System frameworks. From product design to after-sales services,
MaaS relies on the servitisation of manufacturing operations that can be integrated into
different manufacturing cloud services such as Design as a Service (DaaS), Predict as a
Service (PraaS) or Maintain as a service (MNaaS). A schematic representation of these
services is given in Fig. 16.
MaaS relies on a layered cloud networked manufacturing perspective, from the fac-
tory low level of CMfg shop floor resource sharing model to the virtual enterprise high
level, by distributing the cost of the manufacturing infrastructure - equipment, software,
maintenance, networking, a.o. - across all customers. Manufacturing as a Service relies
on real-time insight into the status of manufacturing equipment; new sensor technology
will boost the amount of data about their statuses provided to the manufacturing cloud.
This may include data on lifetime manufacturing history, error rates, service histories,
upcoming reservations, manufacturing environmental conditions and more.
Fig. 16. Service initiation and MaaS within the services complex
this layer of non-IT resources gives rise to a manufacturing infrastructure similar to the
known cloud-based IaaS, which the authors identify as Manufacturing as a Service.
SOHOMA’19 presented one of the first MaaS models in the literature. Babiceanu and Seker proposed in [14] a combined Product Design and Manufacturing as a Service model (PDMaaS) in which customers, new or existing, with a product need in mind, are offered the opportunity either to select an existing product type from a database or to be assisted by a deep learning engine in designing their own product. This represents the
PDaaS part of the combined model. Geographically distributed manufacturers, having
similar and/or different production capabilities, are linked between them and to logistics
providers through a Software-Defined Network service infrastructure. Once an order is placed, the MaaS part of the model is employed and the selected part type is produced and delivered to the customer.
The recent literature provides insights for the SOHOMA community on how to further develop the MaaS concept. Kusiak proposed the concept of Service Manufacturing [63], which includes design, manufacturing, supply, distribution, maintenance and
optimization activities, all of them being offered in the “as a service” option. Wang et al.
provided a MaaS framework in the context of manufacturing service allocation in the
cloud, which features users with preferences generating the task demand and manufac-
turing providers supplying the tasks with their services, all within a Cloud Manufac-
turing platform [64]. Lastly, Liu et al. proposed a PLM framework [65] which, besides
manufacturing services, includes product design, logistics, maintenance, and end-of-life
recycling services, all integrated into a blockchain-based model.
This year, the SOHOMA’20 event returns to Paris. The workshop theme is “Man-
ufacturing as a Service - Virtualizing and encapsulating manufacturing resources and
controls into Cloud Networked Services”. It is expected that the participants will bring
a convergence of innovations in Cloud-based factory and product lifecycle management
with cyber-physical organisation and applied AI. Together, these advances will offer
new sustainable business models in the manufacturing value chain.
References
1. Wu, D., Rosen, D.W., Wang, L., Schaefer, D.: Cloud-based design and manufacturing: a new
paradigm in digital manufacturing and design innovation. Comput. Aided Des. 59, 1–4 (2014)
2. Kubler, S., Holmström, J., Främling, K., Turkama, P.: Technological theory of cloud manu-
facturing, service orientation in Holonic and multi-agent manufacturing. In: Proceedings of
the SOHOMA 2015, Studies in Computational Intelligence, vol. 640, pp. 267–276. Springer
(2016)
3. Thomas, A., Trentesaux, D., Valckenaers, P.: Intelligent distributed production control. J.
Intell. Manuf. 23(6), 2507–2512 (2011)
4. Morariu, C., Morariu, O., Borangiu, T.: Volunteer-based search engine for holonic manufac-
turing systems, service orientation in Holonic and multi-agent manufacturing. In: Proceedings
of the SOHOMA 2011, Studies in Computational Intelligence, vol. 402, pp. 293–306. Springer
(2012)
5. Borangiu, T., Trentesaux, D., Thomas, A., Leitão, P., Barata, J.: Digital transformation of
manufacturing through cloud services and resource virtualization. Comput. Ind. 108(2019),
150–162 (2019)
6. Anton, F.D., Borangiu, T., Anton, S., Răileanu, S.: Deploying on demand cloud services
to support processes in robotic applications and manufacturing control systems. In: Pro-
ceedings of the 23rd International Conference on System Theory, Control and Computing
(ICSTCC 2019), 9–11 October 2019 (2019). https://doi.org/10.1109/ICSTCC.2019.8885712,
IEEE Xplore Digital Library
7. Babiceanu, R.F., Seker, R.: Software-defined networking-based models for secure interoper-
ability of manufacturing operations, service orientation in holonic and multi-agent manufac-
turing. In: Proceedings of the SOHOMA 2017, Studies in Computational Intelligence, vol.
762, pp. 243–251. Springer (2018)
8. Coullon, H., Noyé, J.: Reconsidering the relationship between cloud computing and cloud
manufacturing, service orientation in holonic and multi-agent manufacturing. In: Proceedings
of the SOHOMA 2017, Studies in Computational Intelligence, vol. 762, pp. 217–228. Springer
(2018)
9. Răileanu, S., Anton, F.D., Borangiu, T., Anton, S., Nicolae, M.: A cloud-based manufac-
turing control system with data integration from multiple autonomous agents. Comput. Ind.
102(2018), 50–61 (2018). https://doi.org/10.1016/j.compind.2018.08.004
10. Răileanu, S., Anton, F.D., Borangiu, T.: High availability cloud manufacturing system inte-
grating distributed MES agents, service orientation in holonic and multi-agent manufactur-
ing. In: Proceedings of the SOHOMA 2016, Studies in Computational Intelligence, vol. 694,
pp. 11–23. Springer (2017)
11. Morariu, C., Morariu, O., Răileanu, S., Borangiu, T.: Machine learning for predictive schedul-
ing and resource allocation in large scale manufacturing systems. Comput. Ind. 120, 103244
(2020). https://doi.org/10.1016/j.compind.2020.103244
12. International Electrotechnical Commission: Factory of the future, White paper, ISBN 978-2-8322-2811-1, Geneva (2018). https://www.iec.ch/whitepaper/pdf/iecWP-futurefactory-LR-en.pdf
13. sCorPiuS: Future trends and Research Priorities for CPS in Manufacturing, White Paper,
EuroCPS Project (2017). https://www.eurocps.org/wp-content/uploads/2017/01/sCorPiuS_
Final-roadmap_whitepaper_v1.0.pdf
14. Babiceanu, R.F., Seker, R.: Cloud-enabled product design selection and manufacturing as a
service, service oriented, holonic and multi agent manufacturing systems for the industry of
the future. In: Proceedings of the SOHOMA 2019, Studies in Computational Intelligence,
vol. 853, pp. 210–219. Springer (2020)
15. Morariu, C., Morariu, O., Borangiu, T.: Customer order management in service oriented
holonic manufacturing. J. Comput. Ind. 64(8), 1061–1072 (2013)
16. Gerber, T., Bosch, H.-C., Johnsson, C.: Vertical integration of decision-relevant production
information into IT systems of manufacturing companies, service orientation in holonic and
multi-agent manufacturing. In: Proceedings of the SOHOMA 2012, Studies in Computational
Intelligence, vol. 472, pp. 263–278. Springer (2013)
17. Morariu, O, Morariu, C., Borangiu, T.: Resource, service and product: real-time monitoring
solution for service oriented holonic manufacturing systems, service orientation in holonic and
multi-agent manufacturing. In: Proceedings of the SOHOMA 2013, Studies in Computational
Intelligence, vol. 544, pp. 47–62 (2014)
18. Gartner: Keep the Five Functional Dimensions of APM Distinct, Gartner Research ID Number G00206101, 16 September 2010 (2010)
19. Morariu, C.: © 2009 IBM Manufacturing Integration Framework, Presentation UT Brasov
(2012). https://slideplayer.com/slide/9975956/
20. Moore, W., Collier, J., Mount, J., Spiteri, C., Whyatt, D.: Using BPEL Processes in Web-
Sphere Business Integration Server Foundation Business Process Integration and Supply
Chain Solutions. IBM Redbooks, IBM Press (2004)
21. Morariu, C., Morariu, O., Borangiu, T., Răileanu, S.: Manufacturing service bus integration
model for highly flexible and scalable manufacturing systems, service orientation in holonic
and multi-agent manufacturing. In: Proceedings of the SOHOMA 2012, vol. 472, pp. 19–40.
Springer (2013)
22. Babiceanu, R.F.: Complex manufacturing and service enterprise systems: modeling and com-
putational framework. In: Proceedings of the SOHOMA 2012, Service Orientation in Holonic
and Multi-Agent Manufacturing, vol. 472, pp. 197–212. Springer (2013)
23. Kubler, S., Madhikermi, M., Buda, A., Främling, K.: QLM messaging standards: introduc-
tion and comparison with existing messaging protocols, service orientation in holonic and
multi-agent manufacturing and robotics. In: Proceedings of the SOHOMA 2013, Studies in
Computational Intelligence, vol. 544, pp. 237–256. Springer (2014)
24. Giret, A., Botti, V.: ANEMONA-S + Thomas: a framework for developing service-oriented
intelligent manufacturing systems, service orientation in Holonic and multi-agent manufac-
turing, In: Proceedings of the SOHOMA 2014, Studies in Computational Intelligence vol.
594, pp. 61–69. Springer (2015)
25. Babiceanu, R.F., Seker, R.: Cyber-physical resource scheduling in the context of industrial
internet of things operations, service orientation in holonic and multi-agent manufacturing.
In: Proc. SOHOMA 2018, Studies in Computational Intelligence, vol. 803, pp. 399–411.
Springer (2019)
26. Pipan, M., Protner, J., Herakovič, N.: Integration of distributed manufacturing nodes in smart
factory, service orientation in holonic and multi-agent manufacturing. In: Proceedings of the
SOHOMA 2018, Studies in Computational Intelligence, vol. 803, pp. 424–435. Springer
(2019)
27. Morariu, O., Morariu, C., Borangiu, Th.: vMES: virtualization aware manufacturing execution
system. Comput. Ind. (67), 27–37 (2015)
28. Morariu, O., Borangiu, Th., Răileanu, S.: Redundancy mechanisms for virtualized MES work-
loads in private cloud, service orientation in holonic and multi-agent manufacturing. In: Pro-
ceedings of SOHOMA 2014, Studies in Computational Intelligence, vol. 594, pp. 147–156.
Springer (2015)
29. Morariu, O., Morariu, C., Borangiu, T.: Shop-floor resource virtualization layer with private
cloud support. J. Intell. Manuf. 16 (2014). https://doi.org/10.1007/s10845-014-0878-7
30. Morariu, O., Morariu, C., Borangiu, Th.: Adopting virtualization technologies in robotized
manufacturing. In: Proceedings of the 22nd International Workshop on Robotics in Alpe-Adria-Danube Region RAAD 2013, September 11–13, vol. 22, no. 1, Portorož, Slovenia
(2013)
31. Morariu, O., Morariu, C., Borangiu, T.: Policy-based security for distributed manufacturing
execution systems. Int. J. Comput. Integr. Manuf. 31(3), 306–317 (2018)
32. Morariu, O., Morariu, C., Borangiu, Th.: Security issues in service oriented manufacturing
architectures with distributed intelligence, service orientation in holonic and multi-agent man-
ufacturing. In: Proceedings of SOHOMA 2015, Studies in Computational Intelligence, vol.
640, pp. 243–263. Springer (2016)
33. Răileanu, S., Anton, F.D., Borangiu, T., Anton, S.: Design of high availability manufacturing
resource agents using JADE framework and cloud replication, service orientation in holonic
and multi-agent manufacturing. In: Proceedings of SOHOMA 2017, Studies in Computational
Intelligence, vol. 762, pp. 201–215. Springer (2018)
34. Răileanu, S., Anton, F., Borangiu, Th.: High availability cloud manufacturing system integrating distributed MES agents, service orientation in holonic and multi-agent manufacturing. In: Proceedings of SOHOMA 2016, Studies in Computational Intelligence, vol. 694, pp. 11–23. Springer
(2017)
35. Răileanu, S., Anton, F., Borangiu, T., Morariu, O., Iacob, I.: An experimental study on the
integration of embedded devices into private manufacturing cloud infrastructures, service
orientation in holonic and multi-agent manufacturing. In: Proceedings of SOHOMA 2018,
Studies in Computational Intelligence, vol. 803, pp. 171–182. Springer (2019)
36. Anton, F., Borangiu, T., Anton, S., Răileanu, S.: Deploying on demand cloud services to
support processes in robotic applications and manufacturing control systems. In: Proceedings
of the 23rd International Conference on System Theory, Control and Computing (ICSTCC
2019), 9–11 October 2019, pp. 537–542 (2019). https://doi.org/10.1109/ICSTCC.2019.8885712, IEEE Xplore Digital Library
37. Babiceanu, R.F., Seker, R.: Secure and resilient manufacturing operations inspired by
software-defined networking, service orientation in holonic and multi agent manufactur-
ing. In: Proceedings of SOHOMA 2015, Studies in Computational Intelligence, vol. 640,
pp. 285–294. Springer (2016)
38. Babiceanu, R.F., Seker, R.: Cybersecurity and resilience modelling for software-defined
networks-based manufacturing applications, service orientation in holonic and multi agent
manufacturing. In: Proceedings of SOHOMA 2016, Studies in Computational Intelligence,
vol. 694, pp. 167–176. Springer (2017)
39. Babiceanu, R.F., Seker, R.: Software-defined networking-based models for secure interoper-
ability of manufacturing operations, service orientation in holonic and multi- agent manufac-
turing. In: Proceedings of SOHOMA 2017, Studies in Computational Intelligence, vol. 762,
pp. 243–252. Springer (2018)
40. Babiceanu, R.F., Seker, R.: Cyber-physical resource scheduling in the context of industrial
internet of things operations, service orientation in holonic and multi agent manufacturing. In:
Proceedings of SOHOMA 2018, Studies in Computational Intelligence, vol. 803, pp. 399–411.
Springer (2019)
41. Crăciunescu, M., Chenaru, I., Dobrescu, R., Florea, G., Mocanu, S.: IIoT Gateway for Edge
Computing Applications, Service Oriented, Holonic and Multi Agent Manufacturing Systems
for the Industry of the Future, Studies in Computational Intelligence, vol. 853, pp. 53–66.
Springer (2020)
42. Răileanu, S., Borangiu, T., Rădulescu, S.: Towards an ontology for distributed manufacturing
control, service orientation in holonic and multiagent manufacturing and robotics. In: Pro-
ceedings of the SOHOMA 2013, Studies in Computational Intelligence, vol. 544, pp. 97–109.
Springer (2014)
43. Talhi, A., Huet, J.-C., Fortineau, V., Lamouri, S.: Toward an ontology-based architecture
for cloud manufacturing, service orientation in holonic and multi agent manufacturing. In:
Proceedings of SOHOMA 2014, Studies in Computational Intelligence, vol. 594, pp. 187–195.
Springer (2015)
44. Răileanu, S., Anton, F., Borangiu, T., Anton, S., Nicolae, M.: A cloud-based manufacturing
control system with data integration from multiple autonomous agents. Comput. Ind. 102,
50–61 (2018)
45. Trentesaux, D., Borangiu, T., Thomas, A.: Emerging ICT concepts for smart, safe and
sustainable industrial systems. J. Comput. Industry 81(2016), 1–10 (2016)
46. Morariu, O., Morariu, C., Borangiu, T., Răileanu, S.: Manufacturing systems at scale with big
data streaming and online machine learning, service orientation in holonic and multi-agent
manufacturing. In: Proceedings of SOHOMA 2017, Studies in Computational Intelligent, vol.
762, pp. 253–264. Springer (2018)
47. Babiceanu, R.F., Seker, R.: Manufacturing operations, internet of things, and big data: towards
predictive manufacturing systems. In: Proceedings SOHOMA 2014, Studies in Computational
Intelligence, vol. 594, pp. 157–164. Springer (2015)
48. Adler, J.: R in a Nutshell: A Desktop Quick Reference, 2nd edn. O’Reilly Media Inc.,
Sebastopol (2012)
Cloud Networked Models of Knowledge-Based Intelligent Control 39
49. Babiceanu, R.F., Seker, R.: Manufacturing cyber-physical systems enabled by complex event
processing and big data environments: a framework for development. In: Proceedings of the
SOHOMA 2014, Studies in Computational Intelligence, vol. 594, pp. 165–173. Springer
(2015)
50. Morariu, C., Răileanu, S., Borangiu, T., Anton, F.: A distributed approach for machine learning
in large scale manufacturing systems, service orientation in holonic and multi-agent manu-
facturing. In: Proceedings of SOHOMA 2018, Studies in Computational Intelligence, vol.
803, pp. 41–52. Springer (2018)
51. Morariu, C., Morariu, O., Răileanu, S., Borangiu, T.: Machine learning for predictive schedul-
ing and resource allocation in large scale manufacturing systems, J. Comput. Ind. 120
(2020)
52. Redelinghuys, A., Basson, A., Kruger, K.: Six-layer digital twin architecture for a manu-
facturing cell, service orientation in holonic and multi-agent manufacturing. In: Proceedings
of SOHOMA 2018, Studies in Computational Intelligence, vol. 803, pp. 412–423. Springer
(2019)
53. Oracle: Digital Twins for IoT Applications: A Comprehensive Approach to Implementing
IoT Digital Twins, Redwood Shores (2017)
54. Redelinghuys, A., Kruger, K., Basson, A.: A six-layer architecture for digital twins with
aggregation, service orientation in holonic and multi-agent manufacturing systems for indus-
try of the future. In: Proceedings of SOHOMA 2019, Studies in Computational Intelligence,
vol. 853, pp. 171–182. Springer (2020)
55. Valckenaers, P.: ARTI reference architecture – PROSA revisited, service orientation in holonic
and multi-agent manufacturing. In: Proceedings of SOHOMA 2018, Studies in Computational
Intelligence, vol. 803, p. 19. Springer (2019)
56. Borangiu, T., Cardin, O., Babiceanu, R.F., Giret, A., Kruger, K., Răileanu, S., Weichhart, G.:
Scientific discussion: open reviews of “ARTI Reference Architecture - PROSA Revisited”, ser-
vice orientation in holonic and multi-agent manufacturing. In: Proceedings of SOHOMA’18,
Studies in Computational Intelligence, vol. 803, pp. 20–37. Springer (2018)
57. Cardin, O., Castagna, P., Couedel, D., Plot, C., Launay, J., Allanic, N., Madec, Y., Jegouzo,
S.: Energy-aware resources in digital twin: the case of injection moulding machines, service
orientation in holonic and multi-agent manufacturing. In: Proceedings of SOHOMA 2018,
Studies in Computational Intelligence, vol. 803, pp. 183–194. Springer (2020)
58. Anton, F., Borangiu, T., Răileanu, S., Anton, S.: Cloud-based digital twin for robot integration
in intelligent manufacturing systems, advances in service and industrial robotics. In: Proceed-
ings of RAAD 2020, Mechanisms and Machine Science, vol. 84. Springer (2020). https://doi.
org/10.1007/978-3-030-48989-2_60
59. Răileanu, S., Borangiu, T., Ivănescu, N., Morariu, O., Anton, F.D.: Integrating the digital
twin of a shop floor conveyor in the manufacturing control system, service orientation in
holonic and multi-agent manufacturing systems for industry of the future. In: Proceedings
of SOHOMA 2019, Studies in Computational Intelligence, vol. 853, pp. 134–145. Springer
(2020)
60. Borangiu, T., Anton, S., Răileanu, S., Anton, F.: Smart manufacturing control with cloud-
embedded digital twins. In: Proceedings of 24th International Conference on System Theory,
Control and Computing, 8–10 October 2020, Sinaia, Romania. IEEE Xplore Digital Library
(2020)
61. Borangiu, T., Oltean, E., Răileanu, S., Anton, F., Anton, S., Iacob, I.: Embedded digital
twin for ARTI-type control of semi-continuous production processes, service orientation in
holonic and multi-agent manufacturing systems for industry of the future. In: Proceedings of
the SOHOMA 2019, Studies in Computational Intelligence, vol. 853, pp. 113–133. Springer
(2020)
40 T. Borangiu et al.
62. Mihai, V., Popescu, D., Ichim, L., Drăgana, C.: Fog computing monitoring system for a flex-
ible assembly line, service orientation in holonic and multi-agent manufacturing systems for
industry of the future. In: Proceedings SOHOMA 2019, Studies in Computational Intelligence,
vol. 853, pp. 197–209. Springer (2020)
63. Kusiak, A.: Service manufacturing: basic concepts and technologies. J. Manuf. Syst. 52,
198–204 (2019)
64. Wang, T., Lia, C., Yuan, Y., Liu, J., Adeleke, I.B.: An evolutionary game approach for man-
ufacturing service allocation management in cloud manufacturing. Comput. Ind. Eng. 133,
231–240 (2019)
65. Liu, X. L., Wang, W. M., Guo, H., Barenji, A. V., Li, Z., Huang, G.: Industrial blockchain
based framework for product lifecycle management in industry 4.0. Robotics and Computer
Integrated Manufacturing, vol. 63, art. no. 101897 (2020)
About the Applicability of IoT Concept
for Classical Manufacturing Systems
Abstract. This paper discusses the introduction of IoT principles into manufacturing
systems so that different devices expose their state and communicate with each other.
The IoT device protocol, including published events and received commands, was
adapted for various manufacturing equipment. Through a cloud application, an event
published by one device is converted into a command toward other devices, and the
receiver can interpret the message either as a command or as an event. The proposed
experiment involves disabling the local I/O interaction among devices and using a
cloud-based IoT messaging solution instead. One concern about the applicability of
IoT in production systems is the round-trip delay of a message. Our experiments
revealed that such delays are mainly caused by the external interaction protocols
provided by the device manufacturers. Despite this weakness, a key advantage of the
IoT concept can be underlined: IoT devices can collaborate without the need to extend
the hardware configuration. This approach is suitable for smoothly extending classical
manufacturing systems with new equipment and functionalities.
1 Introduction
Many areas of Artificial Intelligence have made significant progress lately; the methods
and techniques of this field are applicable in industry and contribute to the so-called
Industry 4.0. On the information technology side, this trend is accelerated in areas such
as cyber-physical systems (CPS), cloud computing (CC) and the Internet of Things (IoT).
A comprehensive review [1] covering the period 2014–2017 on the three approaches and
their influence on the manufacturing sector indicates: (1) a continuous doubling of the
number of publications in ISI journals, (2) a large number of conceptual methods but few
case studies, simulations and experiments, (3) a lack of research on the human-machine
interface and on the interaction between industrial equipment, and (4) no studies
comparing the cost/effort of implementation with the advantages reported in the literature.
Adoption of these methods and techniques is slow in robotic production systems because
the technology struggles to penetrate such heterogeneous, complex environments, which
are highly dependent on manufacturers of industrial solutions.
By applying IoT principles and solving some aspects of integration and resource
interaction, today's industry can overcome certain challenges, among them: real-time
data collection from existing equipment, expanding sensory systems, smooth integration
of new devices and functionalities, and improving coordination and collaboration
methods by applying techniques from distributed artificial intelligence. From a
technological and socio-economic point of view, manufacturing systems will benefit,
through simple integration into cloud computing, from optimization and prediction
mechanisms at the production, maintenance, energy consumption and human staff levels.
As highlighted in [1], the lack of case studies or experiments in the literature makes it
hard to anticipate the implementation effort. Some elements of difficulty are: current
physical resources offer limited support for IoT-specific communication protocols; a
cloud-based production system overloads the communication network and introduces
delays in data transmission (here, fog and edge computing systems can be considered);
moreover, coordinating and collaborating between resources under IoT, as well as
detecting false or inaccurate states, are difficult too.
The paper is organized as follows. Related work is presented in Sect. 2. Then, in
Sect. 3 the considered manufacturing cell is described. The architecture of the IoT
solution and the developed Node-RED application, which facilitates communication
between IoT devices, are detailed in Sect. 4. Communication delays are analyzed in
Sect. 5. The paper ends with results and conclusions.
2 Related Work
A preliminary study on the IoT reference architecture and its major components appears
in a European FP7 project [2], which has influenced the analysis and application of IoT in
many areas (transport and logistics, medicine, social activities and so-called intelligent
environments). In manufacturing, the IoT concept takes a distinct form as the Industrial
IoT (IIoT). The overview presented in [3] highlights the connectivity issues in IIoT and
the challenges for this solution. Security and privacy requirements are usually brought
into the discussion, since a breach may have a negative impact on production
ecosystems. On the other hand, IIoT can be the glue that allows many technologies to
extend and improve even closed manufacturing systems at the cell level.
The authors of [4] present a complex IoT system consisting of two micro-assembly
manufacturing cells, a full suite of sensors and cameras, software modules and monitoring
applications, a cloud platform and a virtual assembly environment (a virtual reality
simulator). Inter-connectivity between IoT and cloud manufacturing is demonstrated
in [5], where the dynamics of a production system can be determined in real time and
analyzed in the cloud to achieve flexible resource management. More specifically, [6]
presents an IoT architecture with five levels (resources, perception, network, services
and applications) that exposes in real time the states of interconnected resources,
similar to a SCADA solution. In [7], the underlined advantages of IIoT concern the way
precision machining can be achieved using embedded sensors and intelligent algorithms
that detect in real time deviations from nominal values during the manufacturing
processes. A method to integrate an industrial robot into an IoT system is detailed in [8].
The authors show that the integration process becomes simpler when the manufacturing
system is designed as an IoT system from the start.
The most common connectivity frameworks considered in IoT applications are the Data
Distribution Service (DDS), OPC Unified Architecture (OPC UA), Robot Operating
System (ROS), and Message Queuing Telemetry Transport (MQTT). An overview of
these protocols and their performance is given in [9]. The last one, MQTT, was
designed for low-bandwidth environments and low power consumption, particularly for
sensors and mobile devices in unreliable networks. System latency depends on data
transfer over the network and on computing at the edge device and in the cloud application
[10]. Cloud solutions may have latency issues when an increased volume of data is
transferred through the cloud. Fog and edge computing provide an advantage by moving
computation as close as possible to devices and sensors: these techniques facilitate IoT
communication without the cloud, and only preprocessed information is sent to cloud
services. Several benefits of fog computing for manufacturing are presented in [11].
The study described in [12] highlights the strong points of using edge computing at the
robot level. Lately, the digital twin concept has been used to mirror a physical process
with one or more simulated environments, requiring strong event-based connectivity
with sensors and actuators. The development of a digital twin for a shop floor conveyor is
presented in [13].
In all the mentioned projects [4–7], the basic mechanism of IoT systems is adopted to
facilitate the transfer of information from sensors and embedded devices to the cloud
(for analysis and decision making) and the transfer of commands from cloud manufacturing
to resources. Furthermore, the design of IoT concepts considers collaborative work among
resources and between workers and resources. To the authors' knowledge, the issue of
collaboration in a robotic cell under IoT principles has not yet been addressed.
communicate with robot R1 to work with the conveyor. Moreover, the communication
protocol is limited by the I/O interface; namely, one digital output of a robot is linked
to a digital input of the other robot. States of sensors/devices are directly exposed to
the digital/analog inputs of dependent devices; for instance, this is the case for the
interface between robot R2 and CNC machine 5. In other cases, states are obtained via
serial, TCP/IP, and other protocols; the vision inspection system (marked 4 in Fig. 1)
gives the state of the working table to robot R1 through a serial communication, meaning
the operator application (10 in Fig. 1) gets this vision state via robot R1 - no direct
link is supported. The pneumatic actuators of the robot grippers (labeled 8 and 9) are
directly commanded by the robots through local connections. Extending the sensory system
and/or the actuators of a robot first requires solving the local integration constraints.
Commands are also sent by means of I/O outputs and communication protocols. These
kinds of dependencies limit fast adaptability and define a rigid system.
Fig. 1. (figure) Layout of the manufacturing cell: industrial robots R1 and R2, conveyor (3), vision inspection system (4), CNC mill (5), sensors from storage (6, 7), robot grippers (8, 9), and the application for manufacturing commands and monitoring (10)
Some other remarks are important. Common industrial devices have specialized and
limited programming capabilities which do not allow implementing an IoT protocol
directly. As the authors pointed out in [8] and [12], an industrial robot has its own
proprietary protocol, provided by the manufacturer, to facilitate communication with a
computer through a custom API. Other equipment, such as the vision system and the CNC
mill, exposes services only through serial ports and for a limited number of clients.
The diversity of protocols requires the support of an integrator, and usually the number
of new devices and sensors that can be added to the system is limited by the old
equipment; for example, the cost of integrating a new camera is higher than the device
itself.
4 IoT-Based Solution
IoT is based on the publish/subscribe pattern, where the sender and receiver are loosely
coupled through a broker server. A message published on a specific topic by a client is
routed by the server to all clients subscribed to that topic. MQTT is the most widely used
IoT messaging solution over TCP/IP. IoT clients are separated into devices and
applications: IoT devices publish their states/events and receive commands, while IoT
applications receive states/events and publish commands toward devices. With this
scheme, direct interaction among devices is not supported. Without changing the IoT
approach, the proposed solution includes an IoT application whose role is to change the
type of a message from event to command. Thus, a published event can also act as a
command, and a received command can act as an event; the difference is made when the
content of the message is interpreted.
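This event-to-command conversion amounts to rewriting only the topic while forwarding the payload unchanged, which can be sketched as a small function. The topic template below follows the Watson IoT style used by the platform mentioned later, but is an assumption for illustration; the exact strings depend on the broker configuration.

```python
import json

# Assumed Watson IoT-style command topic template (illustrative only).
CMD_TOPIC = "iot-2/type/{dev_type}/id/{dev_id}/cmd/{cmd}/fmt/json"

def event_to_command(event_name, payload, target_type, target_id):
    """Rewrite a published event as a command for a target device.
    The payload is forwarded unchanged, so the receiving device itself
    decides whether to interpret the message as a command or an event."""
    topic = CMD_TOPIC.format(dev_type=target_type, dev_id=target_id,
                             cmd=event_name)
    return topic, json.dumps(payload)

topic, body = event_to_command("partReady", {"slot": 2}, "robot", "R1")
print(topic)   # iot-2/type/robot/id/R1/cmd/partReady/fmt/json
```

The device and event names here are hypothetical; only the topic rewriting step reflects the mechanism described above.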
Classical manufacturing systems already have some means of external communication.
None of our manufacturing devices supports the implementation of an MQTT client.
Thus, we had to develop IoT adapters wherever possible, considering the external
interaction supported by the manufacturer. Note that a general architecture for an
industrial IoT adapter follows the scheme in Fig. 2: an industrial device enables
external communication through services and/or digital/analog channels and offers an
application programming interface (API) or protocol to interact with. The implementation
of an adapter is restricted by the client interface: the operating system and the API
supported by the programming language. For devices that offer only digital/analog
interactions, embedded devices with WiFi options can be used. An IoT adapter for
an ABB industrial robot is detailed in [8]. Other implemented adapters, for real/virtual
controllers, the vision system and the CNC machine, are sketched in Fig. 3.
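A minimal sketch of such an adapter, assuming the structure of Fig. 2: a vendor-specific read call (RAP polling, a serial query, etc.) on one side and an MQTT-style publish on the other. The class, method names and topic string are illustrative, not part of the paper's implementation.

```python
import json
from typing import Callable

class IoTAdapter:
    """Bridges a device's manufacturer API to the IoT broker: it polls the
    device state and publishes an event only when the state changes, and
    decodes incoming commands for the device-side program."""

    def __init__(self, device_id: str,
                 read_state: Callable[[], dict],
                 publish: Callable[[str, str], None]):
        self.device_id = device_id
        self.read_state = read_state   # placeholder for the RAP/serial/API call
        self.publish = publish         # placeholder for the MQTT client publish
        self._last = None

    def poll(self) -> None:
        # called at the adapter's polling rate (e.g. every 200 ms)
        state = self.read_state()
        if state != self._last:
            self._last = state
            self.publish(f"iot-2/evt/state/{self.device_id}",
                         json.dumps(state))

    def on_command(self, payload: str) -> dict:
        # decode a received command so the device program can act on it
        return json.loads(payload)

# Usage with a fake device: an unchanged state is published only once.
sent = []
adapter = IoTAdapter("cnc1", lambda: {"busy": False},
                     lambda topic, msg: sent.append((topic, msg)))
adapter.poll()
adapter.poll()
print(len(sent))   # 1
```

Injecting `read_state` and `publish` as callables keeps the adapter logic independent of any particular vendor API or broker library, which matches the constraint above that each implementation is restricted by the device's client interface.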
Figure 4 illustrates the resulting IoT architecture for the considered production system
(see Fig. 1), based on the IBM Watson IoT broker [14]. At this stage, some sensors and
equipment are indirectly connected to the cloud through the device they depend on; for
instance, the conveyor (marked with 3) is controlled by the robot.
46 C. Pascal et al.
Fig. 3. Developed adapters for the robots, the vision system, and the CNC device
As mentioned earlier, two or more IoT devices can talk to one another through the
developed Node-RED [15] cloud application (see Fig. 5). Received events are converted
into commands by the EventToCmd function node. This mechanism broadcasts every event
to all devices except the sender. Events can be filtered according to their content; in
this case, two rules are applied: (1) if the event contains the attribute "to" equal to *,
then all devices receive the command; (2) when a device name is specified in the
attribute, only the indicated device receives the information. Not all events must be
shared among devices; such events are ignored simply by omitting the attribute "to".
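These two routing rules can be captured by a short filter. The Node-RED function node itself is written in JavaScript, so the Python version below is only a sketch of the logic, with illustrative names.

```python
def route_event(event: dict, sender: str, devices: list) -> list:
    """Return the devices that should receive this event as a command,
    applying the two rules above to the optional "to" attribute."""
    to = event.get("to")
    if to is None:          # event not meant to be shared: ignored
        return []
    if to == "*":           # rule 1: broadcast to everyone but the sender
        return [d for d in devices if d != sender]
    # rule 2: a single named device receives the information
    return [to] if to in devices else []

cell = ["R1", "R2", "CNC", "vision"]
print(route_event({"to": "*", "evt": "done"}, "R1", cell))  # ['R2', 'CNC', 'vision']
print(route_event({"to": "CNC"}, "R1", cell))               # ['CNC']
print(route_event({"evt": "internal"}, "R1", cell))         # []
```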
Monitoring or debugging the manufacturing process at the cloud level is also possible.
Each device's task can be individually tested by inserting commands from Node-RED,
linking inject, function, and ibmiot nodes. Collecting, analyzing, and visualizing data
is easily done by taking information from the published events. For example, the time
between sending a command and receiving the result is measured by two function nodes,
start and stop, shown in Fig. 5.
Two issues can be observed: (1) the manufacturing system produces a lot of events
and data that are transported to the cloud; (2) the delay of messages can be a drawback
in some manufacturing scenarios.
Fig. 5. Node-RED IoT application supporting device-to-device communication
Several round-trip delay (RTD) measurements were made considering pairs of IoT devices
developed according to Fig. 3. Table 1 summarizes the results obtained with different
types of IoT devices: a simple software application (labeled software in Table 1),
a robot controller, and a small embedded device with WiFi connection (Wemos D1
WiFi mini). For the robot IoT device, we had several types of ABB controllers available:
S4Cplus (Robot 1), S4C (Robot 2), IRC5 (Robot 3), and virtual IRC5 controllers from
RobotStudio (Robot 4). The network latency to the IoT platform, located in another
country, was between 52–55 ms, measured over the TCP/IP protocol; the Quality of
Service (QoS) for MQTT was set to 0, meaning fast delivery without guarantees of
message reachability. The QoS level and the network latency gave us a good baseline
for the RTD.
The first measurement in Table 1, for two applications developed in Python with the
ibmiotf module, shows an average time of around 111 ms. In this case, device1 measures
the time according to the protocol presented in Fig. 6. One can observe that the RTD is
close to twice the network latency (2 round trips × 52 ms: device1 → cloud → device2 →
cloud → device1). In the second experiment, device1 only sends messages and the
measurement is made at the cloud level. We expected about 55 ms, as in the first
experiment, but the average is about 10 ms higher; a simple explanation is that the
workload of the two devices differs. Another experiment used a 4G network (see line 3
in Table 1). As anticipated, the average time increased due to the latency of the
network. So far, the delay of the IoT platform was directly dependent on the network,
meaning that if the IoT system is brought closer to the manufacturing network, the RTD
can be quite acceptable.
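On the device1 side, the measurement of Fig. 6 reduces to time-stamping around the send/receive pair. A minimal sketch, with the network replaced by sleeps that model the two ~52 ms round trips of experiment 1 (the function names are illustrative):

```python
import time

def measure_rtd(send, wait_reply) -> float:
    """Timestamp before publishing the counter, timestamp when the echoed
    value returns through the cloud; report the difference in ms."""
    t0 = time.monotonic()
    send()         # publish the event carrying a counter
    wait_reply()   # block until the matching command comes back
    return (time.monotonic() - t0) * 1000.0

# Simulate device1 -> cloud -> device2 -> cloud -> device1 as two
# network round trips of ~52 ms each:
rtd = measure_rtd(lambda: time.sleep(0.052), lambda: time.sleep(0.052))
print(f"RTD = {rtd:.0f} ms")   # on the order of the ~111 ms of experiment 1
```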
Knowing the average time delay between two simple software applications without
physical parts, the developed IoT adapters for the ABB controllers can be analyzed.
Experiments 4–6 use an S4Cplus controller as device2. The polling rate of the adapter
is set to 200 ms, the minimum value offered by the manufacturer for the RAP protocol
(see Fig. 3). This parameter influences the notification time when something changes
on the controller (e.g., when the controller updates the received counter, see Fig. 6).
Another aspect of the experiment is that the running program on the controller must be
kept in a big loop containing a waiting time. This temporizing instruction allows
external connections to modify some variables of the program through the RAP protocol.
In our case, the adapter updates the received counter in one persistent variable and
waits for the updated value in another variable (see Appendix). Thus, in experiment 4,
the waiting time is set equal to the polling rate (200 ms); with a waiting time of 0,
the controller limits external interaction and the delay grows (case 5); by increasing
the waiting time to 300 ms, the difference shows up in the round-trip delay (case 6).
Thus, the best waiting time is equal to the polling rate. Comparing experiments 1 and 4,
the average RTD is larger in the latter by 241 ms, which can be explained by the
limitation of the RAP solution. Another comparison, for the S4C controller (Robot 2),
which is older than the S4Cplus, shows that the round-trip delay is even higher (see
rows 4 and 7 in Table 1).
In the presented manufacturing process, robots 1 and 2 interact through a digital I/O
interface. When the communication is switched to IoT, the average delay is around
1400 ms, which is more than we expected. Thus, we must admit that a 20-year-old
controller is not designed for external interactions. By contrast, the newer version of
the controller, the IRC5 (Robot 3), was used in experiment 9. The entire architecture
for external interaction was changed (no polling rate was used), and the results are
better than in the first experiment. Similar results were obtained using a virtual
controller from RobotStudio (see row 10 in Table 1).
The last two lines of Table 1 contain results for a small device (Wemos D1 WiFi mini)
supporting an MQTT client. The two measurements, at device1 and at cloud level, are
shown in Fig. 7, according to the interaction protocol presented in Fig. 6. Similar
plots were obtained with other devices. Two aspects can be observed: the delays are
mainly produced by the communication protocols supported by the industrial equipment,
and this limit can be overcome only with local integration. In conclusion, the proposed
approach opens an easy way to develop interaction among old and new devices without a
high implementation cost.
Fig. 7. Comparison of cloud-based (blue) and device-based (red) measurements using the
Wemos D1 as device2
6 Conclusions
The presented approach strengthens the IoT concept by allowing device-to-device
information sharing. This improves coordination and collaboration by endorsing
solutions from distributed artificial intelligence. The time-related performance of a
round-trip message depends on the external communication supported by each
manufacturer. However, this issue can be overcome by locally connecting external IoT
devices with I/O capabilities and by using the fast MQTT protocol for devices with
considerable delays; for example, a robot can publish events through its I/O system.
Without changing the current manufacturing equipment, a production system can move
closer to Industry 4.0 by adopting or developing IoT adapters. Generally speaking, it
will not be necessary to entirely remove the local I/O interaction in order to obtain a
full IoT system, as we did in this research. The local connections existing in classical
manufacturing systems can be kept; but when there is a need to try new hardware and
software solutions without disrupting the system configuration, the proposed IoT
approach can be of great help. Preventive maintenance, context-aware systems, the
digital twin and the digital product are challenges that can be integrated into
classical manufacturing systems. Our experiments revealed that a digital twin is
achievable by introducing IoT digital-twin devices into the system.
Moving the entire communication through the cloud complicates the proposed solution.
In this respect, fog computing and IoT gateways have to be further considered.
Appendix
References
1. Kamble, S.S., Gunasekaran, A., Gawankar, S.A.: Sustainable Industry 4.0 framework: a sys-
tematic literature review identifying the current trends and future perspectives. Process Safety
Environ. Protect. 117, 408–425 (2018)
2. Atzori, L., Iera, A., Morabito, G.: The internet of things: a survey. Comput. Netw. 54(15),
2787–2805 (2010)
3. Mumtaz, S., Alsohaily, A., Pang, Z., Rayes, A., Tsang, K.F., Rodriguez, J.: Massive Internet
of Things for industrial applications: addressing wireless IIoT connectivity challenges and
ecosystem fragmentation. IEEE Ind. Electron. Mag. 11(1), 28–33 (2017)
4. Lu, Y.J., Cecil, J.: An Internet of Things (IoT)-based collaborative framework for advanced
manufacturing. Int. J. Adv. Manuf. Technol. 84(5–8), 1141–1152 (2016)
5. Qu, T., Lei, S.P., Wang, Z.Z., Nie, D.X., Chen, X., Huang, G.Q.: IoT-based real-time produc-
tion logistics synchronization system under smart cloud manufacturing. Int. J. Adv. Manuf.
Technol. 84(1–4), 147–164 (2016)
6. Tao, F., Zuo, Y., Da Xu, L., Zhang, L.: IoT-based intelligent perception and access of manu-
facturing resource toward cloud manufacturing. IEEE Trans. Ind. Inform. 10(2), 1547–1557
(2014)
7. Luvisotto, M., Tramarin, F., Vangelista, L., Vitturi, S.: On the use of LoRaWAN for indoor
industrial IoT applications. Wirel. Commun. Mob. Comput. 2018 (2018)
8. Pascal, C., Raveica, L.O., Pănescu, D.: Robotized application based on deep learning and
Internet of Things. In: Proceedings of 22nd International Conference on System Theory,
Control and Computing (ICSTCC), Sinaia, Romania, October 2018, pp. 646–651 (2018).
https://doi.org/10.1109/ICSTCC.2018.8540714
9. Profanter, S., Tekat, A., Dorofeev, K., Rickert, M., Knoll, A.: OPC UA versus ROS, DDS,
and MQTT: performance evaluation of Industry 4.0 protocols. In: Proceedings of the IEEE
International Conference on Industrial Technology (ICIT) (2019)
10. Ferrari, P., Flammini, A., Sisinni, E., Rinaldi, S., Brandão, D., Rocha, M.S.: Delay estimation
of Industrial IoT applications based on messaging protocols. IEEE Trans. Instrum. Meas.
67(9), 2188–2199 (2018)
11. Aazam, M., Zeadally, S., Harras, K.A.: Deploying fog computing in industrial Internet of
Things and Industry 4.0. IEEE Trans. Ind. Inform. 14(10), 4674–4682 (2018)
12. Răileanu, S., Anton, F., Borangiu, T., Morariu, O., Iacob, I.: An experimental study on the
integration of embedded devices into private manufacturing cloud infrastructures. In: Pro-
ceedings of 8th Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing,
Bergamo, Italy, 11–12 June 2018 (2018)
13. Răileanu, S., Borangiu, T., Ivănescu, N., Morariu, O., Anton, F.: Integrating the digital twin of a
shop floor conveyor in the manufacturing control system. In: Proceedings of the International
Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing, Studies in
Computational Intelligence, pp. 134–145. Springer, Cham (2019)
14. IBM IoT platform. https://www.ibm.com/cloud/watson-iot-platform. Accessed 2020
15. Node-RED. https://nodered.org/. Accessed 2020
An Open-Source Machine Vision Framework
for Smart Manufacturing Control
Abstract. The paper describes the design, implementation, testing and validation of an
open-source machine vision framework based on the OpenCV (Open Source Computer Vision)
library, developed for smart manufacturing control. Material conditioning and handling
processes involving industrial robots benefit most from the proposed solution, which
offers the following functionalities: acquisition of video streams from multiple
sources; image analysis; object recognition and localization; and interaction with
industrial equipment using standard, open communication protocols. The paper covers
several design aspects: system architecture, data acquisition, standardization of the
image representation used by the analysis algorithms and the object recognition module,
input/output interaction protocols, and camera-robot calibration. Results are reported
for an implementation of the framework using a commercial image acquisition device and
an industrial robot.
1 Introduction
Given the importance of robotized solutions in manufacturing [1], research and
development of vision systems used both in robot guidance and in workplace monitoring
have continuously intensified during the last decade [2]. Such systems work in
unstructured shop floor areas containing randomly located parts, usually in 2D
environments and more recently in 3D environments, where machine vision assists
industrial robots in handling components of the material flows [3]. There are also
applications in which vision systems perform quality control based on part shape
analysis and/or route products according to their type [4].
In the current economic context, many manufacturing enterprises needing automation cite
high investment and operating costs as a major obstacle, while digitalization is
perceived as inaccessible (expensive and complex) by many companies [5, 6]. The scope
of this research is to demonstrate that an open-source machine vision framework for
manufacturing with industrial robots can be easily built from available open-source
vision processing libraries and commercial image acquisition devices. It will be shown
that this solution has characteristics similar to a commercial one: accuracy, speed,
and connectivity with a wide range of sensors and computing resources.
The realization of this machine vision framework aims at lowering the total cost of
robot-vision projects by reducing the cost of the machine vision application to zero
and by allowing the use of alternative, possibly non-industrial image acquisition
devices, which are much cheaper than their industrial counterparts. There are currently
commercial applications offering similar functionalities (part detection, recognition
and location), such as Cognex VisionPro (cognex.com), MVTec Halcon (mvtec.com) and
Omron Adept SmartVision MX (omron247.com), but their drawbacks are the high price and
the fact that they use specific video inputs and support a limited number of input
devices. Concerning open-source solutions, to the best of our knowledge there are only
applications dedicated to image handling and processing, like ImageJ
(https://imagej.nih.gov/) or Micro-Manager (https://micro-manager.org/) for laboratory
equipment (e.g., control of microscopes), and generic libraries (OpenCV, Accord) or
applications (Matlab) that perform standard computations on images. In this context,
the developed solution offers: the functionality needed for part recognition and
location based on body/contour shape and features (area, perimeter, number of holes,
blob moments); extension and standardization of input sources; and a standardized
protocol for interaction with industrial equipment (e.g., robot controller, PLC).
The solution builds on existing image processing frameworks (OpenCV [7] and Accord.NET [8], an extension of AForge.NET), which handle different image and video formats, apply standard filters and offer a set of useful functionalities such as operations on blobs and shapes (polylines). These frameworks were the starting point of the solution, improving its design and accelerating its implementation. Concerning the novelty,
this consists in the open-source nature of the project with all the associated advantages,
and in the adopted object modelling technique that uses a combination of features (e.g.,
of moments invariant to translation, rotation and scaling) and contour shape.
The article is structured as follows: Sect. 2 details the components of the machine
vision framework, their interconnection and the location of each component. Section 3
presents the interaction protocol that controls the remote operation of the vision frame-
work with types of industrial equipment. Section 4 describes the image acquisition and
calibration processes. Section 5 describes the object recognition model. Section 6 presents a set of experiments for object handling using vision and a comparison with commercial software. Section 7 formulates conclusions and defines future research and development directions.
Fig. 1. Machine vision framework structure and information exchange between its components
The main characteristics of the video server are: i) streaming: the conversion of generic streams and image formats into a standardized and open format accepted by the artificial vision application, and ii) standardization of the camera interface: identification, open/close stream, adjust image size, trigger image acquisition. Industrial cameras
have dedicated drivers which limit the number of concurrent clients to one, guarantee-
ing thus a high frame rate acquisition. By inserting this middle module (vision server),
it will be possible to connect multiple clients (e.g., multiple artificial vision guidance
applications in order to compare image processing performance, and/or multiple video
surveillance and monitoring applications). The stream standardization will not restrict
the resolution of the original stream / image, which can be modified through the camera
interface protocol. Besides the above characteristics it will be possible to adjust at video
server level the region of interest (ROI) from the original input image. Thus, by limiting
the dimension of the input data structure (a matrix), the image processing time decreases;
a lower processing time is desired in real-time applications such as visual servoing of
robot manipulators.
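The ROI mechanism described above can be sketched in a few lines: restricting processing to a sub-matrix reduces the number of pixels every downstream algorithm must touch. The function and parameter names below are illustrative, not part of the framework.

```python
def crop_roi(image, x, y, width, height):
    """Return the region of interest as a new row-major matrix.

    `image` is a list of rows; (x, y) is the top-left corner of the ROI
    in pixel coordinates. Processing the ROI instead of the full image
    shrinks the input data structure and hence the processing time.
    """
    if x < 0 or y < 0 or y + height > len(image) or x + width > len(image[0]):
        raise ValueError("ROI exceeds image bounds")
    return [row[x:x + width] for row in image[y:y + height]]

# A 4x6 test image: processing the 2x3 ROI touches 6 pixels instead of 24.
image = [[r * 10 + c for c in range(6)] for r in range(4)]
roi = crop_roi(image, x=2, y=1, width=3, height=2)
```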
The advantage of being able to operate on a variety of video streams is that it
will be possible to compare the performance of the part detection, identification and
locating functions and to recommend significantly cheaper hardware for the same process
automation project. Another advantage of this architecture is that the vision application
can be located on a cloud infrastructure offering vision services (based on standardized
interaction protocols) to manufacturing resources, eliminating the cost of dedicated
hardware and making it easy to replicate the framework for additional vision projects.
56 S. Răileanu et al.
a) Vision algorithms were designed to work with black and white images obtained from
grayscale images after applying an adaptive threshold;
b) Objects are seen as dark spots (blobs) on a light background;
c) The image plane is parallel with the XOY plane of the robot;
d) The camera plane (physical mounting) is also parallel with the XOY plane of the
robot and
e) The end effector is perfectly aligned with the robot’s tool control point so that the
robot points to the same object location and orientation as the vision system.
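Assumptions a) and b) can be illustrated with a simple adaptive (local-mean) threshold, a generic pure-Python sketch rather than the framework's actual implementation: a pixel becomes part of an object blob when it is darker than its neighbourhood mean by a margin.

```python
def adaptive_threshold(gray, block=3, c=2):
    """Binarize a grayscale matrix: a pixel becomes 1 (object) if it is
    darker than the local mean minus `c`, else 0 (background). Dark
    blobs on a light background therefore come out as 1s."""
    h, w = len(gray), len(gray[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # local mean over the block, clipped at the image borders
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            vals = [gray[j][i] for j in ys for i in xs]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if gray[y][x] < mean - c else 0
    return out

# A dark 2x2 object (value 10) on a light background (value 200)
gray = [[200] * 5 for _ in range(5)]
for y in (2, 3):
    for x in (2, 3):
        gray[y][x] = 10
binary = adaptive_threshold(gray)
```

Unlike a global threshold, the local mean adapts to uneven shop-floor lighting, which is why the framework binarizes images this way before blob analysis.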
Since the robot XOY plane, the image plane and the camera plane are parallel, and using a
scenario where the camera is fixed relative to the robot base, the transformation has only
4 elements: a translation of the robot frame (dx, dy, dz) followed by a rotation about the
Z axis. By knowing the location of the reference point (represented by the calibration
target) in both coordinate systems, finding the transformation is similar to solving a
2D puzzle avoiding thus complex matrix computations [10]. Thus, the robot-camera
calibration can be performed in a two-step process using simple geometrical formulas:
first the location of the vision plane relative to the robot frame is computed (Fig. 2 left)
and then the rotation about the Z axis of the robot is determined (Fig. 2 right).
An Open-Source Machine Vision Framework for Smart Manufacturing Control 59
Fig. 2. Finding the location (left) and rotation (right) of the vision plane relative to the robot base
Knowing the location of the reference point in both robot and vision coordinate
systems we can plot a set of possible locations for the origin of the vision plane (Fig. 2
left). These possible locations are located on circles whose centres are the locations of
the associated calibration target in the robot coordinate system and the radius is equal
to the distance from the origin of the vision system to the location of the target in the
vision frame. Each pair of circles has two intersection points, and theoretically all the circles should share a common intersection point. In practice this does not hold because the information comes from measurements affected by errors (e.g., caused by different lighting), thus resulting in a zone in space where the vision system's origin is most likely located (see Fig. 5). For a set of N calibration points, N*(N-1) intersection points are computed. From these computed intersections, those that cluster together are selected, and their average location on X and Y gives the origin of the vision coordinate system. After establishing the origin of the vision frame, we can apply the cosine law in
the triangle from Fig. 2 right and compute the Z rotation correction. The Z offset will
be the Z location of the calibration target.
The minimum set of points (learnt in robot and vision coordinate system) is three
in order to obtain a system of three equations with three unknowns (offset X, offset Y,
rotation Z). In order to improve the accuracy of the system, this minimum number is increased, choosing the points as far away from each other as possible. Due to lighting issues, four vision samples rotated by 90 degrees are taken for each robot calibration point, and an average location in the vision coordinate system is computed as the average of the intersections on X and Y.
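The circle-intersection step of the calibration can be sketched as follows. The clustering rule and tolerance below are simplified illustrations of the "grouped intersections" criterion, not the authors' exact procedure; target positions and distances are example data.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (centre, radius); [] if none."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)   # distance c1 -> chord midpoint
    h = math.sqrt(max(0.0, r1 * r1 - a * a))    # half chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ux, uy = (y2 - y1) / d, -(x2 - x1) / d      # unit normal to the centre line
    return [(mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)]

def vision_origin(targets_robot, dists_vision, tol=1.0):
    """Estimate the vision-frame origin in robot coordinates.

    `targets_robot`: calibration-target positions in the robot frame;
    `dists_vision`: distance of each target from the vision-frame origin
    as measured in the vision frame (assumed same scale). All pairwise
    circle intersections are computed; the largest group of mutually
    close points is averaged to give the origin.
    """
    points = []
    n = len(targets_robot)
    for i in range(n):
        for j in range(i + 1, n):
            points += circle_intersections(targets_robot[i], dists_vision[i],
                                           targets_robot[j], dists_vision[j])
    best = max(([q for q in points
                 if math.hypot(q[0] - p[0], q[1] - p[1]) <= tol]
                for p in points), key=len)
    return (sum(p[0] for p in best) / len(best),
            sum(p[1] for p in best) / len(best))

# Three targets whose true vision-frame origin is at (5, 3) in robot frame
origin = vision_origin([(0, 0), (10, 0), (0, 8)],
                       [math.sqrt(34), math.sqrt(34), math.sqrt(50)])
```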
The output of the vision system is a unique transformation for each object type
and for each grasping style (an object can be grasped in several ways). This calibration
translates and rotates the coordinate system attached to the object into the coordinate
system attached to the gripping position. Also, for a given object there can be defined
different grasping styles in order to avoid collisions if several objects are close to each
other and there isn’t enough space to access them with the current robot gripper.
In industrial applications the shape of the object is of interest for its recognition [4].
In this respect there are two recognition techniques: a) matching contours operating
on greyscale images, and b) matching blobs operating on binary - black and white -
images obtained from greyscale images after thresholding/binarization [11]. A common
mathematical approach when working with binary images is the use of moment-based
analysis [12]. This method offers information about the area, centre of gravity and
orientation of a blob, and the derived image Hu moments [13]. Since these features
are invariant at image translation, scaling and rotation, they can be used to describe the
shape which is used for object recognition. These characteristics are of interest for object
identification (what type of object), location (where it is located) and association to a
fixed coordinate system (how is it rotated) allowing to compute a unique transformation
needed by the robot to handle correctly the object.
For a discrete binary image, the moment of order “p + q” (1) and the central moment
(2) are defined as follows:
M_pq = Σ_x Σ_y x^p y^q I(x, y) (1)

μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q I(x, y) (2)
Derivatives of the image moments which have invariant features with respect to
translation and rotation (as is the case of objects situated on a vision plane) can be
constructed. Examples of such expressions are given in the work of Hu [14], where seven such invariants are proposed, which can also detect whether an object is mirrored.
From the above formulas, of interest are: M00 (4) representing the blob’s area, the
first order moments that are used to calculate the coordinates of the blob’s centre of
gravity (5), the second order central moments which are used to calculate the major and
minor axes of the blob and thus its orientation (6), and the Hu invariants which together
with the blob area are used for the definition of an object prototype.
The object's prototype is defined by the distance (7) between the Hu invariants representing the desired shape, combined with the 7th Hu invariant, which indicates whether the object is mirrored (essential in the case of robot applications), and the object area. Tolerances are defined for each class in order to discriminate between existing objects.
D(shape1, shape2) = Σ_{i=1}^{7} |Hu_invariant_i^shape1 − Hu_invariant_i^shape2| (7)
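Equations (1), (2) and (7) can be combined into a small pure-Python sketch. The invariant formulas follow Hu's standard definitions; the helper names and the test blobs are illustrative, not the framework's implementation.

```python
def raw_moment(img, p, q):
    # Eq. (1): M_pq = sum_x sum_y x^p y^q I(x, y) for a binary image
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_invariants(img):
    """The 7 Hu moment invariants of a binary image (list of 0/1 rows)."""
    m00 = raw_moment(img, 0, 0)
    xc, yc = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

    def mu(p, q):   # Eq. (2): central moments (translation invariant)
        return sum(((x - xc) ** p) * ((y - yc) ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))

    def eta(p, q):  # normalised central moments (adds scale invariance)
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        # the 7th invariant changes sign for mirrored objects
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ]

def shape_distance(img1, img2):
    # Eq. (7): sum of absolute differences of the 7 Hu invariants
    return sum(abs(a - b)
               for a, b in zip(hu_invariants(img1), hu_invariants(img2)))

def blob(h, w, y0, y1, x0, x1):
    """Binary test image with a filled rectangle at rows y0..y1, cols x0..x1."""
    return [[1 if y0 <= y <= y1 and x0 <= x <= x1 else 0
             for x in range(w)] for y in range(h)]

a = blob(5, 5, 1, 2, 1, 3)   # a 2x3 rectangle
b = blob(7, 7, 3, 4, 2, 4)   # the same rectangle, translated
d = shape_distance(a, b)     # expected to be ~0 (translation invariance)
```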
6 Experiments
Experiments have been done using an industrial robot from Adept (Cobra s850) working
with the proposed machine vision framework and with the Adept Sight commercial vision
software in order to compare the two vision solutions. The proposed vision system was
fed with images from a smartphone while the commercial application took images from
an industrial camera, both devices observing the same scene. We evaluated the accuracy
of the calibration processes and the robot-vision systems, and the runtimes of the vision
application.
A circular target was used for camera calibration in the proposed machine vision framework, producing a pixel (width, length) = (0.8, 0.8), while the dedicated camera calibration of the commercial application produced a pixel (width, length) = (0.6, 0.6).
The second experiment concerned the robot-camera calibration described in Sect. 4.
A set of 4 points from the vision plane was located 4 times (each time with the object rotated by 90°). Table 2 left gives the calibration points in robot and vision frames, while Table 2 right presents the possible locations of the vision frame. This information is presented in Fig. 4. The computed deviation for the robot-camera calibration process is 0.42 mm (on the X axis), 0.82 mm (on the Y axis) and 0.531° (about the Z axis).
The third experiment concerned the accuracy in the process of determining the loca-
tion of a known point in robot coordinate system using the location of the object in the
vision system and the robot-camera transformation determined in the previous experi-
ment. Using a set of 100 randomly generated points in the vision plane the robot placed
the testing object at the designated position and its location was computed using the
vision system. The average deviation is 0.71 mm (on the X axis) and 0.33 mm (on the Y axis). In order to correctly handle the objects, this deviation/positioning error should be less than the gripper opening, which was the case for the testing scenario (10 mm gripper opening).
In the fourth experiment we tested how the components of the prototype model
(as presented in Sect. 5) vary between different model classes, different locations and
different positions. The results for an offline measurement are given in Table 3. These
Table 2. Set of calibration points (left) and candidate vision coordinate system points (right)
measurements (area in pixels and Hu_i representing the ith Hu invariant) have been made
for the image presented in Fig. 5 which contains both objects from different classes and
objects from the same class.
By analysing the results in Table 3 it can be concluded that the membership of an object in a trained prototype can be established by computing the conformity and verifying the percentage values for each component of the model.
Concerning the runtime, the average processing time for an image containing several objects of interest is 70 ms for the open-source machine vision framework. The same image plane foreground was processed by the commercial application in 50 ms.
blob ID type area Hu0 Hu1 Hu2 Hu3 Hu4 Hu5 Hu6
13 I 126787 0.52 1.18 4.35 4.78 9.35 5.40 -10.20
17 r 124837 0.67 2.05 2.41 3.61 -6.71 -4.69 -6.87
18 r 113107 0.65 1.98 2.37 3.56 -6.55 -4.56 -7.01
263 circle 171358 0.80 4.79 5.78 9.34 -16.91 -11.74 -17.70
286 T 221521 0.56 2.26 1.74 3.23 5.72 4.37 6.39
291 nonL 150324 0.49 1.32 1.93 2.78 5.81 4.21 5.14
392 I 117118 0.51 1.16 5.12 5.54 10.88 6.14 -11.71
396 T 202846 0.55 2.28 1.68 3.12 5.52 4.26 -6.40
406 L 167974 0.50 1.34 1.96 2.79 5.56 3.94 -5.20
452 rest 131626 -0.82 -1.64 -0.94 -0.94 -1.88 -1.76 -2.94
The objective of the paper was to propose an open-source machine vision framework which operates with off-the-shelf equipment in manufacturing tasks and to demonstrate that the accuracy of the system is similar to that of a commercial vision application. Mathematical formulas for image processing (object recognition and locating) along with the calibration principle were presented, and their results were detailed in the experiments
section. The advantage of this open-source framework is that it reduces the implementa-
tion costs of industrial applications by eliminating the cost of the software and offers the
possibility to use cheaper cameras without affecting accuracy. The framework can be
customized for different types of control devices (like robot controllers, PLCs and indus-
trial computers) that build up the automation layer of manufacturing systems by using
an interaction protocol dedicated to each class of these partner resources, embedded in
the TCP standard communication protocol.
Future work will be oriented towards extending the machine vision framework from
2D to 3D object recognition and locating, and defining new interaction protocols with
PLCs controlling the material and operation flows in manufacturing workplaces.
References
1. International Federation of Robotics. https://ifr.org/. Accessed Apr 2020
2. Ford, M.: Rise of the Robots: Technology and the Threat of a Jobless Future. ISBN 978-0465059997 (2015)
3. Kang, S., Kim, K., Lee, J., Kim, J.: Robotic vision system for random bin picking with dual-arm robots. MATEC Web of Conferences 75, 07003, ICMIE 2016 (2016). https://doi.org/10.1051/matecconf/20167507003
4. Borangiu, T.: Intelligent Image Processing in Robotics and Manufacturing, pp. 1–650. Romanian Academy Press, Bucharest (2004). ISBN 973-27-1103-5
5. Saam, M., Viete, S., Schiel, et al.: Digitalisierung im Mittelstand: Status Quo, aktuelle
Entwicklungen und Herausforderungen (‘Digitalisation in SMEs: status quo, current trends
and challenges’ - our translation, in German only), research project of KfW Group (2016)
6. McFarlane, D., Ratchev, S., Thorne, A., Parlikad, A.K., Silva, L., Schonfuss, B., Hawkridge,
G., Tlegenov, Y.: Digital manufacturing on a shoestring: low cost digital solutions for SMEs,
Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future.
SOHOMA 2019, Studies in Computational Intelligence, Vol. 853, Springer (2020)
7. Open source computer vision – OpenCV. https://opencv.org/. Accessed Apr 2020
8. The Accord.NET Image Processing and Machine Learning Framework. https://accord-fra
mework.net/index.html. Accessed Apr 2020
9. Bellifemine, F., Caire, G., Greenwood, D.: Developing Multi-Agent Systems with JADE. Wiley (2007). ISBN 978-0-470-05747-6
10. Sharifzadeh, S., Biro, I., Kinnell, P.: Robust hand-eye calibration of 2D laser sensors using a
single-plane calibration artefact. Robot. Comput. Integrat. Manuf. 61, 101823 (2020). https://
doi.org/10.1016/j.rcim.2019.101823
11. Anton, F.D., Borangiu, T., Anton, S., Raileanu, S.: Using cloud computing to extend robot
capabilities. In: 26th International Conference on Robotics in Alpe-Adria-Danube Region,
RAAD 2017, Turin, Italy, 21–23 June 2017 (2017)
12. Korta, J., Kohut, P., Uhl, T.: OpenCV based vision system for industrial robot-based assembly
station: calibration and testing. PAK 60 (1/2014) (2014)
13. Huang, Z., Leng, J.: Analysis of Hu moment invariants on image scaling and rotation. Pro-
ceedings of 2nd International IEEE Conference on Computer Engineering and Technology
(ICCET’10), pp. 476–480, Chengdu, China (2010)
14. Mallick, S., Bapat, K.: Shape Matching using Hu Moments (C++/Python) (2018). https://
www.learnopencv.com/shape-matching-using-hu-moments-c-python/. Accessed Apr 2020
Using Cognitive Technologies as Cloud Services
for Product Quality Control. A Case Study
for Greenhouse Vegetables
1 Introduction
Quality control of manufactured parts, agricultural products (vegetables, fruits), food products (meat, eggs) and prepared food represents an important stage in the production value chain: it certifies the desired characteristics of products (e.g., the shape, degree of finishing, surface texture, correct and complete mounting of parts, the size and aspect of food products and the composition of prepared food), assures the safety of consumers, validates the timing of various processing stages and authorizes the start of new operations (e.g., termination of assembly or finishing, establishing the proper time of harvesting), and classifies products according to quality indicators expressed through size, shape and colour.
The quality control of vegetables is repeated at short time intervals (days, hours) due to their short growing cycles. Tomatoes are among the most popular vegetables grown in greenhouses. If they receive proper temperature and sufficient light, they can even be harvested twice a year; however, indoor conditions generally require more rigorous control to assure successful pollination of flowers and prevent the spread of diseases.
Tomatoes are also among the most consumed vegetables. According to statistics
from the Food and Agriculture Organization (FAO), around 170 million tons of fresh
and processed tomatoes were produced worldwide in 2014 [1]. The area covered with
tomato cultivation was 5 million hectares. Global tomato production has grown steadily
since 2000, by more than 54% from 2000 to 2014. China is the largest tomato producer
followed by the United States and India. Other major producers in the market are the
European Union and Turkey. Together these major tomato market producers account
for around 70% of global production. Mexico is the largest exporter of tomatoes in the
world followed by the Netherlands and Spain [2]. In 2016, Mexico, Netherlands and
Spain held respectively 25.1% ($ 2.1 billion), 19% ($ 1.6 billion), and 12.6% ($ 1.1
billion) of total tomato exports [2].
In the United States, approximately 16 million tons of tomatoes were produced in
2015; about 8% of total production was represented by fresh tomatoes that have much
higher prices than processed tomatoes. In 2015, the total values of fresh and processed
tomatoes in the United States were respectively $ 1.22 billion and $ 1.39 billion [3].
Florida and California have about two-thirds of U.S. fresh tomato production [4], while
California has about 95% of processed tomato production [5].
The United States is one of the leaders in the production of fresh tomatoes. In
2015, 1.35 billion kilograms of fresh tomatoes were produced here. Domestic produc-
tion accounted for about 40% of total domestic demand for fresh tomatoes; the rest of
the demand was met by imports, mostly from Mexico and Canada. Since 2000, the production of fresh tomatoes in the United States has followed a downward trend, decreasing from 1.6 billion kilograms in 2000 to 1.35 billion kg in 2015 (Fig. 1) [6]. One main reason was the increase in competition from Mexico.
Previous research related to greenhouse automation can be grouped into the following areas:

• Energy management
• IoT systems for monitoring and control
• Image processing in agriculture
• Robotics in greenhouses
The articles in the field of energy management focus on proposing solutions for the
efficiency of systems serving greenhouses in order to reduce energy consumption [10]
or the use of renewable energy sources such as photovoltaic panels [11].
The field of IoT solutions for greenhouses is the most active with a large number
of works dealing with these issues [12]. The main areas of interest are: the monitoring
of the parameters inside the greenhouse [13, 14], the control of these parameters with a
low energy consumption [15–18], and the automatic irrigation of plants [19–22].
Artificial vision in agriculture is mainly used to monitor plant growth [23–26], usually based on satellite images or images acquired with drones; these methods do not involve accurate identification of individual plants but rather identification of planted areas. Artificial vision is integrated with robotics in automatic
harvesting solutions using robots [27]. Robots are also used to transport materials to
predetermined locations inside greenhouses [28].
From the point of view of complete greenhouse automation, an interdisciplinary approach [29] is required to integrate management, control, forecasting, artificial intelligence
and other technologies so that the resulting solution allows the creation of independent
greenhouses.
This paper proposes a solution for monitoring and quality control of tomatoes in
greenhouses using advanced cognitive technologies accessed through Cloud services.
The monitoring system is mounted on a mobile platform that navigates inside the greenhouse, locates its position and acquires images that are then processed by a cognitive image recognition system. This system has the ability
to differentiate the fruit from the plant in order to classify the quality of the fruit and to
identify the diseases that affect the plant. The system can be used both to monitor the
development of plants inside the greenhouse and to check the quality of the fruits when
they are harvested.
The paper is structured in five sections: an introduction covering previous research in the field of greenhouse automation; cognitive systems; development of vision-based recognition models and their system integration; the system architecture; and experimental results, conclusions and possible further developments.
Watson can understand all forms of data, can interact naturally with people, and can learn and reason at scale (Fig. 3). Data, information and expertise create the foundation for working with Watson. Figure 3 shows examples of data that Watson can analyse and learn from in order to generate new conclusions and observations that have not been stated before.
Fig. 4. Creating Artificial Intelligence (AI) solutions using Watson services in IBM Cloud
The Watson visual recognition service is based on deep learning techniques. It uses deep learning algorithms to analyse scenes, objects, faces,
and other content in images. The results include keywords that provide content informa-
tion. There is a set of predefined models that provide results with high accuracy without
the need for training; it is also allowed to train custom models to create specialized
classes. By creating a custom classifier, one can use the visual recognition service to
recognize images that are not available using the pre-trained classification.
Watson’s visual recognition service can learn from example images that can be
uploaded to create a new classifier. Each example file is trained by comparison with all
the other uploaded files when the classifier is created, and positive examples are stored
as classes. These classes are grouped to define a single classifier while returning their
own scores. Figure 5 shows the process of Watson visual recognition using a custom
classifier.
A new custom classifier can be trained using several archived files, including files
that contain examples of positive or negative images. Images can be in jpg or png format.
At least two compressed files must be used, either two files with positive examples or
one file with positive examples and one with negative examples.
Compressed files that contain positive examples are used to create classes that define
the new classifier. The prefix specified for each positive example is used as a class name
in the new classifier. There is no limit to the number of positive example files that can
be uploaded in a single session. The compressed file containing negative examples is
not used to create a class inside the classifier but specifies what the new classifier is
not. Negative examples should contain images that do not depict the subject of any of
the positive examples. Only one file with negative examples can be specified in a work
session.
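The packaging convention described above (one compressed file of positive examples per class, with the file prefix serving as the class name, plus at most one negative archive) can be sketched with the standard library. The archive-naming scheme and the placeholder image bytes below are illustrative assumptions, not the exact Watson Studio convention.

```python
import io
import zipfile

def package_examples(class_images, negative_images=None):
    """Package training examples as compressed files for classifier training.

    `class_images` maps a class name to a list of (filename, image_bytes)
    pairs; each class becomes one '<class>_positive_examples.zip' whose
    prefix would serve as the class name in the new classifier.
    """
    archives = {}
    for cls, images in class_images.items():
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:
            for name, data in images:
                zf.writestr(name, data)
        archives[f"{cls}_positive_examples.zip"] = buf.getvalue()
    if negative_images:  # at most one negative archive per work session
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w") as zf:
            for name, data in negative_images:
                zf.writestr(name, data)
        archives["negative_examples.zip"] = buf.getvalue()
    return archives

# Placeholder bytes stand in for real jpg/png files
archives = package_examples(
    {"healthy": [("h1.jpg", b"\xff\xd8 placeholder")],
     "diseased": [("d1.jpg", b"\xff\xd8 placeholder")]},
    negative_images=[("background.jpg", b"\xff\xd8 placeholder")])
```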
The use of compressed files is recommended when using the Watson Studio interface
to define the classifier; this method allows faster file upload. However, the system is not limited to the use of compressed files but also allows the upload of image files not
found in an archive; this allows the creation of classifiers or the modification of existing
classifiers through an external program.
Figure 6 shows the steps for creating and learning a specialized classifier for visual
recognition.
1. Preparing data for training. Uploaded image files are used as positive and negative
examples for the training process.
2. Creating and training the classifier. In order to create the custom classifier the location
of training images must be specified and the visual recognition API is called.
3. Testing the custom classifier. In this step image classification is performed using the
new custom classifier and the classifier’s performance is measured.
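Step 3 above (measuring the classifier's performance) can be sketched generically: given (true label, predicted label) pairs from a labelled test set, compute per-class precision and recall plus overall accuracy. The report layout and labels below are assumptions for illustration, not part of the Watson workflow.

```python
from collections import Counter

def classification_report(pairs):
    """Per-class precision and recall from (true, predicted) label pairs,
    plus overall accuracy under the 'accuracy' key."""
    tp, pred_n, true_n = Counter(), Counter(), Counter()
    for truth, pred in pairs:
        pred_n[pred] += 1
        true_n[truth] += 1
        if truth == pred:
            tp[truth] += 1
    classes = sorted(set(pred_n) | set(true_n))
    report = {
        c: {"precision": tp[c] / pred_n[c] if pred_n[c] else 0.0,
            "recall": tp[c] / true_n[c] if true_n[c] else 0.0}
        for c in classes
    }
    report["accuracy"] = sum(tp.values()) / len(pairs)
    return report

# Hypothetical test-set results for two classes
report = classification_report([
    ("healthy", "healthy"), ("healthy", "diseased"),
    ("diseased", "diseased"), ("diseased", "diseased")])
```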
The mobile platform sends to the application on the server the acquired image, the position of the mobile platform and a timestamp to identify the time of acquisition.
The application on the server (written in C++ Builder) further contacts the Watson visual recognition service to detect and classify the objects in the image. To do this, two classifiers were trained in Watson Visual Recognition: a classifier for fruit recognition and a classifier for plant recognition. Each of the two classifiers has classes for the healthy plant as well as for the plant affected by various diseases.
When it is detected that there is a sick plant or fruit in the image, the acquired image,
the position of the platform and the timestamp are saved by the application on the server
in a database containing alarm events. This allows an operator to inspect the image and
validate whether the recognition was successful or not. If the recognition was successful,
human intervention in the greenhouse is required to remove or treat the affected plant. If
it is decided that the plant will be treated (there may be a larger area of affected plants)
the mobile platform system can be programmed so that for a specified period of days
it does not take pictures from that area; this will avoid multiple reporting of the same
problem. If the recognition was not successful and therefore the visual recognition system
reported false information, the image that was falsely classified can be reintegrated into
the classifier which can be retrained so that it can improve its performance using the
new image. Figure 7 below shows how the system works.
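The server-side alarm logic described above can be sketched as follows. The in-memory event list, the field names and the per-area suppression rule are simplifying assumptions standing in for the actual database and scheduling mechanism.

```python
import time

class AlarmStore:
    """Minimal sketch of the server-side alarm handling: save alarm events
    (image, position, timestamp) unless the area is temporarily suppressed
    because its plants are already being treated."""

    def __init__(self):
        self.events = []
        self.suppressed = {}   # area id -> suppression end (epoch seconds)

    def suppress_area(self, area, days):
        # skip a treated area for the specified number of days,
        # avoiding multiple reports of the same problem
        self.suppressed[area] = time.time() + days * 86400

    def report(self, image, position, area, timestamp=None):
        """Save an alarm event; return None if the area is suppressed."""
        if self.suppressed.get(area, 0) > time.time():
            return None
        event = {"image": image, "position": position, "area": area,
                 "timestamp": timestamp or time.time()}
        self.events.append(event)
        return event

store = AlarmStore()
first = store.report(image=b"\xff\xd8 placeholder", position=(12.5, 3.2),
                     area="row-3")
store.suppress_area("row-3", days=2)
second = store.report(image=b"\xff\xd8 placeholder", position=(12.5, 3.3),
                      area="row-3")
```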
The flow chart in Fig. 8 summarizes the main activities of the image processing
service. The image is sent to the Watson visual recognition service and processed as
follows:
• Fruit cracks
• Sunscald
• Catfacing
• Bacterial canker
• Anthracnose
• Viral diseases
For each disease 15 test images have been used. It has been found from tests that the
recognition of healthy plants or fruits is achieved with a percentage higher than 75% if
at least 50 positive images are used; however this score is relative and depends on the
quality of the images used both to learn the classifier and in the process of recognition
(resolution, the position of the affected part of the plant in the image). For the set of
images used for training (77 and 85 for diseases and 50 and 50 for good fruits/plants)
the score was over 90%.
Figure 9 below shows an example of recognition for a healthy tomato and a tomato
affected by blossom end rot.
We can conclude that the system provides accurate results if at least 50 good-quality positive images are used to train the classifier, both for defining the healthy plant and for each of the diseases to be detected.
76 F. Anton et al.
As for further development, the system could support a greater number of disease types (currently only one type of disease each for the fruit and the plant has been integrated, due to the lack of images). Future research will also be carried out with the AI-based vision system to classify and locate fruits for automatic harvesting.
References
1. FAO (Food and Agriculture Organization of the United Nations): FAOSTAT Database (2017).
http://faostat3.fao.org/
2. CIA (Central Intelligence Agency): The World Factbook, Field Listing: Exports – Com-
modities, CIA, Washington, DC (2017). https://www.cia.gov/library/publications/the-world-
factbook/fields/2049.html
3. USDA-AMS (United States Department of Agriculture, Agricultural Marketing Service):
Tomatoes, USDA-AMS, Washington, DC (2017). http://www.agmrc.org/commodities-pro
ducts/vegetables/tomatoes
4. Wu, F., Guan, Z., Suh, D.H.: The effects of tomato suspension agreements on market price dynamics and farm revenue. Applied Economic Perspectives and Policy, forthcoming (2018). https://doi.org/10.1093/aepp/ppx029
5. USDA-ERS (United States Department of Agriculture, Economic Research Service): Toma-
toes, USDA-ERS, Washington, DC (2017). https://www.ers.usda.gov/topics/crops/vegeta
bles-pulses/tomatoes.aspx
6. USDA-NASS (United States Department of Agriculture, National Agricultural Statistics Ser-
vice): Data and Statistics, USDA-NASS, Washington, DC (2016). https://www.nass.usda.
gov/Data_and_Statistics/index.php
7. European Commission, https://ec.europa.eu/info/sites/info/files/food-farming-fisheries/far
ming/documents/tomato-dashboard_en.pdf
8. European Commission, https://ec.europa.eu/info/sites/info/files/food-farming-fisheries/far
ming/documents/tomatoes-trade_en.pdf
9. European Commission, https://ec.europa.eu/info/sites/info/files/food-farming-fisheries/far
ming/ Documents / tomatoes-production_en.pdf
10. Iddio, E., Wang, L., Thomas, Y., McMorrow, G., Denzer, A.: Energy efficient operation and
modeling for greenhouses: a literature review. Renew. Sustain. Energy Rev. 117, p. 109480,
January 2020 (2020)
11. Yano, A., Cossu, M.: Energy sustainable greenhouse crop cultivation using photovoltaic
technologies. Renew. Sustain. Energy Rev. 109, 116–137, July 2019 (2019)
12. Jha, K., Doshi, A., Patel, P., Shah, M.: A comprehensive review on automation in agriculture
using artificial intelligence. Artif. Intell. Agric. 2, 1–12, June 2019 (2019)
13. Alper Akkaş, M., Sokullu, R.: An IoT-based greenhouse monitoring system with Micaz motes.
Procedia Comput. Sci. 113(2017), 603–608 (2017)
14. Postolache, O., Pereira, J.M., Girão, P.S., Monteiro, A.A.: Greenhouse environment: air and
water monitoring. In: Mukhopadhyay, S. (ed) Smart Sensing Technology for Agriculture and
Environmental Monitoring. Lecture Notes in Electrical Engineering, vol. 146, pp. 81–102.
Springer, Berlin, Heidelberg (2012)
15. Drakulić, U., Mujčić, E.: Remote monitoring and control system for greenhouse based on
IoT. In: Avdaković, S., Mujčić, A., Mujezinović, A., Uzunović, T., Volić, I. (eds) Advanced
Technologies, Systems, and Applications IV, Proceedings of the International Symposium on
Innovative and Interdisciplinary Applications of Advanced Technologies (IAT 2019), Lecture
Notes in Networks and Systems, vol 83, pp. 481–495. Springer, Cham (2020)
Using Cognitive Technologies as Cloud Services for Product Quality Control 77
Digital Twins in Manufacturing
and Beyond
Past and Future Perspectives on Digital Twin
Research at SOHOMA
Abstract. The concept of the Digital Twin has attracted notable research atten-
tion in recent years and has emerged as a prominent theme at recent editions of the
SOHOMA workshop. This paper aims to provide perspectives on past and future
Digital Twin research within the SOHOMA context. The paper describes the evo-
lution of the Digital Twin concept over the past decade of SOHOMA workshops
and reviews the contributions in terms of functions, architectures and implemen-
tation technologies. Considering the future of Digital Twin research within the
SOHOMA context, the paper identifies key enabling factors and challenges, and
proposes a strategic research focus to promote future impact.
1 Introduction
One of the first introductions to the elements of a Digital Twin was by Dr. Michael
Grieves in 2002 in a University of Michigan presentation to industry for the formation
of a Product Lifecycle Management (PLM) centre [1]. The term Digital Twin was first
introduced as the “Conceptual Ideal for PLM” and consisted of all the currently accepted
elements of a DT – real space, virtual space and a connection with data/information flow
between the virtual and real space. Around 2011, the term Digital Twin was introduced
in Virtually Perfect: Driving Innovative and Lean Products through Product Lifecycle
Management and was attributed to John Vickers of NASA [2]. However, some believe
that DT technology has its roots in a practice dating back to the 1960s, when NASA
applied basic twinning ideas by building physically duplicated systems on the ground
to match the systems in space. A well-known example was the Apollo 13 mission in
1970. Since those early beginnings, the DT became one of the top strategic technology
trends in 2017 [1] and has been named among Gartner’s Top 10 Strategic Technology Trends
in subsequent years. The Digital Twin has been identified as a key enabler for
Industry 4.0, as it constitutes a cornerstone for the development and effective integration
of cyber-physical production systems.
The most widely-accepted definition of a DT, as introduced by NASA in [3], is: “an
integrated multi-physics, multi-scale, probabilistic simulation of a system that uses the
best available physical models, sensor updates, fleet history, etc. to mirror the life of
its flying twin”. Grieves and Vickers (2017) distinguish between DT instances and
aggregates. A DT instance (DTI) mirrors its physical twin during its entire lifespan.
A DT aggregate (DTA), on the other hand, is not directly associated with a physical
counterpart, but is the aggregation of some DTIs and other DTAs. While a DTI can be
an independent structure, a DTA cannot. DTIs can, for example, be interrogated by a
DTA for their current state [4].
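The DTI/DTA relationship can be sketched in a few lines of code. This is an illustrative sketch only — the class and attribute names are assumptions, not taken from [4]:

```python
class DTInstance:
    """Mirrors one physical twin throughout its lifespan."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}      # latest mirrored state of the physical twin
        self.history = []    # state records accumulated over the lifespan

    def update(self, new_state):
        self.state = dict(new_state)
        self.history.append(dict(new_state))


class DTAggregate:
    """Has no direct physical counterpart; composes DTIs and other DTAs."""

    def __init__(self, members):
        self.members = members  # DTInstance or DTAggregate objects

    def current_state(self):
        # Interrogate members for their current state, recursing into DTAs
        report = {}
        for m in self.members:
            if isinstance(m, DTInstance):
                report[m.asset_id] = m.state
            else:
                report.update(m.current_state())
        return report


# A DTI can stand alone; a DTA exists only through its members
pump = DTInstance("pump-01")
pump.update({"rpm": 1450, "temp_C": 61.2})
cell = DTAggregate([pump])
```

Note that the aggregate holds no state of its own; it answers queries by interrogating its members, mirroring the dependency described above.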
A DT ideally creates a highly accurate digital model of the physical system in
cyberspace. Through the quality and fidelity of information, the DT can accurately
replicate and simulate the behaviour of the physical system [2, 5]. According to Tao
et al. [6], a DT can also provide a digital footprint of products by integrating geometry,
structure, behaviour, rules and functional properties. In the context of designing, setting
up and configuring an automation system for manufacturing, a DT is a set of computer
models that provide the means to design, validate and optimize a part, a product, a manufacturing
process or a production facility in cyberspace. A DT enables flexibility in
manufacturing by reducing the required time for product design, manufacturing process
design, system planning design and production facility design [7].
The combination of a physical production system and its corresponding DT forms the
fundamental building block of fully connected and flexible systems that can learn and
adapt to new demands. Ideas about the role of the DT are still developing at this stage.
Some of the roles postulated in recent literature are [7–10]: remote monitoring; predictive
analytics; simulating future behaviour; optimization and validation; documentation and
communication; and connection of disparate systems.
DTs have attracted significant research interest in various domains, even beyond
production and manufacturing. The importance and potential of the DT concept have been
reflected in the increasing number of DT-related contributions to the recent editions of
the SOHOMA workshops.
This paper reviews the research that has been presented at SOHOMA in the past
in papers that have considered or developed the DT concept. The review investigates
the needs and objectives for which DTs are intended, the architectures that have been
developed, and the implementation technologies that have been used. The paper then
turns its focus to the future – aiming to understand and predict the trajectory that DT
research might (or should) follow in future editions of SOHOMA.
The paper presents the past and future perspectives of DT research at SOHOMA in
two sections: Sect. 2 reviews the DT-related contributions in past SOHOMA workshops,
and Sect. 3 considers a future trajectory for DT research within the SOHOMA workshops
and community. A conclusion is offered in Sect. 4.
While only emerging recently, DT research has garnered notable interest in the
SOHOMA community. This section quantifies this emergence through an analysis of
DT-related contributions and provides a chronological description of the development
of DT research at past editions of the SOHOMA workshop.
The growth of DT research, as is evident in Fig. 1 and Fig. 2, is the result of contribu-
tions from several research teams from different countries. Teams are based in France,
Portugal, Romania, Slovenia, South Africa and the United Kingdom. While there has
been a strong focus on applications within the manufacturing domain, DT develop-
ments and implementations for other domains (e.g. maritime, and building and asset
management) have also been presented.
Fig. 2. The increase in the percentage of papers that refer to DTs in SOHOMA editions
The DT idea was presented in the form of a “digital mirror” in [11] and an “observer”
in [12], and as “objects in the real world are linked with the virtual world” in [13].
In the earliest mentions, the DT is considered closely integrated, and even synonymous,
with discrete-event simulation. In retrospect, the session at SOHOMA 2016 on Virtualization
and Simulation in Computing-Oriented Industry and Services (see [14–17])
focused strongly on this aspect of the DT (under different terminology). The
term DT first appears in the proceedings of SOHOMA 2017, as used by both [18] and
[19]. In both papers, the function of mimicking the behaviour of the physical counterpart
is highlighted.
available in the DT, the DT is not only used by the control system and can have many more benefits.
ARTI, pointing out the difference between beings and agents, also implies that computer
agents are not necessarily the development primitives of DTs. This role of the DT, and
its importance as an enabler for cyber-physical systems within Industry 4.0, is supported
and further evaluated by a collection of reviews from the SOHOMA community [22].
Furthermore, many of the proposals of the following years, detailed in the next section,
build on this position.
• Support data and information exchange between physical and digital worlds [24].
• Gather and aggregate data from the physical world, from multiple sources [28].
• Couple the virtual representation to its physical counterpart [27].
86 K. Kruger et al.
• Store historical data of the physical twin over its entire lifespan [28].
Building on the above-mentioned supporting functions and benefits, the papers fre-
quently refer to the high-level functions, or roles, that DTs are envisioned to fulfil. From
[24, 28, 31] these roles can be summarized as follows: remote monitoring, predictive
maintenance, simulation of “what-if” scenarios, and planning and optimization.
Three regimes can be distinguished in the above four roles: Firstly, some roles require
an emulation of the physical twin (i.e. remote monitoring that reflects the current opera-
tion). Secondly, some roles rely on a simulation model of the physical twin to predict its
future behaviour, either using historical information (e.g. predictive maintenance) or a
combination of historical information and chosen scenarios (e.g. planning and “what-if”
simulations). The third regime, control, is also focused on the future, but is aimed at
affecting the physical twin’s behaviour (e.g. planning and optimization). The simulation
regime contains the roles that most significantly distinguish a DT from a supervisory
control and data acquisition (or SCADA) system. There is some disagreement amongst
researchers whether the simulation and/or control regimes should be considered to be
part of the DT, as will be pointed out below.
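As a minimal sketch of the distinction, the three regimes can be exposed as separate functions on one DT. All names and the placeholder models are illustrative assumptions, not drawn from the reviewed papers:

```python
class DigitalTwin:
    """A sketch separating the three DT regimes distinguished in the text."""

    def __init__(self):
        self.mirrored_state = {}   # kept in sync with the physical twin
        self.history = []          # accumulated past states

    # Emulation regime: reflect the current operation (e.g. remote monitoring)
    def observe(self, sensed):
        self.mirrored_state = dict(sensed)
        self.history.append(dict(sensed))

    # Simulation regime: predict future behaviour from history and a scenario
    def simulate(self, scenario, horizon):
        # Placeholder model: extrapolate the last observed throughput
        last = self.history[-1]["throughput"]
        return [last * scenario.get("speed_factor", 1.0)] * horizon

    # Control regime: derive an action intended to affect the physical twin
    def decide(self, target):
        gap = target - self.mirrored_state["throughput"]
        return {"adjust_speed": gap > 0}


dt = DigitalTwin()
dt.observe({"throughput": 90})                        # emulation
plan = dt.simulate({"speed_factor": 1.1}, horizon=3)  # simulation
action = dt.decide(target=100)                        # control
```

A SCADA-like system would provide only the first method; the latter two are what the simulation and control regimes add, and are precisely the parts whose inclusion in the DT is debated.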
The advancement of DT research and the wide adoption of DT solutions benefit greatly
from the development of reference architectures to guide the design and implementation.
In recent editions of SOHOMA, several papers presented such reference architectures.
As mentioned in Sect. 2.1, the introduction of the ARTI architecture presented a blueprint
for the creation and integration of DTs within holonic systems. ARTI requires that every
system component be classified along three dimensions: Activities or Resources, Types
or Instances, and Intelligent Beings or Intelligent Agents. While the classification along the
first two dimensions was present in ARTI’s predecessor, PROSA, the last classification
represents an important change in approach. This classification allows for the separation
of decision-making and the reflection of reality. Intelligent Agents should encapsulate
the decision-making functionality, while Intelligent Beings should reflect the reality of
their corresponding element in the world-of-interest. Intelligent Beings thus represent
DTs [20, 21].
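The three ARTI dimensions can be captured in a small data model. The dimension values follow ARTI [20, 21]; the component names and class layout are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum


class Kind(Enum):
    ACTIVITY = "activity"
    RESOURCE = "resource"


class Level(Enum):
    TYPE = "type"
    INSTANCE = "instance"


class Nature(Enum):
    BEING = "intelligent being"   # reflects reality: the DT
    AGENT = "intelligent agent"   # encapsulates decision-making


@dataclass
class ArtiComponent:
    name: str
    kind: Kind
    level: Level
    nature: Nature

    def is_digital_twin(self) -> bool:
        # In ARTI, Intelligent Beings represent DTs
        return self.nature is Nature.BEING


# The reality reflection of a specific machine is a DT...
mill_being = ArtiComponent("mill-3", Kind.RESOURCE, Level.INSTANCE, Nature.BEING)
# ...while its decision-making counterpart is an Intelligent Agent, not a DT
mill_agent = ArtiComponent("mill-3-scheduler", Kind.RESOURCE,
                           Level.INSTANCE, Nature.AGENT)
```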
The architectures developed in [24, 27–29] have a shared characteristic – these archi-
tectures aim to encapsulate the functionality of the DT in layers. This clear encapsulation
is arguably the result of the holonic systems influence in their design. The architectures
developed in [24] and [28] (as illustrated in Fig. 3 and Fig. 4, respectively) have clearly
defined functionality for each of the architecture layers. A similar, yet more detailed and
possibly more context-specific architecture was developed in [27].
The architectures developed in [24] and [28] propose very similar functionality, as
encapsulated within each of the defined layers of the DT. At the lowest level of these
architectures are the interfaces to the physical twin, where data is gathered through smart
sensors, embedded devices, low-level controllers and data acquisition systems. In both
cases, Open Platform Communications Unified Architecture (OPC UA) (discussed in
Sect. 2.4) is proposed for the communication of this gathered data to the cyber levels
of the architecture – in essence, Layer 3 in [24] and the Data transmission layer in
[28] are equivalent in their functionality. Both architectures emphasize the need for data
aggregation, as achieved in Layer 4 in [24] and the Data update and aggregation layer
in [28]. This function aims to convert raw sensed data into contextualized information
and, in the process, reduces the amount of data that must be processed and analysed
within the DT. Database storage for historical information is used to support the highest
level of functionality in both architectures. While [28] directly specifies this level in
the architecture as Analysis and decision-making, [24] infers this function by providing
decision-makers with access to emulation and simulation functions that build on the DT
data.
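The aggregation function shared by these architectures — condensing raw sensed data into contextualized information before it reaches the analysis layers — can be sketched as follows. The field names, window size and alarm threshold are illustrative assumptions, not taken from [24] or [28]:

```python
def aggregate(raw_samples, window=10, alarm_temp_C=80.0):
    """Condense a window of raw sensor samples into one contextualized record."""
    window_samples = raw_samples[-window:]
    temps = [s["temp_C"] for s in window_samples]
    summary = {
        "samples": len(window_samples),
        "temp_mean_C": sum(temps) / len(temps),
        "temp_max_C": max(temps),
        "alarm": max(temps) >= alarm_temp_C,  # context added at this layer
    }
    return summary  # one record replaces `window` raw samples


raw = [{"temp_C": 60.0 + i} for i in range(10)]
record = aggregate(raw)
```

The layers above this one then process a single contextualized record instead of the full raw stream, which is the data-reduction benefit both architectures claim.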
When comparing these architectures with the regimes mentioned in the previous
section, it is notable that the top layer in [28] emphasises the simulation regime, but also
mentions the control regime and explicitly indicates the functionality to implement it.
The sixth layer in [24] explicitly mentions the emulation and
simulation regimes, but implies that a control regime is also provided through
bidirectional information flow.
Another architecture for the DT, developed in [32], was guided by the ARTI architec-
ture. This architecture, developed in the context of enabling energy-aware resources, is
illustrated in Fig. 5. The architecture makes a clear distinction between the functions of
reality reflection and decision making. Reality reflection is achieved through Intelligent
Beings, which in this case constitute the DT. Decision-making functionality is mapped
to the Intelligent Agents. In this architecture, the DT is focused solely on the emulation
regime, with the simulation and control regimes present, but outside the DT.
The Six-Layer Architecture for Digital Twins (SLADT) developed in [24] was
extended to support aggregation in [30]. The SLADT with aggregation (SLADTA)
allows for the creation of a hierarchical (or even hybrid) system of DTs, as is shown in
Fig. 6. SLADTA aims to support the scalability of DT solutions, by reducing complexity
through encapsulation and modularity.
Several technologies are identified for realizing communication, data storage, and
simulation and analysis in DT implementations.
Communication
Upon inspection of the presented DT implementations, it is clear that communication
generally occurs over two interfaces: the interface between the real and virtual worlds
(i.e. between the physical twin and the DT), and the interface
between digital systems/processes that are local and those that reside on the internet
(or cloud). [33] argued that at both these communication interfaces there exists a need
for heterogeneous communication to support the vast variety of devices, software, and
legacy systems.
For the data exchange between the physical twin and the DT, OPC UA has been used.
[24] identified the vendor-neutrality and security of OPC UA as valuable characteristics
for DT implementation within the context of Industry 4.0. [27] and [32] also reported
using Modbus TCP, which is likewise vendor-neutral, for communication at this level in
their implementations. Such bus communication would be particularly appropriate if the
DT interfaces directly with the sensors, rather than through a controller in the physical
twin.
For the communication between the local components of the DT digital implemen-
tation and the internet or cloud, several technologies appear to be suitable. [27] indicated
that the Devices Profile for Web Services (DPWS) and Representational State Transfer
(REST) interfaces were used in their case study implementation, with data formatted in
the JavaScript Object Notation (JSON). [33] identified Message Queue Telemetry Trans-
port (MQTT) as a suitable communication protocol, while [24] used the Structured Query
Language (SQL) to communicate with a cloud-based database.
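To illustrate the JSON formatting reported in [27], a DT state update could be serialized for a REST or MQTT interface as in the following sketch; the message fields and asset name are assumptions, not taken from the cited implementations:

```python
import json
from datetime import datetime, timezone


def make_update(asset_id, state):
    """Serialize a DT state update as a JSON message for a REST/MQTT interface."""
    message = {
        "asset": asset_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "state": state,
    }
    return json.dumps(message)


payload = make_update("conveyor-2", {"speed_mps": 0.35, "running": True})
decoded = json.loads(payload)  # a subscriber recovers the structured update
```

Because JSON is self-describing and language-neutral, the same payload can serve both a REST endpoint and an MQTT topic, which suits the heterogeneous device landscape noted in [33].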
Data Storage
There are many suitable cloud-based database options available to the developers of
DTs. Among these options, [24] used the Google Cloud Platform and [28] selected the
IBM CloudBurst private cloud platform.
Shifting the focus from the past to the future, this section aims to describe a possible
trajectory of DT research for the coming editions of SOHOMA. DT research is expected
to have notable momentum and opportunities to build on; however, several
challenges must still be addressed. The section thus highlights the existing enablers and
challenges to DT research, and offers a strategy for ongoing and future DT research
endeavours.
3.1 Enablers
classical concepts of the workshop series (holonic control, for example) in the organization
of these companies. DTs are currently following the same path, with some early
industrial implementations expected to appear in the next couple of years, representing
one of the future directions of this topic in SOHOMA.
3.2 Challenges
A key challenge exists in the development of DT software. DTs are broadly meant to
become middleware interfacing all sorts of applications with the manufacturing system.
This middleware is also meant to integrate reconstruction and forecasting models [32] to
enhance the quality and quantity of available data. However, all these objectives, combined
with the high variability in the architectures and technologies of the targeted applications,
tend to increase the complexity of software development. This complexity might become
one of the main barriers to achieving actual DT implementations on an industrial scale. The
SOHOMA workshop series is a forerunner in a domain that should gain major interest in
the next few years and attract researchers from communities more oriented towards
software development.
Further challenges relate to achieving real-world impact through industrial appli-
cations of DTs. In this context, the first challenge is the validation and evaluation of
DT functionality and performance. A particular challenge in evaluating alternative DT
architectures and implementations is the lack of widely accepted benchmarks and standards.
The different life-cycle phases of the DT itself, as well as of the underlying system
it mirrors, will require different benchmarks. For example, the benchmarks in the initial
development phase of a DT, where the immediate development context (e.g. the team’s
expertise and tools) can be influenced, will be different from the maintenance and support
phase where the future context is less certain. Also, some figures of merit are difficult
to quantify in a research context (e.g. availability) and others are highly dependent on
the expertise and experience of the persons doing the research (e.g. customizability and
maintainability).
In addition to the lack of benchmarks and standards, there is currently still a shortage of
real-world, industrial case studies and applications. [34] mention that the development
of the DT is still in its infancy, as the literature (at the time) mainly presented conceptual
ideas and architectures without concrete case studies. Although many papers exist
on DTs for manufacturing systems, there is still little concrete evidence of DT implementation
and evaluation in real-world settings. Many existing solutions and platforms
already provide communication in one direction for monitoring and analytical purposes,
but there is insufficient evidence of bidirectional communication.
Considering the potential for industrial applications, a key concern is that of cybersecurity.
With more and more devices migrating towards an interconnected environment,
where devices in the real world are connected to or through cyberspace, the risk of
infiltration through cyber-attacks increases. [35] mentioned that “CEOs see cybersecurity as
the number one threat to the global economy over the next five to ten years”. [36] further
mentions that 80% of enterprises are not equipped with cybersecurity prevention and
mitigation plans. Security threats that Industry 4.0 may introduce can be broadly grouped
as [37]: data loss or corruption, intellectual property breaches, and Denial-of-Service
(DoS). It is therefore sensible to integrate cybersecurity best practices when developing
functional elements of DTs, it would enable groups to focus more on the novel aspects
in new architectures and applications. Some agreement on the architectures (particu-
larly the generic functional units), as well as communication protocols and ontologies,
are necessary precursors for such collaboration. To gain maximum value out of collab-
oration with industry, it is necessary that research teams aim to build on the existing
technology and expertise of industry partners and ensure that developed architectures
support integration with existing tools and solutions. In fact, the realization and mainte-
nance of complex real-world DT applications will be extremely challenging without the
support of industry. To this end, the SOHOMA community, with its network of diverse
industry partners, is an environment conducive to developing the foundation for effective
collaboration.
On a broader scope, the SOHOMA community can drive major evolutions in the concept
and implementation of DTs. Currently, a loose consensus is emerging on the manufacturing
aspect, with applications to automated production cells. Many research trends
emerging in the community would benefit from integrating, or being integrated into, the
notion of DT. Among others, the notion of cloud manufacturing, for example, can be
highlighted. Being able to enhance the visualization of the actual state of the systems,
in real-time over the internet, would bring about major changes in the way the systems
are controlled in the future. Another interesting research trend is the relationship with
humans: should humans use the DT, or are they modelled inside the DT, or both? What
are the best augmented-reality visualization tools that can be connected to the DT for
providing decision support in real-time to production managers and operators? Simi-
larly, sustainable manufacturing (especially from an energy point of view) could also be
greatly influenced by the notion of DT: are we able to do real-time modelling, predic-
tion and evaluation of the energy consumption of every element of our system? These
research questions are interesting, challenging and have great potential for impact, and
should continue to stimulate DT research in the SOHOMA community.
Finally, in support of the outlined strategy, the research community should maintain
an attitude of constructive scepticism and place high value on the evaluation of
DT applications. A strong focus on validation and evaluation will serve to address the
challenge of lacking effective mechanisms for evaluating DT architectures and
implementations. It is also crucial for the researchers to communicate their development
and applications to the research community at events like SOHOMA and engage in the
discussions on key issues.
4 Conclusion
The Digital Twin (DT) has emerged as a key enabler for cyber-physical systems and,
as such, Industry 4.0. Considering the recent editions of SOHOMA, DT research has
gained notable traction and resulted in an increasing number of contributions. While
several papers in these editions recognize the DT concept as a critical aspect of the Industry
4.0 and CPPS landscape, truly valuable contributions have been made by papers which
developed reference architectures for the design and implementation of DTs.
This paper provided a review of the DT-related papers at SOHOMA, which focused
on the functions and roles of DTs, the presented reference architectures, and the most
prominent implementation technologies. The review is followed by a discussion of the
future trajectory of DT research in the context of SOHOMA. The discussion highlights
some aspects that may enable and challenge DT research in the future, and attempts to
outline a research strategy for the SOHOMA community.
The enabling factors for DT research are summarized as follows:
The paper recommends that the DT research community continues with a strategy
that balances the development of generic architectures and platforms, with context-
specific applications, which can simultaneously support research collaboration and
industry impact. Several interesting research questions are identified, which are expected
to continue to stimulate DT research for future editions of the SOHOMA workshop.
References
1. Miskinis, C.: The history and creation of the digital twin concept (2019). https://www.challe
nge.org/insights/digital-twin-history/. Accessed 28 May 2020
2. Grieves, M.: Digital Twin: Manufacturing Excellence through Virtual Factory Replication.
Melbourne (2014)
3. Shafto, M., Conroy, M., Doyle, R., Glaessgen, E.: Modeling, simulation, information
technology & processing roadmap. Technology Area 11, NASA (2010)
4. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emergent behav-
ior in complex systems. In: Kahlen, F.J., Flumerfelt, S., Alves, A. (eds.) Transdisciplinary
Perspectives on Complex Systems. Springer, Cham (2017)
5. Vachalek, J., Bartalsky, L., Rovny, O., Sismisova, D., Morhac, M., Loksik, M.: The digital
twin of an industrial production line within the industry 4.0 concept. In: Proceedings of the
2017 21st International Conference on Process Control, pp. 258–262 (2017)
6. Tao, F., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Sui, F.: Digital twin-driven product design,
manufacturing and service with big data. Int. J. Adv. Manuf. Technol. 94(9–12), 3563–3576
(2018)
7. Feuer, Z., Weissman, Z.: The value of the digital twin (2017). https://community.plm.automa
tion.siemens.com/t5/Digital-Transformations/The-value-of-the-digital-twin/ba-p/385812.
Accessed 6 June 2020
8. Marr, B.: What is digital twin technology – and why is it so important?
(2017). https://www.forbes.com/sites/bernardmarr/2017/03/06/what-is-digital-twin-techno
logy-and-why-is-it-so-important/#26203f1c2e2a. Accessed 6 June 2020
9. Martin, J.: The value of automation and power of the digital twin (2017). https://newsignat
ure.com/articles/value-automation-power-digital-twin/. Accessed 6 June 2020
10. Oracle: Digital twins for IoT applications: A comprehensive approach to implementing IoT
digital twins (white paper). Redwood shores (2017)
11. Van Belle, J., Philips, J., Ali, O., Saint Germain, B., Van Brussel, H., Valckenaers, P.: A service-
oriented approach for holonic manufacturing control and beyond. In: Borangiu, T., Thomas,
A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing
Control. Studies in Computational Intelligence, vol. 402, pp. 1–20 (2011)
12. Cardin, O., Castagna, P.: Myopia of service oriented manufacturing systems: benefits of data
centralization with a discrete-event observer. In: Borangiu, T., Thomas, A., Trentesaux, D.
(eds.) Service Orientation in Holonic and Multi-Agent Manufacturing Control. Studies in
Computational Intelligence, vol. 402, pp. 197–210 (2011)
13. Thomas, A., Trentesaux, D.: Are intelligent manufacturing systems sustainable? In: Service
Orientation in Holonic and Multi-Agent Manufacturing and Robotics. Springer Stud. Comput.
Intell. 544, 3–14 (2013)
14. Dobrescu, R., Merezeanu, D.: Simulation platform for virtual manufacturing systems. In:
Borangiu, T., Trentesaux, D., Thomas, A., Leitao, P., Barata Oliviera, J. (eds.) Service Orien-
tation in Holonic and Multi-Agent Manufacturing, Proceedings of SOHOMA 2016. Studies
in Computational Intelligence, vol. 694, pp. 395–404. Springer, Cham (2017)
15. Rocha, A., Barroca, P., Dal Maso, G., Barata Oliviera, J.: Environment to simulate distributed
agent based manufacturing systems. In: Borangiu, T., Trentesaux, D., Thomas, A., Leitao,
P., Barata Oliviera, J. (eds.) Service Orientation in Holonic and Multi-Agent Manufacturing,
Proceedings of SOHOMA 2016. Studies in Computational Intelligence, vol. 694, pp. 405–416.
Springer, Cham (2017)
16. Rocha, A., Rodrigues, M., Barata Oliviera, J.: An evolvable and adaptable agent based smart
grid management – a simulation environment. In: Borangiu, T., Trentesaux, D., Thomas,
A., Leitao, P., Barata Oliviera, J. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing, Proceedings of SOHOMA 2016. Studies in Computational Intelligence, vol.
694, pp. 417–426. Springer, Cham (2017)
17. Kruger, K., Basson, A.: Validation of a holonic controller for a modular conveyor system
using an object-oriented simulation framework. In: Borangiu, T., Trentesaux, D., Thomas,
A., Leitao, P., Barata Oliviera, J. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing, Proceedings of SOHOMA 2016. Studies in Computational Intelligence, vol.
694, pp. 427–436. Springer, Cham (2017)
18. Zupan, H., Zerovnik, J., Herakovik, N.: Local search with discrete event simulation for the
job shop scheduling problem. In: Borangiu, T., Trentesaux, D., Thomas, A., Cardin, O. (eds.)
Service Orientation in Holonic and Multi-Agent Manufacturing, Proceedings of SOHOMA
2017. Studies in Computational Intelligence, vol. 803, pp. 371–380. Springer, Cham (2018)
19. Derigent, W., Thomas, A.: Situational awareness in product lifecycle information systems. In:
Borangiu, T., Trentesaux, D., Thomas, A., Cardin, O. (eds.) Service Orientation in Holonic
and Multi-Agent Manufacturing, Proceedings of SOHOMA 2017. Studies in Computational
Intelligence, vol. 803, pp. 127–136. Springer, Cham (2018)
Past and Future Perspectives on Digital Twin Research at SOHOMA 97
20. Valckenaers, P.: ARTI reference architecture – PROSA revisited. In: Borangiu, T., Trente-
saux, D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing, Proceedings of SOHOMA 2018. Studies in Computational Intelligence, vol.
803, pp. 1–19. Springer, Cham (2019)
21. Valckenaers, P.: Perspective on holonic manufacturing systems: PROSA becomes ARTI.
Comput. Ind. 120, 103226 (2020)
22. Borangiu, T., Cardin, O., Babiceanu, R., Giret, A., Kruger, K., Raileanu, S., Weichhart, G.:
Scientific discussion: open reviews of “ARTI reference architecture – PROSA revisited”. In:
Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic
and Multi-Agent Manufacturing, Proceedings of SOHOMA 2018. Studies in Computational
Intelligence, vol. 803, pp. 20–37. Springer, Cham (2019)
23. Pipan, M., Protner, J., Herakovic, N.: Integration of distributed manufacturing nodes in smart
factory. In: Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.) Service Orienta-
tion in Holonic and Multi-Agent Manufacturing, Proceedings of SOHOMA 2018. Studies in
Computational Intelligence, vol. 803, pp. 424–435. Springer, Cham (2019)
24. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer digital twin architecture for a
manufacturing cell. In: Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.) Service
Orientation in Holonic and Multi-Agent Manufacturing. Proceedings of SOHOMA 2018.
Studies in Computational Intelligence, vol. 803, pp. 412–423. Springer, Cham (2019)
25. Selma, C., Tamzalit, D., Mebarki, N., Cardin, O., Bruggeman, L., Theriot, D.: Industry 4.0
and service companies: the case of the French postal service. In: Borangiu, T., Trentesaux,
D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic and Multi-Agent Manu-
facturing. Proceedings of SOHOMA 2018. Studies in Computational Intelligence, vol. 803,
pp. 436–447. Springer, Cham (2019)
26. Berdal, Q., Pacaux-Lemoine, M., Bonte, T., Trentesaux, D., Chauvin, C.: A benchmarking
platform for human-machine cooperation in Industry 4.0. Submitted to SOHOMA (2020)
27. Borangiu, T., Oltean, E., Raileanu, S., Anton, F., Anton, S., Iacob, I.: Embedded digital twin
for ARTI-type control of semi-continuous production processes. In: Borangiu, T., Trentesaux,
D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies in Computational
Intelligence, vol. 853, pp. 113–133. Springer, Cham (2020)
28. Raileanu, S., Borangiu, T., Ivanescu, N., Morariu, O., Anton, F.: Integrating the digital twin
of a shop floor conveyor in the manufacturing control system. In: Borangiu, T., Trentesaux,
D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies in Computational
Intelligence, vol. 853, pp. 134–145. Springer, Cham (2020)
29. Lu, Q., Xie, X., Heaton, J., Parlikad, A., Schooling, J.: From BIM towards digital twin:
strategy and future development for smart asset management. In: Borangiu, T., Trentesaux,
D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies in Computational
Intelligence, vol. 853, pp. 392–404. Springer, Cham (2020)
30. Redelinghuys, A., Kruger, K., Basson, A.: A six-layer architecture for digital twins with
aggregation. In: Borangiu, T., Trentesaux, D., Leitao, P., Giret Boggino, A., Botti, V. (eds.)
Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future.
SOHOMA 2019. Studies in Computational Intelligence, vol. 853, pp. 171–182, Springer,
Cham (2020)
31. Taylor, N., Human, C., Kruger, K., Bekker, A., Basson, A.: Comparison of digital twin devel-
opment in manufacturing and maritime domains. In: Borangiu, T., Trentesaux, D., Leitao, P.,
Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent Manufacturing
Systems for Industry of the Future. SOHOMA 2019. Studies in Computational Intelligence,
vol. 853. Springer, Cham (2020)
32. Cardin, O., Castagna, P., Couedel, D., Plot, C., Launay, J., Allanic, N., Madec, Y., Jegouzo, S.:
Energy aware resource in digital twin: the case of injection moulding machines. In: Borangiu,
T., Trentesaux, D., Leitao, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic
and Multi-agent Manufacturing Systems for Industry of the Future. SOHOMA 2019. Studies
in Computational Intelligence, vol. 853, pp. 183–194. Springer, Cham (2020)
33. Andre, P., Azzi, F., Cardin, O.: Heterogenous communication middleware for digital twin
based cyber manufacturing systems. In: Borangiu, T., Trentesaux, D., Leitao, P., Giret Bog-
gino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent Manufacturing Systems
for Industry of the Future. SOHOMA 2019. Studies in Computational Intelligence, vol. 853,
pp. 146–157. Springer, Cham (2020)
34. Kritzinger, W., Karner, M., Traar, G., Henjes, J., Sihn, W.: Digital twin in manufacturing:
a categorical literature review and classification. IFAC-PapersOnLine. 51(11), 1016–1022
(2018)
35. Taylor, C.: Cybersecurity is the biggest threat to the world economy over the next decade, CEOs say (2019).
https://www.cnbc.com/2019/07/09/cybersecurity-biggest-threat-to-world-economy-ceos-say.html. Accessed 03 June 2020
36. Bocetta, S.: 10 Most Urgent Cybersecurity Issues in 2019 (2019).
https://www.csoonline.com/article/3501897/10-most-urgent-cybersecurity-issues-in-2019.html. Accessed 28 May 2020
37. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: Cybersecurity considerations for indus-
trie 4.0. In: D. Dimitrov, D. Hagedorn-Hansen, K. von Leipzig (eds.) International Con-
ference on Competitive Manufacturing (COMA 19). Knowledge Valorisation in the Age of
Digitalization. Stellenbosch, pp. 266–271 (2019)
38. Dignan, L.: GE aims to replicate Digital Twin success with security-focused Digital Ghost (2017).
https://www.zdnet.com/article/ge-aims-to-replicate-digital-twin-success-with-security-focused-digital-ghost/. Accessed 23 May 2018
Decision Support Based on Digital Twin
Simulation: A Case Study
1 Introduction
Industry 4.0 is changing the manufacturing industry landscape, with digitisa-
tion and the value of data as its foundations. Companies that consider adopting
the Industry 4.0 paradigm have to bear in mind the application of, amongst
others, Cyber-Physical Systems (CPS), Artificial Intelligence (AI) and the
Internet of Things (IoT) [5]. In the manufacturing environment, the implemen-
tation of CPS comprises the digitisation of systems, merging the real and
virtual worlds. This merging has provided the opportunity for the Digital Twin
to emerge as one of the key enabling technologies.
The concept of the Digital Twin was proposed by M. Grieves in 2002, defining
as its features the real space, the virtual space and the connection between
them [7]. With the 4th industrial revolution, the rapid evolution of certain
technologies, e.g., IoT, simulation, Big Data and Machine Learning, has boosted
this approach, making its application in the manufacturing domain a reality
[1,2].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 99–110, 2021.
https://doi.org/10.1007/978-3-030-69373-2_6
100 F. Pires et al.
The scientific and industrial worlds have been directing their attention towards
Digital Twin technology. According to [10], interest in and research on the
Digital Twin have grown not only in the academic field but also among industry
practitioners. In 2017, a study of the Digital
Twin market showed that it is expected to reach $15.66 billion by 2023 [15]. A
new study conducted in 2019 showed that the Digital Twin market would reach
$35.8 billion by 2025 [16].
Although there has been a growing interest of the scientific community in the
Digital Twin, there is still a lack of applications that include the decision support
functionality [10,12], mainly using simulation and what-if engines. Bearing this
in mind, the main goal of this paper is to reduce the gap that exists in the cur-
rent research literature related to Digital Twin applications in the manufacturing
domain, including decision support capabilities. The main scientific contribution
of this paper is the development of a conceptual Digital Twin architecture that
includes simulation capabilities to support decision-making, and its application,
as a proof of concept, to a case study in the manufacturing domain. The
presented case study is a flexible production cell with monitoring and decision
support obtained through Digital Twin-based simulation. The experimental
results verify the applicability of using the Digital Twin to support production
managers in decision-making when a change in operating conditions occurs.
The rest of the paper is organised as follows. Section 2 presents the Digital
Twin concept in the manufacturing sector, and Sect. 3 reviews decision support
approaches based on the Digital Twin concept and introduces the proposed
system architecture. Section 4 describes the implementation of the Digital Twin
simulation architecture in the case study and analyses the achieved results.
Finally, Sect. 5 presents the conclusions and points out some future work.
The manufacturing domain has evolved since the 1st industrial revolution, with
the invention of the steam engine as a new source of energy. Today, the world
finds itself in the fourth industrial revolution [3,4].
The German government launched the Industry 4.0 initiative to drive the digital
revolution in the manufacturing industry [5]. According to [5], the manufacturing
environment compliant with the Industry 4.0 principles comprises the implemen-
tation of CPS, requiring the digitisation of systems and the convergence between
the real and digital worlds. Bearing this in mind, the digitisation of the manu-
facturing environment has been the main focus of both academia and industry
in the last few years. In this context, the Digital Twin concept has emerged
and received attention in the scientific community as a promising new field of
investigation for the digitisation of the manufacturing environment [6].
Grieves proposed the foundations of the Digital Twin technology in 2002.
At the time, the concept, called “Mirrored Spaces Models”, comprised features such
as the real space, the virtual space and their connections allowing the flow of
data [7]. In 2011, the concept was adopted by the US National Aeronautics and
Space Administration (NASA), entering the field of aeronautics to determine
the health of aircraft and predict their structural life [8].
From this point on, the evolution of the concept has grown rapidly covering
several sectors, such as manufacturing. One of the first works to bring the concept
of the Digital Twin to the manufacturing sector was [9], which defined the Digital
Twin as “the coupled model of the real machine that operates in the cloud
platform and simulates the health condition with an integrated knowledge from
both data-driven analytical algorithms as well as other available physical
knowledge”. The concept keeps growing and “has evolved into a broader concept that
refers to a virtual representation of manufacturing elements such as personnel,
products, assets and process definitions, a living model that continuously updates
and changes as the physical counterpart changes to represent status, working
conditions, product geometries and resource states in a synchronous manner”
[10]. Another recent definition was provided by [6] that defines the Digital Twin
as “a method or tool for modelling and simulating a physical entity’s status
and behaviour”, that can “realise the interconnection and intelligent operation
between the physical manufacturing space and virtual space”.
The growing interest in Digital Twin technology is illustrated in Fig. 1 that
presents the evolution in time of the number of scientific papers related to the
Digital Twin retrieved from the Scopus database using the search query TITLE-
ABS-KEY (“digital twin” AND “manufacturing”). This analysis shows that the
number of scientific publications regarding the use of the Digital Twin in manu-
facturing has grown exponentially since 2016, reflecting a growing interest from
the scientific community and a consequent production of knowledge about
Digital Twin technology in the manufacturing sector.
Fig. 1. Evolution of the number of scientific publications in the Scopus database related
to Digital Twin (Query TITLE-ABS-KEY (“digital twin” AND “manufacturing”)) over
the years.
Despite the rapidly growing scientific interest in the Digital Twin technology in
the manufacturing domain, there are several challenges to be addressed.
According to [6], the main focus of Digital Twin research in manufactur-
ing tackles two main challenges, namely: 1) lack of standard framework for the
physical and virtual worlds to enable real-time interaction between them, and 2)
lack of unification in the development of models in various lifecycle phases and
domains within the manufacturing environment (e.g., product model for data
transmission/sharing).
On the other hand, the study conducted by [10] identified seven key research
issues in this field, namely the existence of a reference architecture for the Digital
Twin, the required communication latency between the physical system and its
Digital Twin, the data collection mechanisms, the existing standards for Digital
Twins, the decision-support functionality of the Digital Twin, the management
of Digital Twin model versions and, finally, the human role in Digital Twin
applications for the manufacturing domain.
The authors of [11] concluded that research on applying the Digital Twin in
the manufacturing area is still in its infancy, and that there is a lack of
publications that address end-to-end implementation and integration of Digital
Twins in the industrial domain. The existing literature takes into consideration
smaller parts and fewer aspects of the Digital Twin (e.g., virtual modelling or
monitoring) and uses ad hoc integration methods to connect digital and physical
space.
The Digital Twin is gaining significant attention in the scientific and industrial
community for its versatile embedded functionalities and benefits in the manu-
facturing sector. A particular aspect is that the Digital Twin can enhance a
manufacturing system's ability to use simulation for decision support, applying
what-if analysis and optimisation techniques in the virtual space.
The simulation paradigm has evolved throughout the years. Initially, around 1960,
simulation was mostly used for individual applications on particular topics,
e.g. mechanics. In 1985, simulation started to be used as a standard tool to
answer specific problems in specific engineering design domains (e.g.
fluid dynamics). Around 2000, system-level simulation was developed, which
allowed for a systematic multi-level and multi-disciplinary approach. Over the
last decade, simulation models have been considered for use beyond the design
phase, i.e. connected to physical assets to enable dynamic optimisation of systems
and help in providing decision support [2], giving birth to the concept of Digital
Twins (see Fig. 2).
the physical entity with a simulation model (e.g., DES model), establishing the
connectivity between the physical and virtual through the use of standard indus-
trial network protocols, realizing real-time monitoring of the collected data, using
simulation to perform optimisation of the physical entity, and offering decision
support to the human operator based on the real-time data and the performed
simulations.
Fig. 3. General architecture for the decision support based on Digital Twin simulation.
The proposed Digital Twin architecture aims to overcome some of the iden-
tified gaps in the literature and addresses some key issues, for example, the
inclusion of the human operator in the Digital Twin applications, the applica-
tion of a feedback control option based on the decision support provided by the
Digital Twin and the conjugation of the Digital Twin with the decision sup-
port functionality. This leads to better decision-making, since it includes the
ability to test the real system through the application of what-if scenarios,
verifying what their impact will be and which operational strategy will be the
most profitable to follow.
This section presents the implementation and results of the proposed architecture
in a case study related to a flexible production cell.
The case study considered in this work comprises a Fischertechnik flexible pro-
duction cell that produces different parts, as illustrated in Fig. 4. This figure
also illustrates the virtual system model developed using the FlexSim software.
The flexible production cell consists of five assembly stations: two punching
stations (1–2), two indexing stations (3–4) and one pneumatic processing centre
(5). All of the stations have their own conveyor belts, a set of light sensors and
RFID (Radio-Frequency IDentification) readers. The stations are controlled by
a programmable logic controller (PLC), in this case, the Schneider Modicon
Fig. 4. Case study flexible production cell (physical system and virtual model).
M340. Parts are moved between the stations according to their process plans
through the use of an IRB 1400 ABB robot. Additionally, the parts are fed to
the stations through an input conveyor and leave the system through an output
conveyor.
For this case study, the process plan for a typical part includes the following
steps: the robot picks a piece from the input conveyor and feeds it to the punching
station; after the punching operation is concluded, the robot transfers the part
to one of the indexing stations; and finally, after the conclusion of the indexing
operation, the robot transfers the part to the output conveyor.
The developed Digital Twin for this production cell can monitor the perfor-
mance of the physical system in real-time. When a condition change is detected,
the Digital Twin performs a simulation of different scenarios for the virtual
system model aiming to define the best strategy that improves the system per-
formance.
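The monitor-then-simulate behaviour described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the `simulate` stand-in, the class and all names and numbers are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): a monitor that detects
# a demand change and triggers simulation of candidate scenarios, recommending
# the best feasible one. All names and numbers here are hypothetical.

def simulate(scenario):
    """Stand-in for a DES run: return the scenario's achievable parts per shift."""
    return scenario["capacity"]

class DigitalTwinMonitor:
    def __init__(self, current_capacity):
        self.current_capacity = current_capacity
        self.warnings = []

    def on_demand_change(self, new_demand, scenarios):
        # Condition-change detection: demand exceeding the current capacity.
        if new_demand <= self.current_capacity:
            return None  # current configuration still suffices
        self.warnings.append(
            f"demand {new_demand} exceeds capacity {self.current_capacity}")
        # Simulate each candidate scenario; keep those meeting the new demand.
        feasible = [s for s in scenarios if simulate(s) >= new_demand]
        # Recommend the feasible scenario with the highest simulated throughput.
        return max(feasible, key=simulate) if feasible else None

twin = DigitalTwinMonitor(current_capacity=523)
best = twin.on_demand_change(580, [
    {"name": "add punching station", "capacity": 523},
    {"name": "faster robot", "capacity": 610},
])  # recommends the "faster robot" scenario
```

In a real deployment the `simulate` stub would be replaced by a call into the DES model, and the warning would be pushed to the dashboard rather than stored in a list.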
The implementation of the Digital Twin for the flexible production cell uses the
architecture previously defined. Figure 5 represents the technological implemen-
tation for the case study.
As shown in Fig. 5, this technological architecture is divided into two
domains: the physical and the virtual one. The physical system and the opera-
tor are the sources of information for the Digital Twin, and the virtual model
and the visualisation and monitoring dashboard are the main components in the
virtual domain. Physical-virtual connectivity is achieved through the Modbus
TCP/IP industrial communication protocol, which allows collecting data from
the PLC used to control the production cell workstations. The data was col-
lected through the use of the KEPServer software, which supports several types
of communication protocols, such as Modbus TCP/IP, MQTT (Message Queu-
ing Telemetry Transport) protocol and OPC DA (OPC Data Access). In this
work, the communication with the DES model was performed using the OPC
Fig. 5. Technological architecture for the case study flexible production cell.
DA, while the MQTT protocol was used for communication with the developed
visualisation and monitoring dashboard.
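To make the Modbus TCP/IP side of this connectivity concrete, the sketch below builds a raw "Read Holding Registers" request frame. The register addresses are hypothetical; in practice a server such as KEPServer or a Modbus library would handle this framing.

```python
# A hedged sketch of the Modbus TCP framing the data collection relies on:
# building a raw "Read Holding Registers" (function 0x03) request by hand.
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    # PDU: function code 0x03, starting address, number of registers.
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length field
    # (unit id byte plus the PDU), unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, 1 + len(pdu), unit_id)
    return mbap + pdu

frame = read_holding_registers_request(transaction_id=1, unit_id=1,
                                       start_addr=0, count=10)
# 7-byte MBAP header followed by a 5-byte PDU: 12 bytes on the wire.
```

Sending `frame` over a TCP socket to port 502 of the PLC would yield a response carrying the requested register values.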
The DES model representing the digital copy of the production cell was
developed using the FlexSim simulation software (see the right side of Fig. 4).
This virtual model is fed with the real-time data collected from the physical
system through Modbus and OPC DA, being possible to be simulated according
to different scenarios devised by the user.
The dashboard for monitoring and visualisation was developed using
Node-RED, which allows the operator to visualise the actual operating parameters
of the physical system and to receive warnings on performance degradation
or condition changes, as well as the simulation results. The user can also
configure different scenarios to be simulated by the DES, e.g., modifying the
availability of machines, processing time, production line configuration and pro-
duction demand.
The flexible production cell was tested in a configuration that contains an input
conveyor, a punching station, an indexing station, an output conveyor and the
robot, having a maximum capacity of 523 parts per shift. In this situation, the
resource utilisation of the punching station, the indexing station and the robot
are 56.4%, 56.3% and 87.2% respectively.
During the production system operation, the Digital Twin is collecting the
real-time data that is displayed on the visualisation and monitoring dashboard.
The monitoring mechanisms are running in parallel aiming to detect abnormal-
ities, condition changes or performance degradation. To simulate a production
demand change scenario, the system is fed with a new demand of 580 parts
per shift, which generates a production demand change warning on the dash-
board. Since it is impossible to reach this demand with the current production
Table 1 also includes the expected profit for each scenario, calculated in a
simplified manner using Eq. (1). The calculation of this parameter is based
on revenues (the number of parts produced per hour multiplied by the part
value and the production time) and expenses (the sum, over the machines, of
the cost per hour of machine i multiplied by the production time).
Profit = N_Parts × PartValue × ProdTime − Σ(i=1..n) C_i × ProdTime    (1)
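Equation (1) can be transcribed directly, assuming consistent units: parts per hour for the production rate, a monetary part value, per-hour machine costs C_i and a production time in hours. The numbers below are illustrative, not taken from Table 1.

```python
# Direct transcription of Eq. (1): revenue minus the summed machine costs,
# both scaled by the production time. All figures are illustrative.

def profit(parts_per_hour, part_value, prod_time, machine_costs_per_hour):
    revenue = parts_per_hour * part_value * prod_time
    expenses = sum(c_i * prod_time for c_i in machine_costs_per_hour)
    return revenue - expenses

p = profit(parts_per_hour=65, part_value=2.0, prod_time=8.0,
           machine_costs_per_hour=[12.0, 9.5, 20.0])  # 1040.0 - 332.0 = 708.0
```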
The achieved results show that from the four simulated scenarios, only Sce-
narios 2 and 4 can attain the desired production demand. In fact, in Scenario 2,
the production capacity is increased but not doubled since the robot manipula-
tor becomes the bottleneck (utilisation of 100%). In Scenario 4, the capability of
the robot to perform more operations per time unit leads to an increase in the
throughput. On the other hand, in Scenario 1, although a punching station was
added, the single indexing station in the system becomes a bottleneck, keeping
the production capacity equal to that of the current configuration. The same
happens in Scenario 3, where the existing punching station becomes the
bottleneck.
With two scenarios meeting the initial requirements, the production
manager needs to decide which alternative is better. For this purpose, the profit
parameter can be analysed to make the decision. In this case, Scenario 4 both
fulfils the requirements and presents the highest profit, since there is no need
to add new stations to the current configuration. Based on the achieved
results, the manager can make a justified and applicable decision about which
configuration would be the most profitable to face the increase in production
demand.
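The selection rule described above can be sketched as a two-step filter-and-rank: keep the simulated scenarios whose throughput meets the demand, then choose the one with the highest expected profit. The scenario figures below are hypothetical, not the paper's Table 1 values.

```python
# Sketch of the decision rule: feasibility filter (throughput >= demand),
# then argmax over profit. Scenario data is illustrative only.

def best_scenario(scenarios, demand):
    feasible = [s for s in scenarios if s["throughput"] >= demand]
    return max(feasible, key=lambda s: s["profit"]) if feasible else None

scenarios = [
    {"name": "Scenario 1", "throughput": 523, "profit": 650.0},
    {"name": "Scenario 2", "throughput": 590, "profit": 610.0},
    {"name": "Scenario 3", "throughput": 523, "profit": 640.0},
    {"name": "Scenario 4", "throughput": 600, "profit": 700.0},
]
choice = best_scenario(scenarios, demand=580)  # Scenario 4: feasible, highest profit
```

If no scenario meets the demand, the function returns `None`, which would prompt the manager to devise further scenarios.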
References
1. BC Group: Embracing Industry 4.0 and Rediscovering Growth.
https://www.bcg.com/capabilities/operations/embracing-industry-4.0-rediscovering-growth.aspx.
Accessed 09 Nov 2018
2. Rodič, B.: Industry 4.0 and the new simulation modelling paradigm. J. Manag.
Inf. Syst. Hum. Resour. 50(3), 193–207 (2017)
3. Bloem, J., van Doorn, M., Duivestein, S., Excoffier, D., Maas, R., van Ommeren,
E.: The Fourth Industrial Revolution Things to Tighten the Link Between IT and
OT (2014)
4. Da Xu, L., Xu, E.L., Li, L.: Industry 4.0: state of the art and future trends. Int.
J. Prod. Res. 56(8), 2941–2962 (2018)
5. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the
strategic initiative INDUSTRIE 4.0. Final report, Industrie 4.0 WG, no. April, p.
82 (2013)
6. Bao, J., Guo, D., Li, J., Zhang, J.: The modelling and operations for the digital
twin in the context of manufacturing. Enterp. Inf. Syst. 13(4), 534–556 (2019)
7. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emer-
gent behavior in complex systems. In: Transdisciplinary Perspectives on System
Complexity: New Findings and Approach, no. August, pp. 85–113 (2017)
8. Glaessgen, E.H., Stargel, D.S.: The digital twin paradigm for future NASA and
U.S. air force vehicles. In: 53rd Structures, Structural Dynamics, and Materials
Conference, pp. 1–14 (2012)
9. Lee, J., Lapira, E., Yang, S., Kao, H.: Predictive manufacturing system - trends of
next-generation production systems. Soc. Manuf. Eng. 1, 38–41 (2013)
10. Lu, Y., Liu, C., Wang, K.I., Huang, H., Xu, X.: Digital Twin-driven smart man-
ufacturing: connotation, reference model, applications and research issues. Robot.
Comput. Integr. Manuf. 61, 101837 (2020)
11. Fuller, A., Fan, Z., Day, C., Barlow, C.: Digital twin: enabling technology, chal-
lenges and open research. IEEE Access 8, 108952–108971 (2020)
12. Pires, F., Melo, V., Almeida, J., Leitão, P.: Digital twin experiments focusing
virtualisation, connectivity and real-time monitoring. In: Proceedings of the 3rd
IEEE International Conference on Industrial Cyber-Physical Systems (ICPS 2020),
pp. 309–314 (2020)
13. Kunath, M., Winkler, H.: Integrating the digital twin of the manufacturing system
into a decision support system for improving the order management process. In:
51st CIRP Conference on Manufacturing Systems, vol. 72, pp. 225–231 (2018)
14. Liu, Q., Zhang, H., Leng, J., Chen, X.: Digital twin-driven rapid individualised
designing of automated flow-shop manufacturing system. Int. J. Prod. Res. 7543,
1–17 (2019)
15. Rohan: Digital Twin Market Worth 15.66 Billion USD by 2023. MarketsandMarkets (2017).
https://www.prnewswire.com/in/news-releases/digital-twin-market-worth-1566-billion-usd-by-2023-642374603.html.
Accessed 30 Apr 2020
16. Singh, S.: Digital Twin Market worth $35.8 billion by 2025 (2019).
https://www.marketsandmarkets.com/PressReleases/digital-twin.asp. Accessed 30 Apr 2020
17. Macchi, M., et al.: Exploring the role of digital twin for asset lifecycle management.
IFAC-PapersOnLine 51(11), 790–795 (2018)
Digital Twin Data Pipeline Using MQTT
in SLADTA
1 Introduction
The fourth industrial revolution (also known as Industry 4.0) has sparked increased
interest in the concept of a digital twin. Various definitions for digital twins have been
proposed, but in this paper a digital twin is taken to be a virtual representation of a real-
world entity (the physical twin) to facilitate integration with digital systems [1]. Digital
twins facilitate this integration through models in a virtual environment that are updated
by sensor outputs so that the models reflect the state of the physical twin. According
to some authors, a digital twin should also be able to influence the physical twin and,
therefore, bi-directional communication is required [2]. Digital twins can satisfy various
needs, such as [3–6]:
• Simulation and analysis based on real-time and historical data, to accurately predict
system behaviour.
• Centralized and integrated data models that contain all relevant data to assist in
decision making.
• Improved insight into system dependencies for enhanced future designs.
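The update loop and bi-directional link described above can be illustrated with a minimal sketch: sensor outputs update the virtual model, and the digital twin can issue commands back to the physical twin. The attribute names (`temperature`, `set_point`) are illustrative assumptions, not from any specific SLADTA implementation.

```python
# Minimal illustration of a digital twin's two data directions: sensor updates
# flowing into the virtual model, and commands flowing back to the physical
# twin. All names here are hypothetical.

class PhysicalTwin:
    def __init__(self):
        self.temperature = 20.0   # sensed quantity
        self.set_point = 20.0     # actuator setting

class DigitalTwin:
    def __init__(self, physical):
        self.physical = physical
        self.state = {}           # virtual model mirroring the physical twin

    def update_from_sensors(self):
        # Sensor outputs update the model so it reflects the physical state.
        self.state["temperature"] = self.physical.temperature

    def command(self, new_set_point):
        # Bi-directional communication: the twin influences the physical twin.
        self.physical.set_point = new_set_point

plant = PhysicalTwin()
twin = DigitalTwin(plant)
plant.temperature = 23.5     # a sensor reading changes
twin.update_from_sensors()   # the virtual model now mirrors it
twin.command(21.0)           # the twin acts back on the physical twin
```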
Various frameworks for digital twins have been proposed, such as the 5C Architecture
[7], the C2PS Architecture [8], the Digital Twin Shop Floor architecture [6] and the
Six Layer Architecture for Digital Twins with Aggregation (SLADTA) [9]. SLADTA,
which is outlined in Sect. 2, was chosen for the data pipeline described in this paper
due to its general applicability, clear functional separation, vendor-neutral approach and
provision for scaling and granularity through aggregation. These qualities make it an
attractive architecture for complex systems.
Exchanging data and information between a physical twin and its corresponding
digital twin, as well as between digital twins, requires a communication platform such
as Message Queuing Telemetry Transport (MQTT), Advanced Message Queuing Proto-
col (AMQP), Constrained Application Protocol (CoAP), Hypertext Transfer Protocol
(HTTP) or Open Platform Communications Unified Architecture (OPC UA). The choice
depends on the application because some systems require highly configurable and
reliable communication, while others prioritize speed or resource constraints [10].
CoAP is a lightweight protocol, designed for machine-to-machine communication
and supports request/response communication, as well as resource/observe communi-
cation (similar to publish/subscribe). CoAP uses User Datagram Protocol (UDP), but
also has functionality to improve reliability. AMQP is also a lightweight machine-to-
machine communication protocol that supports request/response and publish/subscribe
messaging. AMQP has a wide variety of features and was designed for reliability, secu-
rity, provisioning (additional services) and interoperability. HTTP is predominantly a
web messaging protocol that supports request/response Representational State Transfer
(RESTful) Web communication [10].
MQTT, which was chosen for the data pipeline in this paper and is further discussed
in Sect. 3, is a publish/subscribe messaging protocol that is suited to large networks
of small devices, particularly if the devices have very limited resources or if network
bandwidth is small. MQTT, compared to AMQP and HTTP, requires less bandwidth,
has a lower latency and is generally more reliable. The drawback of these advantages is
that MQTT has less built-in security, less provisioning (fewer additional services) and
is not as standardized as the other protocols. Compared to CoAP, MQTT has higher
latencies, requires more bandwidth, and uses more device resources. CoAP, which uses
UDP, is more efficient than MQTT, which uses TCP. Despite this, MQTT is often preferred over CoAP because it is much more reliable and only slightly less efficient. Due to this balance of reliability and efficiency, MQTT is an
attractive protocol to use for large networks (which aligns with SLADTA’s target) of
simple devices. AMQP and HTTP offer functionality that is often not required by small
devices and therefore do not justify their larger message overhead and message size [10].
The objective of this paper is to evaluate MQTT as a communication protocol for
use in a digital twin data pipeline, where the digital twin is based on SLADTA. MQTT
is evaluated for communicating information to the cloud, as well as communicating
information between digital twins. The findings about MQTT are expected to extend to
other digital twin architectures too.
Digital Twin Data Pipeline Using MQTT in SLADTA 113
The remainder of this paper is structured as follows: Sects. 2 and 3 outline SLADTA
and MQTT, respectively. Section 4 describes how a data pipeline can be set up using MQTT and SLADTA, and Sect. 5 illustrates how the pipeline was implemented using a
case study. Finally, Sect. 6 provides a conclusion.
The Six Layer Architecture for Digital Twins (SLADT) is a reference architecture for
digital twin development and has been applied to a manufacturing cell for close to
real-time monitoring and fault detection [11, 12]. SLADTA (the added A represents Aggregation) is an extension that includes twin-to-twin communication, which facilitates system-oriented decision-making [9, 11]. Figure 1 illustrates SLADT and SLADTA.
SLADTA was designed as a general framework that is vendor-neutral and suitable
for adding digital twins to existing systems. SLADTA can be considered for a wide
variety of systems, including manufacturing cells, renewable energy systems, ships and
building information models (BIM), all of which are being investigated by the authors’
research group.
The functions allocated by the architecture to its six layers are as follows: Layer 1
contains the devices and sensors that generate data, and Layer 2 contains data sources
that interface with the sensors (for example, controllers). Together, the first two layers form the physical twin. Layer 3 contains local data repositories, which are part of the physical twin or, if necessary, added by the digital twin. Layer 4 is an IoT Gateway, which is a
custom software application that manages the communication between physical twin and
digital twin, and between digital twins. Layer 5 is a cloud-based information repository
and, finally, Layer 6 is the emulation and simulation layer. The software used in Layer
6 is application specific, but is generally a data-endpoint and user interface.
Functionally, Layers 1 and 2 are responsible for data generation, Layers 3 and 4 are
responsible for the data and information flow between a device and the cloud, as well as
between different digital twins, and Layers 5 and 6 extract value from the information
114 C. Human et al.
being gathered. Further details about the functions of Layer 4 are given in Sect. 4 when
discussing the pipeline.
The aggregation ability of SLADTA facilitates communication between digital twins,
while limiting the data flows to what is necessary and what is compatible with privacy
and confidentiality considerations. Higher level digital twins, i.e. aggregate digital twins,
do not contain Layers 1 and 2, but aggregate the data of lower level digital twins for
decision-making that requires information from multiple digital twins.
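The selective sharing that aggregation implies can be illustrated with a short sketch. This is a conceptual toy, not the authors' implementation; the twin and field names are invented:

```python
def aggregate(twin_states, shared_fields):
    """Toy aggregate digital twin: expose only the fields that lower-level
    twins agree to share, respecting privacy and confidentiality limits."""
    return {name: {k: state[k] for k in shared_fields if k in state}
            for name, state in twin_states.items()}

# Hypothetical lower-level twins; the "owner" field is deliberately withheld
field_view = aggregate(
    {"heliostat_1": {"target_deg": 41.0, "battery_v": 3.7, "owner": "plant-A"},
     "heliostat_2": {"target_deg": 39.5, "battery_v": 3.6, "owner": "plant-A"}},
    shared_fields=("target_deg", "battery_v"))
```

The aggregate twin sees only the agreed subset of each lower-level twin's data, which is the basis for system-level decision-making without exposing confidential details.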
• QoS 0: At most once delivery. The message is sent only once and no acknowledgement is required upon receipt. Therefore, network connectivity faults could result in a message not being received by the broker or the subscriber.
• QoS 1: At least once delivery. The receiver (either the broker or a subscriber) must acknowledge receipt; otherwise, the message is published again. Note that publishing is non-blocking and, therefore, the publisher can publish other messages while it is waiting for an acknowledgement.
• QoS 2: Exactly once delivery. The receiver must acknowledge the published message and additional acknowledgement steps are applied to ensure that the message is not duplicated on the receiver's end. This ensures that no data is duplicated or lost, but the messages carry additional overhead to achieve this.
only one of the sub-topics (in MQTT, the ‘+’ wildcard matches exactly one topic level). The wildcard operator ‘#’ may be used when subscribing to an unknown or unspecified sub-topic. The dollar topic prefix ‘$’ marks topics that are excluded from wildcard subscriptions [13].
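The wildcard rules above can be expressed as a small matching function. This is a simplified sketch consistent with those rules, not a full implementation of the MQTT specification's edge cases:

```python
def topic_matches(filt, topic):
    """Match an MQTT topic against a subscription filter: '+' matches exactly
    one level, '#' matches the remaining levels, and topics starting with '$'
    are excluded from wildcard filters."""
    if topic.startswith("$") and filt[0] in "+#":
        return False
    f_parts, t_parts = filt.split("/"), topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True  # '#' matches this level and everything below it
        if i >= len(t_parts) or (part != "+" and part != t_parts[i]):
            return False  # '+' matches any single level; literals must be equal
    return len(f_parts) == len(t_parts)
```

For example, `plant/+/state` matches `plant/h1/state` but not `plant/h1/h2/state`, while `plant/#` matches both.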
MQTT supports the use of various other application layer transport protocols to
enhance its features, such as the Transport Layer Security (TLS) protocol and Web-
Sockets. TLS, in particular, is used for security and TCP ports 8883 and 1883 are reg-
istered with Internet Assigned Numbers Authority (IANA) for MQTT TLS and MQTT
non-TLS communication, respectively [13].
In terms of security, MQTT does not specify any security solutions since technology
changes quite rapidly and the required security is situation dependent. That said, the
MQTT documentation does provide general guidance to ensure communication security,
such as [13]:
• Servers can authenticate clients by using the username and password field of the
CONNECT packet. The implementation is situation dependent, but common practices
include using the Lightweight Directory Access Protocol (LDAP), OAuth tokens or
operating system authentication.
• It is good practice to hash sensitive text, such as passwords, before sending or storing it.
• Virtual Private Networks (VPN) can be used when available, to ensure better data
security.
• Clients can authenticate servers by using the TLS certificate sent by the server when
TLS is used.
• Normal messaging containing application-specific authentication information may be
used to authenticate a server.
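As an illustration of the hashing guidance above, credentials can be salted and hashed with only the Python standard library. This is a generic sketch (PBKDF2-SHA256), not the format used by any particular broker; Mosquitto, for instance, manages its own password file via its `mosquitto_passwd` utility:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted PBKDF2-SHA256 digest so the plain-text password
    never needs to be stored or transmitted."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

`hmac.compare_digest` avoids timing side channels when checking the stored digest against a login attempt.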
data and providing a communication means, since it was an OPC UA server. In general,
however, the required communication functions can be achieved through MQTT without
involving Layer 3.
The following subsections first consider the requirements for Layer 4 and then
MQTT’s use in the data pipeline as a means of secure asynchronous communication.
Fig. 2. The digital twin data pipeline mapped onto the SLADTA framework
• The design should allow for the addition and removal of physical twins to/from the
system, with their associated digital twins, with minimal reconfiguration time and
effort, because complex systems are likely to change over time.
• The design should allow for the communication between digital twins, which can
include aggregation and/or peer-to-peer interaction, because in complex systems
the various digital twins in the system may be developed by different parties and
information may be shared on a selective basis.
Layer 4 acts as a gateway for information flows between the physical twin and the
virtual world, and between its digital twin and other digital twins. This layer’s functions
are [11]:
• Transform the data received from the physical twin to information as required by the
context (e.g. perform unit conversions or convert information to forms more useful
for the higher layers).
• Select or transform the information to be transmitted to the cloud repositories to avoid
excessive database requirements and bandwidth bottlenecks.
• Direct the information flows to the different data repositories on Layer 5 and other
digital twins, taking into account that different users of the digital twin may have
access to different subsets of the information.
• Check all information received from the cloud repositories and other digital twins,
to ensure safety, consistency and compatibility with the physical twin. Where
appropriate, communicate the relevant data to the physical twin.
From the data pipeline perspective, the main functions that this layer performs are:
• Structure the data being gathered into a format that is suitable for messaging.
• Structure and process the data so that it can be used and manipulated in other software
applications.
• Interface with Layers 3 and 5, as well as with other digital twins.
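A minimal sketch of the first of these functions follows; the field names, step scaling and battery threshold are hypothetical, not taken from the paper:

```python
import json
from datetime import datetime, timezone

def to_information(raw):
    """Hypothetical Layer 4 step: turn raw controller fields into contextual
    information (unit conversion, status interpretation) ready for messaging."""
    return {
        "timestamp": datetime.fromtimestamp(raw["t"], tz=timezone.utc).isoformat(),
        "azimuth_deg": raw["az_steps"] * 360 / 4096,  # assumed 4096 steps/rev
        "battery_ok": raw["batt_mv"] > 3500,          # assumed threshold in mV
    }

# Structure the result as a JSON message payload, ready for publishing
payload = json.dumps(to_information({"t": 0, "az_steps": 1024, "batt_mv": 3700}))
```

The JSON serialization at the end corresponds to the second function: structuring the data so that other software applications can use it directly.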
In the previous work on SLADTA [11], as outlined in Sect. 2, Layer 3 was named “local
data repositories”, but included communication functions. Upon closer examination of the previous uses of SLADTA, the combination of the repository and communication functions in Layer 3 was found to be incidental, resulting from the choice of OPC UA as
the means of storage and communication. Also, the previous SLADTA research did not
identify the means of communication between Layers 4 and 5 as a distinct function.
Redelinghuys [11] divided Layer 3 into Sublayers 3P and 3A, where 3P (“P” for
physical) is used to exchange data between the physical twin (Layer 2) and Layer 4 of
its digital twin, while 3A (“A” for aggregation) provides information transfer between
digital twins. In this paper, the nomenclature is changed to 4P and 4A, because it is more
general to allocate the communication functions to Layer 4. Further, 4C (“C” for cloud)
is added, to denote the communication between Layers 4 and 5. The names 4P, 4A and
4C denote all the communication passing through Layer 4, the IoT Gateway.
Although 4P, 4A and 4C can use different communication platforms, their primary
functional requirements are similar: they must provide (1) secure, (2) asynchronous com-
munication in a (3) vendor neutral format. Firstly, secure communication is required to
protect the proprietary information being exchanged and to prevent malicious inter-
ference with the data pipeline. Secondly, asynchronous communication supports the
overarching objectives of easy reconfiguration and aggregation. If synchronous communication is used, more knowledge about the communicating partners is required when developing a digital twin’s IoT Gateway, and one communication path may unexpectedly block another. In complex
systems, such emergent behaviour is best avoided. Finally, vendor neutral communica-
tion allows interconnections with physical twin elements, other digital twins and cloud
repositories from different vendors. It also supports reconfiguration and aggregation.
MQTT satisfies the above three functional requirements. Using “off-the-shelf” software and libraries will often be preferable because the people developing Layer 4 (a custom application that requires specialist knowledge of the physical twin) will usually not be specialists in developing this level of communication from scratch.
Communication using MQTT through the Google Cloud Platform (GCP) is the obvious route for 4C when Layer 5 uses GCP. For 4A, it may also be preferable to use MQTT through GCP due to its security measures and additional services. An alternative is a conventional MQTT broker such as Eclipse’s Mosquitto broker, with much more straightforward publish and subscribe operations. With a broker such as Mosquitto, the developer has more responsibility than with GCP for implementing security measures. For example, authentication of the client and broker must be configured and there are various options. The easiest option is no authentication, in which case the broker uses its default configuration. Authentication of the broker using the TLS protocol involves further considerations.
To authenticate a client, the broker can apply username and password authen-
tication, the TLS protocol or both. For username and password authentication, the
mosquitto_passwd utility (provided with the broker) is used to create a text
file containing valid username and password pairs. To authenticate the client using the
TLS protocol, the client must be provided with a security key and a security certificate
in addition to the CA security certificate. The broker must then be configured to request
a security certificate from the client during the authentication process. It is also possible
to configure the Mosquitto broker to listen on various ports and apply different types of
authentication on the different ports.
Authorization to access certain topics can also be specified for certain usernames
by creating an access control list (ACL) file and specifying the path of this file in the
configuration file of the broker. Mosquitto brokers can further be connected using their
bridging function and then selected topics can be shared between brokers as specified in
the configuration file of each broker.
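These options come together in the broker's configuration file. The following is a hedged sketch of a `mosquitto.conf` combining a TLS listener, password and ACL checks, and a bridge; the file paths, bridge name and topic are placeholders:

```
# TLS listener with client certificate authentication
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/server.crt
keyfile /etc/mosquitto/certs/server.key
require_certificate true

# Username/password and topic authorization
allow_anonymous false
password_file /etc/mosquitto/passwd
acl_file /etc/mosquitto/acl

# Bridge selected topics to another broker
connection twin_bridge
address other-broker.example:1883
topic plant/shared/# out 1
```

The `topic ... out 1` line shares only the selected topic subtree with the remote broker at QoS 1, which matches the selective information sharing discussed above.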
Irrespective of the MQTT broker chosen, the digital twin’s MQTT clients that connect
to the MQTT broker(s) are set up and configured in Layer 4 of the digital twin.
a battery voltage level, two stepper motor positions, a timestamp, a status value and a
target position (as calculated by the Grena algorithm). The higher-level controller is
connected to an Ethernet network and sends its data to a PostgreSQL database on Layer
3. The controller for the whole heliostat field interfaces with this database. For the case
study evaluation, Layers 1 and 2 were simulated in software that sends the appropriate
data to Layer 3. The simulation of Layers 1 and 2 is transparent to the higher layers of
the digital twin, but gives the opportunity to easily experiment with, e.g., data rates.
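The simulated Layers 1 and 2 can be sketched as a generator writing state rows into the local repository, with fields matching the list above. In this sketch, sqlite3 stands in for the case study's PostgreSQL database, and the schema and value ranges are assumptions:

```python
import random
import sqlite3

# In-memory stand-in for the Layer 3 repository (the paper uses PostgreSQL)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE heliostat_state (
    ts REAL, batt_v REAL, az_pos INTEGER, el_pos INTEGER,
    status INTEGER, target_pos INTEGER)""")

def sample_state(ts):
    """Simulated controller output: battery voltage, two stepper positions,
    timestamp, status and target position (ranges are invented)."""
    return (ts, 3.6 + random.uniform(-0.1, 0.1),
            random.randrange(4096), random.randrange(4096), 0, 2048)

for i in range(5):
    conn.execute("INSERT INTO heliostat_state VALUES (?,?,?,?,?,?)",
                 sample_state(float(i)))
conn.commit()
rows = conn.execute("SELECT COUNT(*) FROM heliostat_state").fetchone()[0]
```

Because the higher layers only see the repository, swapping the real controllers for such a generator is transparent to the rest of the digital twin, which is what makes experimenting with data rates easy.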
have a larger influence. Further tests are being done to determine the number of digital
twins that can be aggregated effectively within the case study environment for various
scenarios.
6 Conclusion
This paper presents a digital twin data pipeline that is built on SLADTA and uses MQTT.
SLADTA provides a framework to implement digital twins for complex systems. MQTT
is a communication protocol that is suited to large networks of small devices and is well
suited as the protocol for communication with the twin’s cloud repository and with other
digital twins.
To demonstrate the functionality of SLADTA with MQTT, a case study of a heliostat
field was chosen because a heliostat field is a large collection of simple devices. When
using the MQTT broker provided within Google’s IoT Core service and the accompany-
ing Cloud Pub/Sub service, as well as the Mosquitto broker, MQTT was found to be well
suited to the tasks of asynchronous communication and aggregation within SLADTA.
In the case study the GCP IoT Core broker was used for communication between the
digital twin’s IoT Gateway and the cloud, while the Mosquitto broker was used for com-
munication between pairs of digital twins, through their respective IoT Gateways. The
choice of broker is situation dependent since both have certain benefits and drawbacks.
The case study demonstrated that MQTT can effectively provide vendor neutral,
secure, asynchronous communication that complements SLADTA’s use for digital twins
in complex systems.
Further research is required to explore the advantages and limitations of the com-
bination of SLADTA and MQTT, such as cases with a large number of digital twins.
Additionally, the use of other cloud services on Layers 5 and 6 also needs to be researched.
A more extensive comparison of the GCP broker vs the Mosquitto broker, particularly
regarding security and latency, would be valuable.
Acknowledgements. This work was funded by the Horizon 2020 PREMA project. The project
investigates concentrated solar thermal (CST) power, inter alia, for pre-heating of manganese
ferroalloys to save energy and reduce CO2 emissions. Industry 4.0 technologies and concepts can
play an important role, as they can increase the level of automation and improve the reliability of
CST plants.
References
1. Taylor, N., Human, C., Kruger, K., Bekker, A., Basson, A.: Comparison of digital twin devel-
opment in manufacturing and maritime domains. In: Borangiu, T., Trentesaux, D., Leitao, P., Boggino, A.G., Botti, V. (eds.) Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future. Studies in Computational Intelligence, vol. 853, pp. 158–170. Springer, Cham (2020)
2. Kritzinger, W., Traar, G., Henjes, J., Sihn, W., Karner, M.: Digital twin in manufacturing: a
categorical literature review and classification. In: Proceedings of the 16th IFAC Symposium
on Information Control Problems in Manufacturing INCOM 2018. IFAC-PapersOnLine, vol.
51, pp. 1016–1022. Elsevier B.V. (2018)
3. Bao, J., Guo, D., Li, J., Zhang, J.: The modelling and operations for the digital twin in the
context of manufacturing. Enterprise Inf. Syst. 13(4), 534–556 (2018)
4. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emergent behavior
in complex systems. In: Kahlen, F.-J., Flumerfelt, S., Alves, A. (eds.) Trans-Disciplinary
Perspectives on Complex Systems: New Findings and Approaches, pp. 85–113. Springer,
Cham (2017)
5. Siemens AG: MindSphere the cloud-based, open IoT operating system for digi-
tal transformation. https://www.siemens.com/mindsphere https://www.plm.automation.sie
mens.com/media/global/en/Siemens_MindSphere_Whitepaper_tcm27-9395.pdf. Accessed
16 July 2019
6. Tao, F., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Sui, F.: Digital twin-driven product design,
manufacturing and service with big data. Int. J. Adv. Manuf. Technol. 94, 3563–3576 (2018)
7. Roy, R., Tiwari, A., Stark, R., Lee, J.: Predictive big data analytics and cyber physical sys-
tems for TES systems. In: Redding, L., Roy, R., Shaw, A. (eds.) Advances in Through-Life
Engineering Services, Decision Engineering. Springer, Cham (2017)
8. Alam, K.M., El Saddik, A.: C2PS: a digital twin architecture reference model for the cloud-
based cyber-physical systems. IEEE Access 5, 2050–2062 (2017)
9. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for digital twins
with aggregation. In: Borangiu, T., Trentesaux, D., Leitao, P., Boggino, A.G., Botti, V. (eds.) Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future. Studies in Computational Intelligence, vol. 853, pp. 171–182. Springer, Cham (2020)
10. Naik, N.: Choice of effective messaging protocols for IoT systems: MQTT, CoAP, AMQP
and HTTP. In: IEEE International Systems Engineering Symposium (ISSE), Vienna, Austria,
pp. 1–7. IEEE (2017)
11. Redelinghuys, A.J.H.: An architecture for the digital twin of a manufacturing cell. Published
Doctoral dissertation, Stellenbosch University, Stellenbosch (2019)
12. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for the digital twin:
a manufacturing case study implementation. J. Intell. Manuf. 1–20 (2019)
13. Banks, A., Briggs, E., Borgendale, K., Gupta, R.: MQTT Version 5.0. https://docs.oasis-open.
org/mqtt/mqtt/v5.0/mqtt-v5.0.html. Accessed 14 Oct 2019
Toward Digital Twin for Cyber Physical
Production Systems Maintenance: Observation
Framework Based on Artificial Intelligence
Techniques
Maroua.nouiri@univ-nantes.fr
1 Introduction
Industrial companies today face two important problems. Customer demand is increasingly diverse, and customers are more and more demanding. At the same time, globalization brings significant competition. Faced with these challenges, industries seek to improve the efficiency, reliability and availability of their services to be more competitive. Many studies indicate that the maintenance service and related activities have a direct impact on the efficiency of production [1].
In fact, a good maintenance strategy significantly reduces the operating costs of the concerned systems and increases their reliability and overall availability to undertake operations. Maintenance activities aim to restore an item to a correct or better status [2]. The lack of knowledge about a production system, its equipment and the associated processes further complicates the management of these systems. However, with the development of Information and Communications Technologies (ICT) and the
rise of the fourth industrial revolution, the focus on the maintenance of Cyber-Physical
Production Systems (CPPS) is increasing. Several definitions have been proposed for
CPPS, mostly related to varying contexts. According to [3], Cyber-Physical Production
Systems are defined as “systems of systems of autonomous and cooperative elements
connecting with each other in situation dependent ways, on and across all levels of
production, from processes through machines up to production and logistics networks,
enhancing decision-making processes in real-time, response to unforeseen conditions
and evolution along time”.
Our research target in this work is the maintenance of CPPS. Recently, many tech-
niques have been proposed in order to help managers, supervisors and operators to
optimize maintenance decisions. An observation-based predictive maintenance frame-
work for CPPS is proposed. Our work exploits the concept of CPPS and uses Industrial
Internet of Things (IIoT) technologies to deploy an intelligent tool for predictive and
reactive maintenance. The proposed observation-based predictive maintenance frame-
work is based on real time data acquisition and analysis through Artificial Intelligence
(AI) techniques to detect and treat dysfunctions.
The rest of this article is organized as follows: a review of recent works showing
the evolution of CPPS maintenance is given in the next section. Section 3 describes the
details of the proposed framework. Section 4 discusses the results of our experimentation
on a case study based on a learning factory. A conclusion and some future directions
are given in Sect. 5.
Maintenance strategies are classified into two main groups: reactive and preventive [4]. In the reactive category, the maintenance activity is triggered by the occurrence of a failure. In contrast, the preventive category aims at avoiding failure occurrence.
In order to avoid the significant downtime and repair costs due to a classical cor-
rective maintenance, the manufacturers are more and more interested in the predictive
maintenance strategy [5], which is based on continuous measurements to detect faults
and anticipate problems. In [6], methods and tools related to predictive maintenance in
manufacturing systems are reviewed and an integrated predictive maintenance platform
is proposed. In [7], simulation-based approaches, which have been widely used in the maintenance context, are reviewed. In these studies, the behaviour of the system is reproduced and simulated.
Recently, AI techniques and Big Data applications have provided technical support for the efficient development of manufacturing systems through accurate and timely data collection, data analysis, data processing, root-cause identification, and the derivation of valuable insights for maintenance improvement [8]. In the literature, there have been several reviews on the
role of AI for the maintenance of manufacturing systems [9].
Machine learning (ML) is widely used in condition monitoring, fault prediction and
predictive analytics. ML techniques are a branch of AI methods based on the use of
Toward Digital Twin for Cyber Physical Production Systems Maintenance 125
huge amounts of data to learn and to identify patterns [8]. In [10], a conceptual model for a proactive decision support system based on real-time predictive analytics is proposed, designed for the maintenance of cyber-physical systems in order to minimize their downtime. A Hierarchical Modified Fuzzy Support Vector Machine (HMFSVM) is
proposed in [11] to understand the trends of vehicle faults. This method is compared
with commonly used approaches like logistic regression, random forests and support
vector machines. A reference architecture based on deep learning for CPS is proposed
in [12]. The concept for a CNC machine utilized on the shop floor is explored.
The Center for Intelligent Maintenance Systems (IMS) created the Watchdog Agent technology, an approach for product performance degradation assessment and prediction, and for modelling and decision-making with human interaction [13]. This technology includes time domain analysis, Principal Component Analysis (PCA), Fuzzy Logic Systems (FLS), Logistic Regression (LR), Artificial Neural Networks (ANN), Bayesian belief networks and Support Vector Machines (SVM) [14].
An Intelligent System for Predictive Maintenance (SIMAP) [15] has been developed for the real-time diagnosis of industrial processes, based on neural networks that detect anomalies. The fuzzy logic method has been used to model the experience of a maintainer, integrated into an intelligent maintenance system [16]. To estimate
failure degradation of bearings and to predict failure probability, LR has also been used
in combination with relevance vector machine (RVM) [17].
The various applications of ANNs in fault risk assessment and early fault detection analysis have been reviewed with examples of their usage in predictive
maintenance cases [18]. SVM has been used for fault diagnosis of automobile hydraulic
brake system [19]. Recently, new methods based on hybridization between supervised
and unsupervised learning techniques have been developed; an example is the root cause
analysis and faults prediction for intelligent transportation systems (ITSs) based on the
coupling of K-means Algorithm and ANN [20]. The method was tested on the Train
Door System at Bombardier Transport (BT) as a case study.
We conclude from the literature review that several methods have been proposed for maintenance (mathematical modelling, simulation-based techniques, AI tools, etc.). However, the previously cited methods rely on the mass of data accumulated over the years from the integrated embedded sensors of CPPS (historical data) to make effective maintenance decisions. Few works use real-time data to detect deviations of the system and to treat malfunctions in real time.
Faults in CPPS may be due to internal causes (for example, machine breakdowns) or external causes. The proposed framework aims at the early discovery of system faults that may compromise the reliability of the production system.
In this work, an observation-based predictive and reactive maintenance model frame-
work is proposed. The main objective is to identify and localize the disruptions, assess
their criticalities and then notify maintenance managers or operators via IoT tools.
126 F. Abdoune et al.
Figure 1 presents the flowchart of the proposed framework. The framework is structured
in four main parts detailed hereinafter.
be extracted through sensors. In Fig. 1, this flow of data was simplified and drawn as
collected from IIoT devices (b).
Set of Detectors
The third part of the framework consists of a set of n Detector modules (D1 to Dn ) used
to detect CPPS malfunctions. The core of each detector is a real-time observation model.
Based on information from the CPPS and IoT sensors, given by the Data Acquisition
(DA), this model predicts what the “ideal” (or nominal) behaviour of the system should
be. The real-time observation model Mi determines the difference (Δ) between the nominal behaviour (e) and the actual behaviour of the CPPS (c). This difference provides valuable information about the dysfunction occurring in the CPPS. Each detector is
assigned to a specific aspect of the CPPS (real-time events, thermal behaviour, energy
management, economic, etc.).
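A detector Di can be sketched as a comparison between the nominal output e of an observation model Mi and the observed value c. This is a generic illustration under assumed names, not the authors' implementation:

```python
class Detector:
    """Compare the nominal behaviour e predicted by an observation model M_i
    with the actual behaviour c, and flag a dysfunction when the difference
    delta exceeds a tolerance."""

    def __init__(self, model, tolerance):
        self.model, self.tolerance = model, tolerance

    def observe(self, inputs, actual):
        nominal = self.model(inputs)   # e, the model's prediction
        delta = actual - nominal       # delta = c - e
        return delta, abs(delta) > self.tolerance

# Toy instance: nominal travel time over a conveyor of length D at speed V
travel = Detector(model=lambda d_v: d_v[0] / d_v[1], tolerance=1.0)
delta, alarm = travel.observe((10.0, 2.0), actual=5.4)
```

Each detector instance would wrap a different observation model (real-time events, thermal behaviour, energy, cost), with a tolerance appropriate to that aspect of the CPPS.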
Industrial MES control the system. The aim of the tool is to improve the pallet move-
ment system by detecting minor blockages causing the system to slow down and major
blockages leading to the total immobilization of pallets.
Fig. 3. The flowchart of the implemented framework for the case study
When the virtual pallet arrives at the virtual RFID unit point, it is blocked to wait for
the arrival of the actual pallet. But it is important to generate an alert before the arrival of
the actual pallet, especially in case of blockage of this actual pallet. This is the function
of the loop denoted (d) in Fig. 4. Of course, if the actual pallet arrives during the Dlim
period, the alert is not logged in the database.
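The alert loop denoted (d) can be expressed as a simple check, where Dlim is the maximum tolerated waiting time after the virtual pallet's arrival. This is an illustrative sketch, with a hypothetical function name and alert text:

```python
def check_actual_arrival(virtual_arrival, actual_arrival, d_lim):
    """Return an alert when the actual pallet has not reached the RFID unit
    within d_lim after the virtual pallet arrived.

    actual_arrival is None while the actual pallet has not yet been detected.
    """
    if actual_arrival is None or actual_arrival - virtual_arrival > d_lim:
        return "ALERT: possible pallet blockage"
    return None  # arrival within the Dlim period: no alert is logged
```

In the framework, a non-None result would be logged to the database unless the actual pallet arrives within the Dlim period.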
When the actual pallet arrives at the RFID unit location, the virtual pallet is resyn-
chronized at the location of the virtual unit. This is important, so that the behaviour of
the virtual model is consistent with reality.
The data analysis and decision centre is of course a central element of the proposed framework. It is based on a learning algorithm. The literature review has shown that many artificial intelligence techniques are used; in this work, logistic regression appears suitable for our case.
A virtual PLC programmed with Schneider Unity was used. This PLC communicates
with the FlexSim observation model via an OPC UA server. The FlexSim simulator was connected to a MySQL database. A program written in Python enables the analysis of
the recorded data.
This proof of concept is only made on one conveyor, containing an entry point A and
an RFID read/write module L1 located at a distance D from point A. V is the speed of
the conveyor. At time t0 , the virtual PLC sends the information leading to the creation
of a pallet in the simulator. At time t1, the PLC sends the information indicating the arrival of the actual pallet at point L1. t1 was programmed such that:

t1 = t0 + D/V + ε
with ε being a random variable such that ε ~ Uniform(−D/V, D/V), allowing us to introduce a perturbation.
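The perturbed arrival can be simulated directly; a short sketch consistent with the relation above (function name assumed):

```python
import random

def simulated_arrival(t0, D, V, rng=random):
    """t1 = t0 + D/V + eps, with eps drawn uniformly from [-D/V, D/V].

    D is the distance from entry point A to the RFID unit L1, V the conveyor
    speed; eps perturbs the nominal travel time D/V.
    """
    eps = rng.uniform(-D / V, D / V)
    return t0 + D / V + eps, eps

random.seed(0)  # reproducible draw for the sketch
t1, eps = simulated_arrival(t0=0.0, D=10.0, V=2.0)
```

A negative ε yields an early arrival at L1 and a positive ε a late one, which is exactly the quantity the detector observes.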
Each time a pallet is created, a new value of ε is drawn. As shown in Table 1, the virtual PLC generates a population of values of ε that can be split into five classes. If ε is negative, this corresponds to an early arrival of the pallet at L1. Otherwise, it corresponds to a late arrival.
The detector analyses the differences between the arrival dates of the virtual pallets
in the observation model and the actual arrival dates provided by the virtual PLC. This
difference is logged in the SQL database together with the timestamp, the identification
of the tag reader unit and the identification of the pallet. We created a toy application in Python to demonstrate that these data can be analysed. To do so, the application reconstructs the classes of Table 1. Table 2 shows the number of data logs in the SQL database for each class of ε, together with the number of logs the Python application assigned to each class. Globally, the numbers of logs are coherent, which demonstrates the accuracy of the observations made.
The next step will be to implement the framework on the real learning factory. In parallel, machine learning methods will be implemented to interpret the results and propose probable causes of dysfunctions. Several packages are needed for logistic regression in Python. Popular data science and machine learning libraries such as scikit-learn, NumPy, pandas and Matplotlib allow writing elegant and compact code for implementing models and solving problems. We already used these packages in the toy
Toward Digital Twin for Cyber Physical Production Systems Maintenance 133
application in order to train our LR algorithm and improve its accuracy using simulated
observations to predict faulty patterns, while waiting to experiment it on the real learning
factory and adjust its parameters. For now, the model and application were only about
detecting the gaps between the real system and its nominal model. However, we shall
introduce in a next version an extra analysis aiming at detecting pallet defectiveness or
RFID read/write units’ latency and propose solutions based on the problem criticality.
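The paper relies on Scikit-learn for the actual application; purely as an illustration of the underlying idea, the following dependency-free sketch trains a one-feature logistic regression by gradient descent on simulated deviations, labelling arrivals with a large absolute deviation as faulty (the 2.0 s threshold and all data are invented for the example):

```python
import math
import random

def train_logreg(feats, labels, lr=0.5, epochs=500):
    """One-feature logistic regression trained by full-batch gradient descent."""
    w = b = 0.0
    n = len(feats)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(feats, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted fault probability
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Invented data: arrivals whose absolute deviation exceeds 2.0 s are labelled faulty.
rng = random.Random(0)
devs = [rng.uniform(-4.0, 4.0) for _ in range(200)]
feats = [abs(d) for d in devs]                 # |deviation| as the single feature
labels = [1 if f > 2.0 else 0 for f in feats]
w, b = train_logreg(feats, labels)
preds = [1 if w * f + b > 0 else 0 for f in feats]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(f"training accuracy: {accuracy:.2f}, decision boundary near {-b / w:.2f} s")
```

In practice the same model is a one-liner with Scikit-learn's `LogisticRegression`; the sketch only makes the training loop explicit.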
5 Conclusions
In this work, an observation framework based on artificial intelligence techniques is
proposed to deal with CPPS maintenance. The proposed framework aims to help man-
agers and supervisors to detect and predict dysfunctions of the CPPS. The objective
of the integrated tool is to improve the system’s reliability and to adjust maintenance
decisions. To assess its performance, the implementation of the framework is detailed
and tested on a case study; due to the COVID-19 pandemic, the proof of concept is
based only on a virtual implementation. Tests on the full learning factory are the first
objective of future work. Different AI techniques will be implemented in the framework
and tested in order to select the one best able to optimize maintenance strategies.
Another future direction is to generalize the framework towards a first design and
implementation integrated inside the Digital Twin of the CPPS. The idea would be to
add other detectors with different objectives: minimizing energy consumption, improving
productivity, etc. The analysis of historical data in the IMaDeSC will then be based on
multiple objectives, so the Digital Twin will be able to help supervisors make
compromise-based decisions by providing advanced data for decision support.
Additional research will be carried out on optimizing the deployment of intelligent
sensors (location, frequency of data transmission, cycle, etc.) to obtain more pertinent
data via IIoT technology.
References
1. Jardine, A.K.S., Tsang, A.H.C.: Maintenance, Replacement, and Reliability: Theory and
Applications, 2nd edn. CRC Press, Taylor & Francis, Boca Raton (2013)
2. Shafiee, M., Chukova, S.: Maintenance models in warranty: a literature review. Eur. J. Oper.
Res. 229(3), 561–572 (2013)
3. Cardin, O.: Classification of cyber-physical production systems applications: proposition of
an analysis framework. Comput. Ind. 104, 11–21 (2019)
4. Khazraei, K., Deuse, J.: A strategic standpoint on maintenance taxonomy. J. Facil. Manag. 9,
96–113 (2011)
5. Verhagen, W.J.C., De Boer, L.W.M.: Predictive maintenance for aircraft components using
proportional hazard models. J. Ind. Inf. Integr. 12, 23–30 (2018)
6. Efthymiou, K., Papakostas, N., Mourtzis, D., Chryssolouris, G.: On a predictive maintenance
platform for production systems. Procedia CIRP 3, 221–226 (2012)
7. Nguyen, A.-T., Reiter, S., Rigo, P.: A review on simulation-based optimization methods
applied to building performance analysis. Appl. Energy 113, 1043–1058 (2014)
8. Zhu, L., Yu, F.R., Wang, Y., Ning, B., Tang, T.: Big data analytics in intelligent transportation
systems: a survey. IEEE Trans. Intell. Transp. Syst. 20(1), 383–398 (2018)
9. Rault, R., Trentesaux, D.: Artificial intelligence, autonomous systems and robotics: legal
innovations. In: Service Orientation in Holonic and Multi-Agent Manufacturing. Studies in
Computational Intelligence, pp. 1–9. Springer, Cham (2018)
10. Shcherbakov, M.V., Glotov, A.V., Cheremisinov, S.V.: Proactive and predictive maintenance
of cyber-physical systems. In: Kravets, A., Bolshakov, A., Shcherbakov, M. (eds.) Cyber-
Physical Systems: Advances in Design & Modelling. Studies in Systems, Decision and
Control, vol. 259. Springer, Cham (2019)
11. Chaudhuri, A.: Predictive maintenance for industrial IoT of vehicle fleets using hierarchical
modified fuzzy support vector machine. arXiv:1806.09612 [cs] (2018)
12. Lee, J., Azamfar, M., Singh, J., Siahpour, S.: Integration of digital twin and deep learning in
cyber-physical systems: towards smart manufacturing. IET Collab. Intell. Manuf. 2(1), 34–36
(2020)
13. Djurdjanovic, D., Lee, J., Ni, J.: Watchdog Agent – an infotronics-based prognostics approach
for product performance degradation assessment and prediction. Adv. Eng. Inf. 17, 109–125
(2003)
14. Raza, J., Liyanage, J.P., Al Atat, H., Lee, J.: A comparative study of maintenance data classi-
fication based on neural networks, logistic regression and support vector machines. J. Qual.
Maint. Eng. 16, 303–318 (2010)
15. Garcia, M.C., Sanz-Bobi, M.A., del Pico, J.: SIMAP: intelligent system for predictive main-
tenance: application to the health condition monitoring of a wind turbine gearbox. Comput.
Ind. 57, 552–568 (2006)
16. Niu, G., Li, H.: IETM centred intelligent maintenance system integrating fuzzy semantic
inference and data fusion. Microelectron. Reliabil. 75, 197–204 (2017)
17. Caesarendra, W., Widodo, A., Yang, B.-S.: Application of relevance vector machine and
logistic regression for machine degradation assessment. Mech. Syst. Signal Process. 24, 1161–
1171 (2010)
18. Krenek, J., Kuca, K., Blazek, P., Krejcar, O., Jun, D.: Application of artificial neural networks
in condition based predictive maintenance. In: Król, D., Madeyski, L., Nguyen, N.T. (eds.)
Recent Developments in Intelligent Information Database Systems, pp. 75–86. Springer,
Cham (2016)
19. Jegadeeshwaran, R., Sugumaran, V.: Fault diagnosis of automobile hydraulic brake system
using statistical features and support vector machines. Mech. Syst. Signal Process. 52–53,
436–446 (2015)
20. Mbuli, J., Nouiri, M., Trentesaux, D., Baert, D.: Root causes analysis and fault prediction in
intelligent transportation systems: coupling unsupervised and supervised learning techniques.
In: IEEE International Conference on Control, Automation and Diagnosis (ICCAD), pp. 1–6
(2019)
An Aggregated Digital Twin Solution
for Human-Robot Collaboration in Industry 4.0
Environments
Abstract. The digital twin is a powerful concept and is seen as a key enabler for
realizing the full potential of Cyber-Physical Production Systems within Industry
4.0. Industry 4.0 will strive to address various production challenges among which
is mass customization, where flexibility in manufacturing processes will be critical.
Human-robot collaboration – especially through the use of collaborative robots –
will be key in achieving the required flexibility, while maintaining high production
throughput and quality. This paper proposes an aggregated digital twin solution
for a collaborative work cell which employs a collaborative robot and human
workers. The architecture provides mechanisms to encapsulate and aggregate data
and functionality in a manner that reflects reality, thereby enabling the intelligent,
adaptive control of a collaborative robot.
1 Introduction
The fourth industrial revolution, characterized by the implementation of Cyber-Physical
Production Systems (CPPS) involving vast networks of cognitive, interconnected and
communicating devices, is bringing major changes in the manufacturing industry. The
increase in demand for customized products, along with the new capabilities brought
about by Industry 4.0 technologies, is causing a manufacturing paradigm shift from mass
production to mass customization.
The importance of Human-Robot Collaboration (HRC) is increasing, as it provides,
among other things, a means to attain the required manufacturing flexibility. HRC
offers the advantage of combining a robot’s speed, power, accuracy, repeatability and
insusceptibility to fatigue, with the agility, intelligence and perception of humans.
It is often perceived that the result of the fourth industrial revolution will be full
robot autonomy in ‘dark factories’. However, due to the lack of technology and high
complexity involved in automating intricate tasks, full robot autonomy, especially in
changing and unstructured environments, will remain out of reach for the foreseeable
future. Developing the building blocks of collaborative robot systems, which will enable
robots and humans to work in collaboration - building on each other’s strengths - is thus
of great interest [1]. This calls for the development of technologies enabling learning,
cooperating and coordinating machines [2, 3].
Conventional industrial robots lack the ability to collaborate with humans. On the
other hand, new-age robots – better known as collaborative robots or CoBots – are
designed to be intrinsically safe for operation alongside and in collaboration with human
workers within collaborative workstations. CoBots address three main challenges: safety,
rapid programmability, and flexible deployment and re-deployment.
CoBots achieve their collaborative capabilities by incorporating several safety fea-
tures, such as force and power limits, momentum limits, position limits and orientation
limits. On impact with a human or an object, most collaborative robots are designed to
stop moving immediately, or to move away from the point of impact. Although collab-
orative robots are designed with these inherent safety features, this does not mean the
collaborative application will be safe. For instance, a CoBot manipulating a sharp object
is unsafe whether the speed and impact force can be limited or not [4].
The development of CoBots has brought solutions to some of the problems that
plagued HRC implementation in the past. However, there are still numerous challenges
that need to be overcome before collaborative robots can be efficiently and effectively
implemented in HRC applications, to bring about real competitive advantage for man-
ufacturing businesses. These challenges include: addressing the relatively slow speed
of operation of collaborative robots; ensuring that collaborative workstations conform
to safety standards; maintaining robot effectiveness in chaotic environments filled with
uncertainty, and enabling real-time robot motion planning and control.
The objective of this research is to address some of these challenges through a
digital twin (DT) solution, which aims at enabling intelligent control of the robot to
achieve improvements in throughput, safety and efficiency. Presented in this paper is an
architecture for the implementation of this DT solution.
The first section of the paper presents background on HRC, the challenges faced when
trying to achieve high levels of collaboration, and CoBots and their shortfalls. Thereafter,
a discussion on DTs and the generic DT architecture on which the proposed DT solution
is based is presented. Finally, the aggregated DT solution for HRC is presented, along
with a discussion of the functionality of each DT involved.
2 Human-Robot Collaboration
2.1 Classification of HRC
The HRC research field is focused on finding solutions to problems involved in enabling
a human and robot to work together to achieve a common goal. Research into HRC is
motivated by the desire to achieve high levels of manufacturing flexibility [5]. As shown
in Table 1 and Fig. 1, HRC applications can be divided into categories according to the
level of collaboration between the robot and human. In Fig. 1 the orange zone represents
the human operators’ workspace; the green zone represents the robot’s workspace; and
the overlapping zone represents the shared workspace.
Although there have been significant advances in manufacturing HRC, most
collaborative applications today are constrained to level two and level three collaboration.
Level of collaboration        Same vicinity   Shared workspace   Shared time   Shared workpiece
1. Cell (fenced robot)        –               –                  –             –
2. Coexistence                x               –                  –             –
3. Sequential collaboration   x               x                  –             –
4. Co-operation               x               x                  x             –
5. Responsive collaboration   x               x                  x             x
(Fig. 1 plots the levels of collaboration against the complexity and need for inherent
safety features.)
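The classification of Table 1 can be encoded directly as data, each level adding one sharing dimension to the previous one; the helper below (a hypothetical utility, not from the paper) returns the highest level whose requirements are all met by the observed sharing attributes:

```python
# Order matters: each level adds one sharing dimension to the previous one.
LEVELS = [
    ("Cell (fenced robot)", set()),
    ("Coexistence", {"same_vicinity"}),
    ("Sequential collaboration", {"same_vicinity", "shared_workspace"}),
    ("Co-operation", {"same_vicinity", "shared_workspace", "shared_time"}),
    ("Responsive collaboration",
     {"same_vicinity", "shared_workspace", "shared_time", "shared_workpiece"}),
]

def collaboration_level(observed: set) -> tuple:
    """Return (level number, name) of the highest level whose required
    sharing attributes are all present in `observed`."""
    best = (1, LEVELS[0][0])
    for i, (name, required) in enumerate(LEVELS, start=1):
        if required <= observed:
            best = (i, name)
    return best

print(collaboration_level({"same_vicinity", "shared_workspace"}))
# → (3, 'Sequential collaboration')
```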
In fact, many robotic applications are still at level one, where the robot and human
are separated by a fence while the robot is operating. To achieve level four and level
five collaboration, more advanced technologies that enable intelligent control of the
robot are required. Many of the challenges involved in HRC become evident when
trying to achieve these levels of collaboration.
• Ensuring that the collaborative workstation is safe: Safety considerations are crucial
in collaborative workstations, where humans and robots work in close proximity.
138 A. J. Joseph et al.
It should be the first consideration when designing tasks for HRC. More research is
needed in order to enable safe and robust robot action - especially in environments
where unforeseen events are possible [7–9].
• Robotic interaction in the presence of uncertainty: For a robot to successfully
participate in an interaction with a human, it is necessary to obtain a thorough
understanding of the environment and the interaction dynamics. This includes knowledge of
the partnering agent, its internal state and the constraints that are imposed on the object
with which interaction needs to occur. Accurate models are necessary for capturing
the interaction dynamics [1, 8].
• Motion planning and control needs to occur in near real time to ensure natural
workflow: In an interactive setting, it is necessary to instantaneously generate
control commands so that the robot's actions meet the user requirements.
Classical sense – plan – act architectures are not sufficient [1].
• Effective use of multisensory data in real time is necessary: Many existing methods
for integrating sensor data are not adequate for accurately capturing the dynamics of
physical contact with rigid and deformable objects [1].
• Communication mediums, such as speech, gaze and gestures, need to be unambiguous
and interpretable by the robot controller algorithms: Both parties must be able to
convey instructions to each other and should be capable of referring to objects in a
shared workspace without confusion [1].
• Reproducing the effectiveness and flexibility of human hands remains an open
challenge: Robotic grippers still lack dexterity when compared to human hands [1].
CoBots are becoming prevalent in industry today, and they bring solutions to a few
of the problems listed above. Collaborative robots are founded on solving the most
important concern in HRC: safety. However, these safety improvements come at a cost:
limited payload size and limited speed of operation, and therefore limited throughput.
This cost is currently quite prohibitive and limits the use of CoBots to simple
automation tasks.
CoBots rely heavily on safety stops initiated on impact. Once a robot encounters
a safety stop, the operator usually needs to intervene to get the robot functional again.
Safety stops disrupt natural workflow and throughput in a collaborative workstation,
where obstructions can occur within the programmed robot path at any time.
Without augmentation, CoBots currently cannot be used to achieve high levels of
collaboration, since they simply do not address enough of the challenges in HRC
implementation. They do, however, provide a good starting point by incorporating the
safety features necessary to ensure that injury or damage is minimized if impact does occur.
By developing a means to intelligently control the collaborative robot in real-time, it is
possible to exploit the benefits that these robots bring, while increasing their efficiency,
effectiveness and range of applicability.
CoBot applications are not limited to SMEs; they can also offer productivity and ergonomic improvements for
larger enterprises that already have automated production lines. Applications of CoBots
include: assisting to carry heavy tools, fetching parts, feeding machines and performing
quality inspections [4].
Human-robot collaboration has been identified to be best suited to manufacturing
operations involving high product variance and low production volume [10]. As shown
in Fig. 2, CoBots fill the gap between manual assembly, and robotic automation [10].
Fig. 2. Economically justified operational regions for manual assembly, HRC, robotic automation
and fixed automation [11]
One of the concepts growing in popularity as an Industry 4.0 driver is the DT. A DT
is defined in [12] as a “multiphysics, multiscale, probabilistic simulation” of a system,
that utilizes the most accurate physical models and sensory data to mirror the life of the
physical entity which it attempts to ‘twin’; i.e. DTs gather real-time sensor data from
multiple entities and effectively aggregate the data to produce a digital replica of the
physical entities.
The primary goal of DTs in the manufacturing industry is to optimize the entire
production system by enabling the real-time integration of simulation data and sensory
data [6]. This integration opens the door to real-time monitoring and re-planning of
production activities to ensure that actions performed are always the most efficient ones
from a business and operation perspective. Using DTs, complex problems can be solved
through sense – predict/perceive – plan – act architectures, rather than plain
sense – plan – act architectures. Some of the roles of DTs presented in the literature
include: remote monitoring, predictive analytics, simulating future behaviour,
optimization and validation [13].
A reference architecture for a single DT instance, called the Six-Layer Architecture
for Digital Twins (SLADT), has been proposed in [13]. This architecture has been
expanded in [14] to accommodate the aggregation of multiple DT instances. The expanded
architecture is called the Six-Layer Architecture for Digital Twins with Aggregation
(SLADTA). SLADT and SLADTA are illustrated in Fig. 3. The various layers of SLADT are
characterized as follows [13]:
• Layer 1 and Layer 2: The physical twin is encompassed in these two layers. It
consists of the entire physical twin, along with the various sensors and data sources
which measure and provide the actual state of the physical twin to the higher levels.
• Layer 3: This layer consists of the local data repositories such as databases, stored near
the physical twin. It is recommended that vendor neutral data servers communicating
through secure, reliable and widely used communication protocols, such as OPC UA,
be used. The data servers should also be able to communicate with an IoT gateway,
or directly with the cloud if applicable.
• Layer 4: This layer is an IoT gateway, which serves to convert data into information
before uploading it to cloud services. It also serves to manage communication between
the cloud and the local data repositories, and between DTs.
• Layer 5: This layer represents cloud-based information repositories which serve to
enhance availability, connectedness and accessibility of the DT.
• Layer 6: This is an emulation and simulation environment which adds intelligence
to the DT. The actual functionality of this layer depends on the use-case. This layer
is connected to the local data repositories and cloud-based information repositories.
Fig. 3. (a) Six-Layer Architecture for Digital Twins, (b) Six-Layer Architecture for Digital Twins
with Aggregation [14]
SLADTA, shown in Fig. 3b, aims to enable the creation of DTs of multi-system
environments through the aggregation of information from various DTs. This is
particularly beneficial when the system to be twinned is composed of many different
components, possibly from different manufacturers. Methods and protocols for
communication between the various DTs are presented in [14]. The key SLADT layers
involved in aggregation are layers three and four. The DTs are connected with one
another in a hierarchical manner through their layer three or layer four, and OPC UA
is suggested for implementing these connections. The flow of information between the
DTs is controlled in layer four (a custom-built IoT gateway).
The benefit of utilizing an aggregated DT model is that data is reduced as it travels
up towards the higher-level DTs. This ensures that only valuable information arrives at
the highest DT levels, where value-creating decisions are made based on the goal of the
whole system. Some other key benefits of DT aggregation are listed in [13].
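A minimal sketch of this aggregation idea (class and field names are invented for illustration): each DT keeps its raw samples in its layer-three repository, and its layer-four gateway passes only condensed information upward to the parent DT:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DigitalTwin:
    """Sketch of one SLADTA instance: layer 3 holds raw local data, and
    layer 4 (the IoT gateway) condenses it into information for the parent."""
    name: str
    layer3: list = field(default_factory=list)   # local data repository
    children: list = field(default_factory=list)

    def gateway_summary(self) -> dict:
        """Layer 4: reduce raw samples to information before sending upward."""
        own = {"samples": len(self.layer3),
               "mean": mean(self.layer3) if self.layer3 else None}
        return {"twin": self.name, "own": own,
                "children": [c.gateway_summary() for c in self.children]}

robot = DigitalTwin("cobot", layer3=[0.12, 0.15, 0.11])
human = DigitalTwin("human", layer3=[1.0, 0.9])
cell = DigitalTwin("work_cell", children=[robot, human])
summary = cell.gateway_summary()
# Raw data stays in each child's layer 3; only counts and means travel up.
print(summary["children"][0]["own"]["samples"])  # → 3
```

In a real deployment the layer-3 stores would be OPC UA servers or databases rather than Python lists, but the data-reduction pattern on the way up the hierarchy is the same.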
(Fig. 3 legend: 1: sensors; 2: controllers and data acquisition devices; 3: local data
repositories; 4: IoT gateway; 5: cloud repositories; 6: emulation and simulation. In
Fig. 3b, shop-floor digital twins, such as that of the CoBot controller, connect upward
through their layers three and four.)
streaming from the CoBot. Motion planning could also be done by the collaborative work
cell DT; this might even be preferred, since that DT contains information about the
entire work cell. However, to ensure proper encapsulation of data specific to the CoBot,
it is suggested that robot motion planning remain a function of the robot DT, while
information such as safe zones for motion is obtained from the collaborative work cell DT.
• The human body and actions in 3D through commercial motion capture devices such
as the Microsoft Kinect depth camera [15, 17];
• 3D location of a human in a workspace through RFID [18];
• Human eye gaze and target using commercial eye trackers such as Tobii X2-30 [19];
Open source algorithms, such as the following, are also readily available today:
BlazePalm [20] for real-time hand/palm tracking and gesture recognition, and PoseNet
[21] for estimating the pose of a human in an image or video.
Layer one of the proposed human DT (see Fig. 4) is composed of the sensors necessary
to track the human operator's pose, position and heading in the work cell. The primary
sensor is a camera array that can be shared between the human DT and the workspace
DT. The raw data is gathered by the data acquisition device in layer two and analysed
using algorithms in layer six to determine various types of information about the human,
such as the predicted path of motion with confidence levels.
the form of a depth map. Fine details of the environment may not be of importance. To
reduce computation time, workspace change detection should be employed instead of
constantly re-computing the entire workspace map.
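A toy illustration of such change detection (grid size, threshold and depth values are invented for the example): only the cells whose depth changed beyond a noise threshold are flagged for re-computation, rather than rebuilding the whole map:

```python
def changed_cells(prev, curr, threshold=0.05):
    """Compare two depth maps (2D lists of distances in metres) and return
    the coordinates of cells whose depth changed by more than `threshold`,
    so only those regions of the workspace map need re-computation."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, depth in enumerate(row)
            if abs(depth - prev[r][c]) > threshold]

prev_map = [[2.0, 2.0, 2.0],
            [2.0, 2.0, 2.0]]
curr_map = [[2.0, 1.2, 2.0],   # an object appeared in front of cell (0, 1)
            [2.0, 2.0, 2.03]]  # (1, 2) is within sensor noise, so it is ignored
print(changed_cells(prev_map, curr_map))  # → [(0, 1)]
```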
It is evident from the literature that camera-based monitoring is a popular method for
monitoring shared workspaces. Ways to achieve a 3D representation of a workspace
include the use of a set of stereo vision cameras [22]; depth cameras, such as a
Microsoft Kinect [23]; or the combination of sensor data from multiple 3D sensors of
different modalities [24].
One interesting idea for developing the 3D map described previously is to start
with a 3D CAD model of the collaborative workstation - which represents most of
the components in the workspace - and then complete the model using data from a
stereo vision camera array, by dynamically mapping any additions and changes in the
environment.
Layer one of the proposed workspace DT (see Fig. 4) is composed of the sensors
necessary to detect the state of the static and dynamic work environment. The primary
sensor is a camera array that is shared between the human DT and the workspace DT. To
improve confidence in the camera data, extra sensors can also be employed; for instance,
to track continuously moving objects in the workspace, such as objects on a conveyor
belt. The raw data is gathered by the data acquisition device in layer two. Algorithms
in layer six use the raw data from the various sensors to create the required workspace
map.
it needs to perform. These tasks could include motion planning, power optimization and
payload gripping location determination.
This is the highest-level DT, formed by the aggregation of the collaborative robot DT,
the human DT and the workspace DT. It contains (or has access to) all the information
necessary to make business and operation critical decisions. This makes any visualization
of the system information best done within this DT.
The primary goal of this DT is to monitor and identify any unsafe or suboptimal
conditions in the work cell from a business and operation level, and then to inform DTs
controlling the affected process of any changes required to ensure conditions are optimal.
These decisions are made through simulations in layer six, which can provide the needed
capabilities using software such as Siemens Tecnomatix, Simio or AnyLogic.
In the task planning stage, simulations can be used for path, activity, and workspace
planning to optimize parameters such as power consumption and human and robot motion
distances. Once optimal parameters have been obtained, the robot can be programmed
to comply with these parameters. Once the robot program is live, the collaborative work
cell DT will be continuously updated with the state of the robot, human and workspace
through their respective DTs. Within layer six, this real-time information can be used
in various ways, for instance to calculate the safe zone for robot motion in the form
of a free-space map, i.e. a model indicating the space within the work cell which is
unoccupied by any other entity and is available for the robot to safely navigate through.
The free-space map can then be used to continually check that the robot is not currently
moving, and is not expected to move in the near future, within any unsafe zone. If a
violation is detected, the collaborative work cell DT informs the CoBot DT to generate
a new robot motion path within the safe zone. If the time to collision is less than the
time needed to generate a new motion path, the CoBot DT is instructed to stop until some
condition is met, or to move to a safe position until the original path is clear. The
work cell DT can also be used to inform the operator of the robot's intended motion and
possible collisions.
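This decision logic can be sketched as follows (the grid cells, timings and function name are hypothetical): the planned path is checked against the free-space map, and the response depends on whether a new path can be generated before impact:

```python
def plan_response(path, free_space, time_to_collision, replan_time):
    """Check a planned robot path against the free-space map maintained by
    the work cell DT, and decide how the CoBot DT should react."""
    blocked = [cell for cell in path if cell not in free_space]
    if not blocked:
        return "continue"
    # Only replan if a new path can be computed before impact would occur.
    if time_to_collision > replan_time:
        return "replan"
    return "stop_or_retreat"

free = {(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)}          # unoccupied cells
path = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2)]
print(plan_response(path, free, 1.5, 0.4))               # → continue
print(plan_response(path + [(3, 2)], free, 1.5, 0.4))    # → replan
print(plan_response(path + [(3, 2)], free, 0.2, 0.4))    # → stop_or_retreat
```

A real implementation would work on a continuously updated occupancy grid and predicted human trajectories, but the branch structure mirrors the behaviour described above.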
CoBots offer solutions to some of the challenges associated with HRC. However, the
improvements they bring come at a cost; one cost is throughput. A DT solution has the
potential to address some of the shortfalls of CoBots by enabling intelligent control of
the CoBot and the collaborative work cell.
This paper first establishes the need for intelligent control of a CoBot, and then
presents a DT solution for enabling its intelligent adaptive control. The primary goal
of the DT is to improve CoBots’ safety, efficiency and effectiveness. The proposed
architecture aggregates: a collaborative robot DT, a human operator DT, and a workspace
DT. The value to be produced by each DT, as well as possible technologies that can be
used to create each DT, are also briefly discussed. Future work involves a detailed
requirements analysis for each DT, followed by the implementation of the proposed DT
solution in an industrial case study, which will be used to evaluate the performance of
the DT solution with respect to improving the safety, throughput and efficiency of the
collaborative robot.
References
1. Kragic, D., Gustafson, J., Karaoguz, H., Jensfelt, P., Krug, R.: Interactive, collaborative robots:
challenges and opportunities. In: International Joint Conference on Artificial Intelligence,
pp. 18–25 (2018)
2. Oztemel, E., Gursev, S.: Literature review of Industry 4.0 and related technologies. J. Intell.
Manuf. 31, 127–182 (2018)
3. Bauer, A., Wollherr, D., Buss, M.: Human-robot collaboration: a survey. Int. J. Humanoid
Rob. 5(1), 47–66 (2008)
4. International Federation of Robotics: Demystifying collaborative industrial robots (2018)
5. Krüger, J., Lien, T.K., Verl, A.: Cooperation of human and machines in assembly lines. CIRP
Ann. Manuf. Technol. 58(2), 628–646 (2009)
6. Bauer, W., Bender, M., Braun, M., Rally, P., Scholtz, O.: Lightweight robots in manual
assembly - best to start simply!, Fraunhofer IAO (2016)
7. Malik, A.A., Bilberg, A.: Framework to implement collaborative robots in manual assembly:
a lean automation approach. In: Proceedings of 28th International DAAAM Symposium 2017
(2018)
8. Kulić, D., Croft, E.A.: Safe planning for human-robot interaction. J. Rob. Syst. 22(7), 383–396
(2005)
9. Masinga, P., Campbell, H., Trimble, J.A.: A framework for human collaborative robots, oper-
ations in South African automotive industry. In: IEEE International Conference on Industrial
Engineering and Engineering Management, 2015, pp. 1494–1497 (2015)
10. Djuric, A.M., Rickli, J.L., Urbanic, R.J.: A framework for collaborative robot integration in
advanced manufacturing systems. SAE Int. J. Mat. Manuf. 9(2), 457–464 (2016)
11. Bjorn, M.: Industrial Safety Requirements for Collaborative Robots and Applications (2014)
12. Glaessgen, E., Stargel, D.: The digital twin paradigm for future NASA and U.S. Air Force
vehicles. In: 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials
Conference, pp. 1–14 (2012)
13. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for the digital twin:
a manufacturing case study implementation. J. Intell. Manuf. 31(6), 1383–1402 (2019)
14. Redelinghuys, A.J.H., Basson, A.H., Kruger, K.: A six-layer architecture for digital twins
with aggregation. In: Studies in Computational Intelligence, vol. 853. Springer, Cham (2019)
15. Mohammed, A., Schmidt, B., Wang, L.: Active collision avoidance for human–robot
collaboration driven by vision sensors. Int. J. Comput. Integr. Manuf. 30(9), 970–980 (2016)
16. Sparrow, D., Kruger, K., Basson, A.: Human digital twin for integrating human workers
in industry 4.0. In: International Conference on Competitive Manufacturing (COMA 2019)
(2019)
17. Bortolini, M., Faccio, M., Gamberi, M., Pilati, F.: Motion analysis system (MAS) for pro-
duction and ergonomics assessment in the manufacturing processes. Comput. Ind. Eng. 139,
105485 (2020)
18. Ko, C.H.: RFID 3D location sensing algorithms. Autom. Constr. 19(5), 588–595 (2010)
19. Clemotte, A., Velasco, M., Torricelli, D., Raya, R., Ceres, R.: Accuracy and precision of
the tobii X2-30 eye-tracking under non ideal conditions. In: Proceedings of 2nd Interna-
tional Congress on Neurotechnology, Electronics and Informatics (NEUROTECHNIX-2014),
pp. 111–116 (2014)
20. Bazarevsky, V., Zhang, F.: On-Device, Real-Time Hand Tracking with MediaPipe, Google AI
Blog, 2019 (2019). https://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-
with.html
21. TensorFlow: Pose estimation. https://www.tensorflow.org/lite/models/pose_estimation/overview
22. Bosch, J.J., Klett, F.: Safe and flexible human-robot cooperation in industrial applications. In:
2010 International Conference on Computer Information Systems and Industrial Management
Applications (2010)
23. Flacco, F., Kroeger, T., De Luca, A., Khatib, O.: A depth space approach for evaluating
distance to objects with application to human-robot collision avoidance. J. Intell. Rob. Syst.
Theory Appl. 80, 7–22 (2015)
24. Rybski, P., Anderson-Sprecher, P., Huber, D., Simmons, R.: Sensor fusion for human safety
in industrial workcells. In: IEEE International Conference on Intelligent Robots and Systems,
pp. 3612–3619 (2012)
Holonic and Multi-agent Process
Control
Ten years of SOHOMA Workshop Proceedings:
A Bibliometric Analysis and Leading Trends
damien.trentesaux@uphf.fr
1 Introduction
Since the 2000s, the world has undergone an industrial, economic and social transformation
that has changed the way we interact with each other. In the industrial context, this
transformation has been recognized as a new technological revolution that leverages the
productivity, competitiveness and efficiency of firms through the use of technological
advances [1, 2]. This new revolution, named 'Industry of the Future' (IoF) or
Industry 4.0 [3], envisions a future of smart products, processes and procedures,
strongly related to the use of the Internet of Things/Services and Cyber-Physical Systems,
and seeks to improve operations in areas such as manufacturing, logistics, energy, health
and transport, among others [4]. Certainly, the incorporation of new technologies into
industry requires new operating characteristics capable of putting the new advances at
the service of industrial objectives. In this respect, new concepts, methods, solutions,
proofs of concept and implementations were needed to support the adoption of new
technological advances towards the IoF.
A considerable amount of literature has been published on the Industry of the Future.
Interested readers can refer to the following works for a detailed overview [5–7]. These studies portray the characteristics of the IoF concept and categorize the technological enablers that make the desired implementation possible. Some
examples of these enablers are Internet of Things (IoT), Cyber-Physical Systems (CPS),
visualization technologies, cloud computing (CC), cyber-security, modelling and sim-
ulation, machine learning (ML), distributed systems, data analytics, advanced robotics.
Still, among these enablers, one of the most important requirements for heightening the benefits of the IoF is an orchestrating technology that coordinates and synchronizes them towards the expected objectives. Several paradigms ease this orchestration, notably service-orientation and the holonic and multi-agent paradigms. Service-orientation is a paradigm for organizing and utilizing distributed capabilities that, held under different ownerships, are modularized into services that solve or support business operations according to specific requirements [8, 9]. The holonic paradigm deals with the design of organizational structures composed of autonomous and cooperative elements - the holons - with recursive properties, which collectively form an entire system and interact to achieve a common goal [10, 11]. The multi-agent paradigm concerns the design of autonomous decision makers - the agents - which, communicating with each other under prescribed rules, jointly solve a problem in a distributed manner
[12, 13]. In conclusion, these paradigms have generated powerful platforms to con-
trol and pilot the IoF technologies, and play a crucial role in the development of the
technological revolution.
Considering the stated needs for the Industry of the Future, and under the initiative
of the FP7 EU project ERRIC (grant agreement ID 264207) aiming to foster in Roma-
nia and other EU countries the development of Intelligent Information Technologies,
it was decided to launch in 2011 the international workshop on Service Orientation in
Holonic and Multi-agent Manufacturing Control, a scientific event that brings together high-level researchers and practitioners to present and discuss their contributions on
subjects associated with service-oriented, agent-based technologies for holonic manu-
facturing control and management in manufacturing enterprises, and for agile production
considering the factory and the product lifecycle [14]. The first workshop, which opened
in 2011 at ENSAM Paris the series of annual SOHOMA events, was organized by the
University Politehnica of Bucharest (Romania), in collaboration with the Universities
of Valenciennes and Nancy (France) and the research group IMS2 of the GDR MACS
scientific coordination structure of the CNRS in France. Following the positive impact
of this initiative in the international research community working in the area of intel-
ligent manufacturing control, SOHOMA workshops were replicated annually in major
university centres in Europe, gathering the most representative scientists from academia
Ten years of SOHOMA Workshop Proceedings 153
other attributes in the data. The authors think that this approach will identify the upcoming topics for future research in digital manufacturing systems and Industry 4.0.
The remainder of the paper is structured as follows. Section 2 briefly presents the research methodology, including the bibliometric analysis and the text mining approach for the descriptive and leading trends, respectively. Section 3 presents the results of the bibliometric analysis, including performance metrics such as the number of publications, number of citations, most active authors, etc. Section 4 explores the trends and patterns identified in the publications and analyses these findings. Finally, Sect. 5 summarizes the main findings from the articles in the SOHOMA proceedings.
2 Research Methodology
Bibliometrics is a research field that studies bibliographic material based on the structured and unstructured information retrieved from a set of publications [23]. In general, it is the application of a set of mathematical and statistical techniques to scientific and technical activity in order to measure the contributions within a specific domain [24]. For the SOHOMA workshop 10-year series, there are 274 papers in the proceedings published in the Springer book series ‘Studies in Computational Intelligence’. For this bibliometric study, two distinct parts were identified for each paper - the metadata and the paper content. The metadata is the structured data attached to a document file that describes information about the paper such as title, authors, affiliations, dates, keywords, abstract, etc. The paper content is the unstructured data of the paper itself, specifically from the first word of the first section to the last word of the last section. Even though the paper content might be assumed to be structured data, the length and irregular organization of each paper led this study to treat it as unstructured information. In this section, the research methodology used to analyze the set of publications is explained for both the metadata and the paper content.
The objective of the research methodology of this study was to provide an informative overview of the bibliographic material and to identify patterns or generalizations in the SOHOMA proceedings publications. Admittedly, the resulting information may differ depending on the interpretation considered - each reader can interpret the results according to his or her interest. Still, this study adopted a specifically quantitative perspective in order to gain deeper insights into the impact/trend topics and to maintain the scientific consistency and validity of the available data.
Figure 1 illustrates the research methodology that was used; it consisted of eight phases. Firstly, in the manuscript collection phase, the paper content was obtained as a PDF file by extracting each paper from each SOHOMA book file, while the metadata was assembled by joining the BibTeX record from the DBLP computer science database of the Schloss Dagstuhl Leibniz Center for Informatics1 with detailed information retrieved from the Mendeley reference manager and the citations of each paper from Google Scholar. Unfortunately, even though the Scopus repository was consulted for data collection, some books and manuscripts were not yet indexed in it.
1 Database can be consulted at: https://dblp.org/db/conf/sohoma/.
Fig. 1. The 8-phase process followed in this study to deploy the research methodology
Secondly, the data cleansing phase concerns the paper content: the data was converted to a .txt file, unnecessary elements such as book headers, footnotes and page numbers were erased using a Java program, and the paper content was trimmed from the first word of the introduction to the last word of the conclusions. Thirdly, in the file consolidation phase the paper content was combined into a .csv file containing an identification code and the text of the paper content; a second .csv file was created for the metadata with the following fields: identification code, year, title, abstract, authors, affiliations, countries, keywords and Google Scholar citations. Fourthly, in the importing-files phase, both .csv files were imported into the text mining software VantagePoint, which offers correlations, autocorrelations and cross-correlations among data, along with other text mining analyses.
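The cleansing rules of phase 2 (originally implemented in Java) translate directly into a short script. The sketch below is a Python approximation; the header pattern and the section marker it looks for are illustrative assumptions, not the authors' actual code.

```python
import re

def clean_paper_text(raw):
    """Strip running headers and page numbers, then trim the body so
    that it starts at the introduction (illustrative patterns only)."""
    lines = []
    for line in raw.splitlines():
        # Drop lines that are only a page number.
        if re.fullmatch(r"\s*\d+\s*", line):
            continue
        # Drop running headers ending in a page number (assumed pattern).
        if re.search(r"SOHOMA Workshop Proceedings\s+\d+$", line):
            continue
        lines.append(line)
    text = "\n".join(lines)
    # Trim everything before the first word of the introduction.
    m = re.search(r"\b1\s+Introduction\b", text)
    if m:
        text = text[m.end():]
    return text.strip()
```

The same approach extends to footnotes and book headers by adding further patterns; trimming at the conclusions' end would mirror the introduction rule.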
Fifthly, in the record refinement phase, the VantagePoint software was used to fuse the metadata and paper content fields, clean the records, remove duplications, normalize the data fields and refine the fields with a natural language processing technique. Sixthly, the data analysis phase was divided into two approaches: i) the metadata and paper content were analysed with the VantagePoint software to gather overview information from both databanks; ii) the paper content alone was analysed with a text mining technique using Python 3.6.8 libraries to extract more detailed information. This analysis started with the extraction of the lemma and part of speech of the paper content, using the spaCy 2.0 library to identify lemmas whose part of speech is a noun or a verb. After applying this procedure to each proceedings volume (one per year), a term frequency/inverse document frequency (TF-IDF) technique was applied to evaluate the importance of each term within and between years. Seventhly, the reports retrieving phase examined, extracted and interpreted the information to derive conclusions from both approaches - the VantagePoint analysis and the Python text mining technique.
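The per-year TF-IDF weighting of phase 6 can be sketched without external libraries by treating each year's concatenated proceedings text as one document of the collection; tokenization is deliberately simplified here (the study itself used spaCy lemmas restricted to nouns and verbs).

```python
import math
from collections import Counter

def tfidf_by_year(corpora):
    """corpora: {year: concatenated proceedings text of that year}.
    Returns {year: {term: TF-IDF weight}}."""
    tokenized = {year: text.lower().split() for year, text in corpora.items()}
    n_docs = len(tokenized)
    # Document frequency: in how many yearly volumes does the term appear?
    df = Counter()
    for tokens in tokenized.values():
        df.update(set(tokens))
    scores = {}
    for year, tokens in tokenized.items():
        tf = Counter(tokens)
        total = len(tokens)
        scores[year] = {term: (count / total) * math.log(n_docs / df[term])
                        for term, count in tf.items()}
    return scores
```

A term that occurs in every yearly volume gets weight 0 (its inverse document frequency vanishes), so the highest-weighted terms are those characteristic of a particular year - exactly what a between-years trend analysis needs.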
156 J. F. Jimenez et al.
Finally, in the data visualization phase the extracted reports were organized so as to be presented and communicated clearly and efficiently to the reader. The bibliographic material was then plotted using the VOSviewer software [25] to visualize the bibliometric network.
This section presents the results of the quantitative bibliometric analysis. The study analyses the publication and citation structures, including the most productive and influential authors, institutions and countries. As specified, the SOHOMA proceedings series has published 274 articles (book chapters), the first publication date being February 2012. As of April 2020, the published papers had received 1359 citations. On average, 30.4 papers have been published per year, rising to 35.5 papers per year over the last four editions, with an average of 151 citations per edition. The average number of citations per paper was 4.96, and 90 publications obtained a number of citations above this average. In total, 206 publications have received at least 1 citation, which represents 75.2% of the publications from the past workshop editions. The complete annual citation structure and the descriptive analysis of the publications and citations are included in Table 1. The number of publications shows a slight growing trend, while the number of citations has decreased in the last three years; consequently, the ratio between citations and publications shows a clear decreasing trend after 2013. The year 2012 was particularly unusual: although it had the smallest number of published papers, those publications obtained the highest number of citations. Regarding the number of citations per published article, it should be noted that a little more than half of the articles have at least two citations.
Table 1. Descriptive analysis of publication and citations, and annual citation structure
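The descriptive figures reported above (averages, share of cited papers, papers above the mean) follow from elementary computations over a citations-per-paper list; the sketch below uses illustrative dummy counts, not the actual SOHOMA data.

```python
def citation_stats(citations):
    """citations: list of citation counts, one entry per paper."""
    n = len(citations)
    total = sum(citations)
    mean = total / n
    cited_at_least_once = sum(1 for c in citations if c >= 1)
    above_mean = sum(1 for c in citations if c > mean)
    return {
        "papers": n,
        "citations": total,
        "mean_citations_per_paper": round(mean, 2),
        "share_cited_percent": round(100 * cited_at_least_once / n, 1),
        "papers_above_mean": above_mean,
    }
```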
The SOHOMA workshops cover an entire spectrum of technology enablers for the
digital transformation of manufacturing in the Industry 4.0 vision of the future. In this
respect, it is interesting to review the keywords and title terms to identify significant
research topics of the publications from the last decade.
The visualization network, which resulted in 58 terms, was created under the asso-
ciation strength technique with attraction parameter 2 and repulsion parameter −2. This
technique, which normalizes the strength of the links between terms, illustrates the
strength of the connection between terms with the link thickness and the number of
repetitions of the term with the node size. In addition, the figure presents the coloured
clustering used in the network, using 12 keywords as a minimum number of terms per
cluster. The results from the visualization network reveal a cluster of papers on manufac-
turing problems that holistically analyze the problem through holonic and multi-agent
approaches. This finding may be expected due to the specific research objectives of
the SOHOMA community, but an additional finding is that several papers include also
optimization within the main approaches. A second cluster from the visualization is
related to production and internet of things, which allows developing product-driven,
distributed manufacturing systems. Finally, a third cluster is related to Industry 4.0
and cyber-physical systems, which are topics that apply directly to manufacturing and
logistics areas and the, being also rising topics within the industry of the future.
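One common formulation of the association-strength normalization applied by VOSviewer [25] divides the co-occurrence count of two terms by the product of their total occurrence counts; a minimal sketch (data structures and names are our illustrative assumptions):

```python
def association_strength(cooc, occ):
    """cooc: {(term_a, term_b): co-occurrence count};
    occ: {term: total occurrence count}.
    Returns the normalized link strength for each term pair,
    i.e. the quantity rendered as link thickness in the network."""
    return {(a, b): count / (occ[a] * occ[b])
            for (a, b), count in cooc.items()}
```

Frequent terms thus need proportionally more co-occurrences to produce a thick link, which keeps the network from being dominated by generic vocabulary.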
The results of the co-occurrence analysis demonstrate that the key topics extracted
from the authors’ keywords are classical technical terms that describe the domain of smart
manufacturing and extend it in the IC2T context with subjects such as multi-agent systems
Table 2 presents the most cited papers from the SOHOMA workshop series. The most cited paper, published by Montreuil, Meller and Ballot, has 81 citations [26]. This paper and two other papers from the list (ranked 2nd and 5th) are related to the concept and implementation details of the Physical Internet initiative. This topic is an example of a positioning paper that was published and presented at SOHOMA events; it establishes and disseminates the foundations of the trending concept of the Physical Internet
- an open global logistics system based on physical, digital and operational intercon-
nectivity through encapsulation, interfaces and protocols - a natural consequence of
the digital transformation revolution [27]. The Physical Internet has been recognized by the European technology platform ALICE as a key driver for the deployment of logistics and supply chain management in Europe [28]. Besides this initiative, other topics extracted from these top 15 cited papers relate to big data, cloud computing and knowledge-based technologies; human-centred and/or human-in-the-loop systems; collaborative, self-organizing and sustainable intelligent systems; and self-aware, active and intelligent products.
Table 2. The 15 most cited papers, authors and year of the SOHOMA workshop
is the layer with distributed intelligence that assures the convergence of information- and
operation-technologies (OT, IT) for the intelligent control tasks to be performed in the
CPS. Lastly, the increasing tendency of ‘physical internet’ is expected, as the foundations of this concept were first published in the SOHOMA 2012 proceedings book and implementation solutions were then constantly developed [26]; this evolution exemplifies the SOHOMA strategy of promoting important research, development and innovation lines for the manufacturing value chain (MVC) of the future by publishing prospective papers in the field.
control, although a shift in time from papers describing closed, self-determining struc-
tures to papers describing open, networked structures that share resources, services and
infrastructures took place. This shift and considerations of business objectives (market
dynamics, variability of products, servitisation and after-sales services, customer orien-
tation) came with significant contributions of SOHOMA researchers to develop novel
technologies for global industries and bring closer the factory and product lifecycles.
Concerning the decreasing trends in Fig. 5, some basic terms from the ‘concept’ category - holonic and agent-based - and some other terms derived from these basic ones - centralized, reconfiguration, synchronization and negotiation - feature reduced appearance throughout the SOHOMA book series. The explanation is that the first two terms are mostly associated with early reported designs of manufacturing control systems, principally inspired by the PROSA holonic reference architecture extended with a semi-heterarchical topology, while the four derived terms represent the main features and operating modes of this topology. Two things happened: first, the ‘holonic’ concept was sufficiently well transposed into standard control architectures by the end of 2015; second, it looked as if holonic research produced robust, auto-configuring and self-organizing solutions, but sustainable, broadly-scoped, optimized (or even guaranteed) performance remained out of reach for industry. These two facts led researchers to focus - beyond the ‘holonic’ and ‘agent-based’ concepts - on solutions that close the gap between process control and shop floor reality, provide energy-awareness, and plan production and allocate resources based not only on history but also on the prediction of behaviours and quality of services. Finally, the terms ‘robustness’ and ‘agility’ appear less frequently because they were the main attributes of distributing intelligence in multi-agent frameworks such as dMAS [30]; over time they were replaced by ‘reality awareness’, ‘high availability’, ‘resilience’ and ‘responsiveness’ [29, 31, 32].
Figure 6 illustrates a set of terms that exhibit either a positive, steady or negative trend through the paper content of the SOHOMA publications, specifically within the more generic categories ‘method’, ‘tool/device’, ‘problem-to-solve’ and ‘event-during-execution’. For this set of terms, the trend is balanced between increasing, decreasing and steady tendencies across the analysed publications. The detailed terms that increase the occurrence of the generic categories ‘method’, ‘tool/device’ and ‘problem-to-solve’ are: algorithm, decision, predictive, machine, logistics, production, factory and manufacturing. While the terms ‘machine’, ‘logistics’, ‘production’, ‘factory’ and ‘manufacturing’ consolidate the workshops’ research objectives in manufacturing and logistics systems, one can notice the increasing use of the terms ‘algorithm’, ‘decision’ and ‘predictive’, which indicates the recent orientation of SOHOMA research towards intelligent decision making in manufacturing planning, control and maintenance based on Artificial Intelligence (AI) techniques, machine learning algorithms and data science tools for prediction, classification, clustering and anomaly detection [33].
Even though the SOHOMA workshops focus on the concepts, methods, solutions, proofs of concept and implementations for manufacturing systems in the ‘industry of the future’ vision, (intelligent) decision-making represents the final stage of data-driven manufacturing control and management of the contextual enterprise, because it effectively uses the large amount of data about production, processes, products and customer demands obtained through pervasive device instrumentation and digital marketing, and
the papers. The entire set of terms was trimmed considering only terms that have at least
8 records of co-occurrence with another term.
Fig. 7. Symmetrical correlation matrix between terms in the SOHOMA paper content
Figure 7 is analysed from two points of view: the collection of terms that gather in a cloud of positive correlations, and the negative correlations seen in a few couplings of terms from the records. Both consider only the upper triangular part of the matrix, as it is symmetrical. From the first point of view, three clusters of terms have been identified according to the correlation analysis. First, a cluster A (bottom-left) can be identified as a natural clustering group of the papers describing the Physical Internet initiative, gathering terms such as ‘logistics’, ‘robustness’, ‘PI-container’, ‘encapsulation’ and ‘PI-Hub’. Second, a cluster B (matrix centre), which is the biggest clustering of terms, can be identified as the main topic of the SOHOMA workshops, with terms associated with the ‘service-oriented’, ‘holonic’ and ‘multi-agent’ paradigms; ‘cyber-physical’ and ‘digital twin’ technologies; and topics referring to solutions and applications - ‘component’, ‘structure’
Term pairs (First Term ↔ Second Term): Manufacturing ↔ Pi-Hub; Manufacturing ↔ Encapsulation; Manufacturing ↔ Physical Int.; Operations ↔ Robustness; Architecture ↔ Logistics; Architecture ↔ Pi-Hub; Digital Twin ↔ Robustness; Digital Twin ↔ Encapsulation; Supply Chain ↔ Robots; Machines ↔ Pi-Hub; Case Study ↔ Interoperability; Pi-Hub ↔ Interoperability; Multi Agent ↔ PI-Containers; CPS ↔ PI-Hub
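A correlation matrix such as the one in Fig. 7 can be reproduced by correlating, across the paper set, the occurrence vectors of each pair of terms and keeping only the upper triangle; below is a stdlib-only Pearson sketch (the study's exact computation inside VantagePoint may differ).

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient of two equally long sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def term_correlations(term_counts):
    """term_counts: {term: [occurrences in paper 1, in paper 2, ...]}.
    Returns only the upper triangle of the symmetric correlation
    matrix, as a {(term_a, term_b): r} mapping."""
    terms = sorted(term_counts)
    return {(a, b): pearson(term_counts[a], term_counts[b])
            for i, a in enumerate(terms) for b in terms[i + 1:]}
```

Positive values correspond to terms that tend to appear in the same papers (the clusters above), negative values to term pairs that rarely co-occur, such as those listed in the pairs table.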
In a nutshell, these findings suggest that the SOHOMA proceedings series includes contributions to concepts for manufacturing control rather than to prototyping and applications. In this respect, the bibliometric analysis presented in this paper confirms that the proceedings published by Springer reflect valuable results of the research carried out in the last decade by the SOHOMA community in the domain of the digital transformation of manufacturing in the Industry of the Future vision.
5 Conclusions
The aim of the present paper is to examine the scientific articles published in the proceedings of the SOHOMA workshops held in the last 10 years, in order to find out important research facts and leading trends concerning manufacturing control for the industry of the future. The most obvious finding from the bibliometric analysis is the increasing research effort carried out by the SOHOMA community to
References
1. Müller, J.M., Buliga, O., Voigt, K.-I.: Fortune favours the prepared: how SMEs approach business model innovations in Industry 4.0. Technol. Forecast. Soc. Change 132 (2018)
2. Kamble, S., Angappa, G., Gawankar, S.A.: Sustainable Industry 4.0 framework: a systematic
literature review identifying the current trends and future perspectives. Process Saf. Environ.
Prot. 117, 408–425 (2018)
3. Schwab, K.: The Global Competitiveness Report 2017–2018. World Economic Forum (2017)
4. Preuveneers, D., Ilie-Zudor, E.: The intelligent industry of the future: a survey on emerging
trends, research challenges and opportunities in Industry 4.0. J. Ambient Intell. Smart Environ.
9(3), 287–298 (2017)
5. Oztemel, E., Gursev, S.: Literature review of Industry 4.0 and related technologies. J. Intell.
Manuf. 31(1), 127–182 (2020)
6. Romero, M., Guédria, W., Panetto, H., Barafort, B.: Towards a characterisation of smart
systems: a systematic literature review. Comput. Ind. 120, 103224 (2020)
7. Alcácer, V., Cruz-Machado, V.: Scanning the industry 4.0: a literature review on technologies
for manufacturing systems. Eng. Sci. Technol. 22(3), 899–919 (2019)
8. Valipour, M.H., Amir Zafari, B., Maleki, K.N., Daneshpour, N.: A brief survey of software
architecture concepts and service-oriented architecture. In: 2nd IEEE International Confer-
ence on Computer Science and Information Technology, pp. 34–38. IEEE Xplore (2009).
https://doi.org/10.1109/ICCSIT.2009.5235004
9. Calabrese, M., Amato, A., Di Lecce, V., Piuri, V.: Hierarchical-granularity holonic modelling.
J. Ambient Intell. Humaniz. Comput. 1(3), 199–209 (2010)
10. MacKenzie, C.M., Laskey, K., McCabe, F., Brown, P.F., Metz, R., Hamilton, B.A.: Reference
model for service-oriented architecture 1.0, OASIS standard, 12(S 18) (2006)
11. Valckenaers, P., Bonneville, F., Van Brussel, H., Bongaerts, L., Wyns, J.: Results of the holonic
control system benchmark at KU Leuven. In: Proceedings of the 4th International Conference
on Computer Integrated Manufacturing and Automation Technology. IEEE Xplore (1994)
12. Jimenez, J.F., Bekrar, A., Trentesaux, D., Leitão, P.: A switching mechanism framework
for optimal coupling of predictive scheduling and reactive control in manufacturing hybrid
control architectures. Int. J. Prod. Res. 54(23), 7027–7042 (2016)
13. Guo, Q.L., Zhang, M.: An agent-oriented approach to resolve scheduling optimization in
intelligent manufacturing. Robot. Comput. Integr. Manuf. 26(1), 39–45 (2010)
14. Borangiu, T., Thomas, A., Trentesaux, D. (eds.): Service Orientation in Holonic and Multi-
Agent Manufacturing Control. Proceedings of SOHOMA 2011, Paris, France. Studies in
Computational Intelligence, vol. 402. Springer, Cham (2012)
15. Borangiu, T., Trentesaux, D., Thomas, A. (eds.): Service Orientation in Holonic and Multi-
Agent Manufacturing and Robotics. Proceedings of SOHOMA 2012, Bucharest, Romania.
Studies in Computational Intelligence, vol. 472, Springer, Cham (2013)
16. Borangiu, T., Trentesaux, D., Thomas, A. (eds.): Service Orientation in Holonic and Multi-
Agent Manufacturing and Robotics. Proceedings of SOHOMA 2013, Valenciennes, France.
Studies in Computational Intelligence, vol. 544. Springer, Cham (2014)
17. Borangiu, T., Thomas, A., Trentesaux, D. (eds.): Service Orientation in Holonic and
Multi-Agent Manufacturing. Proceedings of SOHOMA 2014, Nancy, France. Studies in
Computational Intelligence, vol. 594. Springer, Cham (2015)
18. Borangiu, T., Trentesaux, D., Thomas, A., McFarlane, D. (eds.): Service Orientation in
Holonic and Multi-Agent Manufacturing. Proceedings of SOHOMA 2015, Cambridge, UK.
Studies in Computational Intelligence, vol. 640. Springer, Cham (2016)
19. Borangiu, T., Trentesaux, D., Thomas, A., Leitão, P., Barata, J. (eds.): Service Orientation in
Holonic and Multi-Agent Manufacturing. Proceedings of SOHOMA 2016, Lisbon, Portugal.
Studies in Computational Intelligence, vol. 694. Springer, Cham (2017)
20. Borangiu, T., Trentesaux, D., Thomas, A., Cardin, O. (eds.): Service Orientation in Holonic
and Multi-Agent Manufacturing. Proceedings of SOHOMA 2017, Nantes, France. Studies in
Computational Intelligence, vol. 762. Springer, Cham (2018)
21. Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.): Service Orientation in Holonic
and Multi-Agent Manufacturing. Proceedings of SOHOMA 2018, Bergamo, Italy. Studies in
Computational Intelligence, vol. 803. Springer, Cham (2019)
22. Borangiu, T., Trentesaux, D., Leitão, P., Giret Boggino, A., Botti, V. (eds.): Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future. Proceedings of SOHOMA 2019. Studies in Computational Intelligence, vol. 853. Springer, Cham (2020)
23. Broadus, R.N.: Toward a definition of “bibliometrics.” Scientometrics 12(5–6), 373–379
(1987)
24. McBurney, M.K., Novak, P.L.: What is bibliometrics and why should you care? In: Proceed-
ings of IEEE International Professional Communication Conference, pp. 108–114. IEEE
Xplore (2002)
25. Van Eck, N., Waltman, L.: Software survey: VOSviewer, a computer program for bibliometric
mapping. Scientometrics 84(2), 523–538 (2010)
26. Montreuil, B., Meller, R.D., Ballot, E.: Physical internet foundations. In: Service Orientation
in Holonic and Multi-Agent Manufacturing and Robotics. Proceedings of SOHOMA 2013.
Studies in Computational Intelligence, vol. 544, pp. 151–166. Springer, Cham (2014)
27. Savelsbergh, M., Van Woensel, T.: 50th anniversary invited article - city logistics: challenges
and opportunities. Transp. Sci. 50(2), 579–590 (2016)
28. Sternberg, H., Norrman, A.: The physical internet - review, analysis and future research
agenda. Int. J. Phys. Distrib. Logist. Manag. 47(5) (2017). https://doi.org/10.1108/IJPDLM-
12-2016-0353
29. Valckenaers, P.: Perspective on holonic manufacturing systems: PROSA becomes ARTI.
Comput. Ind. 120, 103226 (2020)
30. Novas, J.M., Bahtiar, R., Van Belle, J., Valckenaers, P.: An approach for the integration of a
scheduling system and a multi-agent manufacturing execution system. Towards a collabora-
tive framework. In: Proceedings of 14th IFAC Symposium INCOM 2012, Bucharest. IFAC
PapersOnLine, pp. 728–733. Elsevier (2012)
31. Trentesaux, D., Borangiu, T., Thomas, A.: Emerging ICT concepts for smart, safe and sus-
tainable industrial systems. Comput. Ind. 81, 1–10 (2016). https://doi.org/10.1016/j.compind.
2016.05.001
32. Borangiu, T., Trentesaux, D., Thomas, A., Leitão, P., Barata, J.: Digital transformation of
manufacturing through cloud services and resource virtualization. Comput. Ind. 108, 150–162
(2019). https://doi.org/10.1016/j.compind.2019.01.006
33. Morariu, C., Morariu, O., Răileanu, S., Borangiu, T.: Machine learning for predictive schedul-
ing and resource allocation in large scale manufacturing systems. Comput. Ind. 120, 103244
(2020). https://doi.org/10.1016/j.compind.2020.103244
Proposition of an Enrichment for Holon Internal
Structure: Introduction of Model and KPI
Layers
Erica Capawa Fotsoh1,2(B) , Pierre Castagna2 , Olivier Cardin2 , and Karel Kruger3
1 IRT Jules Verne (French Institute in Research and Technology in Advanced Manufacturing),
44340 Bouguenais, France
erica.fotsoh@irt-jules-verne.fr
2 LS2N, UMR CNRS 6004, Université de Nantes, IUT de Nantes, 44470 Carquefou, France
olivier.cardin@ls2n.fr
3 Department of Mechanical and Mechatronic Engineering, Stellenbosch University,
Abstract. The holon structures proposed so far are built to take advantage of holon dynamism through self-reconfiguration, but not for unexpected situations in which holon behaviour is unpredicted and this dynamism is lost. In this paper, we propose a way to fill this gap by adding a model layer and a KPI layer to the holon internal structure. The specificity of these layers is that they allow both dynamic and non-dynamic reconfigurations for RMS that use holonic control. The added layers could then be used as forecasting and previewing tools and could be considered as one more step in control aid (e.g. for digital twins), as well as an additional tool in the reconfiguration process. An application on a learning factory shows the feasibility of the proposed concept, which brings perspectives on the notions of data and model aggregation.
Keywords: Holon internal structure · Model layer · KPI layer · RMS · HMS
1 Introduction
During the last decades, manufacturing systems have evolved in order to cope with the frequent changes imposed by significant fluctuations in market demand [1]. Several manufacturing paradigms have been developed to meet new constraints - the system has to respond more quickly and efficiently to changes (e.g., the introduction of a new product), at a lower price, in a short time, and with better quality. Traditional rigid manufacturing systems can no longer cope with such constraints; thus the Holonic Manufacturing System (HMS) [2] and the Reconfigurable Manufacturing System (RMS) [3] have emerged. RMS have the ability to reconfigure hardware and control resources to rapidly adjust the system in response to sudden changes. They are characterized by modular components which can be integrated with other technologies. To enable control reconfiguration in RMS, the idea of holonic control has been widely adopted [4].
Developed by [5], holon refers to an element which is sufficient to exist alone, and
also can live in a social framework. In manufacturing context, a holon represents an
autonomous and cooperative identifiable part of the system, i.e. which interacts with
other elements to meet overall goals in the system. Holonic control provides autonomy,
intelligence capabilities, fast adaptation and reconfiguration to quickly and efficiently
respond to new challenges. Holonic control is usually achieved through the use of refer-
ence architectures [6] such as PROSA [7] or ADACOR [8]. When changes occur in the
system, holons instantly gather information about their immediate environment, negotiate with
other holons, and then make a decision, e.g. parameter changes or reconfiguration. This
process takes place during the manufacturing process (dynamic aspect of holons).
As stated in [9], holon behaviour can be of three types: skill-based behaviour
(when the current situation is exactly the same as a previous one), rule-based behaviour
(when the current situation is similar to a previous one) or knowledge-based behaviour
(when the current situation has never been encountered before). The first two cases
cope with the dynamic features expected in HMS, as the actions driven by the holon are
predefined within the scope of foreseen events. In the last case, a new solution has
to be determined, and this goes beyond the dynamic scope of holons.
Reconfiguration of RMS that use holonic control architecture has to deal both with
dynamic and non-dynamic behaviour of holons. Therefore, holons have to be designed in
order to fit both cases. The descriptions of holons that exist so far in the literature focus
on their dynamic aspect: holons are designed to make the right decision during the
manufacturing process - they are highly reactive and dynamically change their behaviour
to adapt to changes that occur in the system (self-reconfiguration).
Many works address this context, among which [10], based on ADACOR, [11], based
on Erlang, and [12], which proposes a governance mechanism for the control system
that dynamically changes its behaviour. The remaining gap in the reconfiguration of
RMS that use a holonic control architecture arises when the situation is unexpected
and therefore not predefined [12]. In this case, the self-reconfiguration (i.e. the dynamic
reconfiguration) of the holons can no longer be used, and their existing internal structure
can no longer cope with changes. It is therefore necessary to revise the internal
structure of the holons so that they can both maintain their dynamism for known
or predicted situations and react to new, unknown situations.
This paper describes a way to fill this gap by proposing an enrichment of the internal
structure of the holons. The paper focuses on the resource holon (the so-called operational
holon in the ADACOR reference control architecture). This holon is an abstraction of a
production means [7]; it is the holon whose description is the most complete, since it integrates
both software and hardware, whereas the other types of holon (product, order and staff -
respectively called product, task and supervisor in ADACOR) may be software only,
and therefore require no interface to a physical system entity. In the following, the
term holon used without qualification refers to the resource holon.
The remainder of this paper is organised as follows: Sect. 2 presents the proposed
enrichment of the holon structure; an application of the proposal is given in Sect. 3. Section 4
discusses the interest of the proposal both for RMS reconfiguration and for control
support. Section 5 concludes the paper and gives some perspectives for future research.
Proposition of an Enrichment for Holon Internal Structure 171
Relating each KPI to its holon allows a modular and more accurate analysis of the
system;
• Model layer: This layer contains a representation of the holon, and is built using
data from the real manufacturing system ([9] proposes the use of a discrete-event
observer that gathers the information on the current state of the system), or data that
can be introduced by the production manager. The model is used to forecast the holon
behaviour and to foresee the impact of potential changes. Like the KPIs, these data
are commonly stored in a database and can be accessed through SQL queries. The
aim of adding a model layer is twofold: firstly, to aid in reconfiguration process, and
secondly to aid in control.
Fig. 1. Proposal of the new holon structure with model and KPI layer
The use of a model of the system in decision making (especially for control support) is
not new. Indeed, [16] proposed the use of D-MAS as a virtual
representation of some holons' tasks and duties in order to obtain, explore and
propagate feasibility information. D-MAS thus makes it possible to foresee the impact of future inter-
actions, as does the enriched holon. It is used within the ARTI architecture (Activity-
Resource-Type-Instance, an update of PROSA) proposed by [17], whose pur-
pose is to clearly exhibit the digital twin (in a context larger than manufacturing) [6].
In digital twin applications, for example, the model is often an online simulation and
the reconfiguration process is dynamic, based on predefined behavioural decisions.
Yet, when the situation is unforeseen and unexpected, the system gets stuck, and neither
the physical nor the digital twin knows how to react. The proposal to add the model and KPI
layers explained above aims to address this kind of situation. Like D-MAS, the
enriched holon is used as a previewing tool. In addition, it can be considered as one more
step in control decision support. Moreover, it offers possibilities for hardware decision
support, and can be considered as an additional tool in the reconfiguration process. This
will be discussed further in Sect. 4.
to build the same configuration. Hence, by using a model for each holon, the number of
basic objects used to build the configuration is divided by 30. This shows that using
a model to represent each holon saves time, reduces complexity and facilitates
the construction of the system's simulation model.
No. of ARENA’s
Holon Model in the library
basic objects
Convergence 20
Workstation P10 51
Bend 9
Small conveyor 9
For configuration evaluation, we will focus on workstation holons whose model and
part of the description are given in Fig. 2. We will consider for the example that the
initialization data corresponds to an empty model at the beginning of the simulation.
That is, there is no ongoing operation on the workstation, and the execution manager has
no current operation time to set. To test different configurations, the production manager
will vary the data related to the operation to be performed by the holon, as well as the
next holon to send the product to. The production manager will use the WIP and
availability of each holon to decide which configuration best fits the production context.
P10 and P30. Note that even if holon P30 has a low availability, its WIP refers to the
product currently manufactured. Holons P20a and P10 seem to be the bottlenecks of the
manufacturing line. This analysis is possible because the holon model is associated with
a KPI layer that retrieves the data related to the holon simulation.
The first action regarding a reconfiguration process is to act on P10 and P20a. The
use of the library with each holon model allows rapidly building and testing alternative
configurations. Table 3 gives an overview of the results of tested configurations. These
configurations were built using the holon model of the library.
176 E. Capawa Fotsoh et al.
Conf 2 (production volume: 548)
Holon             P10a    P10b    P20a    P20b    P30
WIP               1       4       5       1       1
Availability (%)  62.09   38.98   0.46    27.65   0.98

Conf 3 (production volume: 542)
Holon             P10     P20a    P20b    P30
WIP               1       1       1       5
Availability (%)  0.99    13.14   12.92   1.36
Conf 2 has a better production volume than Conf 1, but the WIP and availability of
P20a are not improved. Conf 3 performs better than Conf 1 and Conf 2 in terms of WIP,
yet at the cost of production volume. It is important to note that in Conf 2
a new workstation was added, whereas Conf 3 uses the same elements as Conf 1. An
additional analysis, such as an economic analysis or a KPI analysis at the system level,
i.e. KPI aggregation, would be necessary for an actual choice of configuration. Nevertheless,
Conf 2 seems to be the one that best fits the production context (considering WIP,
availability and production volume). Having a model that represents the holon simplifies
the construction of the system simulation model. It also allows testing configurations
and analysing the KPIs of each holon.
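The kind of scripted comparison described above can be sketched with the WIP figures of Conf 2 and Conf 3 from the tables; the summary measures (maximum and total WIP) are illustrative assumptions, since the paper leaves the final multi-criteria choice to further economic and aggregated-KPI analysis:

```python
# Per-configuration data taken from the tested configurations above.
configs = {
    "Conf 2": {"volume": 548,
               "wip": {"P10a": 1, "P10b": 4, "P20a": 5, "P20b": 1, "P30": 1}},
    "Conf 3": {"volume": 542,
               "wip": {"P10": 1, "P20a": 1, "P20b": 1, "P30": 5}},
}

def summarise(cfg: dict) -> dict:
    """Summarise a configuration by production volume and WIP extremes,
    so that candidate configurations can be compared side by side."""
    wip = cfg["wip"].values()
    return {"volume": cfg["volume"], "max_wip": max(wip), "total_wip": sum(wip)}

for name, cfg in configs.items():
    print(name, summarise(cfg))
```

Such a summary highlights the trade-off in the text: Conf 2 wins on production volume while Conf 3 carries less total WIP.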
In the context of reconfiguration, the enriched holon will be used to preview potential
configurations or to evaluate the consequences of the behaviours before their implemen-
tation. When an unexpected reconfiguration situation arises, the production manager
usually has little time to react. This reaction must of course depend on the state of the
system and the expected objectives. To better anticipate the consequences of these
choices, the production manager can use the model of the system as an analysis tool.
The model of the system is modularly built according to the explanation of Sect. 2,
i.e. by aggregation of holon modules and KPI (Fig. 4). Different configurations of the
system can then be tested and evaluated, and the configuration that best fits the new
production context is chosen. Data resulting from this configuration and parameters
have to be shared between several holons. That is, the behaviour of the upper holon,
the KPI and the parameters corresponding to the lower holons have to be transmitted to
each of these. We also propose to store these data in databases, in order to reuse them in
case of rule-based behaviour or skill-based behaviour decision making. The model of the
system can also be used as a forecasting tool in case of sudden changes. The production
manager could introduce new data in order to test possible new configuration scenarios.
If the results of these tests are conclusive, the parameters of the corresponding holons
are saved in a database and used at the appropriate time.
the different parameters when the models of the holons are aggregated. It is therefore
essential to have an aggregation manager for models, KPIs and control logic. Indeed,
the data of the holons taken individually are relevant for a local analysis. They must be
integrated into the overall system data in order to guarantee consistency in the decision
process for a global view. The holon model gives a partial overview of the system, yet the
decision making regarding the system level needs to consider supplementary information
that could not be found in holons (for example the priority order assigned to each holon,
the formula to evaluate a KPI at the system level, etc.). We thus propose to consider an
aggregation model within the system's model that first coordinates the holon models to
guarantee the relevance of both the model and the resulting data, and secondly provides
the supplementary information needed for decision making, as shown in Fig. 4.
Fig. 4. The construction of the system model and introduction of the aggregation model
digital twin applications, where each holon has a digital twin and the system digital twin
is the aggregation of the lower-level digital twins. This idea has been developed in [24].
The construction of the system model and the analysis of the KPIs are based on
modularity. The more holons, the more local information about the system and the
more need to aggregate and bring it up to the system level. The relevance of the model
and the KPIs will therefore depend on the aggregation model chosen. For numerical
values (KPI values) there are many different aggregation methods: arithmetic means,
geometric means, quadratic means, harmonic means, linear combinations, etc. [25]. The
aggregation of control logic remains a major open issue in holonic systems, and much
research is being conducted to propose solutions to this problem. Future work may
follow the same direction, i.e. the proposal of an aggregation model for control logic
within the system model.
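The numerical aggregation methods listed above can be sketched directly; the sample KPI values below are hypothetical:

```python
import math

# Classical means usable to roll per-holon KPI values up to the system level.
def arithmetic(xs): return sum(xs) / len(xs)
def geometric(xs):  return math.prod(xs) ** (1 / len(xs))
def quadratic(xs):  return math.sqrt(sum(x * x for x in xs) / len(xs))
def harmonic(xs):   return len(xs) / sum(1 / x for x in xs)
def linear(xs, ws): return sum(w * x for w, x in zip(ws, xs))  # weighted combination

# Hypothetical availability KPIs (%) of three holons, aggregated four ways.
kpis = [62.0, 39.0, 27.0]
for mean in (arithmetic, geometric, quadratic, harmonic):
    print(mean.__name__, round(mean(kpis), 2))
```

The choice of mean matters: the harmonic mean is dragged down by the worst holon (useful for availability-style KPIs), while the quadratic mean emphasises the best performers.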
Acknowledgments. This research work is supported by the funding of the PhD program PER-
FORM (Fundamental research and development program resourcing on manufacturing) from the
IRT Jules Verne (https://www.irt-jules-verne.fr/).
References
1. El Maraghy, H.: Flexible and reconfigurable manufacturing systems paradigms. Flex. Serv.
Manuf. J. 17(4), 261–276 (2006). Special issue
2. Van Brussel, H.: Holonic manufacturing systems, the vision matching the problem. In: First
European Conference on Holonic Manufacturing Systems, Hannover (1994)
3. Mehrabi, M.G., Ulsoy, A.G., Koren, Y.: Reconfigurable manufacturing systems: key to future
manufacturing. J. Intell. Manuf. 11, 403–419 (2000)
4. Kruger, K., Basson, A.: Implementation of an Erlang-based resource Holon for a Holonic
manufacturing cell. In: Borangiu, T., Thomas, A., Trentesaux, D. (eds.) Service Orientation
in Holonic and Multi-agent Manufacturing. Studies in Computational Intelligence, pp. 49–58.
Springer, Cham (2015)
5. Koestler, A.: The Ghost in the Machine. Macmillan, New York (1968)
6. Derigent, W., Cardin, O., Trentesaux, D.: Industry 4.0: contributions of holonic manufacturing
control architectures and future challenges. J. Intell. Manuf. (2020)
7. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998)
8. Leitão, P.: An agile and adaptive holonic architecture for manufacturing control. Ph.D. thesis,
University of Porto (2004). https://www.ipb.pt/~pleitao/pjl-tese.pdf
9. Cardin, O., Castagna, P.: Using online simulation in Holonic manufacturing systems. Eng.
Appl. Artif. Intell. 22(7), 1025–1033 (2009)
10. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57(2), 121–130 (2006)
11. Kruger, K., Basson, A.: Erlang-based control implementation for a holonic manufacturing
cell. Int. J. Comput. Integr. Manuf. 30(6), 641–652 (2017)
12. Jimenez, J.F., Bekrar, A., Trentesaux, D., Rey, G.Z., Leitão, P.: Governance mechanism in
control architectures for flexible manufacturing systems. IFAC-PapersOnLine 28(3), 1093–
1098 (2015)
13. Buzacott, J.A.: Modelling manufacturing systems. Robot. Comput. Integr. Manuf. 2(1), 25–32
(1985)
14. Brandimarte, P., Villa, A.: Modeling Manufacturing Systems: from aggregate planning to real
time control 53(9) (2013)
15. Lameche, K., Najid, N.M., Castagna, P., Kouiss, K.: Modularity in the design of reconfigurable
manufacturing systems. IFAC-PapersOnLine 50(1), 3511–3516 (2017)
16. Holvoet, T., Valckenaers, P.: Beliefs, desires and intentions through the environment. In:
Proceedings of the International Conference on Autonomous Agents, vol. 2006, pp. 1052–
1054 (2006)
17. Valckenaers, P.: ARTI reference architecture - PROSA revisited. In: Borangiu, T., et al. (eds.)
Service Orientation in Holonic and Multi-Agent Manufacturing. Studies in Computational
Intelligence, p. 19. Springer, Cham (2019)
18. Castagna, P., Mebarki, N., Gauduel, R.: Apport de la simulation comme outil d’aide au pilotage
des systemes de production-exemples d'application. In: Proceedings of MOSIM 2001,
Troyes, France, 25–27 April 2001, pp. 241–247. https://www1.utt.fr/mosim01/pdf/
ARTICLE-091.pdf
19. Kouki, M., Cardin, O., Castagna, P., Cornardeau, C.: Input data management for energy related
discrete event simulation modelling. J. Clean. Prod. 141, 194–207 (2017)
20. Maier-Speredelozzi, V., Hu, S.J.: Selecting manufacturing system configurations based on
performance using AHP. Technical Paper – Society of Manufacturing Engineering MS, no.
MS02-179, pp. 1–8 (2002)
21. Cardin, O., Castagna, P.: Proactive production activity control by online simulation. Int. J.
Simul. Process Model. 6(3), 177–186 (2011)
22. Ateekh-Ur-Rehman, L.-U.-R.: Manufacturing configuration selection using multicriteria
decision tool. Int. J. Adv. Manuf. Technol. 65(5–8), 625–639 (2013)
23. Trentesaux, D.: Pilotage hétérarchique des systèmes de production, Habilitation thesis, Uni-
versité de Valenciennes et du Hainaut-Cambrésis (2002). https://tel.archives-ouvertes.fr/tel-
00536486/en/
24. Redelinghuys, A., Basson, A., Kruger, K.: A six-layer digital twin architecture for a manu-
facturing cell. In: Borangiu, T., et al. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing. Studies in Computational Intelligence, pp. 273–284. Springer, Cham (2019)
25. Bouyssou, D., Dubois, D., Prade, H., Pirlot, M.: Decision Making Process: Concepts and
Methods. Wiley, New York (2013)
Holonic Architecture for a Table Grape
Production Management System
1 Introduction
The fourth industrial revolution has brought forth the evolution of cyber-physical systems
made possible through the developments in the IT infrastructure. These developments
include the increased usage of the internet to wirelessly connect resources, information,
objects and people with each other, to create the Internet of Things (IoT). The IoT has
been adopted in the manufacturing industry to improve efficiency in order to stay
competitive [1].
The agricultural industry is traditionally very labour intensive, with significant depen-
dence placed on human workers to perform the production tasks. With the fourth indus-
trial revolution and the development of cyber-physical systems and IoT, the world has
become more connected, and product specifications have become more precise as
new technologies enable the production of customized products. Therefore, the agri-
cultural industry must follow the manufacturing industry and adopt new technologies
to improve its efficiency and product quality in order to stay competitive.
This paper considers the South African table grape industry. South Africa is a devel-
oping country where manual labour costs are still comparatively low and companies
are motivated to create jobs. Therefore, it is often more beneficial for companies in
South Africa to employ workers to accomplish tasks than to invest in expensive
technology to automate processes.
South African farmers are not only faced with the challenges of keeping up with
the rapidly changing markets, but also with managing all the production management
aspects, such as the workers, tools and production processes. This creates a demand for
a production management system that improves the efficiency of the production process
and the quality of the final product [2].
Table grape production management consists of the following aspects that define it
as an open-air engineering process [3]:
This means that a table grape production management system must not only effi-
ciently adapt to frequent and unexpected production order changes, but also to changes
produced by the varying work performance of tasks, cooperation between workers,
cooperation between successive system tasks, changes in environmental conditions and
changes induced by market demand changes. The system needs to exhibit agility in
response to changes and robustness in its handling of disturbances [4].
This paper presents a holonic architecture for a table grape production management
system, which has the potential to address the above-mentioned challenges. The holonic
systems approach originates from the theories of Arthur Koestler [5]. The word holon
is constructed from the Greek word ‘holos’, meaning whole, and the suffix ‘on’,
suggesting a part. Holons are, as the word itself describes, entities that
can simultaneously be a part of a larger entity and be an entity consisting of numerous
autonomous and cooperative entities.
The holonic systems approach has subsequently been used to develop architectures
for the modelling and control of complex systems – most notably in the field of holonic
manufacturing systems. However, the recent development of the Activity-Resource-
Type-Instance (ARTI) holonic reference architecture [3] aims to support applications
outside the manufacturing domain as well. This paper thus proposes an ARTI reference
architecture implementation for a holonic production management system for the table
grape industry.
The paper discusses the production management of the table grape industry in Sect. 2
to give an overview of the challenges and how the table grape industry can benefit from
the use of a table grape production management system. The paper then provides an
overview of holonic systems and the ARTI reference architecture in Sect. 3. In Sect. 4,
the paper describes how the ARTI architecture can be implemented to create a production
management system for the table grape industry and discusses the potential benefits of
such a system. Finally, the paper finishes with a conclusion and a discussion of the future
work in Sect. 5.
Holonic Architecture for a Table Grape Production Management System 183
The production order contains specifications for the grapes, the packhouse and the
packing material. This information is used to support three decisions to be made by
the production manager – labelled as numbers 2–4 in Fig. 1. These decisions, in turn,
initiate the set of decisions to be made by the farm manager (numbers 5–8 in Fig. 1).
The decision regarding grape harvesting (5) is executed according to the result of the
grape selection decision (2). Decisions regarding quality control (6) and grape packing
(7) are dependent on the results of the packhouse selection (3). Decisions regarding
transportation (8) consider the transportation of both grapes and packing materials and,
as such, are dependent on the outcomes of decisions regarding the selection of grapes
(2) and packhouse (3).
184 J. J. Rossouw
The grape selection is done according to the quality and quantity of the available
grapes that can satisfy the production order specifications. Each block of vineyard will
have a unique quality and quantity of grapes, which determines its suitability for a
specific production order. The quality of grapes is determined according to the colour,
sugar levels, blemishes and size of the grape berries.
Each farm is equipped with its own packhouse. Usually, when grapes are assigned
to a production order, the packhouse on the same farm as the assigned grapes will be
assigned to the production order. This reduces the distance the grapes are transported
from the vineyard to the packhouse where the quality control and packaging are done.
However, each packhouse is different in terms of the farm on which the packhouse is
located, the capacity of grapes the packhouse can handle, and the food safety and hygiene
accreditation of the packhouse (as required by certain markets). This may result in the
selected packhouse being on a different farm due to the accreditation requirements.
It may also happen that multiple packhouses are assigned to one production order to
increase the throughput when there are packhouses available.
The packing material selection is done according to the specifications of the production
order. The production order specifies the inner packaging that contains the grapes,
the carton boxes in which the grapes are packed, the labels placed on the packaging
to identify the grapes, and the sulphur dioxide sheets used to prevent fungal growth
during storage and transportation. The packing materials assigned to the
production order are chosen from the available packing materials in the local packing
material storage facility.
The harvesting of the grapes is done by teams of workers with the required skills.
The harvesting teams are assigned by the farm manager to harvest the grapes assigned by
the production manager. The harvesting teams will harvest the grapes according to the
instructions they receive from the farm manager. Each team has a supervisor to ensure
the harvesting is done correctly and to instruct the team on the harvesting.
The transportation consists of transporting grapes from the vineyards, and packing
materials from the packing material store, to the packhouses. The transportation task
depends on the quantities of packing materials and grapes required by the production
order. The transportation fleet, from which vehicles can be assigned to the transportation
task, consists of tractors with trailers, trucks and utility vehicles. The farm manager
assigns a vehicle to perform the transportation task. Although a tractor is ideally used
for the transportation of grapes, the trucks for the transportation of packing
materials, and the utility vehicles for the transportation of workers, the farm manager
can decide to assign any vehicle to any activity, depending on the availability of
vehicles and the priority of the transportation activity.
The quality control and the packaging of the grapes are done within the assigned
packhouse. The quality control and grape packing are done by quality control and grape
packing stations. These stations employ one to four workers. The farm manager assigns
workers with the required skills to the stations. The farm manager also gives instructions
to the stations regarding the quality control and the grape packing. The workers at
the quality control stations will inspect the quality of the grapes as they arrive in the
packhouse and adjust the grape bunches if required (by removing grape berries that are
damaged, too small or that did not colour enough).
The grapes that satisfy the production order’s quality specification are then passed
on to the packing stations. The workers at the packing stations place the grapes in the
inner packaging, before placing them in the carton boxes and placing the sulphur dioxide
sheets on top. The carton boxes are then closed and the labels specified in the production
order are placed on the boxes. The packaged grape boxes are then placed on pallets to
load them onto trucks to transport the product off the farm [8].
3 Holonic Architecture
The holonic systems approach aims to model complex systems as multiple autonomous
and cooperative entities. These entities, called holons, are autonomous in the sense that
they can create their own plans and/or control the execution thereof; and cooperative, in
the sense that they can develop mutually acceptable plans and/or strategies and execute
them. Holons can represent a physical or logical activity that transforms, transports or
stores information and physical objects [3].
Holons can be grouped (often dynamically) into holarchies, which could exhibit
hierarchical, heterarchical or hybrid structures. In holarchies, holons can work together
in a cooperative manner to achieve a complex system goal by combining their skills and
knowledge [9]. The structure of the holonic system can also exhibit fractal characteristics,
by which holarchies can be aggregated to form larger holons with their own identity.
The holons within the holarchy can belong to multiple holarchies, depending on the
functionality of the holons and where in the system this functionality is
required. These holarchies can be designed upfront or they can be dynamically created
by interactions with other holons within the system, according to the application’s needs
[10].
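A minimal sketch of this fractal part/whole structure, assuming a simple recursive data type (the holon names are illustrative, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A holon is simultaneously a whole and a part: it has its own identity
    and may aggregate sub-holons, forming a holarchy."""
    name: str
    parts: list = field(default_factory=list)

    def flatten(self):
        """All holons in the holarchy rooted here, itself included."""
        return [self] + [h for p in self.parts for h in p.flatten()]

# A hypothetical cell holarchy aggregating smaller holons into a larger holon.
cell = Holon("cell", [Holon("workstation", [Holon("robot"), Holon("conveyor")]),
                      Holon("transport")])
print([h.name for h in cell.flatten()])
# → ['cell', 'workstation', 'robot', 'conveyor', 'transport']
```

The same `Holon` node serves as both an aggregate (the cell) and a component (the workstation inside the cell), which is exactly the dual nature the holonic approach exploits.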
The ARTI holonic reference architecture uses the holonic systems approach to simplify
the modelling and control of complex systems. It is the result of a recent reconsideration
of its predecessor, the Product-Resource-Order-Staff Architecture (PROSA) [10], which
was developed for holonic control implementations for manufacturing systems. ARTI
addresses the shortcomings of PROSA, and proposes more generic terminology, to offer
improved support for applications outside the manufacturing domain.
As explained in Sect. 3.1, the holonic systems approach dictates that a complex
system must be broken down into a collection of holons. According to ARTI, these
holons should be classified in three dimensions (as depicted in Fig. 2):
• Resource or Activity
• Type or Instance
• Intelligent Being or Intelligent Agent.
ARTI prescribes that the holons in the system can either be Resources or Activities
– holons can either perform some service or coordinate the performance of services
by other holons. Furthermore, a holon can be classified as a Type or an Instance. Type
holons contain the expert knowledge and functionality to support the performance of
system tasks, while Instance holons are responsible for actually performing system
tasks. Finally, holons can either be Intelligent Beings or Intelligent Agents. Intelligent
Being holons can reflect and affect the state of the real or virtual system, and are thus
capable of performing system tasks. Intelligent Agents encapsulate the decision-making
functionality required for the effective performance of system tasks. The classification
of holons within ARTI is further explained in Sect. 4, in the context of a table grape
production management system.
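The three classification dimensions can be sketched as a small data model; the holon name and the chosen cube position in the example are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

# The three ARTI classification dimensions; each holon occupies one of the
# eight resulting "cube" positions.
class Kind(Enum):
    RESOURCE = "resource"    # performs some service
    ACTIVITY = "activity"    # coordinates services performed by other holons

class Level(Enum):
    TYPE = "type"            # expert knowledge supporting task performance
    INSTANCE = "instance"    # actually performs system tasks

class Role(Enum):
    INTELLIGENT_BEING = "intelligent being"  # reflects/affects system state
    INTELLIGENT_AGENT = "intelligent agent"  # encapsulates decision making

@dataclass(frozen=True)
class ArtiHolon:
    name: str
    kind: Kind
    level: Level
    role: Role

# A hypothetical transportation-activity instance holding execution functionality:
truck_trip = ArtiHolon("transport grapes to packhouse",
                       Kind.ACTIVITY, Level.INSTANCE, Role.INTELLIGENT_BEING)
print(truck_trip.kind.value, truck_trip.level.value, truck_trip.role.value)
```

Splitting the decision-making (agent) from the execution (being) along the third dimension is what later allows either side to be updated independently, as discussed for the transportation holons below.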
Since the ARTI architecture maps a complex system to several holons of specific
type and functionality, each holon becomes responsible for managing and monitoring its
own small environment. Having each holon monitoring and managing only a small part
of the system reduces the overall complexity and increases the stability of the system.
Furthermore, the architecture allows for easy modification by only having to add or
remove single holons instead of changing entire system sections and their interactions
with the rest of the system.
For the ARTI reference architecture to sufficiently simplify a complex system while
maintaining flexibility and modifiability, the architecture requires that each system com-
ponent be mapped to a single ARTI-cube. Since many systems are still strongly
dependent on humans, the ARTI architecture makes provision for humans performing
system tasks. However, humans are inherently equipped with abilities that allow them
to span multiple ARTI-cubes and can therefore not be confined to a single cube.
Instead, humans are represented as activity performers performing tasks that span
multiple cubes [3].
The WOI should reflect the real world as accurately as possible, mirroring everything
in the WOI with a single real-world counterpart, and should be updated whenever
reality changes.
The activities identified in the table grape production management are the selection
of the grapes, selection of the packhouse, selection of the packing materials, grape har-
vesting, quality control, grape packing and transportation. These activities all coordinate
the performance of service-providing holons. The resources identified in the production
management WOI are the vineyards, packhouses, packing material storage facility, har-
vesting teams, quality control stations, packing stations and transportation fleet, as well as
the production manager, farm manager and packing material store team. These resources
are all service providers and are represented by service-providing resource holons. All
these identified resources and activities possess inherent mental states, commitments,
policies and decision-making mechanisms.
To implement the ARTI architecture, the table grape production management sys-
tem’s resources and activities must be mapped to the ARTI-cubes (Fig. 2). To illustrate
how a system can be mapped according to the ARTI-cubes, the transportation activity
and transportation vehicle resource will be used. The transportation activity entails the
assignment of a transportation vehicle resources to transport grapes, packing materials
or workers, and the transportation vehicle resource manages the transportation service.
Transportation Activity Holon. The different types of transportation activities that can
be executed by the system are mapped to activity type holons. When a transportation
activity is required, the real world transportation activity is mirrored in the system by
creating an activity instance holon. These types and instances are further divided into
intelligent beings containing the execution functionality and intelligent agents containing
the decision-making functionality, as summarised in Fig. 3.
188 J. J. Rossouw
This separation allows an activity's decision-making aspects to be updated without changing
the execution aspects, or the execution aspects to be updated without changing the
decision-making aspects. Since the implementation of the intelligent being instances is
generic, the functionalities concerning how an intelligent being activity instance
interacts with its intelligent being type and with the intelligent agents are generic
for all activities. The activity-specific execution or decision-making functionality of
the system can therefore be updated without changing the functionality concerning the
interaction between ARTI components, and vice versa.
Transportation Vehicle Resource Holon. The different types of vehicle resources that
can be used to perform transportation activities are mapped to resource type holons.
When a vehicle resource is active and available for the system to use, the real world
vehicle resource is mirrored in the system by creating a resource instance holon of the
specific vehicle type. These types and instances are further divided into intelligent beings
containing the execution functionality of the resources and intelligent agents containing
the decision-making functionality, as summarised in Fig. 4.
The execution functionality of the vehicle resource intelligent beings consists of
specifying the message response behaviour and defining the functionality concerning the
vehicle resource's schedule, as well as how the collected resource-specific information
should be processed and stored.
The resource instances are used to mirror the real world vehicle resources. However,
the implementation of the resource instances is generic and can therefore be used for mul-
tiple vehicle resources. This requires that the resource instances contain the functionality
concerning interaction between ARTI components. This allows the resource instances to
execute the resource-specific functionality that is encapsulated in the resource types. This
is done by the implementation of the NEU protocol in the resource instances to enable
them to obtain the behaviour to execute, to execute the specified behaviour and to obtain
the next behaviour to execute depending on the outcome of the previous behaviour.
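The NEU-style execution loop described here can be sketched as follows. This is a simplified illustration under assumptions: the dictionary-based resource type, the behaviour names (`load`, `drive`, `unload`) and the function `run_neu_cycle` are hypothetical, not the protocol's actual API.

```python
def run_neu_cycle(resource_type, start, max_steps=10):
    """Generic NEU-style loop for a resource instance (sketch).

    The instance holds no resource-specific logic: it obtains a
    behaviour from the resource type, executes it, and obtains the
    next behaviour depending on the outcome of the previous one."""
    trace, behaviour = [], start
    for _ in range(max_steps):
        if behaviour is None:           # no next behaviour: cycle done
            break
        outcome = resource_type["behaviours"][behaviour]()        # execute
        trace.append((behaviour, outcome))
        behaviour = resource_type["next"].get((behaviour, outcome))  # next
    return trace

# Hypothetical vehicle resource type: load -> drive -> unload
vehicle_type = {
    "behaviours": {
        "load":   lambda: "ok",
        "drive":  lambda: "ok",
        "unload": lambda: "ok",
    },
    "next": {("load", "ok"): "drive", ("drive", "ok"): "unload"},
}
print(run_neu_cycle(vehicle_type, "load"))
# [('load', 'ok'), ('drive', 'ok'), ('unload', 'ok')]
```

Because the loop only interprets the type's behaviour table, the same instance code serves any vehicle type, matching the generic-instance claim in the text.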
The decision-making functionality of a vehicle resource consists of making decisions
about the vehicle resource's schedule by deciding when a resource has completed its service
and what service to schedule next, what information to include in the response messages,
and what behaviour should be executed next depending on the outcome of the previous
behaviour.
The decision-making functionality of the intelligent agents is divided into types and
instances. The intelligent agent instances link the intelligent being instances to intelligent
agent types containing the resource specific decision-making functionalities. This allows
all the resource specific decision-making to be done in the intelligent agent types.
Similar to the activities, as previously stated, the separation of resource execution
and decision-making allows the system’s resources to be easily modified by separately
updating the execution and decision-making aspects. The implementation of the intelligent
being instances is generic, similar to that of the activities, which allows the
resource-specific execution functionalities to be updated separately from the
functionalities concerning the interaction between ARTI components.
The relationship between intelligent beings and agents provides an agile mechanism for the
development and execution of optimized production plans.
Apart from the information on their current state, the activity and resource holons,
as autonomous entities, also manage and maintain their own schedules for future assign-
ments and commitments. This information of future states and behaviours further
supports the production and farm managers in decisions regarding production plans.
References
1. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for implementing the strategic
initiative INDUSTRIE 4.0: Final report of the Industrie 4.0 working group. National Academy
of Science and Engineering (2013)
2. Sihlobo, W.: SA horticulture is blooming, but there’s still room for growth. Daily Maverick
(2019)
3. Valckenaers, P., Van Brussel, H.: Design for the Unexpected, 1st edn. Elsevier, Oxford (2015)
4. Ali, O., Valckenaers, P., Van Belle, J., Saint Germain, B., Verstraete, P., Van Oudheusden,
D.: Towards online planning for open-air engineering processes. Comput. Ind. 64, 242–251
(2012)
5. Koestler, A.: The Ghost in the Machine, 1st edn. Hutchinson, London (1967)
6. Kritzinger, D.: Modulêre kursus in tafel- en droogdruifverbouing (Modular course on
table and dried grape cultivation). Agrimotion, Somerset West, South Africa (2020)
7. South African Department of Agriculture, Forestry and Fisheries: Production guideline –
grapes (2012). https://www.nda.agric.za/docs/Brochures/grapesprod.pdf. Accessed 23 Feb
2020
8. South, S.: Star South Packing Guide. Star South, Wellington (2019)
9. Leitão, P.: An agile and adaptive holonic architecture for manufacturing, University of Porto
(2004)
10. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture
for holonic manufacturing systems: PROSA. Comput. Ind. 37, 255–276 (1998)
11. Valckenaers, P.: ARTI reference architecture - PROSA revisited, service orientation in holonic
and multi-agent manufacturing. In: Borangiu, T., et al. (eds.) Proceedings of SOHOMA 2018.
Studies in Computational Intelligence, vol. 853, p. 19. Springer, Cham (2018)
12. Valckenaers, P., De Mazière, P.A.: Interacting holons in evolvable execution systems: the
NEU protocol. Ind. Appl. Holonic Multi-Agent Syst. 9266, 120–129 (2015)
Learning Distributed Control for Job
Shops - A Comparative Simulation Study
1 Introduction
2 Literature Overview
The concept of distributed control has been studied in a variety of research disci-
plines. Naturally, Production Planning and Control (PPC) is a research stream
which puts an emphasis on the idea of distributed control for manufacturing.
Moreover, as the implementation of such distributed control relies on large networks
of computers, it is also studied within computer and electrical engineering.
As the core of the underlying assignment problem is an optimization problem,
the concept of distributed control is also studied within a variety of fields of
applied mathematics, such as operations research, control and game theory. Fur-
thermore, it has also been studied in operations management and management
science. Within PPC, researchers have shown the potential of distributed control
both for logistics and production control, with a research focus on applications in
shop floor manufacturing settings (Bongaerts et al. 2000; Philipp et al. 2006;
Scholz-Reiter et al. 2006; Meissner et al. 2017; Hussain et al. 2019). Such a
setup allows for a multitude of production paths for a product, thus providing a
number of alternative paths in case of disruptions such as machine malfunctions.
However, in a system with a consequently great number of entities exhibiting
autonomy, the self-serving nature of these entities is likely to result in locally
optimal decisions at the expense of global performance.
3 Model
[Figure: shop floor model — a production order O_k of product type A is routed through
dispatching decisions to machines M1 … MZ on the shop floor.]
where MC_j ⊆ M is the subset of machines capable of processing job j, J_M the set
of jobs already assigned to machine M, and p_l the number of iterations necessary
to process job l.
We compare the QLE dispatching rule with a modification of it, in which
every machine agent considers its own historical data. For every machining step
performed, the machine agent records the deviation from the ideal processing
time and considers these deviations for future queue length estimations. With
λ̄_M as the maximum likelihood estimator of the rate parameter for machine M, we
can formulate this decision process analogously to QLE as

min_{M ∈ MC_j} Σ_{l ∈ J_M} (p_l + λ̄_M)    (LQLE)
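Both dispatching rules can be sketched in a few lines of Python. The data structures and the concrete numbers below are illustrative assumptions, not the authors' implementation; the example is chosen so the two rules disagree.

```python
def qle(machines, capable, job_times):
    """Queue length estimation: pick the capable machine with the
    smallest sum of ideal processing times already queued."""
    return min(capable, key=lambda m: sum(job_times[l] for l in machines[m]))

def lqle(machines, capable, job_times, lam_hat):
    """Learning QLE: add the machine's estimated mean deviation
    (rate parameter lambda_hat, ML-estimated from its own history)
    to every queued job's ideal processing time."""
    return min(capable,
               key=lambda m: sum(job_times[l] + lam_hat[m] for l in machines[m]))

# Hypothetical shop state: two capable machines with queued jobs
machines = {"M1": ["j1", "j2"], "M2": ["j3"]}
job_times = {"j1": 3, "j2": 3, "j3": 5}
lam_hat = {"M1": 0.0, "M2": 3.0}   # M2 historically deviates strongly

print(qle(machines, ["M1", "M2"], job_times))            # M2 (ideal queue: 5 < 6)
print(lqle(machines, ["M1", "M2"], job_times, lam_hat))  # M1 (with deviations: 6.0 < 8.0)
```

In the example, QLE picks M2 because its ideal queue is shorter, while LQLE penalizes M2 for its historically poor quality and picks M1 instead — exactly the effect of learning from recorded deviations.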
There is a multitude of rules that these production order agents can apply, such
as first-in first-out (FIFO) or last-in first-out (LIFO). For our simulation, we give
preferential treatment to jobs of production orders with the most remaining subsequent
jobs, otherwise applying FIFO. This approach aims to enable the entire
manufacturing network to process a mixture of different product types efficiently,
ranging from simple products requiring only two jobs to more complicated prod-
ucts requiring three or four jobs. The overall objective of the manufacturing
system is the fulfillment of every production order. Beyond this basic require-
ment, there is a multitude of further objectives that could be considered, such
as the reduction of costs, the maximization of the manufacturing system's utilization,
or a reduction to the minimal number of iterations necessary to fulfill production
orders, among many others.
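The sequencing rule just described — prefer the job whose production order has the most remaining subsequent jobs, fall back to FIFO on ties — can be sketched as follows; the queue representation is an illustrative assumption.

```python
def next_job(queue):
    """Pick the next job from a machine queue (sketch): maximize the
    number of remaining subsequent jobs; break ties FIFO, i.e. by the
    smallest arrival index."""
    return max(queue, key=lambda j: (j["remaining"], -j["arrival"]))

queue = [
    {"order": "O1", "remaining": 1, "arrival": 0},
    {"order": "O2", "remaining": 3, "arrival": 1},
    {"order": "O3", "remaining": 3, "arrival": 2},
]
print(next_job(queue)["order"])  # O2: ties with O3 on remaining jobs, earlier arrival wins
```

Prioritizing orders with many remaining jobs pushes long product routings forward early, which is what lets the mix of two-, three- and four-job products flow through the network efficiently.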
4 Findings
In order to compare the QLE and LQLE dispatching rules, we devised the fol-
lowing simulation setup. We consider a job shop with a total of 20 machines
and four different machine types, with five machines per machine type. For
every machine type, the five machines are of different quality, represented by the
parameter λ ∈ {0, 1, 2, 3, 4} of its Poisson distribution, modeling the machine's
deviation from the ideal processing time.
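The machine model above can be sketched as follows: twenty machines (four types, five quality levels each), where a step's actual duration is the ideal time plus a Poisson(λ)-distributed deviation. The sampler and all names are assumptions for illustration; Knuth's inversion method is used here only to avoid external dependencies.

```python
import math
import random

def processing_time(ideal, lam, rng):
    """Actual duration of a machining step: ideal time plus a
    Poisson(lam)-distributed deviation modeling machine quality."""
    # Knuth's inversion method for sampling a Poisson variate
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return ideal + k
        k += 1

# 20 machines: four machine types, five quality levels lambda = 0..4 each
shop = [(m_type, lam) for m_type in "ABCD" for lam in range(5)]

rng = random.Random(7)
print(len(shop))                   # 20
print(processing_time(5, 0, rng))  # a lambda = 0 machine never deviates: 5
```

A λ = 0 machine is deterministic, while larger λ shifts the expected duration upward by λ iterations — the quantity the LQLE rule estimates from each machine's history.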
This setup is depicted in Fig. 2. We consider a time frame of a total of 500 iterations
for each simulation run, divided into intervals of 50 iterations each. Production orders
enter the system in the form of batches, with a batch consisting of a multitude of
production orders of the same product type. Every iteration interval features exactly the
same batches of production orders entering the system, at the same relative iterations
respectively. Production orders of these batches can either be finished within the
iteration interval in which they were ordered, or extend into the following iteration
interval, delaying the batches of that iteration interval. Consequently, we introduce
batches of production orders in the first nine of the total of ten iteration intervals.
For this simulation, we consider six different batches of six different product types,
respectively. These batches vary in size (number of production orders) and the iteration
at which they enter the manufacturing system. In conclusion, the load added to the
manufacturing system every iteration interval is identical, with production orders not
finished within their respective iteration interval impacting the processing of the
following iteration interval. The constant base load per iteration interval is chosen
such that, in principle, it can be processed entirely within the iteration interval in
which it entered the manufacturing network.

[Fig. 2. Job shop configuration: four machine types with five machines each, of quality
levels λ = 0 … 4.]
[Figure: completed orders per interval (%) and additional iterations required per
interval (intervals 1–9), comparing the control methods QLE and LQLE.]
5 Conclusion
The conducted simulation provides interesting insights into the potentials of dis-
tributed control for PPC in shop floor settings. The presented DES-MAS simu-
lation setup allows us to distribute control to agents corresponding to machines
and production orders, thus enabling these agents to exhibit local control over
parts of the manufacturing process. With this local decision making, we explored
the potential of local data processing by considering historic data on a machine level.
References
Antons, O., Arlinghaus, J.C.: Modelling autonomous production control: a guide to
select the most suitable modelling approach. In: Lecture Notes in Logistics, pp. 245–
253, January 2020a. https://doi.org/10.1007/978-3-030-44783-0 24
Antons, O., Bendul, J.: Decision making in Industry 4.0 – a comparison of distributed
control approaches. In: Studies in Computational Intelligence, vol. 853, pp. 329–339,
January 2020b. https://doi.org/10.1007/978-3-030-27477-1 25
Aström, K.J.: Process control - past, present, and future. IEEE Control Syst. Mag.
5(3), 7 (1985)
Beregi, R., Szaller, Á., Kádár, B.: Synergy of multimodelling for process control. IFAC-
PapersOnLine 51(11), 1023–1028 (2018). https://doi.org/10.1016/j.ifacol.2018.08.
473
Bertelsmeier, F., Trächtler, A.: Decentralized controller reconfiguration strategies for
hybrid system dynamics based on product-intelligence. In: 2015 IEEE 20th Con-
ference on Emerging Technologies & Factory Automation (ETFA). IEEE, pp. 1–8
(2015). https://doi.org/10.1109/ETFA.2015.7301527
Blunck, H., et al.: The balance of autonomous and centralized control in scheduling
problems. Appl. Netw. Sci. 3(1) (2018). https://doi.org/10.1007/s41109-018-0071-6
Bongaerts, L., et al.: Hierarchy in distributed shop floor control. Comput. Ind. 43(2),
123–137 (2000). https://doi.org/10.1016/S0166-3615(00)00062-2
Caridi, M., Cavalieri, S.: Multi-agent systems in production planning and control:
an overview. Prod. Plan. Control 15(2), 106–118 (2007). https://doi.org/10.1080/
09537280410001662556
Duffie, N.A.: Synthesis of heterarchical manufacturing systems. Comput. Ind. 14(1–3),
167–174 (1990). https://doi.org/10.1016/0166-3615(90)90118-9
202 O. Antons and J. C. Arlinghaus
Grundstein, S., Freitag, M., Scholz-Reiter, B.: A new method for autonomous control
of complex job shops – Integrating order release, sequencing and capacity control to
meet due dates. J. Manuf. Syst. 42, 11–28 (2017). https://doi.org/10.1016/j.jmsy.
2016.10.006
Hussain, M.S., Ali, M.: Distributed control of flexible manufacturing system: control
and performance perspectives. Int. J. Eng. Appl. Manage. Sci. Paradigm 54(2), 156–
162 (2019)
Jones, A.T., Romero, D., Wuest, T.: Modeling agents as joint cognitive systems in
smart manufacturing systems. Manuf. Lett. 17, 6–8 (2018). https://doi.org/10.1016/
j.mfglet.2018.06.002
Koinoda, N., Kera, K., Kubo, T.: An autonomous, decentralized control system for
factory automation. Computer 17(12), 73–83 (1984)
Meissner, H., Ilsen, R., Aurich, J.C.: Analysis of control architectures in the context of
Industry 4.0. Procedia CIRP 62, 165–169 (2017). https://doi.org/10.1016/j.procir.
2016.06.113
Monostori, L., et al.: Cooperative control in production and logistics. Ann. Rev. Control
39, 12–29 (2015). https://doi.org/10.1016/j.arcontrol.2015.03.001
Morariu, O., et al.: Multi-agent system for heterarchical product-driven manufacturing.
In: 2014 IEEE International Conference on Automation, Quality and Testing, Robotics,
pp. 1–6. IEEE (2014). https://doi.org/10.1109/AQTR.2014.6857897
Philipp, T., Böse, F., Windt, K.: Evaluation of autonomously controlled logistic pro-
cesses. In: Proceedings of 5th CIRP International Seminar on Intelligent Computa-
tion in Manufacturing Engineering. CIRP, The International Academy for Produc-
tion Engineering, pp. 347–352 (2006)
Romero, D., Jones, A.T., Wuest, T.: A new architecture for controlling smart man-
ufacturing systems. In: 2018 International Conference on Intelligent Systems (IS).
IEEE, pp. 421–427 (2018)
Palau, A.S., Dhada, M.H., Parlikad, A.K.: Multi-agent system architectures for col-
laborative prognostics. J. Intell. Manufact. (2019). https://doi.org/10.1007/s10845-
019-01478-9
Scholz-Reiter, B., et al.: The influence of production networks’ complexity on the per-
formance of autonomous control methods. In: Proceedings of the 5th CIRP Interna-
tional Seminar on Computation in Manufacturing Engineering, pp. 317–320 (2006)
Scholz-Reiter, B., et al.: Modelling and analysis of autonomously controlled produc-
tion networks. IFAC Proc. Vol. 42(4), 846–851 (2009). https://doi.org/10.3182/
20090603-3-RU-2001.0081
Trentesaux, D.: Distributed control of production systems. Eng. Appl. Artif. Intell.
22(7), 971–978 (2009). https://doi.org/10.1016/j.engappai.2009.05.001
Wang, L., Törngren, M., Onori, M.: Current status and advancement of cyber-physical
systems in manufacturing. J. Manuf. Syst. 37, 517–527 (2015). https://doi.org/10.
1016/j.jmsy.2015.04.008
Weichart, G., et al.: An agent- and role-based planning approach for flexible automation
of advanced production systems. In: 2018 International Conference on Intelligent
Systems (IS), May 2019. https://doi.org/10.1109/IS.2018.8710546
Zambrano Rey, G., et al.: Reducing myopic behavior in FMS control: a semi-
heterarchical simulation-optimization approach. Simul. Model. Pract. Theory 46,
53–75 (2014). https://doi.org/10.1016/j.simpat.2014.01.005
A Reactive Approach for Reducing the Myopic
and Nervous Behaviour of Manufacturing
Systems
Abstract. Scheduling is a crucial activity for the successful control and piloting
of manufacturing activities. Manufacturing systems operate in dynamic environ-
ments vulnerable to real-time events, which frequently force a reactive revision of
pre-established schedules. In an uncertain environment, it has become preferable
to adapt the scheduling from the pre-established schedule, as the latter may become
vastly degraded by unexpected events. For this, a reactive module is often included
to update the schedule in order to assure manufacturing execution. However,
due to the need for a rapid response, this update may suffer from
myopia and/or nervousness issues. This paper aims to develop a proof of
concept of a decision support system for practitioners, attempting to minimize the
degradation and lack of information during these scenarios. This approach starts
with a metaheuristic technique to generate a predictive schedule for establishing
an initial scheduling, named pre-established schedule. Then, during disruptions, it
executes a scheduling updating through a reactive module based on a heuristic tech-
nique. The proposed approach was tested in a simulated scenario of a real flexible
manufacturing system located in Valenciennes (France), called AIP-PRIMECA
Valenciennes.
1 Introduction
The arrival of new concepts, methods and technologies related to the digital transfor-
mation revolution has had a substantial influence on manufacturing industries. It is
based on the establishment of smart factories, smart products, and smart services [11].
This revolution is perceived to be the key to higher levels of automation, to more efficient
processes and to better planning and control of manufacturing systems to achieve higher
flexibility and robustness. In a smart factory scenario, the definition of a control system
2 Proposed Approach
This section introduces the proposed approach that attempts to reduce the impact of
both myopia and nervousness in dynamic hybrid control architectures. First, a reactive
mechanism is presented. It is characterized by a local performance indicator trigger that
evaluates the status of the system and the need for a switch from the pre-established
schedule to a reactive policy. Second, a reactive control policy guided by a heuristic
is introduced. It contains strategies to control both myopia and nervousness based on
the supervisory entities and communication protocols mechanisms using multi-agent
systems [15].
This proposal is based on the Pollux architecture definitions [8]. Pollux is built over
three layers: the operation, the coordination, and the physical layer. It is composed of
three main types of decisional entities: the global decisional entities (GDE), the local
decisional entities (LDE) and the resource decisional entities (RDE). A decisional entity
is mainly composed of an entity objective, a decision-making technique, governance
parameters, and a communication component.
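The composition of a decisional entity can be sketched as a simple data structure. This is an illustrative assumption, not the Pollux API: the class name, attributes and the lambda policies below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionalEntity:
    """Sketch of a Pollux-style decisional entity: an objective, a
    decision-making technique, governance parameters, and a
    communication component."""
    name: str
    objective: str
    decide: callable                           # decision-making technique
    governance: str = "coercive"               # governance parameter: coercive | permissive
    inbox: list = field(default_factory=list)  # communication component

    def receive(self, message):
        self.inbox.append(message)

gde = DecisionalEntity("GDE-1", "minimize makespan",
                       decide=lambda state: "follow pre-established schedule")
lde = DecisionalEntity("LDE-7", "minimize local lateness",
                       decide=lambda state: "await GDE instruction")

lde.receive(gde.decide(None))   # GDE imposes its instruction on the LDE
print(lde.governance, lde.inbox)  # coercive ['follow pre-established schedule']
```

With all governance parameters initialized to coercive, LDEs simply enact GDE instructions, matching the initial operating mode described below.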
The reactive mechanism (RM) introduced in this approach is characterized by three
components. Regarding the first component, the RM has both structural and behavioural
characteristics. Concerning the structural characteristics, the RM can switch the
governance parameter between decisional entities, i.e. coercive or permissive. Then, it
is feasible that the physical layer simultaneously contains entities guided by a
hierarchical decision relationship (coercive relationship) and entities following
heterarchical decisions (permissive relationship). Regarding the behavioural
characteristics, the RM switches the objective function and the decision-making
processes of decisional entities to react to possible perturbations occurring in the
system. This means that the GDEs and the LDEs have
206 S.-M. Meza et al.
different behavioural characteristics. Regarding the second aspect, the degree of opti-
mality reached by RM is heuristic. Once RM switches the operation mode (structural
and behavioural characteristics), a reactive heuristic is executed by the LDEs whose
governance parameter state is permissive. Finally, the reason for switching is to react to
unforeseen events. The reactive mechanism is triggered by a local performance indicator
(LPI) measurement. Hence, it is necessary to monitor the system continuously to gather
real-time system information with the aim of estimating the indicator.
A major advantage of RM is that it limits the nervousness of the control architecture.
Furthermore, RM leads to a fair use of LDE’s decision-making technique assuring a
reduction of the disturbance effect on the system while avoiding a global performance
decrease. To achieve that, RM changes the governance parameter state only of the entities
affected by a perturbation during the execution. At the beginning of the execution, the
operating mode establishes all governance parameters as coercive; this means that the
GDEs impose the instructions on the LDEs. The process starts when RM retrieves the
data from the set of jobs being processed. Once the LPI of an LDE exceeds the reference
threshold (rt), the reactive mechanism is triggered. This makes it possible to know
precisely which manufacturing entities are actually impacted by the disturbance. The LPI
is also used to switch back the governance parameter: when the indicator value of an LDE
falls below rt, the LDE again follows the commands initially given by the GDE.
Figure 1 shows the progress of the following control approaches: proposed approach
(green line); predictive–reactive approaches (red line) without control strategies; and
centralized approaches (blue line). For the centralized approach, there is a degradation
of both indicators caused by the perturbation occurrence. In the predictive-reactive
approach, manufacturing execution is assured, but while one indicator improves, the other
may suffer further degradation because of the nature of reactive decisions and the lack
of control over the architecture's drawbacks. The proposed approach achieves a balance
between the indicators (improvement of both) given the proposed mechanisms.
Table 1 summarizes the proposed mechanisms and their relationship with the D-HCA
drawbacks.
[Fig. 1. Conceptual model of the relationship between global and local objectives over
the local performance indicator (LPI), comparing no reactive policy, a reactive policy
without the proposed mechanisms, and a reactive policy with the mechanisms included.]
3 Case Study
This section has been divided into three parts. The first part describes the flexible job
shop system used in this case study. Then, the instantiation of the proposed D-HCA
is presented, considering the inclusion of the proposed approach into the structural and
behavioural characteristics of the D-HCA. Finally, the experimental protocol on the case
study, which validates the benefits of including a coupled strategy for reducing the
myopic and nervous behaviour of distributed systems, is presented and conducted.
guided by the reactive heuristic technique. Its objective is to minimize the LPI of each
LDE, considering the strategies described in Sect. 2, as shown in Fig. 2. With regard to
the job priority strategy, it was defined that the amount of most affected jobs (greatest
LPI) corresponds to 20% of the total number of jobs to be processed. This implies that
only these entities continue to look for a reactive solution. Regarding the queue strategy,
the maximum queue size was set at 1. Two PDPs were defined in the heuristic. The first
one corresponds to the turning points (TN) in the manufacturing cell. The second
corresponds to the machines (MN). Decisions are executed only at those points. The
RDEs control the resources and their role is static, as their behaviour is not changed
by the switching mechanism.
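The job priority strategy — only the 20% most affected jobs (greatest LPI) continue searching for a reactive solution — can be sketched as follows; the dictionary representation of LDEs and the function name are assumptions for illustration.

```python
def most_affected(ldes, fraction=0.2):
    """Job priority strategy (sketch): return the jobs of the fraction
    of LDEs with the greatest local performance indicator; only these
    keep searching for a reactive solution. The 0.2 default follows
    the 20% setting used in the case study."""
    n = max(1, round(len(ldes) * fraction))
    ranked = sorted(ldes, key=lambda e: e["lpi"], reverse=True)
    return {e["job"] for e in ranked[:n]}

# Ten hypothetical LDEs with their current LPI values
ldes = [{"job": f"j{i}", "lpi": lpi}
        for i, lpi in enumerate([0, 12, 3, 25, 7, 1, 18, 2, 0, 5])]
print(sorted(most_affected(ldes)))  # ['j3', 'j6'] — the two greatest LPIs
```

Restricting the reactive search to this subset is what limits recirculation: the remaining 80% of jobs keep following the pre-established schedule.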
L_j = t − ts_j    (1)
where t is the current execution time and tsj is the expected start time of the next
operation derived from the pre-established schedule. The mechanism switches the gover-
nance parameter to permissive of each LDE whose associated LPI exceeds the reference
threshold. It can be defined as follows:

governance(X) = permissive if L_j(X) > α_0, and coercive otherwise,

where X is the current state of the LDE on the shop floor and α_0 is the reference
threshold. For the experiments, the parameter α_0 is set to 0. Similarly, when the LPI
value of an LDE is below α_0 the mechanism switches back the governance parameter to
coercive. From that switching point, the LDE follows the instructions given by the GDE
in order to complete the remaining operations. This approach does not need switching
synchronization because the reactive decision-making is executed in real time.
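The switching rule built on Eq. (1) can be sketched in a few lines; the dictionary-based LDE representation and the function name are illustrative assumptions, not the architecture's implementation.

```python
def update_governance(lde, t, alpha0=0.0):
    """Switching rule sketch: compute the LPI as L_j = t - ts_j
    (Eq. (1)), where ts_j is the expected start time of the next
    operation in the pre-established schedule. If the LPI exceeds the
    reference threshold alpha0 the governance parameter becomes
    permissive; once it falls back below, it returns to coercive."""
    lpi = t - lde["ts_next"]   # Eq. (1): lateness w.r.t. the schedule
    lde["governance"] = "permissive" if lpi > alpha0 else "coercive"
    return lpi

lde = {"job": "j1", "ts_next": 100, "governance": "coercive"}
print(update_governance(lde, t=130), lde["governance"])  # 30 permissive
print(update_governance(lde, t=95), lde["governance"])   # -5 coercive
```

With α_0 = 0, as in the experiments, any positive lateness against the pre-established schedule immediately hands the affected LDE over to the reactive heuristic, and recovery hands it back to the GDE.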
[Fig. 2. Reactive heuristic flowchart: starting from a check of whether the LDE is one
of the most affected entities, the heuristic tests whether machine m can process the
LDE's next operation and whether machine m is empty; if so, machine m is reassigned into
the LDE machine sequence to process the next operation, otherwise the counter f is
incremented (f = f + 1) and the search continues.]
M7 suffers a breakdown. It lasts 15 times the processing time of all products in seconds.
The moment of disruption is after the departure of the first shuttle from M7. For the
experiments, three different scenarios were defined to evaluate the performance of the
proposed D-HCA. In scenario A the described disruption was modelled following only
the predictive decisions made by GDEs (fully predictive approach). In scenario B, the
architecture integrates the switching mechanism and the reactive heuristic. The latter
does not consider the strategies presented in Table 1. Finally, in scenario C, the D-HCA
integrates the proposed strategies and is instantiated as presented in this chapter.
3.4 Results
Figure 3 presents the makespan obtained by each scenario for mrj_101. It also shows the
evolution of the mean lateness during execution for the same instance. Firstly, the result
reinforces that performance varies depending on the reactive policy configuration used
in the control architecture as shown in Fig. 1. The architecture described in scenario A
does not have a reactive behaviour and therefore is not able to make alternative deci-
sions. The reactive mechanism allows alternative decisions to be chosen and it absorbs
the degradation caused by disruptions. Secondly, the graph shows that the reactive pol-
icy of scenarios B and C represents improvements on the global performance measure
compared to the reference scenario. However, in scenario B, given the absence of mech-
anisms to control myopia, there is no balance between the global and local objectives
of the system. Therefore, although the value of makespan is reduced, the final value of
mean lateness increases.
Finally, in scenario B, the CTV value (20846) is higher than in scenario A (CTV
= 20215) since the decisions were only reactive to ensure the production continuity
but did not consider other entities information. In scenario C the CTV value decreases
(13036), demonstrating the efficiency of the proposed mechanisms. This was expected,
since the approach achieves a balance between the indicators.
Figure 4 shows the number of changes of jobs decisions in 10 s intervals during
execution and presents the evolution of the total number of decision changes of the jobs
for the mrj_101 instance. This result confirms that the proposed mechanisms achieve
their objective of controlling the nervousness of the system. On the one hand, they reduce
the number of decision changes in the presented time interval from a maximum of six
changes (scenario B) to a maximum of two changes (scenario C). On the other hand, they
reduce the total number of decision changes during execution from 82 changes to 25
changes in scenarios B and C, respectively. Additionally, the NI calculated for scenario
C (0.40) was lower than in scenario B (2.14).
[Fig. 3. Evolution of the mean lateness during execution and final makespan per
scenario, with the perturbation occurrence marked: A (makespan 651, Lt = 85.2,
CTV = 20215), B (makespan 632.2, Lt = 129.85, CTV = 20846), C (makespan 536.4,
Lt = 52.39, CTV = 13036).]
Table 3 presents the results obtained from the simulation of each scenario described
above. The makespan (mkp) and mean lateness (mlt) refer to the system performance
indicators. The CTV and NI values show myopic and nervous behavior measurements,
respectively. The results indicate a link between the control of myopia and nervousness
and the system performance since the decrease of the indicators led to an improvement of
the system’s global and local indicators. Nevertheless, complementary statistical studies
must be conducted to confirm the generalization of these results.
In scenario A, it was possible to follow the decisions generated by the GDEs since the
disruption did not last the whole execution time. Therefore, products with the machine
M7 in their sequence could be processed after repair. In scenario B, the results reinforce
the reactivity provided by the heuristic technique. In fact, it reduces the degradation on
the global indicator caused by the disturbance. However, given the myopic decision-
making (myopic behaviour), the local indicator performance suffers greater degradation
than in the reference scenario in which no reactive decisions were made. Furthermore,
the nervousness in the decision making, i.e. changing the selected machine many times
(reactive decision), causes the products to start looping through the flexible job shop
in search of a decision. On the contrary, in scenario C, the myopic behaviour is reduced by the
queue control strategy achieving a balance between the global and local indicators of the
system. The job priority strategy reduces the recirculation of products in the system, as
only 20% of them search for a reactive intervention avoiding the changes of intentions
caused by the reactive mechanism. Physical points strategy has an impact as well on the
recirculation of jobs in the system. Its major advantage is that it allows the job to remain
on the machine being processed if it is one of the most affected (greatest LPI indicator)
and if that machine can perform its next operation.
[Fig. 4. Number of decision changes per 10 s interval and cumulative total number of
decision changes over the execution time (0–600 s), with the perturbation occurrence
marked, for scenarios B (Tc = 82) and C (Tc = 25, NI = 0.40).]
4 Conclusions
This paper proposed a D-HCA that integrates a reactive mechanism and a reactive control
policy within the functioning of the semi-heterarchical system to reduce the myopic and
nervous behaviour in the dynamic scheduling of a flexible job shop problem. The results
confirm that including coupled strategies in a D-HCA reduces the myopic behaviour,
minimizing the degradation suffered from perturbations, and reduces the nervousness due
to the switching between reactive decisions. Taken together, these findings suggest
including coupled strategies to promote the control of undesirable behaviour within the
control of distributed manufacturing systems.
Despite its exploratory nature, this study offers some insight into the synergy afforded
by the coupled strategies. A natural progression of this work is to analyze the parameters
of the proposed strategies that minimize the degradation between the globally expected
metrics and the local execution metrics.
214 S.-M. Meza et al.
However, further studies need to be carried out to validate the benefits of coupling
strategies and to explore the likely trade-off resulting from seeking to reduce myopic
behaviour and nervousness simultaneously.
Multi-agent Approach for Smart Resilient City
Abstract. The Smart City concept now entirely relies on information and com-
munication technologies (ICT) with projects providing new or better services for
city residents. The resilience of a city, from our perspective, should also rely on
ICT. Resilience as a service implies predictive modelling and “what if” analysis
for better reaction to unpredictable events and for providing emergency services in
critical modes of city life. Resilience as a property of a city means operating as
normally as possible for citizens when extreme events occur, and adaptively reacting
and changing the system’s behavior when the normal mode cannot be maintained.
This paper provides a review and analysis of the resilience properties of
existing Smart City frameworks and offers a new concept of a resilient city based
on the Demand-Resource (DR) model, multi-agent systems (MAS) and ontologies.
The main idea of this concept is to create an ICT framework that is resilient by
design. For this, it should operate as a digital ecosystem of smart services. The
framework development process is divided into two main steps: first to create
Smart City simulation software for modelling, planning, and strategic assessment
of urban areas as a set of models at different levels of abstraction. The second
step involves the full integration of all services in one dynamic adaptive real-time
digital ecosystem with resilient properties.
1 Introduction
1.1 Basic Definitions
Recent developments place increasing emphasis on strengthening the resilience of ter-
ritorial units to global climate change, natural disasters, social unrest, terrorist attacks,
and cyber-attacks or power outages.
A comprehensive definition of urban resilience was given in [1]. In the classical approach
of Holling [2], resilience is the ability of a system to continue to function through
change, though not necessarily remaining the same. Chelleri [3] defines urban resilience
not as the ability to return to a baseline condition, but as the ability to change, evolve, and
adapt smoothly. From another perspective, a resilient city is a sustainable network of
physical systems and human communities [4]. Mehmood [5] emphasizes the need to
consider the city as a complex adaptive system when assessing resilience. Linkov [6] notes the
need for a network-centric approach to addressing urban sustainability. In the theory of
complex systems, urban resilience is the ability to evolve [7].
These definitions contradict one another on some points. Therefore, in our work
we adhere to the definitions of urban sustainability, urban resilience, and urban
transformation that Elmqvist et al. [8] derived in their research:
Urban sustainability - manage all resources the urban region is dependent on and enhance
the integration of all sub-systems in an urban region in ways that guarantee the wellbeing
of current and future generations, ensuring distributional equity.
Urban resilience - the capacity of an urban system to absorb disturbance, reorganize,
maintain essentially the same functions and feedback over time, and continue developing
along a particular trajectory. This capacity stems from the character, diversity, redun-
dancies, and interactions among and between the components involved in generating
different functions.
Urban transformation - the systemic change in the urban system. It is a process of
fundamental irreversible changes in infrastructures, ecosystems, agency configurations,
lifestyles, systems of service provision, urban innovation, institutions, and governance.
As opposed to our research, Trucco et al. [18] modelled the city using ontologies
not for planning, but for resilience assessment. Also, several other works are devoted
to modelling the city to assess its resilience. Uribe-Pérez and Pous [19] argue that
the connections in a city require a unique modelling architecture. Inspired by the
human nervous system, they propose giving the service bus spinal-cord-like functions
for the simplest and quickest reactions to events. Cavallaro et al. [20] address the
vast number of connections in the city using hybrid social-physical complex networks.
Compared with these approaches, we focus in our work on using multi-agent systems
as main decision-making elements, with the city knowledge base as the core element of
storing knowledge about decision-making processes and the most promising method of
ensuring the interoperability of services. It is a network in which all nodes are accessible
to each other and are capable of self-organization and risk management.
To summarize, many cities adopt SRC strategies. These provide a
definition and vision, but they do not explain how to create the SRC. Based on the
review of current technologies, we will consider the SRC as a network-centric system based
on ontologies. It should manage the flow of resources and all city services; be capable of
self-organization, adaptability and risk management; and, vitally important, have a full
understanding of the current situation and instruments for modelling “what if” scenarios.
entirely autonomously, but able to interact and negotiate, and through concessions make
consistent decisions with other systems [21].
In the near future, all smart city systems and smart transport will work in service
mode. Users will simply specify two points, A and B, the origin and destination.
The system will offer several options differing in price, comfort, time, and other
preferences. Even public transport schedules will be developed based on real-time
demands from customers; solving the global transport task will combine the
resources and demands of millions of users. To solve this task, we need to create a
global city Demand-Resource (DR) model.
The DR concept is widespread in multi-agent systems. In our work, we extend the
classical vision and offer a general DR model for the whole city. Agents are not localized
in each separate service but can negotiate on the free market of the whole city. In our
model, a demand created in one service can be matched by different solutions from multiple
agents of other services. Every service competes or cooperates for resources and demands
through a special service bus.
Every service in this platform can be functionally unique while sharing the core
architecture and the same basic principles. The core of every service is a
Demand and Resource model plus open access to the collective knowledge base. This
allows services to solve their internal tasks while also being part of bigger services (the holonic
principle) [22].
The SRC framework is designed as an autonomous smart cyber-physical system, able
to analyze the situation, make decisions and plan its actions, as well as monitor the
execution of plans and results, predict the development of the situation, and communicate
with all participants [21]. It allows new elements (smart services) to easily enter or
leave this digital ecosystem and provides full authorized access to the Smart City
Knowledge Base. Besides the KB, the other core element of the Smart Resilient City is
the virtual forum of agents acting as the global city Demand and Resource model. Every agent at
this level represents a smart service of the city. We describe the SRC framework as a
large-scale networked application composed of functionally similar elements:
• “Task” (demand, order, request for services) incoming from any entity within the
system or external world;
• “Resource” as the particular entity or product of the smart city services (taxi, parking
spot, a table in the restaurant, etc.);
• “Data source” as the basic telemetry data (sensor, GPS, cloud data, etc.);
• “Software” to support platform operation.
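The element types listed above can be captured in a minimal data model. The following sketch is illustrative only; the class and field names are our assumptions, not the framework's actual API.

```python
# Minimal data model for the framework elements: Task (demand), Resource
# (product of a smart city service), and Data source (telemetry).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:                 # demand, order, or request for services
    task_id: str
    kind: str               # e.g. "parking", "taxi"
    params: dict = field(default_factory=dict)

@dataclass
class Resource:             # a concrete entity offered by a smart city service
    resource_id: str
    kind: str
    capacity: int = 1

@dataclass
class DataSource:           # basic telemetry (sensor, GPS, cloud data)
    source_id: str
    last_reading: Optional[float] = None

def match(task: Task, resources: List[Resource]) -> Optional[Resource]:
    """Naive matching: reserve the first free resource of the requested kind."""
    for r in resources:
        if r.kind == task.kind and r.capacity > 0:
            r.capacity -= 1
            return r
    return None
```

A real service bus would replace `match` with agent negotiation, but the same task/resource vocabulary applies.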
The primary approach is to achieve the interaction of all tasks and resources. This
means that every problem solved by SRC can be described as a combination of demand
and resource interactions. Practically, we can simulate different city sectors as shown in
Fig. 1 using different simulation software for transport, energy, land use, environment,
or other segments.
Each technical component (building, street light, charging station, etc.) or user (citizen, municipality,
group of people) requires limited resources (energy, transport,
parking slot, land, etc.) in a given time interval t. We call these dynamic demand
requirements. This means we need to make a plan for all entities.
To solve this task, we use a multi-agent system where all requirements and resources
are represented by Demand Agents and Resource Agents, which can negotiate among
themselves. In multi-agent systems (MAS), we can organize negotiations among demand
agents through different modelling and simulation tools [23]. Each model (transportation,
energy, environment, etc.) plays the role of “a dynamical digital DR market place” with
limited time-varying resources. Different demand agents negotiate in each time interval
to capture requested resources.
Our approach to an SRC resembles a puzzle whose pieces (urban areas) can
be assembled into higher urban units such as districts or whole cities. Negotiation among
Demand agents within the city simulation model yields dynamic resource
assignments, represented by Resource agents that offer the best possible service to each
consumer. If a consumer does not accept the assigned resources, it must change
its demands, and the negotiation is repeated under the new conditions.
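A single negotiation round of this kind can be sketched as follows. The greedy granting order and the halving rule used to "change a demand" are illustrative assumptions for the example, not the paper's actual protocol.

```python
# Illustrative negotiation round: demand agents compete each time interval
# for limited resources; any unmatched agent must relax its demand and the
# round is repeated under the new conditions.

def negotiate(demands, capacity, max_rounds=5):
    """demands: {agent: requested_amount}; capacity: total units available.
    Each round, requests are granted greedily from largest to smallest;
    losers halve their request (a simple 'change of demand') and retry."""
    granted = {}
    for _ in range(max_rounds):
        granted, remaining = {}, capacity
        for agent, amount in sorted(demands.items(), key=lambda kv: -kv[1]):
            if amount <= remaining:
                granted[agent] = amount
                remaining -= amount
        if len(granted) == len(demands):
            return granted              # every agent got an assignment
        # unmatched agents change their demands for the next round
        demands = {a: (amt if a in granted else max(1, amt // 2))
                   for a, amt in demands.items()}
    return granted
```

For example, two agents each demanding 6 units of a capacity of 10 converge after one concession to the assignment `{"A": 6, "B": 3}`.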
The resulting approach is represented by a plan acting as an interface between aggregated
demand agents assigned to the different smart city components and aggregated urban sustainability
parameters (economic, environmental, and social). The decision-makers, typically
the municipality, specify the sustainability parameters (KPIs) for the whole
urban area. The demand and resource agents negotiate with the city simulation
model to propose to each smart city component a reduced comfort level that fulfils the requested
KPIs. We use the DR service architecture to combine requests and resources, and a multi-agent
system to create a work plan (satisfying all demands with limited resources),
so that every match of a demand and a resource has its own time slot. From this
perspective, the SRC becomes a demand and resource model augmented with agents,
satisfaction functions, bonuses-penalties and compensations. This model and technology
allow a plan to be rebuilt in real time. When unpredictable events occur, we can reschedule
the emergency services, governmental services, and other services. This already gives a first but
220 S. Kozhevnikov et al.
essential profile for the SRC. If the rescheduling process can be performed automatically,
we say that this architecture is resilient by design (Fig. 2).
The second SRC profile can be the modelling of different “what if” scenarios of city
development. The MAS can provide hundreds of different variants according to the set
of chosen KPIs. Different systems built on top of the proposed framework will provide
additional services (AI, knowledge bases, blockchain, and other instruments).
Currently, most Smart City concepts rely on creating one unique database (DB) to
provide access for city services [24]. Creating an ontology as a knowledge base on
top of such data sets is a new idea that has not yet achieved popularity.
This task requires formalizing knowledge of all aspects of the SRC, enabling
simple access to this knowledge for different services, and supporting interaction within
digital platforms and ecosystems.
The city’s ontology-based model can specify main objects such as buildings, roads, bus
stops, traffic lights, energy sources, the environment, and others. These objects can have a
detailed description - a building can be a restaurant, a business centre or a residential building,
with many other properties stored as attributes (number of floors, date of construction,
etc.). The ontology can also describe people, their activities or requests, requirements,
or business processes.
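A minimal way to picture such an ontology is as a set of subject-predicate-object triples with pattern queries. The vocabulary below (object names, predicates) is entirely illustrative, not taken from the actual city model.

```python
# Toy ontology fragment for the city model: triples relate objects
# (buildings, bus stops, citizens) to their types and attributes.
triples = {
    ("Building_42", "is_a", "Restaurant"),
    ("Building_42", "floors", 3),
    ("Building_42", "built", 1928),
    ("Restaurant", "is_a", "Building"),
    ("BusStop_7", "is_a", "BusStop"),
    ("Citizen_1", "requests", "parking"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return {t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)}
```

Because relationships are first-class data, new ones (such as "private slot rentable in emergency") can be derived from existing properties, which is the mechanism the Smart Parking example below relies on.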
For example, we can take the Smart Parking Service as an automated service that
provides the following features:
• It is available to citizens via a mobile application supporting the search, reservation,
and payment of a car parking place within the city area.
• The mobile application is provided with an embedded interactive city map supporting
the selection, reservation, and payment of the chosen car parking place, visualizing the
shortest car route to the parking and informing parking security about the reservation
made and forthcoming visit.
• The mobile application supports the monitoring and planning of parking place
availability and reservations using an embedded interactive planner.
• Connecting the car parking service with the adjustable city ontology allows new
relationships, based on existing properties, to be created that did not exist before. In case of an
emergency or a full load of parking spaces, the system can analyze private parking
slots and offer the owners the option to rent out their places when they are not occupied.
Thus, the system adds new knowledge about the possibility of parking where
it was previously unfeasible.
The ontology also opens the possibility of changing the designated use of roads. Ontologies
can generate various solutions to problems: launching night buses or new routes,
changing an organization’s work schedule or location, and improving recreational areas
on the outskirts of the city (so that people do not need to seek the centre).
3 Practical Realization
The development of the SRC framework is divided into three main steps. In the first step,
the basic services with their own KPIs and data sets are modelled. In the second step, the
smart services are connected and demonstrate cooperative work: the common KPIs for the
whole city are set up, the “what if” analysis is performed, and the first resilient features
can be shown.
The third step (not presented in this paper) is the implementation of the SRC platform for
creating a digital ecosystem of services. This step involves the full integration of all services
in one dynamic, adaptive, real-time digital ecosystem with resilient and sustainable
properties.
The CSS results can be visualized in two different ways: the SRC equalizer
in Fig. 4, showing the economic, environmental, and social parameters assigned to
each smart city component, or the visual comparison of results produced by different
models in Fig. 5. The mixture of models yields the SRC simulation of “what-if” critical
scenarios and different city cases.
The strategic target is to build an urban virtual model playing the role of a digital
twin of the real city area, in which economic, environmental, and social parameters
are combined with common synergies among the different sectors (transport, environment,
security, etc.) to be optimized. In the future, augmented reality connected with the CSS above
will allow the 3D model to be used to study different “what-if” situations in parallel across all sectors,
e.g. transportation, energy, and the environment, in a unified presentation tool.
224 S. Kozhevnikov et al.
In this system, all suppliers and consumers of resources (gas, electricity, water) are
united in a single information field that allows them to plan and optimize the delivery of
resources at the optimal price in real-time. Users and suppliers indicate the parameters
of consumption and production of resources. After that, agents representing the network
objects begin negotiations, which, through bidding and concessions, are completed by
reaching a consensus that suits all parties. If such a solution is not possible, software
agents make recommendations to consumers and suppliers on how to change the volume
of demand or supply. The recommendations are sent to users of the system via the
user interface (UI). This communication dialogue continues until a proposal
acceptable to all players is formulated. Working with the system, users can also
access various statistics on volumes of consumption, production, and prices.
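The bidding-and-concession dialogue described above can be sketched as a simple price negotiation. The concession step sizes and the recommendation text are illustrative assumptions, not the system's actual parameters.

```python
# Sketch of the bidding-and-concession dialogue: a consumer and a supplier
# concede step by step until their prices cross; if no consensus is reached
# within the round limit, a recommendation to change volume is produced.

def negotiate_price(bid, ask, bid_step=1.0, ask_step=1.0, max_rounds=20):
    """bid: consumer's offered price; ask: supplier's requested price.
    Returns (price, None) on consensus, or (None, recommendation)."""
    for _ in range(max_rounds):
        if bid >= ask:                       # consensus that suits both parties
            return (bid + ask) / 2, None
        bid += bid_step                      # consumer concedes upward
        ask -= ask_step                      # supplier concedes downward
    return None, "reduce demanded volume or increase supply"
```

Starting from a bid of 5 and an ask of 9, the parties meet at 7 after two concessions each; with a far larger gap and few rounds, the dialogue instead yields a recommendation.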
The DRN methodology was used to design and develop the Smart Grid multi-agent
model [25]. As stated in Sect. 2, the only interactions between agents are negotiations.
Based on a predefined ontology, agents solve the task of resource and demand allocation.
Collaboration can be achieved at a higher level, between services (different MA systems),
if their resources and demands are represented as entities of a common nature.
4 Conclusion
The presented concept and framework of the Smart Resilient City 5.0 provide a new
vision of a city as a digital platform and ecosystem of smart services. In this ecosystem,
agents of people, things, documents, robots, and other entities can directly negotiate
with each other over demands and resources and provide the best possible solution. A smart
environment of individuals, groups, or other entities is created, making self-organization
possible in a sustainable or, when needed, resilient way. The digital ecosystem of
smart services - an open, distributed system - can autonomously schedule the resources of
large-scale applications and provide services; it shows resilience properties by design.
The presented outlines show the basics of the system and application services of
the framework and demonstrate the concept through real-life case studies. The authors plan
to continue working on the software. The next challenge is the integration of all services
in one dynamic, adaptive, real-time digital ecosystem with resilient properties and
its implementation in the city of Prague.
Acknowledgment. This work was supported by the AI & Reasoning project CZ.02.1.01/0.0/0.0/
15_003/0000466, by the European Regional Development Fund and by the Technology Agency
of the Czech Republic (TACR), National Competence Center of Cybernetics and Artificial
Intelligence, TN01000024.
References
1. Meerow, S., Newell, J.P., Stults, M.: Defining urban resilience: a review. Landsc. Urban Plan.
147, 38–49 (2016)
2. Holling, C.S.: Resilience and stability of ecological systems. Annu. Rev. Ecol. Syst. 4, 1–23
(1973)
3. Chelleri, L.: From the “Resilient City” to urban resilience. a review essay on understanding
and integrating the resilience perspective for urban systems. Documents d’Anàlisi Geogràfica
58, 287–306 (2012)
4. Cutter, S.: Resilience to What? Resilience for Whom? Geogr. J. 182, 110–113 (2016)
5. Mehmood, A.: Of resilient places: planning for urban resilience. Eur. Plan. Stud. 24(2),
407–419 (2016)
6. Linkov, I., Bridges, T., Creutzig, F., Decker, J., Fox-Lent, C., Kröger, W., Lambert, J., Lever-
mann, A., Montreuil, B., Nathwani, J., Nyer, R., Renn, O., Scharte, B., Scheffler, A., Schreurs,
M., Clemen, T.: Changing the resilience paradigm. Nat. Climate Change 4, 407–409 (2014)
7. Welsh, M.: Resilience and responsibility: governing uncertainty in a complex world. Geogr.
J. 180, 15 (2014)
8. Elmqvist, T., Andersson, E., Frantzeskaki, N., McPhearson, T., Gaffney, O., Takeuchi, K.,
Folke, C.: Sustainability and resilience for transformation in the urban century. Nat. Sustain.
2 (2019)
9. Agudelo-Vera, C., Leduc, W.R.W.A., Mels, A.R., Rijnaarts, H.: Harvesting urban resource
towards more resilient cities. Resour. Conserv. Recycl. 64, 3–12 (2012)
10. Batty, M., Axhausen, K., Giannotti, F., Pozdnoukhov, A., Bazzani, A., Wachowicz, M.,
Ouzounis, G., Portugali, Y.: Smart cities of the future. Eur. Phys. J. Spec. Top. 214, 481–518
(2012)
11. Massei, M., Tremori, A.: Simulation of an urban environment by using intelligent agents
within asymmetric scenarios for assessing alternative command and control network-centric
maturity models. J. Defense Model. Simul. Appl. Methodol. Technol. 11, 137–153 (2013)
12. Brudermann, T., Yamagata, Y.: Behavioral aspects for agent-based models of resilient urban
systems. In: Proceedings of the International Conference on Dependable Systems and Networks, pp. 1–7
(2013)
13. Mustapha, K., Mcheick, H., Mellouli, S.: Smart Cities and Resilience Plans: A Multi-Agent
Based Simulation for Extreme Event Rescuing (2016)
14. Rieger, C., Moore, K.L., Baldwin, T.L.: Resilient control systems: a multi-agent dynamic
systems perspective. In: International Conference on Electro Information Technology, p. 16
(2013)
15. Costin, A., Eastman, C.: Need for interoperability to enable seamless information exchanges
in smart and sustainable urban systems. J. Comput. Civ. Eng. 33, 04019008 (2019)
16. Ganzha, M., Paprzycki, M., Pawłowski, W., Szmeja, P., Wasielewska, K.: Semantic interop-
erability in the internet of things: an overview from the INTER-IoT perspective. J. Network
Comput. 81, 111–124 (2017)
17. Badii, C., Bellini, P., Cenni, D., Martelli, G., Nesi, P., Paolucci, M.: Km4City smart city
API: an integrated support for mobility services. In: IEEE International Conference on Smart
Computing, pp. 1–8 (2016)
18. Trucco, P., Petrenj, B., Bouchon, S., Dimauro, C.: Ontology-based approach to disruption
scenario generation for critical infrastructure systems. Int. J. Crit. Infrastruct. 12, 248 (2016)
19. Uribe-Pérez, N., Pous, C.: A novel communication system approach for a Smart City based
on the human nervous system. Future Gener. Comput. Syst. 76, 314–328 (2017)
20. Cavallaro, M., Asprone, D., Latora, V., Manfredi, G., Nicosia, V.: Assessment of urban
ecosystem resilience through hybrid social–physical complex networks. Comput. Aided Civ.
Infrastruct. Eng. 29, 608–625 (2014)
21. Svítek, M., Skobelev, P., Kozhevnikov, S.: Smart City 5.0 as an urban ecosystem of smart
services. In: Borangiu, T., Trentesaux, D., Leitão, P., Giret Boggi, A. (eds.) Service Oriented,
Holonic and Multi-Agent Manufacturing Systems for Industry of the Future. Studies in
Computational Intelligence, vol. 853, pp. 426–438. Springer, Cham (2020)
22. Kozhevnikov, S., Skobelev, P., Pribyl, O., Svítek, M.: Development of resource-demand net-
works for smart cities 5.0. In: Mařík, V. et al. (eds.) Industrial Applications of Holonic and
Multi-Agent Systems. HoloMAS 2019, Lecture Notes in Computer Science, vol. 11710.
Springer, Cham (2019)
23. Rzevski, G., Skobelev, P.O.: Managing Complexity. WIT Press, Boston (2014)
24. https://golemio.cz/
25. Vittikh, V.A., Skobelev, P.O.: The method of conjugate interactions for resource distribution
control. Avtometriya 45(2), 78–87 (2009)
Ethics and Social Automation
in Industry 4.0
Decision-Making in Future Industrial Systems:
Is Ethics a New Performance Indicator?
damien.trentesaux@uphf.fr
Abstract. This study deals with ethical aspects of decision-making in the con-
text of future industrial systems such as depicted by the Industry 4.0 principles.
These systems involve a great number of interacting elements with more or less
autonomy. In this sense, ethics may become an important means of ensuring a long-
term viable joint integration of humans and artificial elements in future industrial
systems merged into society. Two complementary views can thus be identified
to integrate ethics in such future industrial systems. The first view conventionally
defines ethics as a non-negotiable static set of conditions and rules to be met by
the considered systems throughout their lifecycle. The second view assumes that
ethics can be seen as a performance factor to which a KPI (Key Performance
Indicator) is associated and which can, therefore, be more or less directly mea-
sured and lead to improvement through time. Starting from an overall definition
of the concept of ethics, its conventional vision and its specifications regarding
future industrial systems, these two views are presented and discussed, leading to
the establishment of some properties for the definition of a generic framework
to handle ethics throughout decision-making processes. Concluding remarks and
prospects are finally presented.
1 Introduction
The world is constantly changing, and the rate at which it changes is currently accelerating
with the increasing rate of technological breakthroughs, especially in the digital and
computational worlds. In the industrial sector, programs such as Industry 4.0 [1] are
looking for the right approach to integrate digital technologies in the industry. The
maturity of the systems regarding the use of these technologies is being assessed, leading
to the definition of readiness levels [2], indexes or roadmaps [3, 4] constituting thus points
of reference for digitalisation improvement.
However, while part of the information handled in this transformation is well
controlled, another part carries risks and uncertainties. Thus, a set of emerging
expectations established by society, politicians and regulators is being imposed on
industrialists and researchers when designing and controlling systems in order to cope
with this point. In that context, the risk is that ignoring these major societal expectations
that are rapidly emerging will lead to unsustainable and sterile contributions given, for
example, the usual inertia of the research world to make research topics evolve. From our
point of view, these expectations, relevant to the federative concept of sustainable development,
mainly concern: 1) consideration of the environment and of the limited amount
of hardly-renewable resources on our planet, and 2) the assurance that every technological
development is useful and suitable for human society.
This paper concerns the second point. It focuses, more specifically, on the notion
of ethics and its study in the context of future industrial systems, as fostered by the
concept of Industry 4.0, with a focus on ethical aspects that are relevant to decision-
making. Addressing ethics in future industrial systems is an urgent need. Indeed, the
rapid evolution of digital technologies in Industry 4.0, fostering the multiplication of
sensors and actuators (e.g. Internet of Things - IoT, mobile robotics) and the decision and
learning abilities (e.g. Artificial Intelligence - AI) of digital or cyber-physical systems,
causes the emergence of a great number of new functionalities and potentialities, along
with high ethical stakes for the human decision-makers involved
in industrial systems.
Three factors put ethics at high stakes with regard to industrial decision-making, as
depicted in Fig. 1.
The first factor concerns the fact that the digitalised entities are intended to facilitate
the augmentation, the monitoring or even the replacement of humans, allowing new
possibilities in production control as well as new investigations regarding data analysis
to be enacted. Industrial practices already show some unforeseeable and questionable
situations arising from this advent. The second and third factors come from the fact that
two types of complexity have to be handled: an internal complexity (second factor),
because these entities become more and more complex to understand and control,
and an external complexity (third factor), because it will become more and
more complex to understand and control the interrelations between them and human
society, their consequences and their possible misuse in an unpredictable environment.
Consequently, it is getting more and more important to focus on the ethical behaviour
of all the stakeholders involved in future industrial systems, with regards to the new
developments that have been defined or that will be defined in the future; in particular,
those dealing with digital technologies. Ethics is relatively well studied and deployed in
a deterministic universe with long-term, small, progressive changes. Meanwhile, in
future industrial systems, characterised as introduced above by the rapid evolution of digital
technologies and by increasing internal and external complexities interlaced with the
human world, operating and engineering ethics remains a great challenge.
Adopting an information processing point of view, ethics can be considered as an
evolving notion that implies multiple criteria and concerns different facets when making
decisions. Moreover, ethics can be progressively enriched in its deployment and
improved in its achievement, which is the purpose of KPIs (Key Performance
Indicators) [5]. This paper thus raises the question of whether or not to handle ethics as
a KPI when deciding. To illustrate the complexity of the question raised, two
industrialists were asked the same questions: “how does your company manage ethics?”
and “what are the relevant stakes?” Their testimonies are provided in Fig. 2 (their names
have been changed to preserve their anonymity).
This paper suggests the establishment of some properties to define a generic framework
for handling ethics when making decisions in future industrial systems (FIS), and
especially their control. Section 2 briefly introduces the concept of ethics, from a general
point of view on the one hand and with regard to FIS on the other. Then, Sect. 3 discusses
the two possible answers to the question raised. Based on this analysis, a preliminary
architecture of a generic framework is then proposed.
Fig. 2. Industrial testimonies: pros and cons about considering ethics as a KPI or not
[9]. A significant question is thus: how to handle ethics when making decisions with
regard to future industrial systems, and especially their control? From our perspective,
two approaches can be adopted to answer this question. The first one consists of stating
that ethics is a new kind of criterion when making a decision, which implies considering
ethics as a kind of KPI, while the second approach states that ethics cannot be just another
criterion when making a decision, being more global. The following section studies these
two approaches. Even if the question and the approach are felt to be generic, the primary
application field of this study concerns future industrial systems.
is made within a static, stable and well-established ethical context. Moreover, it offers
legality and liability boundaries, limiting or clearly explicating the legal responsibilities
in case of accident or injuries.
Dealing with artificial systems such as enterprises, organisations and productive
systems, some approaches close to that of ethics have already been proposed according
to this point of view. Namely, Corporate Social Responsibility (CSR) [10], as mentioned
in the testimony of one of the industrialists in Fig. 2, is one of them [11]. Standards have
also been proposed, leading to the assignment of definitions and conditions to fulfil [12].
Ethics has also been approached through the quality facet, as it has to be managed and
measured, identifying it as a kind of compliance with what needs to be [13]. In addition,
ethics has also been deployed according to the major pillars of society, in coherence
with the sustainability paradigm [14, 15]. In this sense,
environmental ethics has been introduced for the relationship of human beings to nature
[16]. And, as far as we are concerned, digital ethics is currently dealing with the use
of digital technologies [17]. However, even though it is the core activity of many
researchers and practitioners, to the best of our knowledge, no specific practices have
been established concerning ethics in future industrial systems, even if attention is
starting to be drawn in this direction by considering a potential symbiosis between
humans and machines [18]. As a consequence of the situation described in the
introduction, the issue of ethics in future industrial systems deserves to be investigated.
Ethics will be as such in the case of the opposite of the factors presented in Fig. 1,
namely:
case of self-operating systems, the reaction to a new, unconsidered event is totally
unknown. Choosing the correct a posteriori approach and making the “good” ethical
decision will be challenging. In the second case, having the correct a priori design will
have a strong impact.
Aligned with this view, there are some arguments in favour of considering ethics as
a KPI, as emphasised by one of the industrialists in Fig. 2. As a preliminary to this
discussion, let us recall the general definition of a PI as “a variable indicating the
effectiveness and/or efficiency of a part or whole of the process or system against a
given norm/target or plan” [20]. By definition, a PI - and a KPI when it is overall
or major - provides a performance expression, subscribes to the control-loop principle
and follows the “What you measure is what you get” principle [21]. A PI thus involves
an objective (expected state) and a measurement (reached state). The performance
expression is the result of the comparison between the objective and the measurement.
In a reactive control logic, such a measurement leads to the launching of improvement
actions. Figure 3 illustrates the principle of considering a PI as a triplet (objective,
measurement, action) [22].
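The PI triplet can be sketched as a small data structure (a minimal illustration of ours, not taken from [22]; the example indicator, its name and its target are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PerformanceIndicator:
    """A PI as the triplet (objective, measurement, action)."""
    name: str
    objective: float                     # expected state (target value)
    measure: Callable[[], float]         # returns the reached state
    action: Callable[[float], None]      # improvement action, fed with the gap

    def performance_expression(self) -> float:
        """Compare the reached state against the objective (ratio capped at 1)."""
        return min(self.measure() / self.objective, 1.0)

    def react(self, threshold: float = 1.0) -> None:
        """Reactive control logic: launch the improvement action when the target is missed."""
        if self.performance_expression() < threshold:
            self.action(self.objective - self.measure())

# Hypothetical ethics KPI: share of workstations audited for ergonomic risk.
audited_ratio = 0.6
pi = PerformanceIndicator(
    name="ergonomic-audit coverage",
    objective=1.0,
    measure=lambda: audited_ratio,
    action=lambda gap: print(f"schedule audits to close a gap of {gap:.0%}"),
)
pi.react()
```

Calling `react()` here compares the reached state (0.6) with the objective (1.0) and, the target being missed, launches the improvement action with the remaining gap.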
With this view, handling ethics through the PI lens drives the definition of the
corresponding objectives, measurements and actions associated with it. These three
definitions are described hereinafter.
Ethics Objectives: Is it possible to associate with ethics expected states to be achieved?
Ethics objectives subscribe to the general concept of performance as “the capability
to go where we want” [23]. Assigning ethics objectives is thus coherent with the idea of
achieving them: ethics is something that can be attained and acted upon.
Moreover, “the use of the term performance itself can come to mean ‘positive
progress’ in itself, without any qualifying adjective applied to the term.” The meanings
of performance where it is used to denote an “exploit” or an “achievement” are
analogous to this [24]. Ethics objectives also convey this idea of progress, i.e. the
objectives are part of a desire for a better state than the previous ones, with a maximum
notion that makes little sense. As ethics is something that can be improved, objectives
can be associated with it.
In this sense, ethics objectives obey the respective conditions of effectiveness,
efficiency and effectivity [22], since they are achieved by seeking the best possible result
(effectiveness) with the best possible use of resources (efficiency). As for the effectivity
of ethics objectives, it is a matter of common sense, since effectivity (or relevance) by
definition concerns, in its broadest sense, the value of assigning objectives to the means
and actions implemented to achieve them, as well as to the expected results. In essence,
ethics is part of this logic and even goes beyond it.
Furthermore, associating objectives with ethics means dealing with the SMART
principle [25]. It thus remains to discuss the variables and the values to achieve. The
variables concerned are those of the industrial system at the considered step of its
lifecycle: ethics issues indeed concern all or part of the system. Values and temporal
horizons are then assigned according to the corollary actions and the different semantics,
such as improvement, lack, emergency and risk, conveyed by the considered situation.
Finally, ethics objectives are assigned in the same way as the other performance criteria
of industrial systems. The only difference lies in the purpose of the objectives, which is
the ethics of industrial systems. However, ethics KPIs will have strong interactions with
the other KPIs of the industrial system, as discussed later in this section.
Ethics Measurements: Measurements follow the way the objectives have been
assigned, under the property that each objective is measurable, either quantitatively
or qualitatively.
Note that situations may arise in which a direct measurement cannot be easily
obtained. Approaches based on indirect measurements could then be used, relying on
aggregation mechanisms [26] that involve criteria interrelated with the given situation.
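Such an indirect measurement can be sketched as a weighted aggregation of interrelated sub-criteria (an illustrative assumption of ours; the sub-criteria names and weights are invented, and [26] discusses richer, non-additive aggregation operators):

```python
# Indirect ethics measurement by weighted aggregation of sub-criteria, each
# normalised to [0, 1]. A weighted mean is the simplest aggregation operator;
# Choquet-style integrals can additionally capture interactions between criteria.
def aggregate(scores: dict[str, float], weights: dict[str, float]) -> float:
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Hypothetical sub-criteria for an "ethical working conditions" indicator.
scores = {"safety_incidents": 0.9, "training_coverage": 0.7, "workload_balance": 0.5}
weights = {"safety_incidents": 0.5, "training_coverage": 0.3, "workload_balance": 0.2}

ethics_measurement = aggregate(scores, weights)
print(round(ethics_measurement, 2))  # 0.76
```

The aggregated value can then play the role of the reached state in the PI triplet when no single direct ethics measurement exists.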
Ethics Actions: As seen for the objectives, improvement is always possible regarding
ethics enactment.
Within this logic, no immediate optimum in ethics can be defined. Therefore, the idea
of a perfect ethical state will be a goal constantly sought, leading to more than one
possible action to launch. Actions are associated with the assigned objectives,
constituting an overall action plan and satisfying the condition of bijection between
objectives and actions, according to the PI triplet vision depicted in Fig. 3. Obviously,
the definition of the action should handle the semantics of the corresponding objective.
A typology of actions (e.g. curative, preventive) will then be addressed.
As an illustration of what has been suggested, Table 1 gives some cases that the
authors have discussed with two industrialists regarding PIs and ethics (the bearing
manufacturer introduced in Fig. 2 and a kitchen and bathroom manufacturer). From
these discussions, it is clear that both are currently dealing with the digitalisation of their
production and are encountering situations in which they can still decide according to
conventional, state-of-the-art industrial control logic. Meanwhile, it is also clear that
they cannot decide from the ethics point of view.
238 L. Berrah and D. Trentesaux
Table 1. PIs and ethics: case studies in the digitalisation of industrial systems
More specifically, ethics becomes something to handle in a progressive way, by
making assumptions, analysing results and then concluding. Assigning objectives (i.e.
expected states), launching actions, getting measurements of the achieved results and
then reacting will thus be the way of proceeding. In summary, one can say that ethics
is associated with KPIs under the following considerations:
is associated with KPIs under the following considerations:
• No clear and unique rule can be retained and uncertainty in scope is observed.
• Measuring the reached situation is a necessary preliminary to deciding.
• Ethics is something that can be continuously measured and improved.
3.3 Synthesis
As a synthesis, from our perspective, ethics is sometimes a KPI and sometimes not.
The two complementary positions held by the two industrialists illustrate this duality of
the concept. Ethics in future industrial systems must then be approached using the two
paradigms (consequentialism and deontology): it is not possible to adopt only one of
these paradigms while ignoring the other.
In that sense, the novelty is that deontology, which is the classical approach in human
society, is no longer sufficient because of the increasing unpredictability and complexity
of the interaction between the digital (cyber) world and the human one, leaving room
for the use of other paradigms, such as consequentialism. This is not neutral.
For example, the question arises for the autonomous car: are we sure, for every
possible situation met in an open environment, that deontological rules (e.g. the highway
code) will always lead the autonomous car to take the single optimal ethical decision or,
at least, the decision every human would have taken in that situation [8]? Because of the
factors specified in the introduction, it is difficult to guarantee that the answer is “yes”.
Conversely, consequentialism assumes that it is possible to quantify ethics, which
has been debated for centuries by philosophers and others, and this is one of the main
issues to solve: how to evaluate that a situation, a decision or an action is more ethical
than another? From our perspective, an accurate articulation of the two paradigms could
be an interesting approach.
As an illustration of this novelty in ethics handling, Table 2 contains several examples
in three application fields, including the one considered in this paper, indicating the
different points of view one can adopt in a given unconsidered situation in order to
behave ethically. It is worth noting that the healthcare context relates to the situation
encountered with the emergence of the Covid-19 pandemic.
In the case of a consequentialist approach, associating ethics with a KPI induces its
deployment across all the decisional levels of the system under consideration. This
deployment has to be carried out alongside the other considered KPIs, as practiced in
conventional 3.0 PMSs (Performance Measurement Systems) [27]. However, in view of
the nature of the ethics criterion, it necessarily interacts with the other criteria usually
considered.
The entire decision-making process will be impacted by the deontological aspect of
ethics. Namely, it is not so much a matter of producing in accordance with the C-Q-D-E
(Cost-Quality-Delivery-Ethics) tetraptych as of integrating ethics into each considered
criterion. But the deontological aspects involved may be based on more than one single
rule, leading to diversified strategies. The definition of the PMS will thus require
preliminary discussions on weights, interactions and preference policies, namely what
is compensatory and to what extent, what acts as a veto, etc.
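The compensation-versus-veto distinction can be illustrated with two classical aggregation operators (an illustrative sketch of ours, not from the paper; the scores and weights are invented): a weighted mean lets a good score on one criterion compensate a poor one, while a min operator makes the vetoing criterion, here ethics, cap the overall expression.

```python
# Two preference policies for a C-Q-D-E performance measurement system:
# compensatory (weighted mean) vs veto (min operator on the vetoing criterion).
def compensatory(scores: dict[str, float], weights: dict[str, float]) -> float:
    return sum(scores[c] * weights[c] for c in scores) / sum(weights.values())

def with_veto(scores: dict[str, float], weights: dict[str, float], veto: str) -> float:
    # The veto criterion caps the overall expression: a poor ethics score cannot
    # be compensated by good cost, quality or delivery scores.
    return min(compensatory(scores, weights), scores[veto])

scores = {"cost": 0.9, "quality": 0.8, "delivery": 0.9, "ethics": 0.3}
weights = {"cost": 0.3, "quality": 0.3, "delivery": 0.2, "ethics": 0.2}

print(round(compensatory(scores, weights), 2))         # 0.75
print(round(with_veto(scores, weights, "ethics"), 2))  # 0.3
```

With the compensatory policy the plant looks healthy despite its poor ethics score; with the veto policy, the ethics criterion alone determines the result.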
As suggested, both the deontology and consequentialism paradigms are required for
handling the ethics of decisions in future industrial systems, whatever the considered
step of their lifecycle - design, use or support - (even if the focus here has been on the
support step). From the synthesis suggested in the previous section, it is obvious that the
deontological paradigm, even if it is relevant in part, cannot totally handle the overall
ethics aspects of these systems. Indeed, general rules and principles can be applied to
normal operation contexts and deal with structural needs, but cannot cover new,
unconsidered situations and all the consequences of the decisions made.
The suggested proposal is, therefore, to approach ethics in a complementary manner.
As the set of possibilities at each step of the industrial system is non-bounded, and since
ethics becomes not only a result or a condition to check but a process to build, the
following holds:
• An objective and a measurement can be associated with the variables of the system.
• An action is possible to improve its performance expression.
• Feedback on the reached results is possible, in terms of establishment of new rules.
Ethics objectives will be considered until the situation is well controlled, i.e. until the
KPIs return correct performance expressions. Rules and conditions will then translate
the obtained results. “Conjunctural” ethics aspects, which have appeared temporarily,
will thus be replaced by structural ethics aspects, which will intrinsically take part in the
deontological ethics procedures; continuously, achieved objectives will be replaced by
new ones. Some avenues for approaching the ethics of future production systems are
given in the form of a generic framework, whose global architecture is described in
Fig. 4.
A known situation means that there exists a deontological rule for that situation. In
that case, the ethical decision made will be based on rules that are extracted from the
available database. These rules will correspond to the adequate handling of the occurred
known situation. Expert systems and formal logic modelling approaches could be used
in that context.
If the situation faced is unknown and puts ethics at risk (e.g. threat, breakdown
of a critical system, cyber-attack, major evolution in the environment, application of
a new technology), meaning that no deontological rules apply, then a consequentialist
behaviour is triggered. The concerned variables are then selected, and objectives as well
as actions are associated with them according to, respectively, the event that occurred,
the ‘as-is’ situation and the expected one, as well as previously encountered similar
situations (when available).
The reached measurements will allow either the achievement of the expected state
or the redefinition of new objectives, in a continuous improvement logic.
[Fig. 4 shows an event-monitoring loop (iterative process, Deming principle [28]): each
situation is routed either to a deontological, rule-based behaviour (an expert system
drawing on a database of deontological and legal rules) or to a consequentialist
behaviour (AI or optimisation, supported by digital-twin, multi-physics simulation); the
most ethical control decision is computed, historised for explainability, and used to
update the deontological set of rules.]
Fig. 4. A generic framework for ethical decision making in future industrial systems.
Indeed, as the situation could be totally unknown, reaching the expected states could require
several iterative steps. In the end, the best ethical decisions to be made are defined
according to the analysis of the data provided by the ethics KPIs. These KPIs could gain
from being associated with a digital twin of the industrial system, able to simulate and
evaluate different strategies from its current state. Some parts of the framework can be
automated, while others cannot (e.g. the design of alternative decisions in a
consequentialist behaviour); the presence of the human therefore remains compulsory.
It is also possible to systematically trigger the consequentialist behaviour, even if
deontological rules apply, to suggest improvements in both the triggering situation and
the ethical decision made.
This framework is clearly a first attempt. It remains to be improved, implemented
and tested in various situations. For example, it could be interesting to augment the
application of a deontological rule with a consequentialist study when a certain degree
of freedom remains available for decisions after the application of the deontological
rule. Another situation that could be studied is when it is not possible to evaluate all
the consequences of a decision from an ethical perspective. In that situation, clustering
techniques could be used to find similar situations and evaluate the ethical degree of
the decision that was then made.
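A nearest-neighbour simplification of this similarity idea can be sketched as follows (our own illustrative assumption: situations are encoded as numeric feature vectors, e.g. severity, number of people affected, reversibility, which the paper does not specify):

```python
import math

# Historised situations: feature vector -> ethical score of the decision made then.
# (Hypothetical encoding: severity, number of people affected, reversibility.)
HISTORY = [
    ((0.9, 0.8, 0.1), 0.4),   # severe, many affected, hardly reversible: low score
    ((0.2, 0.1, 0.9), 0.9),   # mild, few affected, reversible: high score
    ((0.7, 0.6, 0.3), 0.5),
]

def distance(a: tuple, b: tuple) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_ethical_degree(situation: tuple, k: int = 2) -> float:
    """Estimate the ethical degree of a decision in a new situation as the mean
    score of the k most similar historised situations."""
    nearest = sorted(HISTORY, key=lambda entry: distance(entry[0], situation))[:k]
    return sum(score for _, score in nearest) / k

print(estimate_ethical_degree((0.8, 0.7, 0.2)))
```

A full clustering approach would group the historised situations offline and compare a new situation with cluster prototypes rather than with every stored case.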
Acknowledgement. Parts of the research work presented in this paper are carried out in the context
of Surferlab, a joint research lab with Bombardier and Prosyst, partially funded by the European
Regional Development Fund (ERDF), Hauts-de-France. Other parts of the work presented in this
paper are performed in the framework of the HUMANISM ANR-17-CE10–0009 research project.
References
1. Kagermann, H., Wahlster, W., Helbig, J.: Recommendations for Implementing the Strategic
Initiative INDUSTRIE 4.0: Securing the future of German Manufacturing Industry. National
academy of science and engineering, Wirtschaft und Wissenschaft begleiten die Hightech-
Strategie. Final report of the Industrie 4.0 Working Group (2013)
2. Akdil, K.Y., Ustundag, A., Cevikcan, E.: Maturity and readiness model for industry 4.0 strat-
egy. In: Ustundag, A., Cevikcan, E. (eds.) Industry 4.0: Managing the Digital Transformation,
pp. 61–94. Springer International Publishing, Cham (2018)
3. Issa, A., Hatiboglu, B., Bildstein, A., Bauernhansl, T.: Industrie 4.0 roadmap: framework for
digital transformation based on the concepts of capability maturity and alignment. Procedia
CIRP. 72, 973–978 (2018)
4. Schuh, G., Gartzen, T., Rodenhauser, T., Marks, A.: Promoting work-based learning through
INDUSTRY 4.0. Procedia CIRP 32, 82–87 (2015)
5. ISO.: ISO 22400. Automation systems and integration - Key performance indicators (KPIs)
for manufacturing operations management (2015). https://www.iso.org/obp/ui/#iso:std:iso:
22400:-2:ed-1:v1:en
6. Morahan, M.: Ethics in management. IEEE Eng. Manage. Rev. 43(4), 23–25 (2015)
7. Nath, R., Sahu, V.: The problem of machine ethics in artificial intelligence. AI Soc. 35(1),
103–111 (2020)
8. Trentesaux, D., Rault, R., Caillaud, E., Huftier, A.: Ethics of autonomous intelligent systems
in the human society: cross views from science, law and science-fiction. In: Borangiu, T.,
Trentesaux, D., Leitao, P., Cardin, O., Lamouri, S. (eds.) Proceedings of the 10th SOHOMA
Workshop on Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry
of the Future. Studies in Computational Intelligence. Springer, Paris (2020)
9. Trentesaux, D., Caillaud, E.: Ethical stakes of Industry 4.0. In: IFAC World Congress (2020)
10. Philip, R.: Corporate social reporting. Hum. Resour. Plan. 26(3), 10–13 (2003)
11. Goel, M., Ramanathan, P.E.: Business ethics and corporate social responsibility - is there a
dividing line? Procedia Econ. Finance 11, 49–59 (2014)
12. ISO.: ISO 26000 and the International Integrated Reporting <IR> Framework briefing
summary (2015). https://www.iso.org/files/live/sites/isoorg/files/store/en/PUB100402.pdf
13. Vinten, G.: Putting ethics into quality. The TQM Magazine 10(2), 89–94 (1998)
14. World commission on environment and development: Our common future. Oxford University
Press 13(4) (1987)
15. Purvis, B., Mao, Y., Robinson, D.: Three pillars of sustainability: in search of conceptual
origins. Sustain. Sci. 14(3), 681–695 (2018)
16. Palmer, C.: An overview of environmental ethics. In: Rolston, H., Light, A. (eds.) Environ-
mental Ethics: An anthology, pp. 15–37. Blackwell, Oxford, UK (2003)
17. Maggiolini, P.: A deep study on the concept of digital ethics. Revista de Administração de
Empresas 54(5), 585–591 (2014)
18. Longo, F., Padovano, A., Umbrello, S.: Value-oriented and ethical technology engineering in
industry 5.0: a human-centric perspective for the design of the factory of the future. Appl.
Sci. 10(12), 4182 (2020)
19. Bergmann, L.T., Schlicht, L., Meixner, C., König, P., Pipa, G., Boshammer, S., Stephan,
A.: Autonomous vehicles require socio-political acceptance: an empirical and philosophical
perspective on the problem of moral decision making. Front. Behav. Neurosci. 12 (2018)
20. Fortuin, L.: Performance indicators: Why, where and how? Eur. J. Oper. Res. 34(1), 1–9
(1988)
21. Kaplan, R., Norton, D.: The Balanced Scorecard: Measures that Drive Performance. Harvard
Bus. Rev. 83, 172 (1992)
22. Berrah, L., Clivillé, V., Foulloy, L.: Industrial Objectives and Industrial Performance. ISTE
Wiley, Hoboken (2018)
23. Lebas, M.: Performance measurement and performance management. Int. J. Prod. Econ.
41(1–3), 23–35 (1995)
24. Folan, P., Browne, J., Jagdev, H.: Performance: Its meaning and content for today’s business
research. Comput. Ind. 58(7), 605–620 (2007)
25. Doran, G.T.: There’s a SMART way to write management’s goals and objectives. Manag.
Rev. 70(11), 35–36 (1981)
26. Berrah, L., Mauris, G., Vernadat, F.: Information aggregation in industrial performance
measurement: rationales, issues and definitions. Int. J. Prod. Res. 42(20), 4271–4293 (2004)
27. Nudurupati, S.S., Bititci, U.S., Kumar, V., Chan, F.T.S.: State of the art literature review on
performance measurement. Comput. Ind. Eng. 60(2), 279–290 (2011)
28. Deming, W.E.: Out of the Crisis. MIT Press, Cambridge (1986)
Ethics of Autonomous Intelligent Systems
in the Human Society: Cross Views
from Science, Law and Science-Fiction
Abstract. The objective of this paper is to discuss issues and insights relevant
to the ethical behaviour of future autonomous intelligent systems immersed in
human society. This discussion is held at the frontier of three domains: science,
as the means to imagine and design innovative technological solutions in the field
of autonomous artificial systems; law, as the means to control, forbid and promote
what can and cannot be used in human society from these technological solutions;
and science-fiction, as the imaginary world where scientists and lawyers,
consciously or not, get their inspirations, fears and dreams, driving their decisions
and actions in the real world. Four issues are specifically discussed. The crossing
of these domains illustrates that addressing ethics in AIS is an urgent need, but
remains incomplete if addressed from a single discipline or domain point of view.
1 Introduction
The context of this paper is relevant to the design and use of autonomous intelligent
systems (AIS) immersed in the human society, excluding military systems. Autonomous
robots and cobots in future industrial systems or autonomous cars in cities are illustrations
of such AIS. AIS are characterized by their ability to sense, decide and act (e.g., on the
physical world) [1]. They interact with other AIS and with humans. Artificial Intelligence
(AI) techniques enable them to learn and adapt to unforeseen events.
On the one hand, the developments of AIS are powered by several factors, among
them the will to compensate for human error. For example, it is estimated in the USA
that 94% of car accidents have a human cause [2]. This type of statistic encourages
researchers and industrialists to develop AIS capable of outperforming humans in
various fields.
On the other hand, AIS will interact with others (AIS, humans). They will also evolve
in open and unpredictable environments. This complicates the understanding and
control of their behaviour. In addition, this behaviour can potentially be emergent, in
the sense that it may not be possible to associate it explicitly with a statically
programmed computer code. As a consequence, risks related to their negative impact
and even potential dangerousness for human society are induced. Moreover, the
development of AIS may have a strong impact on human life, including work and
working conditions in our society. Consequently, the study of AIS behaviour, especially
from an ethical point of view, is gaining importance. We thus state here the principle
that ethics, a concept initially concerning humans, will also concern AIS.
Ethics is initially a field of study in philosophy. This paper does not intend to discuss
the concept of ethics; philosophers have been working on it for centuries. Meanwhile,
it is important to set its definition. In our work, we selected the one of Ricoeur who
contextualizes it as “the strive for the good life, with oneself and others, in just/fair
institutions” (in French: “Une vie bonne, avec et pour autrui, dans des institutions
justes”) [3]. Ethics is seen here as a federative concept encompassing social expectations
described in terms of safety, security, integrity, explicability, altruism, kindness, caring,
trustworthiness, benignity, etc. [4].
Ethics is by essence a multi-disciplinary field of research. Written by authors working
in three different domains - law, science and literature - this paper aims to discuss issues
and insights relevant to the ethical behaviour of future AIS immersed in human society,
considering these three domains: science, as the means to imagine and design innovative
technological solutions in the field of AIS; law, as the means to control, forbid and
promote what can and cannot be used in human society from these technological
solutions; and science-fiction (sci-fi), as the imaginary world where scientists and
lawyers, consciously or not, get their inspirations, fears and dreams, driving their
decisions and actions in the real world. Sci-fi is often the place for writers to push science
and technology to their limits and extrapolate the consequences. While science builds
“paradigms” [5], sci-fi creates images from these paradigms, thus establishing itself as
“transposition literature” [6]. Figure 1 depicts the context of the paper.
This section must be seen as a starting point for the discussion. Basic answers (yes/no)
to this question are provided here through the prism of sci-fi. This will be used to point
out the fact that obtaining, in real life, a positive answer to this question, which one
obviously hopes for, will require solving the major issues introduced in Sect. 3, issues
that remain largely unaddressed.
248 D. Trentesaux et al.
A large proportion of sci-fi production (writings, novels, games and films) transcribes
our fears and concerns (as human beings) about AIS whose autonomy may no longer
be controlled or even supervised. This loss of control would lead to a situation where
AIS would no longer act in the interest of humans or the environment. This fear was
expressed very early on, as soon as the term “Robot” appeared with the creation of the
play R. U. R. (Rossumovi univerzální roboti; Rossum’s Universal Robots in English),
written by Karel Čapek in 1920: “the robots are not people. Mechanically they are more
perfect than we are; they have an enormously developed intelligence, but they have no
soul”1 . This lack of “soul” runs through sci-fi literature and in a way justifies “disasters”
as well as the impossibility of dealing with ethical adaptability. A good example is given
in this regard by Brian Aldiss’s Who Can Replace a Man?, where an AI adapts but lacks
flexibility, and it then exhibits a limited ethical behaviour, since it comes back endlessly
to “hence” and “therefore”.
According to this pessimistic perception, a new intelligent species is identified, dif-
ferent from the human species. This new species, built on silicon, electronics and infor-
matics, has objectives hardly compatible with the ones of the human society, which
1 Karel Čapek, R.U.R. (tr. P. Silver & Nigel Playfair), New York: Doubleday, 1923, p. 17.
imagine that the reason is rather financial (the sensation of fear is always easier to sell)
or is guided by sensationalist purposes [12]. Very optimistic writings are still rare and
often lead to a lack of understanding from readers [13]. However, even in the mid-
twentieth century, when emerging sci-fi literature in this area was not as strongly guided
by these reasons, one notes very few works or writings where this positive side was
present. The fictional character of R. Daneel Olivaw imagined by Isaac Asimov and
Gabriel, the eponymous robot in Domingo Santos’ Gabriel, historia de un robot (1962)
remain special cases apart by their ethical desire to save the human species. This opti-
mistic view is also illustrated through the needed convergence of the interests of the two
“species” (human and silicon-based). For example, several public writings and novels,
e.g., Brown’s Origin or Herbert and Anderson’s Sandworms of Dune, describe, after a
phase where humans fear and suffer from robots, 1) the establishment of a community
of interest of the two “species”; or 2) the mutually beneficial integration (symbiosis) of
the two “species” (“benevolent cyborg”) in order to address common threats, or in order
to make these two “species” cohabitate better. Another type of convergence can be
mentioned: a human having developed the processing capabilities of an AI, which the
status of “mentat” in the world of Herbert’s Dune illustrates perfectly. Finally, a specific
vein of sci-fi works puts the spotlight on robots having some human aspirations, in
particular a return to nature. The robots Jenkins in Clifford D. Simak’s City and Elmer
in Cemetery World are travellers, philosophers, historians… They are a sort of
robot-hobo, reflecting aspirations towards a society where economic control is not the
standard value.
These sci-fi works translate an optimistic view that is becoming reality thanks to the
spread of recent technological innovations. For example, the concept of the augmented
human (e.g. the Operator 4.0 in Industry 4.0 [14]) and symbiotic systems [15], aligned
with the principles of “transhumanism”, illustrate such a convergence in the industrial
sector. Their objective is to compensate for human deficiencies (or at least, what is
perceived as such).
The advent of AIS will logically lead to inextricable situations due to the constant
closed-loop interaction between humans and AIS (and fleets of AIS), one deciding and
acting, the other reacting consequently, and so on. Humans will learn through
interactions with AIS and AIS will learn through interactions with humans. This already
exists: the concepts of human-as-a-service and micro-work assigned to humans by AI
are clear examples, where an AI, needing to learn how to discriminate elements in
pictures, asks humans to do the job, who in return tell the AI, knowingly or not, the
reliable result of their discrimination, e.g. in captchas. Sometimes captchas are provided
to users to check that they are not bots before accessing a website, while sometimes they
are provided by an AI to learn [21].
This interaction within a single, seamless, integrated world will lead to various perceptions, reasonings and actions whose responsibilities are hardly assignable. The sequence of decisions taken by an AIS could, because of this constant interaction, lead to behaviours that are more or less ethical. An AIS could be forced by a human to take a non-ethical decision (e.g., to kill people in the case of an unavoidable accident).
252 D. Trentesaux et al.
Its actions could lead a human to behave unethically or to take risks to limit the consequences of its actions (e.g., to bypass a safety mechanism to avoid a hazardous situation provoked by a defective AIS). This mutual interaction will blur the chain of responsibilities and make expert assessment delicate in the event of a problem or accident: is an accident caused by an autonomous car the car’s own responsibility? Or was it the fault of the owner who did not respect the maintenance logbook, or of the designer of the algorithm who faultily coded a certain behaviour? The work of Susan Calvin, a fictional robopsychologist in the
Robot series written by Isaac Asimov, consists in understanding such complex mutual
interactions. The aim is to solve paradoxical situations where a robot, though subject to the laws of robotics designed to preserve humankind, behaves in a specific, deviant or hazardous way.
AIS will take decisions based on their own experience and knowledge through learning techniques (e.g., AI). Their decisions will evolve according to their experiences and will be carried out according to temporal and informational horizons yet to be determined, statically or dynamically. Moreover, the quality of such decisions will depend on the time needed to evaluate them given the time required to react to events. Consequently, assuming that it is possible to quantify (measure) an ethical behaviour, which remains an open question [22], the behaviour of an AIS may be more or less ethical according to these horizons (cf. the concepts of “local optimum” vs. “global optimum” in operations research). Long treated in sci-fi, the subject is new to scientists and lawyers working on AIS behaviour.
This subject is necessarily multidisciplinary, at the interface between science, technol-
ogy, psychology, law and philosophy. It concerns not only designers, operators, users
and maintainers of these AIS, but also the AIS themselves. In sci-fi, Asimov’s Zeroth Law was designed to solve this kind of issue: in a short-term view, a given behaviour is assessed very negatively (as it leads, for example, to several human casualties), but over a larger time window (the centuries to come) it is evaluated very positively, as it leads to the survival of the human species. The AI Winston imagined
by Dan Brown in his novel Origin doesn’t hesitate to kill a character to serve a greater
purpose in the service of mankind, while the AI in Grant Sputore’s movie I Am Mother
decides to educate a human in order to entrust her with the mission of recreating the
humanity that it has just destroyed.
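The horizon-dependence of such ethical assessments can be sketched in a toy computation (an illustration of ours, not part of any cited framework): the same sequence of outcomes, scored over different time windows, flips from negative to positive, much like a local vs. global optimum. All utility values here are hypothetical numbers chosen only for illustration.

```python
def ethical_score(utilities, horizon):
    """Sum the (hypothetical) utility of a decision over the first `horizon` periods.

    A positive score stands for a behaviour judged beneficial over that window,
    a negative score for one judged harmful. The mapping of ethics to a single
    number is precisely the open question discussed in the text [22].
    """
    return sum(utilities[:horizon])

# A decision with a heavy short-term cost but a large long-term benefit:
utilities = [-10, -5, 2, 4, 6, 8, 10]

short_term = ethical_score(utilities, horizon=2)  # judged harmful over 2 periods
long_term = ethical_score(utilities, horizon=7)   # judged beneficial over all 7
```

The same decision is thus assessed very differently depending on the evaluation window, which is the paradox that Asimov's Zeroth Law dramatizes.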
Consequently, various paradoxes with Cornelian choices for the AIS can be imagined (e.g., the trolley dilemma) [2]. The issue is thus whether one can assert in an absolute sense that a given behaviour of an AIS is ethical, since this assertion is not unequivocal: its “degree of ethicality” depends on the horizons and on the point of view taken. This is illustrated in Federico D’Alessandro’s movie Tau, where an AI changes its behaviour and adapts its “program” according to the person with whom it interacts: it therefore changes its ethical point of view. Outside the scope of this paper, there is also the “political” and “philosophical” question of a behaviour that is ethical for some people but not for others; this depends on the culture of a country or community. In particular, what is written here is highly correlated with Western culture.
Ethics of Autonomous Intelligent Systems in the Human Society 253
In human society, ensuring morality is often driven by rules to comply with. The main religions (the Ten Commandments), sci-fi (Asimov’s laws of robotics), the legal world (criminal laws) and the scientific world as well (the concept of the “safety bag”, or the safety integrity level “SIL” ensuring a minimum level of operational safety) widely apply such a principle of “laws”.
Such principles can be extrapolated to AIS. Meanwhile, since AIS will face unforeseen situations, how can one be sure that their behaviour remains ethical? Here the “robustness” of the analysis of the ethical dimension takes on its meaning: the issue is how to ensure that all decisions made by an AIS, especially those that were not imagined by its designers yet are applied by the AIS, remain ethical.
In that context, the human mind has moral and robust “safeguards” that apply to every decision one takes: an alarm that lights up when one is about to cross a line, alerting us to possible unethical behaviour or helping us realize that we are crossing a moral barrier. Examples of such safeguards are “common sense” (before acting) and “bad conscience” (after acting). Though not completely reliable, or even always sensible, they allow us to learn and to apply an ethical behaviour as much as possible, even in unforeseen situations, based on a set of beliefs to comply with and to modify with time and experience. This is a fundamental aspect of human education.
A “poorly educated” child is capable of trampling on a neighbour’s vegetable garden
to get his ball back. This can be extrapolated to an unethical AIS whose behaviour can be compared to that of a “badly” educated child. The point is clearly captured by the amoral proverb “the end justifies the means”: even if the “end” is ethical (in the moral and deontological sense), the means imagined and used by the AIS to reach it may not be, in unpredicted situations: with time, an AI may construct its own ethical logic system, its own mental world, its own morality, where the ethical rules imposed by its creator are translated into series of decisions and actions its creator never imagined.
For example, the AI imagined by Antoine Bello in Ada develops a strategy of decisions and actions with the objective of complying with the (legal) rule no. 1 that governs it: maximizing the profits of its owner’s business. It thus discovers that it can lie and violate several American laws and principles to reach its goals, although it has not been explicitly programmed to do so (this behaviour emerges): it has not been properly “educated” (voluntarily or involuntarily), and it does not “think” it is behaving unethically.
The theme of the enslavement of man by a tyrannical machine is often studied in sci-fi. Beyond this risk of slavery in the primary sense of the term (deprivation of liberty), two more insidious and subtle risks of slavery must be considered by the researchers who work on future AIS, which are obviously expected to behave ethically.
The first one is induced by the loss of the skills and knowledge that humans transfer to AIS as time passes. This topic is already discussed, though not specifically regarding
ethics, in the field of human-machine cooperation [23]. The authors of [24] showed that navigational aids impair spatial memory by dividing attention rather than by selectively interfering with verbal working memory. In a few years, with the autonomous car, humans will lose their ability to drive (the concept of the driving licence will disappear). Developing a sense of direction will no longer be required. The human will thus be subject to a “skill slavery” by the autonomous car, being completely dependent on a transportation ability he no longer has. Generally speaking, AIS, regardless of their degree of ethical behaviour, will absorb more and more knowledge and skills that human beings will forget (the brain helps us forget information that we do not understand or no longer need): the more time passes, the more AIS will know how to manufacture, transport, care, cultivate, etc., and the less the human will remember how. This is clearly illustrated by Pierre Boulle in his Planet of the Apes [25], if one considers the analogy between apes and AI: the apes learnt the skills the humans were forgetting. The issues are thus: what knowledge and skills do we accept to lose? What power do we accept to leave? Are we going to be outclassed by ethical AIS?
Will they still need us? Along this line, one could easily arrive at what we could call the “tyranny of benevolence”, as illustrated by some of Asimov’s tales, or in Jack Williamson’s With Folded Hands (later rewritten as The Humanoids), where the Prime Directive of any android (“To Serve and Obey, and to Guard Men from Harm”) leads to depriving men and women of their free will, creating a new kind of danger: “I found something worse than war and crime and want and death… Utter futility” [26], p. 188.
One can identify a second kind of insidious slavery regarding the risk of developing an emotional dependence. One can easily imagine that, in an industrial system, the presence of a companion AIS may relieve an operator, who may then stress and misbehave if his companion AIS is not available or cannot help him. A step further: a human being may be captivated, hypnotized by the kindness and caring of an AIS (cf. the concepts of the altruistic robot [27] and the emotional robot [28]). One can imagine that if an autonomous car takes care of everything, passengers afraid of driving may relax and get a positive feeling from the transportation experience. This risk can be pushed even further.
For example, a driver falls in love with his autonomous car in Courtois’ Suréquipée.
A policeman is seduced by the AI Ada and develops a “bad conscience” because he
feels like he is cheating on his wife, even if only virtually. If an AIS finally acquires knowledge that an operator loses at the same time, an ethical behaviour on its part would be to protect him in spite of himself, which would lead it to prevent him from acting in certain ways because of his lack of knowledge. One can even imagine an AIS having such an aura that it unintentionally creates a kind of cult welcoming men and women in search of an ideal, much like a would-be digital god.
In order to limit these two kinds of slavery, would it be useful to voluntarily limit the ethical behaviour of an AIS to prevent it from being too altruistic (kind, caring)? One can even imagine that a balance in the sharing of skills and knowledge between humans and AIS would be the most desirable ethical situation. The idea would be to let humans maintain their level of skills and knowledge, at the risk of their making mistakes the AIS would not have made, but enabling them to keep learning and remain autonomous. In the end, too much ethics kills ethics. A new paradox…
4 Examples
In this section, we detail application domains where ethical questions are crucial for AIS and their designers. For each domain, the reader will find several connections to the four issues introduced.
First of all, surgical robots: they help surgeons to be more precise and enable medical acts to be performed remotely. At the current stage of their development, they are neither autonomous nor intelligent, and one can hardly imagine that they will become so soon. Indeed, how can a human trust an AIS to practice surgery, or at least to interact with his body [11]? But the robot, even under control, acts under the responsibility of the surgeon, in reaction to his commands, the surgeon adapting his commands according to the reaction of the AIS, and so on. If there is a faulty reaction, even the responsibility of the researchers who designed it may be engaged. Meanwhile, if in the near future a surgical robot operates with a precision, a speed and a vision that no human will ever reach, wouldn’t we prefer to enable it to work sometimes autonomously, under the control of a surgeon?
Transportation is obviously a flagship application of AIS. Autonomous cars are being developed, but not only: autonomous boats, planes and trains are as well [29]. Ethical issues are raised by the development of these systems, but no consensual solution has yet been found, while the first autonomous cars, with various degrees of autonomy, are already tested on roads and marketed. From our perspective, it will not be possible to remotely control or supervise all autonomous vehicles; a certain level of autonomy will thus necessarily be left to them, since they will evolve and react in real time in open environments. Moreover, numerous AIS will be merged into fleets, each of them interacting with many humans. This will generate complex situations where the diagnosis of responsibility and the identification of decision chains will be hard to realize.
Future industrial systems based on Industry 4.0 technologies are a critical application field of AIS, especially through the development of cyber-physical-human production systems and human-in-the-loop cyber-physical production systems [30], which foster the use of AIS (products, robots, production resources) connected to the Internet through sensors and actuators [31]. These AIS interact, interoperate and cooperate with humans (e.g., cobots). How can one ensure that the welfare and privacy of operators will be preserved, or that their jobs will not be suppressed without compensation? How can the diversion of monitoring data be avoided?
Social robotics is also a critical application field of AIS that questions ethics. Having a social robot help elderly people is an option that allows them to be monitored and to maintain social interactions when the resources allocated to their care are decreasing or diverted to other issues (e.g., pandemics). Can one let a robot take care of children? Are AIS a good substitute for human relations? Can a human depend (physically or emotionally) on an AIS? In such situations, what kind of relationship will be constructed?
5 Discussion
The four issues and the various application fields of AIS described in the previous sections clearly highlight the complexity and the intrinsically multidisciplinary nature of the debate to be constructed. We describe hereinafter a few lines of thought, contributions and reviews from different spheres (legal, legislative, academic, social…) that nourish the discussion.
First of all, the legal sphere is at the forefront of these topics. Lawyers have been debating AI and AIS for a long time; however, no real consensus has emerged [32–35].
In a very interesting article, [36] describes not only the legal vacuum that AI potentially
generates in the event of damage to property or persons, but also the inadequacy of the
founding principles (some of them unwritten, such as “there can be no damage without
liability”) that led to the construction of all the French and European legislation. One of
the main problems it raises is designating who is liable in the event of damage caused by an AI (the designer? the integrator? etc.) [4]. Will a judge apply the theory of the “equivalence of causes”, sanctioning equally each of the actors involved in the damage (including the researcher!), or will he instead seek the root cause of the damage? Is the civil liability regime the most relevant tool, knowing that the
autonomy of AIS will render them more and more independent of humans? For Glaser,
“The Paris Court of First Instance rules that the algorithms, their combination and the
data provided are indeed the result of human will” (translated from French) [36]. The
potentially incorporeal nature of AI complicates matters a little more, as legislators are
used to pointing the finger at whoever they consider to be responsible. However, one
avenue that he considers interesting is that of an article of the French Civil Code which
provides that “in the event of damage caused by the defect of a product incorporated
into another, the producer of the component part and the person who carried out the
incorporation are jointly and severally liable” (translated from French). Yet it is still necessary to be able to consider, as an illustrative example, that an AI integrated into a cobot constitutes a standalone product: for a standalone product, a defect that led to an accident must be effectively diagnosed according to the state of scientific and
technical knowledge at the moment when the product was marketed. This is technically
feasible if the algorithms are all deterministic and explainable, but what happens if the
AI of the cobot has learned by itself and applied a wrong decision? He concludes his
article by noting that the legislator has not yet grasped the whole issue and is still too
reluctant in the face of the advent of AI. In his view, the real trigger for the legislator’s questioning will come when the first judges find themselves unable to decide under the current state of laws and rules. To speed up the evolution of mentalities in the legal sphere, mock
trials are organized. For example, the mock trial of the “pile-up of the century” tested
the current legal arsenal of lawyers in a futuristic context in 2041 where, following the
triggering of an emergency stop in an autonomous car, a gigantic accident takes place.
The purpose of the trial was then to determine responsibilities. This idea of a mock trial is becoming widespread: it makes it possible to test the behaviour of legal actors facing new situations, to identify the limits of current legislation and to imagine how responsibilities could be shared.
This debate also takes place in the political and legislative spheres. For example, the European Parliament Resolution of 16 February 2017 envisages the definition of civil law rules on robotics based on the status of the “electronic person” [37]. The report “Making sense of Artificial Intelligence”, known as the Villani report, issued in March 2018 by the French politician and mathematician Cédric Villani, is dedicated to the need to incorporate ethics into the development of AI. One of the challenges for the ethics of AI and AIS is the
transparency of the algorithms, which are currently opaque to the public, and sometimes
even to those who designed the AI. The Villani report recommends the creation of a body of sworn public experts, which could constitute an auditing body that could be called upon in the context of judicial litigation. Incorporating ethics in the development
of AI means that ethics must be present from the very beginning of the design of the AI. It would then be a question of ethics by design, in the same spirit as Article 25 of the GDPR on data protection by design. To this end, the report stresses the need to
raise awareness and teach researchers and producers in the field of AI and ethics from the
beginning of their training. Universities should integrate ethics courses into their scientific curricula and AI courses into their humanities curricula (at the time of writing, some have started to, such as the Université Polytechnique Hauts-de-France). Data protection
rules are also proposed. The idea is to carry out non-discrimination impact assessments,
in the same vein as those provided for in the GDPR. More recently, the ethical issues
of robotics and AI were once again discussed within the European Parliament. This
resulted in a resolution of 12 February 2019 on a comprehensive European industrial
policy on AI and robotics. Many aspects highlighted in the Villani report are found in this resolution, which demonstrates a certain political and legislative consensus at the European level on the way forward. Thus, according to this resolution, the deployment of an AI must necessarily be ethical from its design. What, then, are the ethical values of the European Union with which the actors of AI must comply? The resolution emphasizes principles such as justice, human dignity, equality, non-discrimination, informed consent, privacy and family life, protection of personal data, non-stigmatization, transparency, individual responsibility and accountability.
The debate also takes place in the social and societal spheres. Let us mention for example the European Parliament’s public consultation on “Robotics and Artificial Intelligence” of July 2017 and the development of “Charter and Ethics codes for robotics engineers”. The media have also taken up the subject: many newspaper stories have been written about recent cases where partially autonomous cars were involved in accidents, which has accentuated a sense of concern in public opinion.
The debate is also being conducted in academic spheres [38, 39]. The basic question is that of the qualification of the intelligence of AIS and robots: what is an AIS?
Classically, the law distinguishes different categories to identify legal or juridical per-
sonalities (mainly: objects, goods, human beings and sensitive living beings such as
animals). At a first glance, several options are available: an AIS is either a thing or an
animal or something else that has yet to be defined. This debate is increasingly animated in the legal academic sphere, where two currents of thought can currently be identified. The
first one is opposed to the idea of treating robots legally as animals [40]. In this logic,
and aligned with the report [37] or the most avant-garde work of [41], some consider
that the AIS owns a legal personality, signifying the recognition of a new species, aside
human species. The second one, as in the work of [42], considers rather that the existing
legal arsenal is sufficient, for example by applying the laws on pet owners to the owners of AIS and robots. From our point of view, this debate is far from over. Moreover, it depends on the country where the laws are applied. The scientific academic sphere, which had fallen behind on this theme, perhaps considering that bringing up a sci-fi subject is not serious, is starting to structure itself. Examples of activities include projects and initiatives such as RoboEthics [43] and the MIT Moral Machine initiative. The IEEE, which
regularly addresses the subject in its publications [44], created in June 2020 an international technical committee on the subject: the IEEE IES TC on Technology Ethics and Society. The USA is highly involved in this theme [45], but other countries work on it as well (cf. the ANR EthicAA project in France). The discussions focus on the establishment
of behavioural rules, the use of deep learning, the modelling of ethical behaviours, the
definition of the status of AI (weak AI: highly specialized on one function, or strong AI:
generalist, capable of copying human intelligence and its ability to adapt to deal with
unforeseen problems), etc. The study of paradoxes such as that of the trolley case [46]
often triggers research activities in this area. An innovative current of thought emanates
from the scientific sphere working on emotional robots [47], notably in contact with the
elderly or children [28]. Almost all the robots currently built for public demonstrations are designed with an emotional dimension in mind (child-like face, soft artificial skin, colours…): humans more easily develop empathy towards such technical beings, which is used to artificially increase confidence in them and to prepare society for the future integration of AIS, a practice that itself raises ethical questions.
Whatever the sphere in which this debate takes place, a first consensus concerns the required involvement of all the stakeholders during the lifecycle of the AIS, from the designers to the end users (insofar as there is one) [48]. A second important and consensual point concerns the urgent need for the establishment of administrative and political regulation systems; see for example [35]. These could ideally be set up at an international level, under the auspices of international organizations above states. The idea would be to oblige the stakeholders engaged in the development of AIS, and especially researchers, to comply with a set of constraints, standards and regulations in order to protect populations or to limit side effects (the disappearance of trades in particular).
6 Conclusion
The arrival of AIS will profoundly change our society. Even if human ethics has been studied for centuries by philosophers, the ethics of AIS is only now taking shape and remains insufficiently addressed; it must be studied through a multidisciplinary prism. From our perspective, it is important that each researcher working on AIS evaluate the ethical impact of his or her research activity. The authors thus advocate, for example, the development of the concept of an “Ethical Lifecycle Assessment” for AIS, as already exists for environmental aspects.
Acknowledgement. The work described in this chapter was conducted under the auspices of the
project “Law of robots and other human avatars” funded by the IDEX Strasbourg Université et
Cité and in the framework of the joint laboratory “SurferLab” founded by Bombardier, Prosyst and
the Université Polytechnique Hauts-de-France. This Joint Laboratory is supported by the CNRS,
the European Union (ERDF) and the Hauts-de-France region. Parts of the work are also carried
out in the context of the HUMANISM No ANR-17-CE10-0009 research program, funded by
the French ANR (“Agence Nationale de la Recherche”). The authors would like to warmly thank Bérangère Kieken, Fabien Bruniau and Sébastien Caudrelier for discussions that nourished this
paper. Finally, the authors testify that no AI was used or mishandled in the writing of this chapter.
References
1. Trentesaux, D., Karnouskos, S.: Ethical behaviour aspects of autonomous intelligent cyber-
physical systems. In: Service Oriented, Holonic and Multi-agent Manufacturing Systems for
Industry of the Future. Studies in Computational Intelligence, vol. 853, pp. 55–71. Springer,
Cham (2020)
2. Jenkins, R.: Autonomous vehicle ethics and laws: toward an overlapping consensus. New
America (2016)
3. Ricoeur, P.: Soi-même comme un autre. Seuil (1990)
4. Trentesaux, D., Rault, R.: Ethical behaviour of autonomous non-military cyber-physical
systems. In: 19th International Conference on Complex Systems: Control and Modeling
Problems, Samara (2017)
5. Kuhn, T.S.: The Structure of Scientific Revolutions. University of Chicago Press, Chicago
(1970)
6. Stolze, P.: La Science-Fiction: littérature d’images et non d’idées. In: Nicot, S. (ed.) Les
Univers de la Science-Fiction - Essais, pp. 183–202. Galaxies (1998)
7. Dick, P.K.: Man, android and machine. In: Nicholls, P. (ed.) Science Fiction At Large. Harper
& Row, New York (1976)
8. Barlow, A.: Philip K. Dick’s androids: victimized victimizers. In: Kerman, J.B. (ed.)
Retrofitting Blade Runner. The University of Wisconsin Press, Madison (1997)
9. Arnold, T., Scheutz, M.: The “big red button” is too late: an alternative model for the ethical
evaluation of AI systems. Ethics Inf. Technol. 20, 59–69 (2018)
10. Karnouskos, S.: Self-driving car acceptance and the role of ethics. IEEE Trans. Eng. Manag.
1–14 (2018). https://doi.org/10.1109/TEM.2018.2877307
11. Rajaonah, B., Sarraipa, J.: Trustworthiness-based automatic function allocation in future
humans-machines organizations. In: 2018 IEEE 22nd International Conference on Intelligent
Engineering Systems (INES), pp. 371–376 (2018). https://doi.org/10.1109/INES.2018.8523876
12. Kirby, D.: The future is now: diegetic prototypes and the role of popular films in generating
real-world technological development. Soc. Stud. Sci. 40, 41–70 (2010)
13. Alexandre, L., Besnier, J.-M.: Les robots font-ils l’amour? Le transhumanisme en 12
questions, Dunod (2018)
14. Romero, D., Bernus, P., Noran, O., Stahre, J., Fast-Berglund, Å.: The operator 4.0: human
cyber-physical systems & adaptive automation towards human-automation symbiosis work
systems. In: IFIP Advances in Information and Communication Technology, pp. 677–686.
Springer, Cham (2016)
15. Longo, F., Padovano, A., Umbrello, S.: Value-oriented and ethical technology engineering in
industry 5.0: a human-centric perspective for the design of the factory of the future. Appl.
Sci. 10, 4182 (2020). https://doi.org/10.3390/app10124182
16. Schwarz, J.O.: The ‘narrative turn’ in developing foresight: assessing how cultural products
can assist organisations in detecting trends. Technol. Forecast. Soc. Chang. 90, 510–513
(2015). https://doi.org/10.1016/j.techfore.2014.02.024
17. Bina, O., Mateus, S., Pereira, L., Caffa, A.: The future imagined: exploring fiction as a means
of reflecting on today’s grand societal challenges and tomorrow’s options. Futures 86, 166–184
(2017). https://doi.org/10.1016/j.futures.2016.05.009
18. Anderson, S.L.: Asimov’s “three laws of robotics” and machine metaethics. AI Soc. 22,
477–493 (2008). https://doi.org/10.1007/s00146-007-0094-5
19. Snow, C.P.: The Two Cultures: And a Second Look. Cambridge University Press, Cambridge
(1964)
20. Hottois, G.: SF ou l’ambiguïté d’une littérature vraiment contemporaine. In: Science-fiction
et fiction spéculative, Editions de l’Université de Bruxelles (1985)
21. Tubaro, P., Casilli, A.A.: Micro-work, artificial intelligence and the automotive industry. J.
Ind. Bus. Econ. 46, 333–345 (2019)
22. Berrah, L., Trentesaux, D.: Decision-making in future industrial systems: is ethics a new
performance indicator? In: 10th SOHOMA Workshop on Service Oriented, Holonic and
Multi-Agent Manufacturing Systems for Industry of the Future. Studies in Computational
Intelligence, vol. 952, 1–2 October, Paris. Springer, Cham (2020)
23. Pacaux-Lemoine, M.-P., Trentesaux, D.: Ethical risks of human-machine symbiosis in indus-
try 4.0: insights from the human-machine cooperation approach. IFAC-PapersOnLine 52,
19–24 (2019). https://doi.org/10.1016/j.ifacol.2019.12.077
24. Gardony, A.L., Brunyé, T.T., Mahoney, C.R., Taylor, H.A.: How navigational aids impair
spatial memory: evidence for divided attention. Spatial Cogn. Comput. 13, 319–350 (2013).
https://doi.org/10.1080/13875868.2013.792821
25. Huftier, A.: Pierre Boulle: présentation. ReS Futurae, Revue d’études sur la science-fiction
(2015). https://doi.org/10.4000/resf.781
26. Williamson, J.: The Best of Jack Williamson. Ballantine, New York (1978)
27. Billingsley, R., Billingsley, J., Gärdenfors, P., Peppas, P., Prade, H., Skillicorn, D., Williams,
M.-A.: The altruistic robot: do what I want, not just what I say. In: Moral, S., Pivert, O., Marín,
N. (eds.) Scalable Uncertainty Management, pp. 149–162. Springer, Cham (2017)
28. Wu, Y.-H., Pino, M., Boesflug, S., de Sant’Anna, M., Legouverneur, G., Cristancho, V.,
Kerhervé, H., Rigaud, A.-S.: Robots émotionnels pour les personnes souffrant de maladie
d’Alzheimer en institution. NPG Neurologie Psychiatrie Gériatrie 14, 194–200 (2014). https://doi.org/10.1016/j.npg.2014.01.005
29. Trentesaux, D., Dahyot, R., Ouedraogo, A., Arenas, D., Lefebvre, S., Schön, W., Lussier, B.,
Chéritel, H.: The autonomous train. In: 2018 13th Annual Conference on System of Systems
Engineering (SoSE), pp. 514–520 (2018). https://doi.org/10.1109/SYSOSE.2018.8428771
30. Gaham, M., Bouzouia, B., Achour, N.: Human-in-the-loop cyber-physical production sys-
tems control (HiLCP2sC): a multi-objective interactive framework proposal. In: Service
Orientation in Holonic and Multi-agent Manufacturing, pp. 315–325. Springer, Cham (2015)
31. Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.:
Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries (2015)
32. Rault, R., Trentesaux, D.: Artificial intelligence, autonomous systems and robotics: legal
innovations, service orientation in Holonic and multi-agent manufacturing. In: Borangiu, T.,
et al. (eds.) Studies in Computational Intelligence, vol. 762, pp. 1–9. Springer, Cham (2018)
33. Palmerini, E., Bertolini, A., Battaglia, F., Koops, B.-J., Carnevale, A., Salvini, P.: RoboLaw:
towards a European framework for robotics regulation. Robot. Auton. Syst. 86, 78–85 (2016).
https://doi.org/10.1016/j.robot.2016.08.026
34. Nagenborg, M., Capurro, R., Weber, J., Pingel, C.: Ethical regulations on robotics in Europe.
AI Soc. 22, 349–366 (2007). https://doi.org/10.1007/s00146-007-0153-y
35. Dreier, T., Döhmann, I.S.: Legal aspects of service robotics. Poiesis Prax. 9, 201–217 (2012).
https://doi.org/10.1007/s10202-012-0115-4
36. Glaser, P.: Intelligence artificielle et responsabilité: un système juridique inadapté? Bulletin
Rapide Droit des Affaires (BRDA), pp. 19–22 (2018)
37. Delvaux, M.: Civil law rules on robotics, European Parliament Legislative initiative procedure
2015/2103 (2016)
38. Marty, A.: Legal and ethical considerations in the era of autonomous robots, University of St.
Gallen, Zurich, Switzerland (2017)
39. Barfield, W.: Liability for autonomous and artificially intelligent robots. Paladyn J. Behav.
Robot. 9, 193–203 (2018). https://doi.org/10.1515/pjbr-2018-0018
Ethics of Autonomous Intelligent Systems in the Human Society 261
40. Johnson, D.G., Verdicchio, M.: Why robots should not be treated like animals. Ethics Inf.
Technol. 20, 291–301 (2018). https://doi.org/10.1007/s10676-018-9481-5
41. Bensoussan, A., Bensoussan, J.: Droit des robots, Larcier, Bruxelles (2015)
42. Nevejans, N., Hauser, J., Ganascia, J.-G.: Traité de droit et d’éthique de la robotique civile.
Les Etudes Hospitalières édition, Bordeaux (2017)
43. Alsegier, R.A.: Roboethics: sharing our world with humanlike robots. IEEE Potentials 35,
24–28 (2016). https://doi.org/10.1109/MPOT.2014.2364491
44. Allen, C., Wallach, W., Smit, I.: Why machine ethics? IEEE Intell. Syst. 21, 12–17 (2006).
https://doi.org/10.1109/MIS.2006.83
45. Anderson, M., Anderson, S.L.: GenEth: a general ethical dilemma analyzer. Paladyn J. Behav.
Robot. 9, 337–357 (2018). https://doi.org/10.1515/pjbr-2018-0024
46. Bergmann, L.T., Schlicht, L., Meixner, C., König, P., Pipa, G., Boshammer, S., Stephan,
A.: Autonomous vehicles require socio-political acceptance - an empirical and philosophical
perspective on the problem of moral decision making. Front. Behav. Neurosci. 12, 31 (2018).
https://doi.org/10.3389/fnbeh.2018.00031
47. Norman, D.A.: Emotional Design: Why We Love (or Hate) Everyday Things. Basic Books,
New York (2005)
48. Trentesaux, D., Rault, R.: Designing ethical cyber-physical industrial systems. IFAC-
PapersOnLine 50, 14934–14939 (2017). https://doi.org/10.1016/j.ifacol.2017.08.2543
Analysis of New Job Profiles for the Factory
of the Future
1 Introduction
Occupations and job profiles are studied in several areas, e.g., economics, sociology, history
and management, which shows their social and economic relevance to the job market in general.
The topic is prominent and constantly in need of updating, since occupations and work profiles
have evolved continuously since prehistory. It is known that the evolution of job profiles is
linked to several important factors, such as social changes focused on human behaviour,
changes in policies and legislation, recessions, and new technologies and means of
communication. Such factors were present in the several industrial revolutions and changed
professional profiles across each era, as illustrated in Table 1. Job profiles follow the
characteristics demanded by each industrial revolution, mainly related to technologies, means
of communication and transportation, and types of production.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 262–273, 2021.
https://doi.org/10.1007/978-3-030-69373-2_18

Some job profiles are relevant only to their respective industrial revolution, emerging to face
the market challenges of the time but ceasing to exist in the subsequent one. As an example,
the lamplighter was in high demand during the 1st industrial revolution, but with the emergence
of electricity and light bulbs in the 2nd industrial revolution this job profile became extinct.
Other job profiles remain in the next industrial revolution, and some need to be adapted; e.g.,
the robotic technician appeared in the 3rd industrial revolution but will need to expand its
skills to match the digital characteristics required in the 4th industrial revolution.
In this sense, it is clear that the relevant characteristics of each industrial revolution
directly influence the job market, professions, careers, job profiles, types of profiles
and, in particular, the skills workers need to carry out their responsibilities and duties
according to the demanded requirements. Several reports show that 75 to 375 million people
around the world may change their professional category by 2030 due to the new job market
scenario [1], and that 8–9% of the 2.66-billion-person workforce will have new occupations
by 2030 [2].
This situation is most evident in pandemic periods, such as COVID-19, when there is greater
demand for technology and digital resources to mitigate the effects of physical distancing.
According to [3], COVID-19 is the most serious health crisis the world has faced this century,
with a strong impact on the world job market, particularly the loss of 195 million jobs.
At the same time, from February to March 2020 in the United States, the pandemic increased
demand for professionals focused on digitisation, e.g., an increase of approximately 20% for
cybersecurity engineers and 12% for net developers [4, 5]. Also illustrative is the 775%
increase in demand for cloud services, reported by Microsoft, in the regions where physical
distancing has been most impactful [6]. As a result, millions of people may need to acquire
new digital skills, and others will need to change careers and improve their skills to adapt
to the new job market reality. In this context, there is a need to
264 L. Sakurada et al.
understand the challenges and trends regarding new job profiles, in order to help employers
and employees to carry out the necessary up-skilling initiatives according to their individual
needs.
With this in mind, this work aims to identify the new job profiles for the FoF
across six technological sectors, namely Collaborative Robotics (Cobots), Additive
Manufacturing (AM), Mechatronics and Machine Automation (MMA), Data Analytics (DA),
Cybersecurity (CS) and Human-Machine Interface (HMI), which emerge with the introduction
of digitisation in the context of the 4th industrial revolution. For this purpose, data
were extracted and analyzed from different information sources, namely technical and
scientific literature and recruitment repositories, using proper data analytics techniques
and feedback from experts. This analysis made it possible to characterize the requirements
for the specialized training of the current workforce in terms of technical and soft skills,
and type and level of profile.
The rest of the paper is organized as follows: Section 2 describes the methodology used to
identify the new job profiles and Sect. 3 summarizes the preliminary catalogue of 100 new
job profiles for the six target sectors. Section 4 provides a characterization of the new
job profiles, particularly analysing the distribution per sector, type of profile and level
of profile, as well as identifying the most relevant soft skills for each type of profile
and technical skills per sector. Section 5 rounds up the paper with the conclusions and
points out future work.
2 Methodology
As previously described, Industry 4.0 is re-shaping the FoF and contributing to the emergence
of new jobs and new profiles whose skills and competences are associated with information and
communications technology (ICT) and emergent automation technologies. Under the scope of the
FIT4FoF project (https://www.fit4fof.eu/), the definition of new job profiles will help inform
education and training requirements for the current workforce, allowing professionals around
the world to adapt and develop skills based on FoF requirements, particularly in the six
aforementioned sectors.
The methodology adopted to identify the new job profiles, illustrated in Fig. 1, follows an
iterative approach comprising three distinct phases: collection and analysis of data to
identify at least 100 job profiles; consolidation of the characterization of the identified
job profiles; and identification of their relationship with technological trends and relevant
skills. Furthermore, this methodology uses automatic data analysis techniques, such as text
mining, together with feedback from experts in each of the addressed areas.
Fig. 1. Methodology to identify new job profiles for the factory of the future.
The first phase comprises the analysis of different data sources, namely a literature
review and an analysis of recruitment repositories, which should be mapped onto the gaps
in technical and soft skills described in [7]. The literature review consists of a detailed
systematic review of reports from consultancy companies and organizational entities, to
extract a list of job profiles that reflects the recent tendencies in this field. The
analysis of the recruitment repositories uses advanced data analytics techniques combined
with natural language processing to complement the identification of new job profiles,
taking into consideration the actual job-market demand in each of the six sectors.
The analysis of these two data sources allows a list of 100 new job profiles to be compiled
across the six industrial areas, each one characterized by a short description, a list of
relevant soft skills, a list of technical skills, and the type and level of the profile.
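The repository-analysis step can be sketched as a simple keyword-matching pass over job postings. This is a minimal illustration only: the sector lexicon and sample postings below are hypothetical, and the actual pipeline described above relies on more advanced data analytics and natural language processing.

```python
from collections import Counter

# Hypothetical keyword map: a few indicative terms per target sector
# (illustrative only, not the project's actual vocabulary).
SECTOR_KEYWORDS = {
    "Cobots": ["collaborative robot", "cobot"],
    "AM": ["additive manufacturing", "3d printing"],
    "MMA": ["mechatronics", "plc", "machine automation"],
    "DA": ["data analytics", "big data", "machine learning"],
    "CS": ["cybersecurity", "penetration testing"],
    "HMI": ["augmented reality", "virtual reality", "human-machine interface"],
}

def tag_posting(text):
    """Return the set of sectors whose keywords appear in a job posting."""
    text = text.lower()
    return {sector for sector, keywords in SECTOR_KEYWORDS.items()
            if any(kw in text for kw in keywords)}

def sector_demand(postings):
    """Count how many postings mention each sector."""
    counts = Counter()
    for posting in postings:
        counts.update(tag_posting(posting))
    return counts

postings = [
    "Seeking a Big Data analyst with machine learning experience",
    "Cybersecurity specialist for industrial networks",
    "Engineer for collaborative robot (cobot) cell integration",
]
print(sector_demand(postings))
```

In practice, a keyword pass like this would only pre-filter postings; classifying job titles and extracting skills from free-text requirements is where the natural language processing mentioned above comes in.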
The second phase aims to consolidate the characterization of the new job profiles, namely
by refining the lists of soft and technical skills through feedback collected from experts
and stakeholders, i.e., professionals with expertise in at least one of the six target
sectors. Finally, and performed in parallel, the third phase analyses the catalogue of new
job profiles from a perspective that identifies their relationship with technological trends
and emergent skills. This analysis identifies the skills that most impact the job profiles,
supporting stakeholders in preparing their skills agenda for training their workforce in
these new profiles.
The most important literature used in this review is cited in [8–20], from which the majority
of the 100 job profiles were identified. For instance, 10 job profiles covering different
sectors were retrieved from the Catálogo de Perfís Profesionais de Futuro report [15], namely
Advanced materials specialist, Intelligent robotics expert, Drones engineering expert, Smart
grids specialist, Cybersecurity specialist, Blockchain expert, Real time systems expert,
Extended reality architect, Circular economy specialist and Customer experience specialist.
Conversely, some job profiles are identified in more than one reference, e.g., Big data
analyst [12, 18], Factory virtual system designer [8, 9, 12] or IoT solution technician
[8, 10].
The job profiles were classified according to the type of profile, considering the following
five categories (definitions adapted from www.dictionary.com): Specialist, Architect,
Developer, Engineer and Technician.
In the same manner, the new job profiles are also classified into three levels: operational,
tactical and strategical. These three levels relate to the scope of the decision making over
time: if a decision has a short-term impact and concerns individual employees/units, the job
profile is categorized as operational. On the other hand, if decisions persist over time and
influence the performance of the plant as a whole, the level is classified as tactical or
strategical, with strategical decisions having the longer-lasting influence.
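The classification rule just described can be made concrete as a small decision function. The impact-horizon thresholds below (in months) are hypothetical, introduced only to turn the qualitative rule into executable form:

```python
def classify_level(impact_months, plant_wide):
    """Map a job profile's decision scope onto operational/tactical/strategical.

    impact_months: how long the decisions persist (hypothetical thresholds);
    plant_wide: whether decisions influence the plant as a whole.
    """
    if impact_months <= 3 and not plant_wide:
        return "operational"   # short-term impact, individual employees/units
    if impact_months <= 24:
        return "tactical"      # persistent, plant-wide influence
    return "strategical"       # longest-lasting influence on decisions

print(classify_level(1, plant_wide=False))   # operational
print(classify_level(12, plant_wide=True))   # tactical
print(classify_level(60, plant_wide=True))   # strategical
```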
Additionally, each job profile has a set of soft skills and a list of technical skills that
represent the requirements for the job position. The jobs shown in Table 2 are associated
with one sector for simplicity of representation, but the majority of them are relevant to
more than one sector. For example, in general, profiles related to Cobots are also related
to MMA, and job profiles in DA are also related to CS.
Looking at the level of the profiles, the majority of the new job profiles may be considered
"Tactical" jobs (62%), 28% were considered "Operational" jobs, and only 10% "Strategical"
jobs. These percentages support the observed distribution of the types of profiles, since
the majority are tactical-level job positions, a minority are strategic-level jobs, and only
the "Technician" type of profile is categorised as an "Operational"-level job position.
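Such a level distribution can be computed directly from a labelled catalogue; a minimal sketch follows, where the toy data merely reproduces the reported 62/28/10 split:

```python
from collections import Counter

def level_distribution(catalogue):
    """Percentage of job profiles per level, from (profile, level) pairs."""
    counts = Counter(level for _, level in catalogue)
    total = sum(counts.values())
    return {level: round(100 * n / total) for level, n in counts.items()}

# Toy catalogue of (profile, level) pairs mimicking the reported split.
catalogue = ([("p", "tactical")] * 62 + [("p", "operational")] * 28
             + [("p", "strategical")] * 10)
print(level_distribution(catalogue))
# {'tactical': 62, 'operational': 28, 'strategical': 10}
```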
Considering the distribution of the new jobs across the six industrial sectors included in
this study, approximately 28% of the listed jobs are in Data Analytics and 16% in
Cybersecurity. Together, both areas comprise 44% of the new job profiles, revealing the
"value of data" in the FoF. A further 39% of the identified new job profiles are distributed
among the Mechatronics/Machine Automation, Collaborative Robotics and Human-Machine
Interface areas. Finally, 17% of the jobs are related to the Additive Manufacturing sector.
This categorisation reveals the importance of the listed new job profiles for smart factories
in the context of Industry 4.0.
Additionally, a deeper analysis was performed to identify whether each identified job profile
is really a "new job", and consequently has a "new profile", or is an "existing job" with a
"new profile". The result of this classification is illustrated in scatter diagrams (see
Fig. 3), where each number corresponds to a specific job profile listed in Table 2, and the
colour specifies the type of profile.
Fig. 3. Dispersion of new job profiles: left) industrial sector and type of profile, and right) type
and level of profile.
The analysis of both diagrams shows the dispersion of the job profiles included in the
catalogue and brings together all of the performed categorisation. It can be verified that
64% of the job profiles were considered "New Job/New Profile", while 36% of the jobs in the
catalogue are existing job positions with a new profile. As an example, sixteen job profiles
are labelled as "Technician", and so are categorised as "Operational"-level job profiles.
Moreover, other types of profiles were also considered "Operational" level; e.g., the
"(72) Test engineer" position demanded by the cybersecurity sector is considered an
"existing job" with a "new profile", and is categorised as an "Engineer" profile type.
A similar analysis can be performed for all the job profiles included in the catalogue.
In summary, it is important to point out that although the majority of the jobs in the
catalogue are new job profiles needed by employers, there are also existing job positions
that will have new profiles and will thus require new skills and competencies.
Another noteworthy aspect is that the FoF will require more workers with specific
competencies, since a significant number of the analysed job positions (72%) were labelled
as tactical or strategic level, where several specific skills and/or competencies may be
mandatory.
With the aim of emphasizing the most relevant skills for the identified new job profiles,
the soft and technical skills required for each job profile were also identified. Figure 4
illustrates, in a network graph, the relationship between the required soft skills and the
different types of job profiles.
Our analysis revealed that some of the listed soft skills are cross-cutting among the
considered types of profiles. For example, "critical/analytical thinking", "team work",
"capacity to adapt to new situations" and "communication skills" are required in all the
considered types of profile. Nevertheless, some skills are more often required by employers
for particular profiles. "Creativity", "communication skills", "leadership",
"problem-solving" and "team work" are of great importance to the "Specialist", "Architect",
"Developer" and "Engineer" types of profile. For a "Technician" job profile, the set of the
most demanded soft skills is noticeably different, because this is an operational-level job
profile: skills such as "team work", "capacity to adapt to new situations", "continuous
learning" and "continuous skill development" are more often required.
A similar analysis was also conducted to understand the relationship between the
most often demanded technical skills and each one of the six industrial sectors included
in this study. Figure 5 illustrates the most required technical skills for each target sector.
Taking into account the technical skills demanded in the new job profiles, a large number
of different technical skills was found, since the job profiles cover six different
technical sectors. However, it is possible to highlight some cross-cutting skills, as well
as some of the most relevant technical skills for each of the six studied industrial
sectors.
We may point out that some technical skills, such as "scheduling", "smart sensors", "IoT",
"ML", "programming", "AI", "digital skills", "virtual reality", "augmented reality",
"optimisation", "simulation", "statistics" and "communication networks", may be considered
cross-cutting, since they appear in job postings across the different industrial sectors.
For example, "AI" is a required skill in job postings for all the industrial sectors, and
"digital skills" is a necessary skill in the AM, HMI and Cobots sectors. Additionally, the
relevance of each skill to a specific industrial sector can also be observed. In the network
graph shown in Fig. 5, the thicker a line, the greater the relevance of that skill for the
sector. For example, considering the HMI industrial sector, "augmented reality", "virtual
reality" and "digital skills", together with "AI" and "programming", appear as the most
relevant technical skills, since they were frequently required in the job profile
requirements of this sector.
5 Conclusions
Across several industrial revolutions, job profiles have evolved to face disruptive
technological changes. At present, in the fourth industrial revolution, the introduction of
Industry 4.0 principles and technologies is re-shaping workforce profiles, with a noticeable
decrease in demand for low-skilled activities and an increase in high-skilled activities.
As a consequence, a significant number of existing job profiles will become obsolete and
new job profiles will emerge.
This paper identified the new job profiles for the FoF across six industrial technological
sectors, namely Cobots, AM, MMA, DA, CS and HMI. The performed analysis resulted in a
catalogue of 100 new job profiles that were characterized and analysed in terms of
technical and soft skills, and type and level of profile. The characterization of these new
job profiles made it possible to analyse their distribution by type and level of profile,
as well as by industrial sector. A deeper analysis led to conclusions about the relevance
of soft and technical skills for these new job profiles, particularly the most relevant
soft skills per type of profile and the most relevant technical skills per industrial
sector.
It is also important to note that the developed analysis can answer the question of which
job profiles the future holds in the FoF field. This information may play a crucial role in
supporting companies' managers and stakeholders in deciding which upskilling initiatives
their workforce should attend, according to the needs, particularities and goals of their
organization. In fact, having identified the relationship between new job profiles and
relevant skills, decision-makers can look at the positioning of the relevant skills in the
desired type of profile and sector, and select the proper topics for training programs.
Future work will be devoted to analysing the relationship of the new job profiles with
technological trends and with the demand over time found in job recruitment repositories.
Acknowledgement
This work is part of the FIT4FoF project that has received funding from the
European Union's Horizon 2020 research and innovation programme under grant
agreement no. 820701.
References
1. Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., Sanghvi, R., Saurabh, K.:
Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation. McKinsey Global
Institute, December 2017
2. Lin, S.J.: Technological adaptation, cities, and new work. Rev. Econ. Stat. 93(2), 554–574
(2011)
3. Fine, D., Klier, J., Mahajan, D., Raabe, N., Schubert, J., Singh, N., Ungur, S.: How to rebuild
and reimagine jobs amid the coronavirus crisis. McKinsey & Company, April 2020
4. Perry, T.S.: Tech jobs in the time of COVID: cybersecurity job openings explode, while the
job market gets tougher for Web developers and Ruby experts. IEEE Spectrum, April 2020
5. Lund, S., Ellingrud, K., Hancock, B., Manyika, J.: COVID-19 and jobs: monitoring the US
impact on people and places. McKinsey Global Institute, April 2020
6. Verbist, N.: How COVID-19 is accelerating digitalization. https://hello.global.ntt/en-us/
insights/blog/how-covid-19-is-accelerating-digitalization. Accessed 20 May 2020
7. Leitão, P., Geraldes, C., Fernandes, P., Badikyan, H.: Analysis of the workforce skills for the
factories of the future. In: Proceedings of the 3rd IEEE International Conference on Industrial
Cyber-Physical Systems (ICPS 2020), pp. 353–358 (2020)
8. Ras, E., Wild, F., Stahl, C., Baudet, A.: Bridging the skills gap of workers in industry 4.0
by human performance augmentation tools: challenges and roadmap. In: Proceedings of
the 10th International Conference on PErvasive Technologies Related to Assistive Environ-
ments, pp. 428–432 (2017)
9. Kaji, J., Hurley, B., Devan, P., Bhat, R., Khan, A., Gangopadhyay, N., Tharakan, A.G.: Tech-
nology, Media and Telecommunications Predictions 2020. Deloitte Report (2019)
10. McAfee, A., Brynjolfsson, E.: The Second Machine Age: Work, Progress, and Prosperity in
a Time of Brilliant Technologies. W. W. Norton & Company (2016)
11. Tseng, M.-L., Tan, R.R., Chiu, A.S.F., Chien, C.-F., Kuo, T.C.: Circular economy meets
industry 4.0: can big data drive industrial symbiosis? Resour. Conserv. Recycl. 131, 146–
147 (2018)
12. Mechanical Engineering Industry Association: Industrie 4.0 in practice – Solutions for
industrial applications. Report (2016)
13. Olivan, A.D., Ser, J., Galar, D., Sierra, B.: Data fusion and machine learning for industrial
prognosis: trends and perspectives towards Industry 4.0. Inf. Fusion 50, 92–111 (2019)
14. Ruppert, T., Jaskó, S., Holczinger, T., Abonyi, J.: Enabling technologies for operator 4.0: a
survey. Appl. Sci. 8(9), 1650 (2018)
15. de Galicia, X.: Catálogo de Perfı́s Profesionais de Futuro. Technical report (2019)
16. ManuFuture High level group. Manufuture vision 2030, report (2018)
17. Basco, A.I., Beliz, G., Coatz, D., Garnero, P.: Industria 4.0: fabricando el futuro (2018)
18. Queiroz, J., Leitão, P., Barbosa, J., Oliveira, E.: Distributing intelligence among cloud, fog
and edge in industrial cyber-physical systems. In: Proceedings of the 16th International Con-
ference on Informatics in Control, Automation and Robotics (ICINCO 2019), vol. 1, pp.
447–454 (2019)
19. Mabkhot, M.M., Al-Ahmari, A.M., Salah, B., Alkhalefah, H.: Requirements of the smart
factory system: a survey and perspective. Machines 6(2), 23 (2018)
20. Benesova, A., Tupa, J.: Requirements for education and qualification of people in industry
4.0. Procedia Manuf. 11, 2195–2202 (2017)
Evaluation Methods of Ergonomics Constraints
in Manufacturing Operations for a Sustainable
Job Balancing in Industry 4.0
Abstract. Over the years, human factors have become increasingly decisive in the
organization of manufacturing production processes. In this article we review how
ergonomics is integrated into the complete job-scheduling optimization process,
focusing specifically on the collection of ergonomic data. A large variety of tools
and methods have been developed to assess physical and psychosocial risks in a working
environment. We review the principal methods described in the literature, grouped into
three main categories: observational, self-evaluation and direct measurement. This
large diversity of evaluation methods is directly linked to the flexibility health
experts require to analyze various situations in the field precisely. Most of the
reviewed ergonomic-based job scheduling applications use a different method, which
makes it difficult to compare the efficiency of the subsequent optimizations directly.
1 Introduction
Following the societal goal of Industry 4.0 to introduce human factors into manufacturing
process management, safety at work has become an even more important concern for the
manufacturing industry, in order to improve performance and sustainability. Over the last
twenty years, the development of the manufacturing process has increased the risk of
work-related disease for workers, partially due to the transition of manufacturing
processes towards lean management, which contributes to the intensification of work and
the reduction of cycle times in industry [1]. Workers in manufacturing industries usually
perform repetitive tasks that expose them to an intense physical workload, which can induce
work-related disorders such as musculoskeletal disorders [2]. Musculoskeletal disorders
(MSDs) are injuries and disorders that affect the soft tissues of the human body (i.e.,
muscles, nerves, tendons, ligaments, etc.) and restrict the body's movement [3].
MSDs have a huge social impact: a third of European workers, from every activity sector,
currently suffer from MSDs, which affect about 45 million workers in Europe [4]. In Europe,
MSD risks are amplified by the phenomenon of the "Ageing Workforce" [5], i.e., the disparity
between the proportion of workers aged 50 or more and workers aged 25 or younger; the former
category is currently double the latter. MSDs have deleterious effects on workers' quality
of life and are the main cause of sick leave and lost work days [6]. The cost of the lost
productivity due to MSDs is estimated to be close to 2% of the gross domestic product in
Europe [5]. The main risk factors for MSDs are biomechanical constraints, but it is widely
accepted that occupational diseases are caused by multifactorial constraints [7].
The arduousness of a task is the perceived effort needed to perform it. It is the main
ergonomic datum used to evaluate the physical characteristics of a job. To evaluate this
arduousness and the physical risk, three classes of methods exist: observational methods,
self-evaluation questionnaires and direct measurement methods [7]. The choice of ergonomic
assessment method is based on the purpose of the evaluation, the characteristics of the
work to be assessed and the resources available for collecting and analysing the
evaluation. In order to improve their societal sustainability, the managers of
manufacturing organizations work with ergonomists to find solutions to these ergonomic
problems and reduce physical and psychological risks. In the literature, most ergonomic
evaluation processes follow the steps presented in Fig. 1.
Before taking any ergonomic action, the first step is to identify physical risks, such as
bad posture, heavy load handling or the repetitiveness of tasks, that could cause MSDs for
the workers. This identification can be done by registering complaints from workers who
report an arduous situation during the operations they perform. Important physical risks
can also be revealed by a rise in occupational diseases or significant absenteeism in the
manufacturing plant, or more precisely at a workstation. These identifications lead to an
ergonomic evaluation performed with the help of a health expert, who selects a measurement
method in order to expose the physical risks and evaluate the risk level of the identified
situation. Once the evaluation has been made, the objective is to find a solution that
respects a budget and reduces the risks for the workers. The proposed solution is often an
improvement of the workstation design
276 N. Murcia et al.
or the addition of a technical solution, for example an exoskeleton to relieve the worker
when carrying a load. The ergonomic evaluation ends with feedback from the worker and the
evaluation of the method used to improve the working situation.
Today, many technical solutions involving a modification of the workstation are either
already used in practice or too expensive to be feasible in the manufacturing industry.
In both cases, risks remain, as no solution is 100% efficient. In this context, we are
interested in the methods developed by managers after this full ergonomic phase, in order
to further reduce the physical risks for the workers.
In this paper, we describe the global process of ergonomic-based job balancing as found in
the literature. We then compare the different methods used to gather data on the different
ergonomic risks; these methods are classified into three sub-categories: observational
methods, self-evaluation methods and measurement-based methods. We then propose a
discussion of these methods and their use in ergonomic-based job balancing.
Studies and investigations have shown that performing repetitive movements at work is one
of the principal risk factors for occupational disease, along with awkward postures and
heavy weight lifting [8]. The first instances of job rotation appeared in 1975 with the
development of the new Toyota production system and the first use of lean management [9].
The job rotation idea came from workers who wanted more flexibility in the time window of
their break: whenever someone wanted to take a break, another worker filled the gap at the
workstation. Ergonomists and managers in the manufacturing industry then started to develop
job rotation as a solution to reduce the repetitive strain on workers. Most
ergonomics-based job balancing processes in the literature, and their applications, follow
a similar process.
For this process we identified three different phases, highlighted in Fig. 2. The first
one is built around the identification and measurement of the ergonomic risk
factors. The first important choice is the selection of the ergonomic risk factors used in
the optimization: the problem can be centred on physical risks, such as postural
risks, or on psychosocial risks, such as job satisfaction. Either way, these ergonomic risks are
measured with the help of a health expert following a precise assessment method selected
beforehand. Once the ergonomic risk variables are defined and associated with a value
representing the risk level, the ergonomic-based optimization problem can be defined.
Given that ergonomic measures are often qualitative values, the data may require an
adaptation in order to fit into an optimization problem. For example, a colour-method
measure can be transformed into a score [10], or an exposure level can be transformed
into an injury risk [11]. Job-assignment optimization problems are mainly formulated as
Line Balancing or Job-Rotation problems [12–14]. The distribution of physical
risk among production operators is the most common objective in these different
formulations. Some studies also consider the economic aspect, leading to a multi-objective
formulation combining economic and ergonomic variables [12].
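To make the data-adaptation step concrete, the following minimal sketch shows how a qualitative colour-coded measure might be turned into a numeric score and combined with an economic criterion in a scalarized bi-objective function. The colour-to-score mapping and the weights are illustrative assumptions, not values taken from [10] or [11].

```python
# Hedged sketch: mapping qualitative ergonomic measures to numeric scores
# so they can enter an optimization model. The colour-to-score mapping and
# the alpha weight below are illustrative assumptions.

COLOUR_SCORE = {"green": 1, "yellow": 2, "red": 3}  # assumed ordinal scale

def workstation_risk(colour_ratings):
    """Aggregate per-criterion colour ratings into one numeric risk score."""
    return sum(COLOUR_SCORE[c] for c in colour_ratings)

def weighted_objective(ergo_cost, econ_cost, alpha=0.5):
    """Illustrative scalarization of a bi-objective (ergonomic, economic)
    formulation; alpha trades off the two criteria."""
    return alpha * ergo_cost + (1 - alpha) * econ_cost

station = ["red", "yellow", "green", "yellow"]
print(workstation_risk(station))                          # total ordinal risk score: 8
print(weighted_objective(workstation_risk(station), 4.0)) # 6.0
```

A real formulation would of course use the validated scoring rules of the chosen assessment method rather than this ad hoc mapping.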
Once the problem has been formulated, the remaining phase is to solve it and to evaluate
the result of the optimization, which mostly consists in using
Evaluation Methods of Ergonomics Constraints in Manufacturing Operations 277
a solver or a meta-heuristic. Evaluating the result is not trivial, because ergonomic
variables remain hard to quantify, but it is possible. A comparison with the initial
situation can also be made and combined with the feedback of the production operators
for a sound evaluation of the optimization method. In the next sections we take
a deeper look into phase 1, the evaluation of ergonomic risks at work, and
review the different ergonomic assessment methods.
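The solving step described above can be sketched with a simple greedy heuristic for a job-rotation problem: in each period, the worker with the highest accumulated risk is assigned to the station with the lowest risk score. The station scores and the balancing rule are illustrative assumptions, not a method from the cited works.

```python
# Hedged sketch of a greedy heuristic for a job-rotation problem:
# each period, most-exposed workers get the least risky stations.
# Assumes one station per worker; station risk scores are illustrative.

def greedy_rotation(station_risks, n_workers, n_periods):
    """Return per-period assignments (worker -> station) that try to
    balance cumulative ergonomic exposure across workers."""
    cumulative = [0.0] * n_workers
    schedule = []
    for _ in range(n_periods):
        workers = sorted(range(n_workers), key=lambda w: -cumulative[w])
        stations = sorted(range(len(station_risks)), key=lambda s: station_risks[s])
        period = {}
        for w, s in zip(workers, stations):
            period[w] = s
            cumulative[w] += station_risks[s]
        schedule.append(period)
    return schedule, cumulative

schedule, exposure = greedy_rotation([1.0, 2.0, 3.0], n_workers=3, n_periods=3)
print(exposure)  # cumulative risk per worker after three rotations
```

Exact solvers or meta-heuristics (simulated annealing, genetic algorithms) would replace this greedy rule in the formulations surveyed in [12–14].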
3 Observational Methods
Observational methods are commonly used to evaluate physical risks. Applied by health
experts, they consist of examining the work process and evaluating the risk factors for the
worker according to a checklist or a grid. Duration of exposure, intensity of a task,
repetitive exposure to a risk and uncomfortable postures are evaluated.
These methods are commonly used by ergonomists to find dangerous situations.
In the manufacturing industry, and especially in the automotive industry, observational
methods are widely used to assess ergonomic constraints for each workstation individually.
The result is represented by a risk level for a given criterion, quantitatively with a
score or qualitatively with a colour code: red representing an important physical risk,
yellow a moderate risk and green a safe situation. There are more than 30 observational
methods [15], with notable differences in the body parts evaluated and in the thresholds
for the different criteria [16]. Some of the most commonly used methods
are presented in Table 1.
These observation-based assessments of physical risks mostly measure postural
risks and the intensity of a task on the whole body or on specific regions.
However, these methods often provide different results and are not directly comparable,
mostly because they depend on the observer and because there are no defined standards
for ergonomic measures [16, 17, 31]. It is indeed impossible for an observational method
Table 1. Comparison of the most common observational risk assessment methods in the literature
to be optimal for all purposes, and the parameters of the measurement affect the selection of
the best method [16]. More advanced methods, including video analysis of tasks, have
been developed to improve the accuracy of the ergonomic assessment. Video analysis
allows ergonomists to review a given working situation multiple times, and to reflect
with the worker on the arduousness of this situation.
4 Self-evaluation Methods
Self-evaluation methods have been developed to collect data directly from the workers
by asking questions about their health and their perception of physical risks at work.
These methods are mostly used in studies to evaluate risk factors at work and their
impact on the subjects’ health. During studies on the evaluation of ergonomic risks at
work, a self-evaluation questionnaire can be used to identify the different risk factors
and their impact on the workers’ health.
These ergonomic risks include physical risks such as incorrect posture, high
force exertion, repetitive movement and heavy weight lifting [31]. Personal information
such as age, gender or height is also requested in a self-evaluation questionnaire
in order to highlight the links between this individual information and musculoskeletal
syndromes [32]. Self-evaluation questionnaires can also assess organizational
and psychosocial risk factors [33, 34]. The advantages of self-evaluation methods are
the possibility of surveying a large population and of gathering data over time.
In most articles using self-evaluation methods during an ergonomic investigation in the
manufacturing industry, the questionnaires are used as a starting point to identify
the prevalence of MSD symptoms in the studied population. However, the reliability of
these surveys may be affected by the respondents’ feelings and out-of-work activities,
and by possible misinterpretation of the questions [35]. Some examples of studies using
self-evaluation methods are detailed in Table 2.
One of the most frequently used questionnaires is the Nordic Musculoskeletal
Questionnaire, which collects data on the pain experienced by workers over the last
seven days and the last 12 months for each body part, and relates it to their personal
information [40]. The Karasek questionnaire is another self-evaluation method, aiming
to measure stress at work by collecting data about the psychosocial aspects of the job
[41].
Self-evaluation questionnaires are also used in large-scale medical cohort studies in
order to evaluate the prevalence of MSDs in the general population and the associated
risk factors [42].
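As an illustration of how such questionnaire data is typically reduced, the sketch below computes per-body-region symptom prevalence from Nordic-style responses. The field names and answers are illustrative assumptions, not the actual items of the Standardised Nordic Questionnaire [40].

```python
# Hedged sketch: 12-month symptom prevalence per body region from
# Nordic-style questionnaire answers. Fields and data are illustrative.
from collections import Counter

responses = [
    {"neck": True,  "shoulder": False, "low_back": True},
    {"neck": False, "shoulder": True,  "low_back": True},
    {"neck": True,  "shoulder": True,  "low_back": False},
    {"neck": False, "shoulder": False, "low_back": True},
]

def prevalence(responses):
    """Share of respondents reporting symptoms, per body region."""
    counts = Counter()
    for r in responses:
        for region, symptomatic in r.items():
            counts[region] += symptomatic
    n = len(responses)
    return {region: counts[region] / n for region in counts}

print(prevalence(responses))  # low_back: 0.75, neck: 0.5, shoulder: 0.5
```

Large cohorts such as [42] apply the same idea at population scale, stratified by the personal information collected alongside the symptoms.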
5 Measurement-Based Methods
Direct measurement methods consist in attaching sensors to the subject’s body segments
to measure the exposure variables at work [7]. Some of the tools used are electromyography,
accelerometers and force measurement tools; more recently, motion capture
technology has been used to assess the exposure constraints on the subject while performing
a technical operation.
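One common way to turn such raw sensor streams into an exposure variable is to compute per-window intensity, for example the RMS of acceleration over fixed windows. This is a minimal sketch under assumed units and window length, not a method prescribed by [7].

```python
# Hedged sketch: reducing a raw accelerometer stream to per-window RMS
# intensity. Window length and the sample values are assumptions.
import math

def windowed_rms(samples, window=4):
    """RMS acceleration per fixed-length, non-overlapping window."""
    out = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

stream = [0.1, 0.2, 0.1, 0.2, 1.5, 1.4, 1.6, 1.5]
print(windowed_rms(stream))  # a low-intensity window, then a high one
```

In practice this per-window reduction is exactly what makes the "huge amount of data" mentioned below tractable, at the cost of discarding fine temporal detail.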
Whereas the two other categories of methods measure a subjective interpretation of the
physical risks, direct measurement methods give an objective assessment of
the physical exposure. However, this exposure is assessed without taking into
account the operator’s feelings, which makes it impossible to normalize a given physical
exposure across all workers. These direct measurement methods were originally developed
to measure athletes’ capacities.
However, these methods are rarely applied in the manufacturing industry, because
they are too costly to put into practice and it is almost impossible to gather data from a large
population. These methods are also inconvenient in practice because they collect a huge
amount of data that is difficult to process in a short time, and this data is often not enough to
establish a clear link between the measure and the possible physical risk for the operator.
In fact, this high precision is often not needed to select a technical solution for
reducing the physical risk at a workstation.
Other inconveniences of these methods are the difficulty of gathering data over a long
period and the possible bias that workers wearing the device might not perform the task
during the experiment as they would in practice.
6 Discussion
MSDs develop over a lifetime; proof of physical risk reduction through job
balancing is therefore hard to obtain, because it requires a long-term study. However, the
production workers’ perception is a good indicator of the benefits of the scheduling methods,
and can at least show the efficiency of these optimization methods over the short and
medium term.
In the literature, there are over a hundred identified methods for measuring
ergonomic risks [15]; in practice this number tends to grow, because health experts can
adapt these methods to their exact situation. Furthermore, these methods often produce
different results for the same situation [16]. Some methods aim to measure the magnitude
of physical risks, whereas others measure a discomfort value or even work-related
pain over a given period. This wide disparity in ergonomic measurement
methods is echoed in the data used by the different ergonomic-based job-balancing
methods in the literature [12]. This disparity hardly allows a direct comparison of
the different solutions enumerated. However, the advantage of having a wide range of
ergonomic risk assessment methods is that health experts can choose the method that best
fits their needs and the environmental constraints under which the measure is taken. This gives
the opportunity to obtain more accurate data for a possible optimization taking
ergonomic-based data into account.
In the manufacturing industry, these methods are not often exploited, and mostly
at an experimental scale. Paradoxically, within the fourth industrial revolution, the most
used ergonomics assessment methods currently seem to be self-reported questionnaires
and observational methods, which are the more traditional ones. This gap with technological
development can be explained by the high cost and time consumption of the most
recent measurement-based tools. This trend can also be explained by the fact that
ergonomic data is hard to process and to use in mathematical models, and greatly
increases the complexity of ergonomic-based optimization algorithms.
To sum up, Industry 4.0 intends to take ergonomic-based criteria into account in
real-time factory management, but the whole process is not ready yet. Managers would
like an integrated tool that solves the problem in an automated manner, but the metrics are
not standardized, the technology is not ready and the models are too complex. Therefore,
a conclusion is that, at least in a transient phase, focus should be placed on decision
support tools that help assign tasks to operators based on an
evaluation of their past exposure.
7 Conclusion
This paper proposed an overview of the different tools used for ergonomic assessments
in the manufacturing industry. Over the years, many tools have been developed, spanning
a wide range of ergonomic assessment methods. In the field, an ergonomic
assessment is often expensive because it requires time and the expertise of ergonomists.
Hence it is essential to have a good knowledge of the environment in order to determine
the frame of the study and the potential risks. This large range of tools explains the
diversity of the ergonomic data measurement methods in ergonomic-based job balancing
[12]. However, the complexity of line balancing algorithms taking such constraints
into account generally prevents managers from integrating such tools in their workshops.
Therefore, decision support tools might be developed in order to cope with this current
technological deadlock.
References
1. Koukoulaki, T.: The impact of lean production on musculoskeletal and psychosocial risks: an
examination of sociotechnical trends over 20 years. Appl. Ergon. 45, 198–212 (2014)
2. Antwi-Afari, M.F., Li, H., Edwards, D.J., Pärn, E.A., Seo, J., Wong, A.Y.L.: Biomechanical
analysis of risk factors for work-related musculoskeletal disorders during repetitive lifting
task in construction workers. Autom. Constr. 83, 41–47 (2017)
3. Bernard, B.P., Putz-Anderson, V.: Musculoskeletal disorders and workplace factors; a critical
review of epidemiologic evidence for work-related musculoskeletal disorders of the neck,
upper extremity, and low back, U.S. Department of Health and Human Services (1997)
4. Parot-Schinkel, E., Descatha, A., Ha, C., Petit, A., Leclerc, A., Roquelaure, Y.: Prevalence
of multisite musculoskeletal symptoms: a French cross-sectional working population-based
study. BMC Musculoskelet. Disord. 13, 122 (2012)
5. Bevan, S.: Economic impact of musculoskeletal disorders (MSDs) on work in Europe. Best
Pract. Res. Clin. Rheumatol. 29, 356–373 (2015)
6. Roux, C.H.: Impact of musculoskeletal disorders on quality of life: an inception cohort study.
Ann. Rheum. Dis. 64, 606–611 (2005)
7. David, G.C.: Ergonomic methods for assessing exposure to risk factors for work-related
musculoskeletal disorders. Occup. Med. 55, 190–199 (2005)
8. Van Tulder, M., Malmivaara, A., Koes, B.: Repetitive strain injury. Lancet 369, 1815–1822
(2007)
9. Muramatsu, R., Miyazaki, H., Ishii, K.: A successful application of job enlargement/enrichment at Toyota. IIE Trans. 19, 451–459 (1987)
10. Moussavi, S.E., Zare, M., Mahdjoub, M., Grunder, O.: Balancing high operator’s workload
through a new job rotation approach: application to an automotive assembly line. Int. J. Ind.
Ergon. 71, 136–144 (2019)
11. Sobhani, A., Wahab, M.I.M., Neumann, W.P.: Incorporating human factors-related perfor-
mance variation in optimizing a serial system. Eur. J. Oper. Res. 257, 69–83 (2017)
12. Otto, A., Battaïa, O.: Reducing physical ergonomic risks at assembly lines by line balancing
and job rotation: a survey. Comput. Ind. Eng. 111, 467–480 (2017)
13. Padula, R.S., Comper, M.L.C., Sparer, E.H., Dennerlein, J.T.: Job rotation designed to prevent
musculoskeletal disorders and control risk in manufacturing industries: a systematic review.
Appl. Ergon. 58, 386–397 (2017)
14. Grosse, E.H., Calzavara, M., Glock, C.H., Sgarbossa, F.: Incorporating human factors into
decision support models for production and logistics: current state of research. IFAC-PapersOnLine 50, 6900–6905 (2017)
15. Takala, E.-P., Pehkonen, I., Forsman, M., Hansson, G.-Å., Mathiassen, S.E., Neumann, W.P.,
Sjøgaard, G., Veiersted, K.B., Westgaard, R.H., Winkel, J.: Systematic evaluation of observa-
tional methods assessing biomechanical exposures at work. Scand. J. Work Environ. Health.
36, 3–24 (2010)
16. Chiasson, M.-È., Imbeau, D., Aubry, K., Delisle, A.: Comparing the results of eight methods
used to evaluate risk factors associated with musculoskeletal disorders. Int. J. Ind. Ergon. 42,
478–488 (2012)
17. McAtamney, L., Nigel Corlett, E.: RULA: a survey method for the investigation of work-
related upper limb disorders. Appl. Ergon. 24, 91–99 (1993)
18. Jaturanonda, C., Nanthavanij, S.: Heuristic Procedure for Two-Criterion Assembly Line Balancing Problem (2007). https://www.researchgate.net/publication/228366470_Heuristic_Procedure_for_Two-Criterion_Assembly_Line_Balancing_Problem
19. Bautista, J., Alfaro-Pozo, R., Batalla-García, C.: Maximizing comfort in assembly lines with
temporal, spatial and ergonomic attributes. Int. J. Comput. Intell. Syst. 9, 788–799 (2016)
20. Hignett, S., McAtamney, L.: Rapid Entire Body Assessment (REBA). Appl. Ergon. 31, 201–
205 (2000)
21. Yoon, S.-Y., Ko, J., Jung, M.-C.: A model for developing job rotation schedules that elimi-
nate sequential high workloads and minimize between-worker variability in cumulative daily
workloads: application to automotive assembly lines. Appl. Ergon. 55, 8–15 (2016)
22. Karhu, O., Kansi, P., Kuorinka, I.: Correcting working postures in industry: a practical method
for analysis. Appl. Ergon. 8, 199–201 (1977)
23. Hellig, T., Mertens, A., Brandl, C.: The interaction effect of working postures on muscle
activity and subjective discomfort during static working postures and its correlation with
OWAS. Int. J. Ind. Ergon. 68, 25–33 (2018)
24. Occhipinti, E.: OCRA: a concise index for the assessment of exposure to repetitive movements
of the upper limbs. Ergonomics 41, 1290–1311 (1998)
25. Boenzi, F., Digiesi, S., Facchini, F., Mummolo, G.: Ergonomic improvement through job rotations in repetitive manual tasks in case of limited specialization and differentiated ergonomic
requirements. IFAC-PapersOnLine 49, 1667–1672 (2016)
26. Otto, A., Scholl, A.: Reducing ergonomic risks by job rotation scheduling. OR Spectr. 35, 711–733
(2013)
27. Garg, A., Boda, S., Hegmann, K.T., et al.: The NIOSH Lifting Equation and Low-Back Pain,
Part 1, Human Factors, vol. 23 (2014)
28. Otto, A., Scholl, A.: Incorporating ergonomic risks into assembly line balancing. Eur.
J. Oper. Res. 212, 277–286 (2011)
29. Li, G., Buckle, P.: A practical method for the assessment of work-related musculoskeletal risks
- quick exposure check (QEC). In: Proceedings of the Human Factors Ergonomics Society
Meeting, vol. 42, pp. 1351–1355 (1998)
30. Moore, J.S., Garg, A.: The strain index: a proposed method to analyze jobs for risk of distal
upper extremity disorders. Am. Ind. Hyg. Assoc. J. 56(5), 443–458 (1995). https://doi.org/10.1080/15428119591016863
31. Yildirim, Y., Gunay, S., Karadibak, D.: Identifying factors associated with low back pain
among employees working at a package producing industry. J. Back. Musculoskelet. Rehabil.
27, 25–32 (2014)
32. Widanarko, B., Legg, S., Devereux, J., Stevenson, M.: Interaction between physical and
psychosocial risk factors on the presence of neck/shoulder symptoms and its consequences.
Ergonomics 58, 1507–1518 (2015)
33. Abubakar, M.I., Wang, Q.: Key human factors and their effects on human centered assembly
performance. Int. J. Ind. Ergon. 69, 48–57 (2019)
34. Bugajska, J., Żołnierczyk-Zreda, D., Jędryka-Góral, A., Gasik, R., Hildt-Ciupińska, K.,
Malińska, M., Bedyńska, S.: Psychological factors at work and musculoskeletal disorders: a
one year prospective study. Rheumatol. Int. 33, 2975–2983 (2013)
35. Barrero, L.H., Katz, J.N., Dennerlein, J.T.: Validity of self-reported mechanical demands
for occupational epidemiologic research of musculoskeletal disorders. Scand. J. Work
Environ. Health 35, 245–260 (2009)
36. Landau, K., Rademacher, H., Meschke, H., Winter, G., Schaub, K., Grasmueck, M., Moelbert,
I., Sommer, M., Schulze, J.: Musculoskeletal disorders in assembly jobs in the automotive
industry with special reference to age management aspects. Int. J. Ind. Ergon. 38, 561–576
(2008)
37. Menzel, N.N., Brooks, S.M., Bernard, T.E., Nelson, A.: The physical workload of nursing
personnel: Association with musculoskeletal discomfort. Int. J. Nurs. Stud. 41, 859–867
(2004)
38. Márquez Gómez, M.: Prediction of work-related musculoskeletal discomfort in the meat
processing industry using statistical models. Int. J. Ind. Ergon. 75, 102876 (2020)
39. Acaröz Candan, S., Sahin, U.K., Akoğlu, S.: The investigation of work-related musculoskele-
tal disorders among female workers in a hazelnut factory: Prevalence, working posture,
work-related and psychosocial factors. Int. J. Ind. Ergon. 74, 102838 (2019)
40. Kuorinka, I., Jonsson, B., Kilbom, A., Vinterberg, H., Biering-Sørensen, F., Andersson,
G., Jørgensen, K.: Standardised Nordic questionnaires for the analysis of musculoskeletal
symptoms. Appl. Ergon. 18, 233–237 (1987)
41. Karasek, R., Brisson, C., Kawakami, N., Houtman, I., Bongers, P., Amick, B.: The Job
Content Questionnaire (JCQ): An instrument for internationally comparative assessments
of psychosocial job characteristics. J. Occup. Health. Psychol. 3, 322–355 (1998)
42. Zins, M., Goldberg, M., CONSTANCES team: The French CONSTANCES population-based
cohort: design, inclusion and follow-up. Eur. J. Epidemiol. 30, 1317–1328 (2015)
43. Micheli, G.J.L., Marzorati, L.M.: Beyond OCRA: predictive UL-WMSD risk assessment for
safe assembly design. Int. J. Ind. Ergon. 65, 74–83 (2018)
44. Hu, B., Ma, L., Zhang, W., Salvendy, G., Chablat, D., Bennis, F.: Predicting real-world
ergonomic measurements by simulation in a virtual environment. Int. J. Ind. Ergon. 41, 64–71
(2011)
45. Oyekan, J., Prabhu, V., Tiwari, A., Baskaran, V., Burgess, M., McNally, R.: Remote real-
time collaboration through synchronous exchange of digitised human–workpiece interactions.
Future Gen. Comput. Syst. 67, 83–93 (2017)
46. Bortolini, M., Faccio, M., Gamberi, M., Pilati, F.: Motion Analysis System (MAS) for pro-
duction and ergonomics assessment in the manufacturing processes. Comput. Ind. Eng. 139,
105485 (2020)
Toward a Social Holonic Manufacturing Systems
Architecture Based on Industry 4.0 Assets
Abstract. For the last decade, the question of anthropocentric approaches has
made its way into research, and fully techno-centred approaches have
been questioned. The integration of social relationships between the components
of systems has already been identified as a crucial issue for the future development
of reference architectures. However, the current research lacks a global approach
based on both consideration of the human as an integrated agent of the system
and the use of social concepts to characterize inter-agents’ relationships. The
purpose of this paper is to offer an overview of these aspects as considered in
manufacturing control architectures, and to outline some guidelines for
revising the PROSA reference architecture.
Keywords: Industry of the future · MAS · HMS · CPS · IoT · Social approach ·
Human integration
1 Introduction
Over twenty years have passed since the proposition of the PROSA architecture
for Holonic Manufacturing Systems (HMS) [1]. Driven by the constant evolution of the
market and of technology (especially Information and Communication Technologies,
ICT), many agile, adaptive and reconfigurable architectures for HMS control have
emerged [2–9]. These architectures use various approaches: centralized,
decentralized, hybrid or product-driven.
These architectures have been classified by Cardin et al. [10] as “generic”, “multi-
agent oriented”, “holonic architectures’ extensions”, “service and cloud oriented” or
“dynamic”. According to the authors, data processing has also been integrated into
these systems, although some issues were insufficiently tackled. Among them are the
ability to adapt to unplanned issues, sustainability, data warehousing, and the integration
of the human in the loop. In short, these architectures remain mainly techno-centred.
Therefore, they are difficult to implement within actual industrial systems and their
adaptation to future ones might not be obvious.
Hence, future work should aim to answer the two following questions:
Industry 4.0 [13] as a technological vision burst out in 2011 at the Hanover Fair,
reflecting the effort that Germany was making to promote computerization in industry.
In 2013, the Final Report of the Working Group Industrie 4.0 [13] was submitted, identifying
several keys to successful implementation. While the concept was born in Germany,
many national initiatives are currently led across the world. We can especially mention
the USA’s “National Network for Manufacturing Innovation” (NNMI), the United
Kingdom’s “High Value Manufacturing Catapult” (HVMC), the South Korean “Manufacturing
Industry Innovation 3.0 Strategy”, the French “Industry of the Future” and the
Chinese “Made in China 2025” [14]. The innovative vision and concepts that all these
288 E. Valette et al.
initiatives brought have marked the beginning of what is now considered the fourth
industrial revolution, simply referred to as “Industry 4.0”.
Today’s literature mainly focuses on the technological aspects and developments
that will be needed to support the transition to future industrial systems. Artificial
Intelligence and all its facets (neural networks, Big Data, data mining and refining, Deep
Learning, etc.) might be the most widely known. These new technologies have
brought out two new paradigms: the Internet of Things (IoT) and Cyber-Physical
Systems (CPS). In what follows, we will focus on these two frameworks, which have already
been widely studied over the last 15 years. Our focus will be on the integration of human
factors, which is at the heart of Industry 4.0’s considerations.
The CPS paradigm is commonly recognized as the main pillar of Industry 4.0. Since
the popularity of this concept is rather new in the scientific world (first enunciated by
Lee in 2006 [15]), and because of the wide range of its potential applications despite
standardization attempts, its definition and limits are still fuzzy and unclear. In fact, the
term CPS is often associated with that of the Internet of Things (IoT), which appeared
in the 2000s [16]. Hence, IoT is the older concept, the term CPS appearing only some
seven years later.
For the Internet of Things, Madakam et al. [17] gave the following definition: “an open and
comprehensive network of intelligent objects that have the capacity to auto-organize,
share information, data, resources, reacting and acting in face of situations and changes
in the environment”. In this definition, the IoT is clearly considered a link between
physical objects within a system.
Cyber-Physical Systems were defined by Lee [15] as “physical and engineered sys-
tems whose operations are monitored, coordinated, controlled and integrated by a com-
puting and communication core. This intimate coupling between the cyber and physical
will be manifested from the nano-world to large-scale wide-area systems of systems. And
at multiple time-scales”. Here, the CPS concept is related to the notion of “coupling”
between physical and computational objects, and this is the definition we will stick to.
Considering the previous definitions, Bagheri and Lee’s conception seems adequate
to represent CPS and IoT as we see them: IoT links objects through horizontal
connectivity/synchronization, while CPS uses cloud and sensor connections to link physical
objects to their digital twin through vertical connectivity/synchronization (Fig. 1)
[18].
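This horizontal/vertical distinction can be sketched in a few lines: objects exchange messages among themselves (horizontal, IoT), while each object keeps its digital twin synchronized with its physical state (vertical, CPS). The class and attribute names below are illustrative assumptions, not part of [18].

```python
# Hedged sketch of the Fig. 1 distinction: horizontal object-to-object
# messaging (IoT) vs. vertical physical-to-twin synchronization (CPS).
# All names and the state fields are illustrative.

class DigitalTwin:
    def __init__(self):
        self.state = {}

    def sync(self, physical_state):        # vertical connectivity
        self.state = dict(physical_state)

class SmartObject:
    def __init__(self, name):
        self.name = name
        self.state = {"temperature": 20.0}
        self.twin = DigitalTwin()
        self.inbox = []

    def send(self, other, message):        # horizontal connectivity
        other.inbox.append((self.name, message))

    def update(self, **changes):
        self.state.update(changes)
        self.twin.sync(self.state)         # keep the cyber copy consistent

a, b = SmartObject("press"), SmartObject("robot")
a.update(temperature=75.0)
a.send(b, {"alert": "overheat"})
print(a.twin.state, b.inbox)
```

The point of the sketch is the separation of concerns: `send` never touches a twin, and `sync` never touches another object.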
El Haouzi [19] as well as Bordel and Alcarria [20] stated that the definition and use
of the terms CPS and IoT differ depending on the scientific community (mechatronic
engineering uses the term CPS) or the geographical area considered (America: CPS;
Europe and Asia: IoT). Hence, it is sometimes difficult to fully grasp an author’s purpose,
as their understanding and use of these terms might be unclear. For this reason, we will
not draw any dichotomy between CPS and IoT based on the previous elements in
Subsect. 2.3 and Sect. 3.
In the previous subsection, we presented CPS and IoT as pillars of Industry 4.0.
However, their industrial application is not obvious. The concepts of the Industrial Internet
of Things (IIoT), the Industrial Internet (II) and Cyber-Physical Production Systems (CPPS)
have emerged as implementations of IoT and CPS within industrial contexts. Schneider
[21] stated that the IoT can be seen as divided into two main subsets: the Consumer
IoT (CIoT) and the Industrial IoT (IIoT). The CIoT concerns the
connectivity of things around humans, while the IIoT is exclusively concerned with
the connectivity around industrial things.
IIoT is defined as “A system comprising networked smart objects, cyber-physical
assets, associated generic information technologies and optional cloud or edge comput-
ing platforms, which enable real-time, intelligent, and autonomous access, collection,
analysis, communications, and exchange of process, product and/or service informa-
tion, within the industrial environment, so as to optimize overall production value”. The
IIoT is then considered as a fully techno-centred application of the IoT’s concepts in the
restricted area of an industrial system. This approach is confirmed by Boyes et al. [22],
who define the functions of the IIoT as “to monitor, collect, exchange,
and analyse information so as to enable them to change their own behaviour, or else
instruct other devices to do so, without human intervention”.
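The monitor-analyse-act loop quoted above can be illustrated with a minimal sketch in which collected readings are analysed and actuation commands are issued without human intervention. The threshold and device behaviour are illustrative assumptions, not part of the definition in [22].

```python
# Hedged sketch of the IIoT functions quoted above: monitor, collect,
# analyse, and instruct devices autonomously. Threshold is an assumption.

def monitor(sensor_readings, threshold=80.0):
    """Analyse collected readings and return actuation commands."""
    commands = []
    for device, value in sensor_readings.items():
        if value > threshold:
            commands.append((device, "slow_down"))  # self-adjusting behaviour
    return commands

readings = {"spindle_1": 72.0, "spindle_2": 91.5}
print(monitor(readings))  # only the device above threshold is instructed
```

Note that no human appears anywhere in the loop, which is precisely the techno-centred character discussed next.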
Hence, authors like Schneider, Boyes or Gilchrist [21–23] put forward a
dichotomy between “human” and “thing” connectivity, humans being considered only
as the “customers” of connected industrial things. These reasonings differentiate
the human operator from other agents and accentuate the lack of human-oriented
considerations during the system’s design. This leads to systems where human agents
might face physical or mental overload, lowered situational awareness, etc., perturbing
the completion of their tasks and the global system itself. These issues are at the very
basis of the “Magic human” phenomenon [24] and are incompatible with Industry 4.0’s
considerations.
On the other side, Monostori’s development of CPS [25] presents the CPPS as an
interconnection of cooperative elements and subsystems “in situation-dependent ways,
on and across all levels of production, from processes through machines up to production
and logistics networks” that would be the enabler and support of communication
between “humans, machines and products alike”. In this conception, the human’s
integration into the system is implicit: human-machine symbiosis is even enunciated as one of
the future R&D challenges for CPPS. However, it has to be handled carefully, for too
strong a dependence of humans on the system could raise important issues [26]. In the
next section, we study some of the main approaches that have been initiated for
human integration.
3 Human Integration
3.1 Human’s Current Consideration
A little while before the advent of Industry 4.0, the lack of consideration of human
factors in the development of CPS led Wang [27] to promote the concept of the Cyber-
Physical Social System (CPSS). With CPSSs, he asserts the importance of integrating
human factors within systems. To achieve this integration, physiological,
psychological, social and mental spaces are considered alongside the cyber and physical
spaces [28, 29].
This concern has been shared by Schirner et al. [30] and Pirvu et al. [31], who have
respectively worked on the concepts of Human-In-The-Loop Cyber-Physical Systems
(HITL-CPS) and Anthropocentric Cyber-Physical Systems (A-CPS). The latest
development of these concepts is represented by Cimini et al.’s Social Human-In-The-Loop
Cyber-Physical Production System (Social-HITL-CPPS) [32].
A HITL-CPS consists of an embedded system enhancing a human being’s ability to
interact with their physical environment, while an A-CPS is defined as a reference architecture
integrating three components: physical, computational/cyber and human. The purpose
of these works is the integration of human factors into systems (mainly, but not exclusively,
industrial ones).
But even if the need to consider elements such as physiological, psychological, social and
mental aspects is recognized, a neat distinction is made between humans and the “things”
constitutive of the system. The human is considered a stranger needing to be integrated
into the system through interfaces, and thus stays distinct from other agents. These
approaches put forward technological developments to link the human to the system.
Concerning the Social HITL-CPPS, Cimini et al. [32] define humans as agents fully
integrated into the system. The authors have identified the interpretation of human
agents' behaviour and their coordination with other agents as the two main challenges
in the integration of humans into social environments (and not only manufacturing
ones). To answer these challenges, a three-layer architecture has been proposed. This
architecture connects, on the one hand, human users to the cyber part through user
interfaces and, on the other hand, physical parts (i.e. non-human agents and the
environment) to the cyber part through a network.
In all these approaches, human integration is achieved through human-machine or
human-system interfaces. Hence, they can be considered as techno-centred approaches
to human integration. While the term “social” is used here as a keyword to mark the
human-centred concerns of the authors, it can also refer to a completely different
conception.
Toward a Social Holonic Manufacturing Systems Architecture 291
In this vision, social relationships are established and exploited among things, but
not among their owners. The supportive architecture, which enables object-object
interactions and service and resource discovery in order to relieve humans from any
intervention, effectively excludes the human. Nevertheless, social relationships are an
interesting way to achieve a better integration of the human holon into the holonic
architecture, and to facilitate the system's acceptance by human operators.
We have thus detailed, on the one hand, techno-centred approaches for human integration
within manufacturing systems and, on the other hand, social approaches for the
integration of artefact agents. Our idea consists in exploiting the social concepts
enunciated by Atzori et al. [33] and associating them with the holonic reference
architecture PROSA for human integration.
1. Consideration and integration of human holons along with artefact ones. Scientific
   challenges will then arise concerning the way to model these holons - human or
   artefact - the consideration of physical, energetic or data transformations, as well
   as the consideration of human factors (physiological, psychological, etc.).
2. Consideration of the relationships between these holons as more than data exchanges,
   with other forms of relationships that will be able to govern this holarchy. For
   example, we can cite the service-providing or customer-supplier relationships usually
   found in Service Oriented Architectures (SOA) [39], the symbiosis relationships
   proposed by Monostori et al. [25], or other forms of social organization such as
   Fiske's Communal Sharing, Authority Ranking, Equality Matching and Market Pricing
   [36]. These will have to be defined and formalized in order to be tested on
   experimental CPS-based platforms. The nature of the relationship would, for example,
   give information about shared objectives, requirements, results, states, or actions.
3. Consideration of a recursive aspect of the holonic structure that goes beyond the
   simple composition/decomposition of a holon into other ones, and that takes the
   notion of social relationships into account. The same holon can belong to several
   different holons depending on the nature of the social relationships between them (an
   operator can belong to work shift X AND to workshop Y). This raises the issue of the
   formalization of these relationships within the framework of HMS and of the
   specification of their impact (for example on the nature of the information shared,
   trust, etc.).
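Points 1-3 above can be made concrete with a minimal sketch. The following Python fragment (all class, attribute and method names are illustrative assumptions, not a formalization from the paper) models holons whose memberships are typed by Fiske's four relational models, so that the same operator holon belongs to several holons depending on the nature of the social relationship:

```python
from dataclasses import dataclass, field
from enum import Enum

# Fiske's four elementary forms of sociality [36], used here as relationship types.
class Relation(Enum):
    COMMUNAL_SHARING = "communal sharing"
    AUTHORITY_RANKING = "authority ranking"
    EQUALITY_MATCHING = "equality matching"
    MARKET_PRICING = "market pricing"

@dataclass
class Holon:
    name: str
    # memberships: (parent holon, nature of the social relationship)
    memberships: list = field(default_factory=list)

    def join(self, parent: "Holon", relation: Relation) -> None:
        self.memberships.append((parent, relation))

    def parents_by(self, relation: Relation) -> list:
        # The holarchy a holon belongs to depends on the relationship nature.
        return [p.name for p, r in self.memberships if r is relation]

# The same operator holon belongs to work shift X AND to workshop Y,
# under two different social relationships.
operator = Holon("operator")
shift_x = Holon("work shift X")
workshop_y = Holon("workshop Y")
operator.join(shift_x, Relation.EQUALITY_MATCHING)
operator.join(workshop_y, Relation.AUTHORITY_RANKING)
```

Querying `operator.parents_by(...)` then returns different holarchies for different relationship types, which is exactly the recursion-beyond-composition property that point 3 asks to formalize.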
5 Conclusion
References
1. Brussel, H.V., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture for
holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998)
2. Morel, G., Panetto, H., Zaremba, M., Mayer, F.: Manufacturing enterprise control and man-
agement system engineering: paradigms and open issues. Annu. Rev. Control 27(2), 199–209
(2003). https://doi.org/10.1016/j.arcontrol.2003.09.003
3. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing
control. Comput. Ind. 57(2), 121–130 (2006). https://doi.org/10.1016/j.compind.2005.05.005
4. Verstraete, P., Germain, B.S., Valckenaers, P., Brussel, H.V., Belle, J.V., Hadeli, N.A.:
Engineering manufacturing control systems using PROSA and delegate MAS. Int. J.
Agent-Oriented Softw. Eng. 2(1), 62 (2008). https://doi.org/10.1504/IJAOSE.2008.016800
5. Pujo, P., Broissin, N., Ounnar, F.: PROSIS: An isoarchic structure for HMS control. Eng.
Appl. Artif. Intell. 22(7), 1034–1045 (2009). https://doi.org/10.1016/j.engappai.2009.01.011
6. Le Mortellec, A., Clarhaut, J., Sallez, Y., Berger, T., Trentesaux, D.: Embedded holonic fault
diagnosis of complex transportation systems. Eng. Appl. Artif. Intell. 26(1), 227–240 (2013).
https://doi.org/10.1016/j.engappai.2012.09.008
7. Pach, C., Berger, T., Bonte, T., Trentesaux, D.: ORCA-FMS: a dynamic architecture for the
optimized and reactive control of flexible manufacturing scheduling. Comput. Ind. 65(4),
706–720 (2014). https://doi.org/10.1016/j.compind.2014.02.005
8. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Dynamic self-organization in holonic multi-
agent manufacturing systems: the ADACOR evolution. Comput. Ind. 66, 99–111 (2015).
https://doi.org/10.1016/j.compind.2014.10.011
294 E. Valette et al.
9. Jimenez, J.-F., Bekrar, A., Zambrano-Rey, G., Trentesaux, D., Leitão, P.: Pollux: a dynamic
hybrid control architecture for flexible job shop systems. Int. J. Prod. Res. 55(15), 4229–4247
(2017). https://doi.org/10.1080/00207543.2016.1218087
10. Cardin, O., Derigent, W., Trentesaux, D.: Contribution des Architectures de Contrôle
Holoniques à l’Ind. 4.0, p. 9 (2018). https://hal.archives-ouvertes.fr/hal-01985716/document
11. Flemisch, F., Abbink, D., Itoh, M., Pacaux-Lemoine, M.-P., Weßel, G.: Shared control is the
sharp end of cooperation: towards a common framework of joint action, shared control and
human machine cooperation. IFAC-Papers 49(19), 72–77 (2016). https://doi.org/10.1016/j.
ifacol.2016.10.464
12. Valette, E., El-Haouzi, H.B., Demesure, G., Bou, V.: Toward an anthropocentric approach for
hybrid control architectures: case of a furniture factory. arXiv:1812.10395 (2018)
13. Acatech, Securing the future of German manufacturing industry: Recommendations for imple-
menting the strategic initiative INDUSTRIE 4.0 - Final report of the Industrie 4.0 Working
Group, German Academy of Science and Engineering, Germany, April 2013
14. Bidet-Mayer, T.: L’industrie du futur à travers le monde, Synthèses Fabr., no. 4, March 2016
15. Lee, E.A.: Cyber-Physical Systems - Are Computing Foundations Adequate? p. 10 (2006)
16. Ashton, K.: That “Internet of Things” Thing, RFID J. 1 (2009)
17. Madakam, S., Ramaswamy, R., Tripathi, S.: Internet of Things (IoT): a literature review. J.
Comput. Commun. 03(5), 164–173 (2015). https://doi.org/10.4236/jcc.2015.35021
18. Bagheri, B., Lee, J.: Big future for cyber-physical manufacturing systems, Design
World (2015). https://www.designworldonline.com/big-future-for-cyber-physical-manufactu
ring-systems/
19. El Haouzi, H.B.: Contribution à la conception et à l’évaluation des architectures de pilotage
des systèmes de production adaptables : vers une approche anthropocentrée pour la simulation
et le pilotage, Habilitation à diriger des recherches, Université de Lorraine (2017)
20. Bordel, B., Alcarria, R., Robles, T., Martín, D.: Cyber–physical systems: extending pervasive
sensing from control theory to the Internet of Things. Pervasive Mob. Comput. 40, 156–184
(2017). https://doi.org/10.1016/j.pmcj.2017.06.011
21. Schneider, S.: The Industrial Internet of Things (IIoT): applications and taxonomy. In: Geng,
H. (ed.) Internet of Things and Data Analytics Handbook, pp. 41–81. Wiley, Hoboken (2016)
22. Boyes, H., Hallaq, B., Cunningham, J., Watson, T.: The Industrial Internet of Things (IIoT): an
analysis framework. Comput. Ind. 101, 1–2 (2018). https://doi.org/10.1016/j.compind.2018.
04.015
23. Gilchrist, A.: IIoT reference architecture. In: Industry 4.0, pp. 65–86. Apress, Berkeley (2016)
24. Trentesaux, D., Millot, P.: A human-centred design to break the myth of the “Magic Human” in
intelligent manufacturing systems. In: Borangiu, T., Trentesaux, D., Thomas, A., McFarlane,
D. (eds.) Service Orientation in Holonic and Multi-agent Manufacturing, vol. 640, pp. 103–
113. Springer, Cham (2016)
25. Monostori, L.: Cyber-physical production systems: roots, expectations and R&D challenges.
Procedia CIRP 17, 9–13 (2014). https://doi.org/10.1016/j.procir.2014.03.115
26. Pacaux-Lemoine, M.-P., Trentesaux, D.: Ethical risks of human-machine symbiosis in Indus-
try 4.0: insights from the human-machine cooperation approach. IFAC-Pap. 52(19), 19–24
(2019). https://doi.org/10.1016/j.ifacol.2019.12.077
27. Wang, F.-Y.: The emergence of intelligent enterprises: from CPS to CPSS. IEEE Intell. Syst.
25(4), 85–88 (2010). https://doi.org/10.1109/MIS.2010.104
28. Liu, Z., Yang, D., Wen, D., Zhang, W., Mao, W.: Cyber-physical-social systems for command
and control. IEEE Intell. Syst. 26(4), 92–96 (2011). https://doi.org/10.1109/MIS.2011.69
29. Shi, X., Zhuge, H.: Cyber physical socio ecology. Concurr. Comput. Pract. Exp. 23(9), 972–
984 (2011). https://doi.org/10.1002/cpe.1625
30. Schirner, G., Erdogmus, D., Chowdhury, K., Padir, T.: The Future of Human-in-the-Loop
Cyber-Physical Systems, p. 10, January 2013
31. Pirvu, B.-C., Zamfirescu, C.-B., Gorecky, D.: Engineering insights from an anthropocentric
cyber-physical system: a case study for an assembly station. Mechatronics 34, 147–159 (2016).
https://doi.org/10.1016/j.mechatronics.2015.08.010
32. Cimini, C., Pirola, F., Pinto, R., Cavalieri, S.: A human-in-the-loop manufacturing control
architecture for the next generation of production systems. J. Manuf. Syst. 54, 258–271 (2020).
https://doi.org/10.1016/j.jmsy.2020.01.002
33. Atzori, L., Iera, A., Morabito, G.: SIoT: giving a Social Structure to the Internet of Things.
IEEE Commun. Lett. 15(11), 1193–1195 (2011). https://doi.org/10.1109/LCOMM.2011.090
911.111340
34. Guinard, D., Fischer, M., Trifa, V.: Sharing using social networks in a composable Web of
Things. In: 2010 8th IEEE International Conference on Pervasive Computing and Communi-
cations Workshops (PERCOM Workshops), Mannheim, Germany, March 2010, pp. 702–707
(2010). https://doi.org/10.1109/PERCOMW.2010.5470524
35. Mala, D.J. (ed.): Integrating the Internet of Things Into Software Engineering Practices. IGI
Global (2019)
36. Fiske, A.P.: The four elementary forms of sociality: framework for a unified theory of social
relations. Psychol. Rev. 99(4), 689–723 (1992)
37. Leuvennink, J., Kruger, K., Basson, A.: Architectures for human worker integration in holonic
manufacturing systems. In: Borangiu, T., Trentesaux, D., Thomas, A., Cavalieri, S. (eds.)
Service Orientation in Holonic and Multi-agent Manufacturing, vol. 803, pp. 133–144. Springer,
Cham (2019)
38. Valckenaers, P., Brussel, H.V.: Design for the Unexpected: From Holonic Manufacturing
Systems Towards a Humane Mechatronics Society. Butterworth-Heinemann (2015)
39. Indriago, C., Cardin, O., Rakoto, N., Castagna, P., Chacòn, E.: H2CM: a holonic architecture
for flexible hybrid control systems. Comput. Ind. 77, 15–28 (2016). https://doi.org/10.1016/j.
compind.2015.12.005
New Organizations Based on Human
Factors Integration in Industry 4.0
Interfacing with Humans in Factories
of the Future: Holonic Interface Services
for Ambient Intelligence Environments
Dale Sparrow, Nicole Taylor, Karel Kruger(B) , Anton Basson, and Anriëtte Bekker
1 Introduction
Industry 4.0 (I4.0) is a revolution in which developments in information and communi-
cation technology (ICT) are used to integrate and organize assets in a value chain. The
intended benefits of I4.0 are robustness, agility, and continuous improvement through
data analytics and prediction. I4.0 research has focused on digital and robotic assets
due to the digital nature of ICT [1–3]. Not much attention has been given to the human
aspect of I4.0, although many authors state the importance of designing human-centric
I4.0 systems [4].
Human workers are still unmatched in dexterity, flexibility, intelligence, and diversity
[5, 6]. The human role in manufacturing is increasing as a decision maker and strategist
and decreasing as a laborer, but problems still exist with the smooth integration of
human workers and their digital factory environment [7]. The objective of this paper is
to propose a method of interfacing with workers in factories of the future that facilitates
their interaction with digital management systems and machines.
In this paper, brief background on humans in I4.0 and trends in interface development
is given. The concepts of holonic interface services are discussed as promising means
to realize ambient intelligence environments (AmIEs). An architecture that manages
communication through the available interfacing services is then presented as part of a
digital administration shell for a human worker. Lastly, a case study is used to demonstrate
the interfacing component of the architecture and the effective use of holonic interface
services in an AmIE.
Exploring the matter of human integration, Rey, Carvalho and Trentesaux [8] report that
careful consideration needs to be given to the difference in how artificial systems and
humans interact. Pacaux-Lemoine et al. [9] discuss a human-centred approach to the
design of intelligent manufacturing systems - pointing out that modern manufacturing
systems must have human awareness, while keeping human decision making in the loop
at different levels of automation.
Peruzzini, Grandi and Pellicciari [10] identify that the integration of human interfaces
needs a human-centred design along with human factors engineering. Advanced
interfacing technologies are then considered as a key enabler for the I4.0 vision [11].
Many interfaces, however, restrict the user physically by requiring them to be in a
specific location or to wear cumbersome equipment, which negatively affects the
flexibility, dexterity, and mobility that humans have.
While identifying the new roles humans play in modern manufacturing systems, the
concept of Operator 4.0 has emerged [12]. Romero et al. [13] identified eight
augmentations for I4.0 operators. Requirements for interfacing with humans may be
extracted from the development of the Smarter Operator, an Operator 4.0 typology in
which the operator uses an intelligent personal assistant [14].
Humans are expected to play a larger part in decision making and problem solving, and
the balance of ability, authority, control, and responsibility is becoming more critical
and complex [15, 16]. Frameworks and models that try to address balancing these
complexities at a higher level have been created, such as the human-centred approach for
intelligent manufacturing systems [9]. These frameworks require platforms that allow
for flexible and robust connections between interchangeable components - especially
those between the digital components and humans, which this paper explores in detail.
The earliest applicable interface to a data processing machine would be that of
Babbage's Analytical Engine, conceived in the 1830s, where the interface was the
physical manipulation of cams, clutches, and other mechanical components. Ever since
then, computing density, ergonomics, technology, and hardware have improved to serve
two purposes:
• Increase the bandwidth of information flow between the human and the machine.
• Give the operator more physical, creative, and mental freedom.
Interfacing with Humans in Factories of the Future 301
Using fine motor skills, voice commands, and gesture detection improves the amount
of information humans can send to machines; hence, the invention of the keyboard,
mouse, game-pad, and now haptic gloves and language processors. Screens allowed
machines to utilize the highest bandwidth information delivery to humans – their eyes
– and improved with the introduction of virtual reality technology accompanied by the
use of speech synthesis and haptic feedback.
Up until the rise of I4.0, interface design was focused on being specific to a task or
machine. This allowed designers to tighten their scope and accommodate the machine’s
limits and optimize it to the human user, since the amount of communication between
the two was limited to the task they were cooperating on. I4.0 brings new challenges in
interface design as entities in an I4.0 environment are expected to work in a changing
environment, on changing products and services, while optimizing their processes.
Flemisch et al. [15] discussed the importance of balancing the four cornerstones of
human-machine cooperation: ability, authority, control and responsibility. This requires
that the available options of communication between human and machine be flexible
and adaptable depending on the situation. A bottleneck of information flow often arises
between humans and machines, which is likely to worsen with the increased complexity
of accommodating flexibility and adaptability.
As with other human-in-the-loop frameworks, this higher-level problem can only be
addressed when information is presented in more ergonomic ways and multiple channels
are available for capturing data from, and delivering data to, the human. This is
crucial for supporting real-time human decision making and achieving a successful
balance of ability, authority, control, and responsibility.
The Internet of Things, modern wearable interfaces, tablets, phones and smart envi-
ronments (e.g. with connected screens, projectors, cameras, lights, etc.) offer the required
flexibility and redundancy. However, configuring these interfaces to support HiLCPS
remains a challenge.
To address this challenge, humans must be supplemented by a digital administration
shell – a concept similar to a personal assistant in the form of a software robot, or softbot,
as developed by Rabelo, Romero and Zambiasi [14] that will elevate the human to CPS
level. The digital administration shell will need to select and use various interfaces, as
required by the location, pose, current activity, and attributes of a worker. To accommo-
date this functionality and information within a monolithic system could easily result in
unmanageable complexity. This paper thus proposes the use of holonic design principles
to achieve the required flexibility, reconfigurability, self-adaptation, and distribution that
will be needed.
This section briefly describes the concept of ambient intelligence environments (AmIEs)
and how these may support the freedom of movement, creativity, and personalization
that is demanded from modern human-machine interfaces (HMIs). Furthermore, HMIs
are categorized into personal and environmental interfaces.
302 D. Sparrow et al.
Personal interfaces are maintained by devices that belong to a specific human, for
either a given activity or general use, and can provide other systems or humans a direct
means of interfacing with that human. These interfaces can be customized and optimized
to fit the specific user and could be directly accessible by their digital administration
shell. Some examples of personal interfaces are those encountered in smart watches,
tablets, heart rate monitors, eye tracking devices, and cell phones.
Environmental interfaces, on the other hand, do not belong to a specific human.
Instead, environmental interfaces are used to gather data from, or present data to, a
specified environment humans are in. Examples of environmental interfaces are
closed-circuit television cameras, digital displays, floor path lights and speakers.
Table 1 highlights identified differences in how personal and environmental interfaces
would be used, but does not strictly segregate the functions.
This section describes the development of an AmIE through the use of holonic design
principles. The AmIE aims to improve user freedom, flexibility in communication,
and optimized information delivery. Furthermore, the use of holonic design principles
can support scalability, robustness, and self-organization of the interface components.
The section is structured according to the distinction made with regard to information
flow in an AmIE: information flow to, and from, the human.
Conveying information to human workers can be achieved through many channels, for
example by word of mouth, using mobile phones or even flashing lights. This section
describes a means of delivering information to the human with flexibility and robustness
through what will be called semiotic services.
the sign is presented. Signs, in the sense of semiotics, are not just pictorials and sym-
bols, but sounds, words, lights, or any stimulation through human senses that represent
some meaning to the human [22].
This multimodal interaction enables optimization and robustness of data delivery
since it allows equivalent information to be presented through different channels [1, 23].
For example, multimodal interaction could be achieved at a workstation by providing
instructions to a worker via a tablet (through text or sound) and an overhead projector
(by highlighting relevant areas of the workspace).
While screens, numerical displays, lights, and speakers are widely available
technologies that deliver information to the human senses, these technologies are
limited to a single modality and to close vicinity. Smart glasses and head-mounted
displays are a form of visually augmented reality that displays computer-generated
scenes, and have been demonstrated to facilitate training, stock management and
maintenance [24, 25].
The World Wide Web Consortium (W3C) standards for multimodal media applications
were created to consolidate information delivery to humans on different devices.
These standards could form the basis for expanding this ontology to work for other
environmental and personal interface types in manufacturing, and not just screen-based
media [26]. Another promising technology in this regard is the Resource Description
Framework, along with graph databases that can provide rich descriptions and
combinations of interface services to achieve specific goals, similar to the IoT-Lite
Ontology and the Manufacturing's Semantics Ontology (MASON) [27–29].
Holons Providing Semiotic Services. It is expected that I4.0 environments should be
capable of multimodal semiosis through the integration and utilization of interfacing
technologies. Should these environments be represented as holonic systems, these inter-
faces would be integrated as Interface Resource Holons providing a semiotic service.
Each Interface Holon will be specialized in its particular modality to optimize and
personalize the delivery of a requested piece of information.
Apart from the Resource Holon responsibilities presented by Valckenaers and Van
Brussel [30], holons providing semiotic services should also perform responsibilities
pertaining to:
• Owning, managing, and controlling its physical rendering component (e.g. screen,
speaker, projector, etc.); and
• Optimizing the information delivery using knowledge of its modalities, human
information processing, the type of data given to it, and needs of the targeted human.
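The responsibilities above can be sketched in a few lines of Python. This is an illustrative assumption, not the authors' implementation: the class, its fields, and the `provide_semiotic_service` method are hypothetical names standing in for an Interface Resource Holon specialized in one modality:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# An Interface Resource Holon specialized in one modality. It owns its
# physical rendering component (abstracted here as a render callback)
# and decides whether it can deliver a requested piece of information.
@dataclass
class InterfaceHolon:
    name: str
    modality: str                 # e.g. "visual", "auditory"
    render: Callable[[str], str]  # drives the physical component (screen, speaker, ...)

    def provide_semiotic_service(self, message: str,
                                 target_modality: str) -> Optional[str]:
        # A full holon would also optimize delivery using knowledge of human
        # information processing and the needs of the targeted worker.
        if self.modality != target_modality:
            return None  # this holon cannot serve the requested modality
        return self.render(message)

projector = InterfaceHolon("workstation projector", "visual",
                           lambda m: f"[overlay] {m}")
speaker = InterfaceHolon("speaker", "auditory",
                         lambda m: f"[speech] {m}")

# Only the holon matching the requested modality answers the request.
result = projector.provide_semiotic_service("Place ply 1", "visual")
```

In this sketch the rendering component is owned and controlled by the holon itself, matching the first responsibility, while the modality check is a placeholder for the optimization described in the second.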
Models of human information processing - which processing centres are suited to which
type of information, how to avoid routing two data streams through the same processing
centre, and the bandwidth limitations of each sensor/processing-centre pair - are well
documented in texts such as Human Factors for Engineering [32], Cybersemiotics [33],
various other research from psychology, linguistics and graphic design, and works such
as those by Kahneman (author of Thinking, Fast and Slow [34]). This means that a
digital administration shell, or a similar digital system accompanying the worker, can
decide which of the available services to choose based on its worker and the situation.
The State Blackboard. The SBB serves as a synchronous, single source of truth on the
Human Resource Holon’s (HRH) current state. The human’s current physical, mental,
and biological state is updated by a modular component called the Observer, discussed
next. The SBB also reflects the state of the world of interest (WOI), as specified by
the Activity-Resource-Type-Instance (ARTI) architecture [36]. This ensures any critical
data for execution and safety monitoring will be available to the components of the HRH
and can be communicated to external holons.
The Observer. The Observer is responsible for gathering information on the human
from any available observation services. When a particular variable, say position or
heart rate, needs to be known, the Observer finds and subscribes to services that can
provide this information.
The Informer. The Informer serves to deliver information to the human using available
semiotic services. It can make decisions (based on the human’s observed state) on which
combination of semiotic services would be most effective. Any request to communicate
information to the human, from internal or external components, asks the Informer
to deliver the data. This ensures that message filtering and prioritization to available
semiotic services can be intelligently handled.
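The interplay of the three components just described can be illustrated with a short Python sketch. All class and method names here are hypothetical stand-ins (the actual holons used Erlang and JavaScript): an Observer writes an observed state variable to the SBB, and the Informer reads that state to choose a delivery modality:

```python
class StateBlackboard:
    """Single source of truth on the human's current state."""
    def __init__(self):
        self._state = {}
    def update(self, variable, value):
        self._state[variable] = value
    def read(self, variable):
        return self._state.get(variable)

class Observer:
    """Gathers information on the human from available observation services."""
    def __init__(self, sbb, services):
        self.sbb = sbb
        self.services = services  # variable name -> observation service callable
    def observe(self, variable):
        # Query the service for the variable and note the result on the SBB.
        self.sbb.update(variable, self.services[variable]())

class Informer:
    """Delivers information to the human using available semiotic services."""
    def __init__(self, sbb, semiotic_services):
        self.sbb = sbb
        self.services = semiotic_services  # modality -> delivery callable
    def deliver(self, message):
        # Decide on a modality based on the human's observed state:
        # e.g. if the worker's hands are busy, prefer the audio channel.
        modality = "audio" if self.sbb.read("hands_busy") else "text"
        return self.services[modality](message)

sbb = StateBlackboard()
observer = Observer(sbb, {"hands_busy": lambda: True})
informer = Informer(sbb, {"text": lambda m: ("text", m),
                          "audio": lambda m: ("audio", m)})
observer.observe("hands_busy")
channel, msg = informer.deliver("Mix the resin")
```

The point of the sketch is the separation of concerns: external components never talk to interfaces directly, they only ask the Informer, which consults the SBB that the Observer keeps up to date.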
• Show that interfaces can be dynamically accessed based on their capabilities; and
• Show it is possible to make smart decisions on which interface services are chosen.
The case study required that a worker be guided through the steps of a composite layup
activity. The overseeing HRH-AS decided when to give the worker the next instruction or
correct the worker if a mistake was detected. However, the HRH-AS did not possess any
knowledge on the use of the interfaces available to its corresponding worker. The HRH-
AS subscribed to available Interface Holons in the environment that provided semiotic
services, which allowed it to communicate instructions through different channels and
modalities. Similarly, the HRH-AS required certain state variables of the human and
subscribed to the observation services of Interface Holons in the environment to obtain
the required information.
The case study utilized two types of semiotic services: workstation projection, which
provided a visual overlay on the surface of the workstation to indicate instructions and
offer guidance; and text notifications, which delivered information as text to the human.
Furthermore, the case study required the HRH-AS to observe multiple state variables
from the human worker – the observed variables are listed in Table 2.
Table 2. State variables observed from the human worker
• Worker pose - the position of the worker's hands and face while performing an activity
• Worker location - the physical location of the worker within the workstation
• Work area state - the state of components within the worker's workspace
• Worker's response to a question - if a question is posed to the worker, the response
  is observed and noted on the SBB
• Error state and associated information - notifications obtained from the worker that
  something is wrong during the activity, and their report on the error
The Tablet and Workstation Projector Holons used Erlang for their administration shells.
The administration shell component for the POSH was written in JavaScript and
ran on a Node.js server. Network components could connect to the administration shell as
either a worker or a client. A worker for the POSH could be any HTML5 enabled device
with camera capability. When a device connected as a worker, the worker code was
served by the administration shell and the device became the sensor (physical resource)
component of the POSH.
When a network component connected as a client, it opened a service subscription
contract for providing the information. This information could, for example, be pose or
location information. The rate at which the subscriber desires the information formed
part of the contract. Multiple subscribers could obtain data from the POSH (limited by
the hardware it is run on).
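Such a subscription contract can be sketched as a small data structure. The following Python fragment is an illustrative assumption (class and field names are hypothetical, not the POSH's actual JavaScript interface): the desired variable and update rate form part of the contract, and the service fans new values out to matching subscribers:

```python
from dataclasses import dataclass

# A client's service subscription contract with an observation service:
# the variable of interest and the rate at which the subscriber desires
# the information form part of the contract.
@dataclass
class SubscriptionContract:
    client: str
    variable: str   # e.g. "pose" or "location"
    rate_hz: float  # desired update rate

class ObservationService:
    def __init__(self):
        self.contracts = []

    def subscribe(self, contract: SubscriptionContract) -> None:
        self.contracts.append(contract)

    def publish(self, variable, value):
        # Deliver the new value to every subscriber of this variable;
        # a real service would also throttle to each contract's rate_hz.
        return [(c.client, value) for c in self.contracts
                if c.variable == variable]

posh = ObservationService()
posh.subscribe(SubscriptionContract("HRH-AS", "pose", rate_hz=5.0))
posh.subscribe(SubscriptionContract("logger", "location", rate_hz=1.0))
deliveries = posh.publish("pose", {"hands": "over workpiece"})
```

As in the case study, multiple subscribers can hold contracts with the same service, each receiving only the variables it asked for.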
Another Interface Holon, maintaining a personal interface in the form of a tablet,
hosted the semiotic and observation services listed in Table 3. A semiotic service, through
an environmental interface, was provided by the Workstation Projector Holon, which
allowed the rendering of text, images, and workstation overlays.
Figure 4 shows three different actions that demonstrate the developed interface services
in the activity workflow. In Fig. 4 (a) a question was rendered, asking the worker if the
activity could start. Since the tablet offered the combined services of text rendering
and binary-answer observation, it was chosen to render the question and observe the
answer. Although not developed here, the tablet's text-to-speech and speech recognition
capabilities could automatically have been substituted for, or used in parallel with,
the text rendering and button observation, depending on the state of the worker. For
example, if the worker was busy mixing a pot of resin and would not be able to touch
the screen, the HRH-AS would request that the audio modality be used instead.
When the user selected START on the dialog shown in Fig. 4 (a), the observation was
noted on the SBB by a process execution component of the HRH-AS and it requested
that the workstation projection service render the first instruction as an overlay (shown
in Fig. 4 (b) and (c)). The HRH-AS subscribed to the pose observation service and the
workstation state observation service, which made the “worker pose” and “work area
state” state variables available on the SBB. The HRH-AS could make decisions, based on
this information, on how to render instructions. Both the Tablet Holon and Workstation
Projector Holon were chosen to display the textual part of the work instruction.
Fig. 4. Three actions of a case study activity shown with their interface services in action
Acknowledgements. Funding from the National Research Foundation (NRF) through the South
African National Antarctic Programme (SANAP Grant No. 110737) is gratefully acknowledged.
References
1. Baheti, R., Gill, H.: Cyber-physical systems. In: The Impact of Control Technology, pp. 161–166
(2011)
2. Geissbauer, R., Vedso, J., Schrauf, S.: Industry 4.0: Building the Digital Enterprise. PwC
Global Industry 4.0 Survey Report (2016)
3. Schroeder, G.N., Steinmetz, C., Pereira, C.E., Espindola, D.B.: Digital Twin data mod-
eling with AutomationML and a communication methodology for data exchange. IFAC-
PapersOnLine. 49(30), 12–17 (2016)
4. Burns, M., Manganelli, J., Wollman, D., Laurids Boring, R., Gilbert, S., Griffor, E., Lee, Y.C.,
Nathan-Roberts, D., Smith-Jackson, T.: Elaborating the human aspect of the NIST framework
for cyber-physical systems. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 62(1), 450–454
(2018)
5. Rother, M.: Toyota Kata: Managing People for Improvement, Adaptiveness and Superior
Results. McGraw-Hill Education, New York (2010)
6. Loveday, S.: BMW Comments On Tesla Model 3 Production Woes. https://insideevs.com/
bmw-comments-on-tesla-model-3-production-woes/
7. Rauch, E., Linder, C., Dallasega, P.: Anthropocentric perspective of production before and
within Industry 4.0. Comput. Ind. Eng. 139, p. 105644 (2020)
8. Rey, G. Z., Carvalho, M., Trentesaux, D.: Cooperation models between humans and artifi-
cial self-organizing systems: motivations, issues and perspectives. In: Proceedings of the 6th
International Symposium on Resilient Control Systems, ISRCS 2013, pp. 156–161 (2013)
9. Pacaux-Lemoine, M.P., Trentesaux, D., Rey, G.Z., Millot, P.: Designing intelligent manufac-
turing systems through human-machine cooperation principles: a human-centred approach.
Comput. Ind. Eng. 111, 581–595 (2017)
10. Peruzzini, M., Grandi, F., Pellicciari, M.: Exploring the Potential of Operator 4.0 Interface
and Monitoring. Comput. Ind. Eng. 139, p. 105600 (2019)
11. Posada, J., Toro, C., Barandiaran, I., Oyarzun, D., Stricker, D., De Amicis, R., Pinto, E. B.,
Eisert, P., Döllner, J., Vallarino, I.: Visual computing as a key enabling technology for industrie
4.0 and industrial internet. IEEE Comput. Graph. Appl. 35(2), 26–40 (2015)
12. Romero, D., Bernus, P., Noran, O., Stahre, J., Fast-Berglund, A.: The operator 4.0: human
cyber-physical systems and adaptive automation towards human-automation symbiosis work
systems. In: Proceedings of the International Federation for Information Processing on
Advances in Production Management Systems, pp. 677–686 (2016)
13. Romero, D., Stahre, J., Wuest, T., Noran, O., Bernus, P., Fast-Berglund, A., Gorecky, D.:
Towards an operator 4.0 typology: a human-centric perspective on the fourth industrial rev-
olution technologies. In: Proceedings of the International Conference on Computers and
Industrial Engineering vol. 46, pp. 1–11 (2016)
14. Rabelo, R.J., Romero, D., Zambiasi, S.P.: Softbots supporting the operator 4.0 at smart factory
Environments. Adv. Prod. Manage. Syst. 2, 456–464 (2018)
15. Flemisch, F., Heesen, M., Hesse, T., Kelsch, J., Schieben, A., Beller, J.: Towards a dynamic
balance between humans and automation: authority, ability, responsibility and control in
shared and cooperative control situations. Cogn. Technol. Work 14(1), 3–18 (2012)
16. Jirgl, M., Bradac, Z., Fiedler, P.: Human-in-the-Loop issue in context of the cyber physical
systems. Int. Fed. Autom. Control PapersOnLine 51(6), 225–230 (2018)
17. Weiser, M.: The computer for the 21st century. Sci. Am. 265(3), 94–104 (1991)
18. Weiser, M., Gold, R., Brown, J.S.: The origins of ubiquitous computing research at PARC in
the late 1980s. IBM Syst. J. 38(4), 693–696 (1999)
19. Friedewald, M., Raabe, O.: Ubiquitous computing: an overview of technology impacts.
Telematics Inform. 28, 55–65 (2011)
312 D. Sparrow et al.
20. IST Advisory Group (ISTAG): Ambient Intelligence: From Vision to Reality, ISTAG Draft
Consolidated Report (2003)
21. Riva, G., Vatalaro, F., Davide, F., Alcañiz, M.: Ambient Intelligence. IOS Press, Amsterdam
(2005)
22. Bains, P.: The Primacy of Semiosis: An Ontology of Relations. University of Toronto Press,
Toronto (2006)
23. Thiran, J., Marques, F., Bourlard, H.: Multimodal Signal Processing: Theory and Applications
for Human-Computer Interaction. Academic Press, San Diego (2010)
24. Peden, R.G., Mercer, R., Tatham, A.J.: The use of head-mounted display eyeglasses for
teaching surgical skills: a prospective randomized study. Int. J. Surg. 34, 169–173 (2016)
25. Quint, F., Loch, F.: Using smart glasses to document maintenance processes. In: Weisbecker,
A., Burmester, M., Schmidt, A. (eds.) Humans and Computers 2015 - Workshop, pp. 203–208.
De Gruyter Oldenbourg, Stuttgart (2015)
26. W3C Multimodal Interaction Working Group, https://www.w3.org/
27. Abbas, A., Privat, G.: Bridging property graphs and rdf for iot information management.
In: Proceedings of the International Workshop on Scalable Semantic Web Knowledge Base
Systems, pp. 77–92 (2018)
28. Bermudez-Edo, M., Elsaleh, T., Barnaghi, P., Taylor, K.: IoT-lite ontology: a lightweight
semantic model for the internet of things. In: Proceedings of the IEEE Conferences on Ubiq-
uitous Intelligence and Computing, Advanced and Trusted Computing, Scalable Computing
and Communications, Cloud and Big Data Computing, Internet of People, and Smart World
Congress, pp. 90–97 (2016)
29. Lemaignan, S., Siadat, A., Dantan, J., Siemenenko, A.: MASON: a proposal for an ontology
of manufacturing Domain. In: Proceedings of the IEEE Workshop on Distributed Intelligent
Systems: Collective Intelligence and its Applications, pp. 195–200 (2006)
30. Valckenaers, P., Van Brussel, H.: Design for the Unexpected: From Holonic Manufacturing
Systems towards a Humane Mechatronics Society. Butterworth-Heinemann, Waltham (2016)
31. Wickens, C.: Multiple Resources and Mental Workload. Hum. Factors 50(3), 449–455 (2008)
32. Sanders, M.S., McCormick, E.J.: Human Factors in Engineering and Design, 7th edn.
McGraw-Hill, New York (1993)
33. Brier, S.: Cybersemiotics. University of Toronto Press, Toronto (2008)
34. Kahneman, D.: Thinking Fast and Slow. Penguin Books, London (2012)
35. Sparrow, D. E., Kruger, K., Basson, A. H.: An architecture for the integration of human
workers into an industry 4.0 environment. Submitted to the International Journal of Production
Research (2020)
36. Valckenaers, P.: ARTI reference architecture – PROSA revisited. In: Borangiu, T., Trente-
saux, D., Thomas, A., Cavalieri, S. (eds.) Service Orientation in Holonic and Multi-Agent
Manufacturing. SOHOMA 2018. Studies in Computational Intelligence, vol. 803, pp. 1–19.
Springer, Cham (2019)
A Benchmarking Platform for Human-Machine
Cooperation in Cyber-Physical Manufacturing
Systems
christine.chauvin@univ-ubs.fr
1 Introduction
On one side, the fourth industrial revolution is an opportunity to rework the foundations
of production systems, leading to heterogeneous developments based on a growing set of
new technologies and on models and methods aiming to integrate humans into future
production systems. In this paper, we consider the case of cyber-physical manufacturing
systems (CPMS) [1] interacting with humans, as a class of Cyber-Physical
Production Systems [2]. Cyber-Physical Systems (CPSs) are “systems of collaborating
computational entities which are in intensive connection with the surrounding physical
world and its on-going processes, providing and using, at the same time, data-accessing
and data-processing services available on the internet” [3]. Cyber-Physical Production
Systems “consist of autonomous and cooperative elements and sub-systems that are
getting into connection with each other in situation dependent ways, on and across all
levels of production, from processes through machines up to production and logistics
networks” [3].
On the other side, when it comes to testing these new ideas, one must design and use
a platform to demonstrate or benchmark the results. Demonstration platforms are made
to show the feasibility of an idea and its benefits compared to a reference approach,
while benchmarking platforms are made to compare a contribution with others. Referring
to the work of Trentesaux et al. [4], “benchmarking is comparing the output of different
systems for a given set of input data in order to improve the system’s performance”.
Benchmarking is a critical aspect of CPMS development, since research in CPMS has
still not led to highly mature off-the-shelf solutions that can be applied in
real industrial systems.
Developing a benchmarking platform is a common approach in system engineering
to evaluate contributions, and this also holds true in production engineering and for
CPMS. The scope of these platforms can differ depending on the research topic and the
technologies involved. Two main families of platforms are observed: 1) the development
of a technology and 2) the integration of a contribution in a system.
The development of a specific technology is generally supported by a subject-centric
platform, designed as a proof of concept [5] or designed to compare the results with
and without a suggested contribution [6]. This usually means that the scope of the
platform is limited to the scope of the technology. For example, cobotic systems focus
on human-machine cooperation and, as such, most developments neglect
the manufacturing system itself [7, 8]. On the positive side, these contributions usually
provide some of the latest developments in the application domain. However, some
technologies, such as big data, require strong simplifications, for example the use of a
data generator [9]. This may cause serious issues with the integration of humans, as such
generators can hardly reproduce the behaviour of human activity in the context of CPMS.
In [10, 11] a variety of advanced sensors are integrated in a platform and tested for both
sensor and control architecture development, but the platform remains very specific.
When addressing the integration of a contribution in a system, the scope of the plat-
form is usually quite large by design, as it is important to represent the variety of
interactions and events. “Integration” can be technology-to-technology, human-to-
technology, or human-to-human. A typical example concerns the challenge of commu-
nication in a system where multiple communication protocols coexist [12]. The platform
presented in [12] tries to represent as faithfully as possible what is expected of the commu-
nications in Industry 4.0. In [4] the benchmarking platform is used in the development
of self-organizing entities, and most of the work focusses on the control of the system.
These research efforts tend to model systems as realistically as possible, since simplifications
may hinder the evaluation of the contribution.
Let us detail two illustrative benchmarks that will be further exploited throughout this
paper. First, multiple platforms have been developed [4, 6, 13, 14], cf. Fig. 1, based
on the same academic but realistic production cells. These platforms are accompanied
by both an emulator and a simulator [14] developed in-house. The simulator was itself
used in the benchmarking project Bench4Star. This platform embeds the principle of
autonomous entities with autonomous shuttles, using intelligent products
and potential fields to schedule the production. Since a digitalized system such as the
simulator uses a time reference (clock signal) to trigger the system’s evolution, it is
possible to alter the evolution speed of the system. In a real-time scenario, the frequency
of the clock signal is adapted to the processing timestep so that the perceived evolution
speed corresponds to that of the real system. However, it is possible to obtain results
faster by using a higher-frequency clock signal. If this principle is applied to a new instance
initialized with the current state of the system, it provides a projection of the system's
future state, but interaction with the human becomes unfeasible. Furthermore, if some
parameters are modified during the initialization, this principle can be used to perform
virtual commissioning [15], i.e., the simulation of a system modification before its
physical implementation.
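To make the clock-scaling principle concrete, here is a minimal Python sketch (names such as `CellSimulator` are hypothetical illustrations, not the actual Arezzo implementation): the evolution speed is decoupled from real time, and the state can be cloned into a new instance to project the near future without disturbing the live run.

```python
import dataclasses


@dataclasses.dataclass
class CellSimulator:
    """Toy discrete-time simulator driven by a configurable clock."""
    state: float = 0.0   # e.g., accumulated production (arbitrary units)
    rate: float = 1.0    # units produced per simulated second

    def step(self, dt: float) -> None:
        self.state += self.rate * dt

    def run(self, sim_seconds: float, speedup: float = 1.0, dt: float = 0.1) -> float:
        """Advance the simulation by 'sim_seconds' of simulated time.

        In a real-time scenario the loop would sleep dt / speedup between
        steps; with speedup > 1 results arrive faster than real time.
        The sleep is omitted so this sketch runs instantly.
        """
        for _ in range(int(sim_seconds / dt)):
            self.step(dt)
        return self.state


def project_future(current: CellSimulator, horizon_s: float) -> float:
    """Clone the current state and fast-forward the clone to forecast the
    future, leaving the live simulator (and human interaction) untouched."""
    clone = CellSimulator(state=current.state, rate=current.rate)
    return clone.run(horizon_s, speedup=1000.0)
```

Initializing the clone with modified parameters instead of a plain copy would correspond to the virtual-commissioning use of the same mechanism.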
Fig. 1. S.MART flexible cell of Valenciennes (left), its simulator (centre) and SUCRé project
AGVs (right)
The second platform was developed during the SUCRé project [16, 17], in which a
fleet of ground robots is autonomous or remotely operated in the context of emergency
response. The study focused on human-machine cooperation through system analysis
and the adaptation of autonomy levels. The fleet of ground robots can thus be considered
as automated guided vehicles (AGVs) that allow human operator interaction at
three degrees, selected at the operator's discretion. The availability of video feedback
from the robots enables environment monitoring and robot tracking
independently of the environment itself. The fleet of AGVs is tasked with logistic
operations, which can be summarized as taking load X from point A and delivering it to
point B in the open environment of a flexible cell. This platform was first adapted
and applied to another project [18], and then to the manufacturing context for
the ANR HUMANISM project.
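The operator-selectable degrees of interaction could be sketched as follows (a hypothetical Python illustration; the SUCRé robots' actual control software is in C++ and is not reproduced here):

```python
from enum import Enum


class Autonomy(Enum):
    MANUAL = 1       # operator tele-operates the vehicle directly
    SHARED = 2       # operator sets goals, vehicle handles local motion
    AUTONOMOUS = 3   # vehicle plans and executes missions alone


class AGV:
    def __init__(self, name: str) -> None:
        self.name = name
        self.autonomy = Autonomy.AUTONOMOUS

    def set_autonomy(self, level: Autonomy) -> None:
        """Degree of interaction, changeable at the operator's discretion."""
        self.autonomy = level

    def handle_command(self, command: str) -> str:
        # A fully autonomous vehicle declines direct motion commands;
        # lowering the autonomy level hands control back to the operator.
        if self.autonomy is Autonomy.AUTONOMOUS:
            return f"{self.name}: ignoring '{command}', running autonomously"
        return f"{self.name}: executing '{command}'"
```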
Linked to the previously introduced types of platform, some publications point out
the importance of human interactions with the system and encourage such develop-
ments, requiring a benchmarking system able to take the human factor into consideration,
for example [19, 20]. Related to the human interaction aspect, multiple contributions are
now gathered under the term Operator 4.0 [21]. The technologies at the disposal of human
operators are studied and developed in order to improve the skills and communications
of human operators. From those studies, new opportunities are observed regarding human
integration and work organisation. Meanwhile, few contributions have actually had the oppor-
tunity to test their ideas at a global scale. This aspect is important, as the idea of Operator
4.0 leads to a new perception of the human operator’s place in the manufacturing system.
From this short overview, one can note that most of the time (the SUCRé platform
being an exception), benchmarking platforms developed in research are limited to spe-
cific applications and are not designed to be used in other contexts. Moreover, these
platforms hardly consider the evaluation of the quality and consistency of the inte-
gration of the human (supervisor, operator, maintainer…) in future industrial systems.
316 Q. Berdal et al.
As a consequence, the aim of this paper is to suggest and specify a reusable benchmark-
ing platform intended to evaluate the integration of the human in future industrial
systems, with a focus on CPMS. The reuse of the SUCRé platform described above
motivated the authors to suggest a more global approach to the design of human-aware
CPMS benchmarking platforms. In the context of our research, we took the initiative
to specify a benchmarking platform in which both emerging Industry 4.0 technologies
and the human can be integrated and evaluated together.
may include multiple factors depending on the domain and company policy. Further-
more, the inclusion of human operators brings the need to measure data that are not
relevant to the technical system itself (e.g., stress, situation awareness, mental workload).
It is however important to note that not all relevant human data are obtainable
from the platform itself; some may require pre- and post-experimentation questionnaires
to collect subjective data. Obviously, the platform must be designed so that at least
objective data can be collected, in order to adapt to any situation. In addition, the
platform must support the integration of any new sensor an experiment team needs to
record relevant data.
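One way to satisfy this pluggable-sensor requirement is a collector to which any reader can be attached at run time. The Python sketch below is purely illustrative (the class and names are our own) and assumes nothing about the platform's real sensor API:

```python
from typing import Callable, Dict, List, Tuple


class DataCollector:
    """Hub for objective measurements; new sensors plug in without
    modifying the platform itself."""

    def __init__(self) -> None:
        self.sensors: List[Tuple[str, Callable[[], object]]] = []

    def attach(self, name: str, read_fn: Callable[[], object]) -> None:
        # 'read_fn' is any zero-argument callable returning a measurement,
        # so an experiment team can wrap whatever hardware it brings.
        self.sensors.append((name, read_fn))

    def sample(self) -> Dict[str, object]:
        """Read every attached sensor once, keyed by sensor name."""
        return {name: read() for name, read in self.sensors}
```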
If desired in the scenarios, the platform must enable the occurrence of perturbations
(pre-defined, for example for reproducibility purposes, or not). Typical perturbations
concern internal or external events, affecting either resources (e.g., failure of a component) or
control (e.g., wrong execution of a command, urgent order). The evaluation of a fail-
ure’s impact is important when working with humans, in order to evaluate possible chaotic
behaviours in industrial systems (e.g., the butterfly effect). It is also important to monitor
human behaviour when facing the complexity of the unexpected. Note
that in automatic control it is compulsory to perturb a system in order to identify it correctly,
with sufficient information and data. The same principle applies here, so perturbations are of
great importance to understand how and why humans decide and act.
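Reproducible perturbations can, for instance, be pre-computed from a seeded random generator, as in this hypothetical sketch (the event names and parameters are illustrative, not the platform's actual fault model):

```python
import random

# Illustrative perturbation types, echoing the examples in the text.
PERTURBATIONS = ["component_failure", "wrong_command", "urgent_order"]


def make_perturbation_schedule(seed: int, horizon_s: float, mean_gap_s: float):
    """Pre-compute perturbation times so a scenario can be replayed exactly."""
    rng = random.Random(seed)        # fixed seed => reproducible experiment
    events, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mean_gap_s)   # Poisson-like arrivals
        if t > horizon_s:
            break
        events.append((round(t, 1), rng.choice(PERTURBATIONS)))
    return events
```

Two runs with the same seed then expose every participant to exactly the same sequence of failures, which is what makes inter-participant comparison meaningful.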
Simulation technologies are important when considering such complex socio-
technical systems, and the platform gains from this technology. Indeed, simulations
are used to select the best course of action depending on the current situation. The
complexity of the industrial system and its causal chain makes it difficult to use off-line
optimization algorithms in real time. Without such simulations, it becomes difficult to
understand the effect of a decision or an action on the industrial system. Typically,
two types of simulation can be developed: an emulation of the industrial system, to avoid
using a real one and to speed up tests, and a simulation used as a forecasting tool to test dif-
ferent strategies before choosing one. Obviously, a platform can merge these two types
of simulation, or can even hybridize simulated elements and real ones. In this context,
the use of a digital twin is suggested, as depicted in Fig. 2.
To reach this objective, the work started with what we had at our disposal: two standalone
platforms previously built for other projects, described in the introduction (see Fig. 1).
Each platform targeted a different part of the system: one the logistics and one
the production.
This platform has been elaborated from the specifications described in the previous
section. The resulting platform is depicted in Fig. 3. It is built on a digital twin
integrating a digital shadow of an industrial system (in our experiments, the S.MART
flexible cell of Valenciennes, using Arezzo [14]), updated and used on purpose when
required by the human to test strategies. Such a digital twin handles information about
the S.MART cell and a means to control it, and is thus closely related to multiple other
technologies involved in Industry 4.0. Control is performed using potential fields
emitted by the robots to attract the shuttles conveying products.
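The potential-field principle can be illustrated as follows (a simplified Python sketch with hypothetical names; the real Arezzo control logic is more involved): each robot emits a field that decays with distance, and a shuttle heads for the robot whose field it perceives most strongly.

```python
import math


def attraction(robot_pos, shuttle_pos, field_strength):
    """Potential emitted by a robot, decaying with distance to the shuttle."""
    return field_strength / (1.0 + math.dist(robot_pos, shuttle_pos))


def choose_target(shuttle_pos, robots):
    """Route the shuttle toward the robot whose field is strongest.

    'robots' maps a robot name to (position, emitted field strength);
    for example, an idle robot may emit a stronger field to attract work,
    which is how the field mechanism schedules production dynamically.
    """
    return max(robots,
               key=lambda r: attraction(robots[r][0], shuttle_pos, robots[r][1]))
```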
Fig. 4. The physical reconstruction of the cell using the Arezzo emulator
In the digital twin, a shadow of the emulator, running off-line and used on demand from
the initial real-time emulator, projects the system state into the near future.
Figure 5 depicts the overall technical implementation of the platform. The ground
robots’ legacy control system was developed in C++. The experiment team is tasked
with validating the AGVs’ physical position before allowing commands such as
loading or unloading products. The Arezzo emulator of the S.MART cell was developed
in NetLogo and interfaced using Java. As all data and interactions are digitalised, a
server running with the platform and exporting part of the interactions is added. As an
example, production and logistic planning is performed using a web page. The server
sends updates to the interface using only raw data, leaving the graphical representation
and data exploitation to the remote client. A benefit of such an architecture, apart from
the decreased processing pressure on the platform, is that human operators working
on the platform are not bound to its physical location, and any device
with network capability can be used. In addition, user-specific interfaces and interaction
means can be considered without impacting the existing ones. However, in our case, only
the experiment team has full remote access to the platform, with the ability to interact
with most of the system’s entities. This network enables an architecture reproducing the
capability of cloud platforms, even if it is quite limited in this case. Furthermore, the
communication is designed to use standard protocols and embeds, on initialization, a list
of available functionalities. This enables the insertion of new elements and the dynamic
linking to new functionalities.
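The announce-functionalities-on-initialization idea can be sketched like this (the message format and names are hypothetical; the paper does not specify the platform's actual protocol): each entity sends a handshake listing its functions, and the server builds a directory that new clients can query.

```python
import json


class Registry:
    """Server-side directory of entity functionalities, built at connect time."""

    def __init__(self):
        self.services = {}   # entity name -> set of advertised functions

    def register(self, hello_msg: str) -> None:
        # Each entity announces its functions on initialization, which allows
        # new elements to be inserted and linked dynamically.
        msg = json.loads(hello_msg)
        self.services[msg["entity"]] = set(msg["functions"])

    def who_provides(self, function: str) -> list:
        """Return (sorted) every entity advertising the given function."""
        return sorted(e for e, fns in self.services.items() if function in fns)


# Hypothetical handshake messages:
reg = Registry()
reg.register(json.dumps({"entity": "agv-1", "functions": ["move", "load", "unload"]}))
reg.register(json.dumps({"entity": "shuttle-3", "functions": ["move"]}))
```

Because only raw, self-describing data crosses the network, a remote interface can bind to whatever functions are advertised without the platform knowing about that interface in advance.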
Since the user interface is largely independent of the digital twin, the experiment
team does not need to modify the digital twin itself unless a very specific new function
needs to be integrated. This separates the requirements of the experiment from those of
the benchmarking platform itself, avoiding the development of new versions of the
platform for each new research project. This principle eases work sharing between
research teams, the reproducibility of research, and the expansion of the platform toward
new requirements.
Figure 6 shows, for illustration purposes, the planning interface for the human
supervisor (tactical level). The planning is displayed in a web browser and, as
such, is completely independent of the system. The platform itself is not affected by
changes in the interfaces, as the server only serves the interfaces and the data on demand.
Work on human factors also benefits from the accessibility of the system, which
completes the “vertical integration”. Every part of the platform, whether related to the
operational, tactical, or strategic level of operation, can cooperate with a
human operator through a set of available functions and data feeds. This can be observed
in Fig. 6, where one of the remote interfaces gathers all functions related to the tactical
level of production and logistic planning. The functionalities available so far are
basic, such as adding a product to the production queue or changing the launching order.
For that purpose, the interface provides a graphical representation of product ordering and
of real-time system capabilities in order to help the human operator. All functionalities of
the system are exposed and can be exploited from any connected application.
In the current configuration for our research in HUMANISM, the human operator
is in charge of both production and logistic supervision, operating mainly at the tactical
and operational levels. The human operator has direct control over most actors of
the system, be they the conveying shuttles or the AGVs, and can adapt their autonomy
depending on the situation. For example, in a nominal situation, the human operator can
give high autonomy to the AGVs in order to benefit from their pathfinding algorithms,
while taking back control when a problem such as a physical obstacle occurs. However,
the system is not limited to this configuration, and one may think of other configurations
to test on our platform.
As specified, the key data to record in this situation are the following: objective
data (e.g., events relevant to production) linked to the desired KPIs (production and energy
in the HUMANISM project), and subjective data (e.g., relevant to the interaction with the
human operator, or a measure of his/her mental workload complemented by face movement
tracking). Concerning objective data, the energy consumption and activity
times are collected for each individual piece of equipment (robot, shuttle…) over time. The
results are then completed by the logging of finished products and the history of every
major event (such as a machine breakdown). Regarding the human-system interaction, every
action triggered by the operator results in a log entry recording the element he/she
interacted with, the communication that occurred in the case of remote interfaces, and the
action implemented, in such a way that the chain of events can be replayed later. These data are
complemented by video recordings of both the human operator (front and shoulder views)
and the physical platform (from two angles), together with face tracking of the human
operator. This is important in order to know, for each situation, which element attracted
the attention of the human operator and which action was decided and implemented. In
the current state of the platform, some sensors, such as the cameras and face tracking systems,
are decoupled from the system so as to be reusable for other developments in the future, as
specified. Experiments including human operators generally require a full experimental
protocol, including formalized training and explanations, monitoring of the experiment,
and a questionnaire at the end to gather the perception of the humans. This
protocol has been developed [26] but is not presented in this paper.
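Logging every operator action so that the chain of events can be replayed might look like this minimal, hypothetical sketch (field names are our own illustration of the kind of entry described above):

```python
from dataclasses import dataclass, field
from typing import Iterator, List


@dataclass
class Event:
    t: float        # timestamp, seconds since experiment start
    element: str    # entity the operator interacted with
    action: str     # command actually implemented


@dataclass
class ExperimentLog:
    events: List[Event] = field(default_factory=list)

    def record(self, t: float, element: str, action: str) -> None:
        self.events.append(Event(t, element, action))

    def replay(self) -> Iterator[Event]:
        """Yield events in time order so the chain of events can be
        re-examined after the experiment, e.g., against the video feeds."""
        yield from sorted(self.events, key=lambda e: e.t)
```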
To summarize, the use of the suggested platform in the context of the ANR
HUMANISM project was specified as provided in Table 1.
This platform has been successfully used to run a set of experiments with more
than 20 participants. These experiments lasted nearly 30 h. The results are currently
being analysed, but it is worth noting that the participants were able to point out specific
advanced cooperation needs with the digital twin and suggested increasing the autonomy
of the control system at the operational level.
4 Discussion
Working with an emulator of the cell rather than with the real physical system simplifies
the benchmarking process and will ease the future application to the real cell. It also
avoids the use of the real system, limiting costs and risks (bad usage) before real appli-
cations. Our platform offers multiple advantages related to the use of in-house software:
every element of the platform is customizable and open. This helps the realisation of
experiments through the accessibility of every element. The quick deployment of new
elements in the platform and the possibility to alter the behaviour of every entity offer a
great variety of possible scenarios and answer part of the requirements. The integration
of the human in experimentation is facilitated, but the development of specific coopera-
tion modes remains to be done, depending on the case study and on what is expected to
be evaluated.
While some points still need improvement, our platform follows the requirements
listed. Through further development, we expect that such a platform could become one of the
first affordable human-aware CPMS benchmarking systems available to the research
community. Collaboration with other research centres is possible, such that our plat-
form could become a resource centre for researchers. From our perspective, even the
specifications introduced here can help researchers avoid the development of costly
(in money, human resources and time) but non-reusable platforms. Until then, the principle
used is easily exploitable by those who already own digital platforms and who wish to
develop a more complete, generic platform.
5 Conclusion
As cooperation between humans and machines in CPMS becomes an increasing
concern, a global benchmarking platform becomes necessary. Such a plat-
form was specified in this paper. An illustration was detailed, based on the concept
of the digital twin. The work presented is intended to be useful for researchers aiming to
develop reusable platforms involving the human. Indeed, many research laboratories
already have at their disposal most of the resources used in our example, and multiple
digital platforms are publicly available. This makes our design principle affordable and
customizable to match most research topics relevant to the human in CPMS.
The next focus in the development of the platform is the design of a common
workspace to enable more elaborate human-machine cooperation processes. This is
the most important development prospect for our platform. Further efforts must also be
made in defining a more generic interaction model between entities.
Acknowledgements. This work was carried out in the context of the HUMANISM ANR-17-
CE10-0009 research program, funded by the ANR “Agence Nationale de la Recherche”, and by
the SUCRé project. The work presented in this paper is also partly funded by the Regional Council
of the French region “Hauts-de-France” and supported by the GRAISYHM program. The authors
gratefully acknowledge these institutions.
References
1. Jakovljevic, Z., Majstorovic, V., Stojadinovic, S., Zivkovic, S., Gligorijevic, N., Pajic, M.:
Cyber-physical manufacturing systems (CPMS). In: Majstorovic, V. and Jakovljevic, Z. (eds.)
Proceedings of 5th International Conference on Advanced Manufacturing Engineering and
Technologies, pp. 199–214. Springer International Publishing, Cham (2017). https://doi.org/
10.1007/978-3-319-56430-2_14
2. Cardin, O.: Classification of cyber-physical production systems applications: proposition of
an analysis framework. Comput. Ind. 104, 11–21 (2019). https://doi.org/10.1016/j.compind.
2018.10.002
3. Monostori, L.: Cyber-physical Production Systems: roots expectations and R&D challenges.
Procedia CIRP 17, 9–13 (2014). https://doi.org/10.1016/j.procir.2014.03.115
4. Trentesaux, D., Pach, C., Bekrar, A., Sallez, Y., Berger, T., Bonte, T., Leitão, P., Barbosa,
J.: Benchmarking flexible job-shop scheduling and control systems. Control Eng. Pract. 21,
1204–1225 (2013)
5. Cardin, O., Castagna, P., Couedel, D., Plot, C., Launay, J., Allanic, N., Madec, Y., Jegouzo,
S.: Energy-aware resources in digital twin: the case of injection moulding machines. In:
International Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing.
Springer series in computational intelligence, pp. 183–194. Springer (2019)
6. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Dynamic self-organization in holonic multi-
agent manufacturing systems: the ADACOR evolution. Comput. Ind. 66, 99–111 (2015).
https://doi.org/10.1016/j.compind.2014.10.011
7. Cherubini, A., Passama, R., Crosnier, A., Lasnier, A., Fraisse, P.: Collaborative manufacturing
with physical human–robot interaction. Robot. Comput. Integr. Manuf. 40, 1–13 (2016)
8. Heo, Y.J., Kim, D., Lee, W., Kim, H., Park, J., Chung, W.K.: Collision detection for industrial
collaborative robots: a deep learning approach. IEEE Robot. Autom. Lett. 4, 740–746 (2019)
9. Han, R., Jia, Z., Gao, W., Tian, X., Wang, L.: Benchmarking big data systems: state-of-the-art
and future directions. ArXiv150601494 Cs. (2015)
10. Mezgebe, T.T., El Haouzi, H.B., Demesure, G., Pannequin, R., Thomas, A.: A negotiation sce-
nario using an agent-based modelling approach to deal with dynamic scheduling. In: Service
Orientation in Holonic and Multi-Agent Manufacturing. Springer Studies in Computational
Intelligence, pp. 381–391. Springer (2018)
11. Zimmermann, E., El Haouzi, H.B., Thomas, P., Pannequin, R., Noyel, M., Thomas, A.: A
case study of intelligent manufacturing control based on multi-agents system to deal with
batching and sequencing on rework context. In: Service Orientation in Holonic and Multi-
Agent Manufacturing. Springer Studies in Computational Intelligence, pp. 63–75. Springer
(2018)
12. André, P., Azzi, F., Cardin, O.: Heterogeneous communication middleware for digital twin
based cyber manufacturing systems. In: International Workshop on Service Orientation in
Holonic and Multi-Agent Manufacturing. Springer Studies in Computational Intelligence,
pp. 146–157. Springer (2019)
13. Pach, C., Berger, T., Sallez, Y., Bonte, T., Adam, E., Trentesaux, D.: Reactive and energy-
aware scheduling of flexible manufacturing systems using potential fields. Comput. Ind. 65,
434–448 (2014). https://doi.org/10.1016/j.compind.2013.11.008
14. Berger, T., Deneux, D., Bonte, T., Cocquebert, E., Trentesaux, D.: Arezzo-flexible manufac-
turing system: A generic flexible manufacturing system shop floor emulator approach for
high-level control virtual commissioning. Concurr. Eng. 23, 333–342 (2015). https://doi.org/
10.1177/1063293X15591609
15. Hoffmann, P., Schumann, R., Maksoud, T.M., Premier, G.C.: Virtual commissioning of man-
ufacturing systems a review and new approaches for simplification. In: ECMS, pp. 175–181.
Kuala Lumpur, Malaysia (2010)
16. Habib, L., Pacaux-Lemoine, M.-P., Millot, P.: Adaptation of the level of automation according
to the type of cooperative partner. In: 2017 IEEE International Conference on Systems, Man,
and Cybernetics (SMC), pp. 864–869. IEEE (2017)
17. Habib, L., Pacaux-Lemoine, M.-P., Millot, P.: Human-robots team cooperation in crisis man-
agement mission. In: 2018 IEEE International Conference on Systems, Man, and Cybernetics
(SMC), pp. 3219–3224. IEEE (2018)
18. Pacaux-Lemoine, M.-P., Habib, L., Sciacca, N., Carlson, T.: Emulated haptic shared control
for brain-computer interfaces improves human-robot cooperation (2020). https://ieeexplore.
ieee.org/Xplore/home.jsp. Accessed 15 May 2020
19. Trentesaux, D., Millot, P.: A human-centred design to break the myth of the “Magic Human”
in intelligent manufacturing systems. In: Service Orientation in Holonic and Multi-Agent
Manufacturing. Springer Studies in Computational Intelligence, pp. 103–113. Springer (2016).
https://doi.org/10.1007/978-3-319-30337-6_10
20. Sarter, N.B., Woods, D.D.: How in the world did we ever get into that mode? mode error and
awareness in supervisory control. Hum. Factors 37, 5–19 (1995). https://doi.org/10.1518/001
872095779049516
21. Romero, D., Stahre, J., Wuest, T., Noran, O., Bernus, P., Fast-Berglund, Å., Gorecky,
D.: Towards an operator 4.0 typology: a human-centric perspective on the fourth industrial
revolution technologies. In: Proceedings of the International Conference on Computers and
Industrial Engineering (CIE46), Tianjin, China, pp. 29–31 (2016)
22. Pacaux-Lemoine, M.-P., Flemisch, F.: Layers of shared and cooperative control, assistance,
and automation. Cogn. Technol. Work 21, 579–591 (2019)
23. Pacaux-Lemoine, M.-P., Trentesaux, D., Rey, G.Z., Millot, P.: Designing intelligent manufac-
turing systems through human-machine cooperation principles: a human-centered approach.
Comput. Ind. Eng. 111, 581–595 (2017). https://doi.org/10.1016/j.cie.2017.05.014
326 Q. Berdal et al.
24. Negri, E., Fumagalli, L., Macchi, M.: A review of the roles of digital twin in CPS-based
production systems. Procedia Manuf. 11, 939–948 (2017). https://doi.org/10.1016/j.promfg.
2017.07.198
25. Drăgoicea, M., Borangiu, T.: A service science knowledge environment in the cloud. IFAC
Proc. 45, 1702–1707 (2012). https://doi.org/10.3182/20120523-3-RO-2023.00438
26. Pacaux-Lemoine, M.-P., Berdal, Q., Guérin, C., Rauffet, P., Chauvin, C., Trentesaux, D.: Eval-
uation of the cognitive work analysis methodology to design cooperative human-intelligent
manufacturing system interactions in industry 4.0. Submitted to CTW (2020)
Human-Machine Cooperation with Autonomous
CPS in the Context of Industry 4.0:
A Literature Review
LAMIH UMR CNRS 8201, Université Polytechnique Hauts-de-France, Le Mont Houy,
59313 Valenciennes Cedex, France
Corentin.Gely@uphf.fr
Abstract. The aim of this paper is to study to what extent the current state-of-
the-art in human-machine cooperation can be applied or adapted to the emerging
context of Industry 4.0, where the “machines” with which cooperation takes place
are autonomous cyber-physical systems. A review of 20 papers was conducted,
and a discussion points out the advances and limits of the existing state-of-the-art
when applied in the context of autonomous cyber-physical systems. An illustration
in the domain of the maintenance phase of an autonomous cyber-physical system
is provided to support the conclusions of our review.
1 Introduction
Humans stand on the threshold of another industrial revolution that will fundamentally
change the way we live, work and communicate with each other [1]. This fourth
revolution is unfolding through the new technologies derived from the emergence of
the Internet of Things and through technologies enabling the creation of a virtual
world (virtual reality, augmented reality…) [2]. The flow of new technologies is
becoming a fundamental paradigm for humanity as well as for Industry 4.0 [3].
Industry 4.0 does not just bring new technological advancements, but a new vision of
how factories should operate: manufacturing products, providing services, managing
assets, and doing business in general [4].
A fundamental representation of this revolution is the concept of the cyber-physical
system (CPS) [5]. By integrating the physical and cyber domains, a CPS provides
functions that call for a physical response while also supporting the digital
representation of information [3]. Similarly, CPS can be defined as computer systems
controlling physical entities through sensors and actuators, with intelligence provided
by software and data [5]. Such systems are becoming more intelligent because they are
able to understand and to change [7]. They become more and more autonomous, to the
point where these intelligent systems that merge the digital and physical worlds are
capable of performing tasks without any external control [8]. CPS can be found in
every industrial sector. New technologies and methods make CPS smarter, in the sense
that they can perceive their environment: through ad-hoc sensors, they can “see”,
“hear”, “smell” [9]. Through information and communication, artificial intelligence,
and decision-making technologies, they can also interact with their environment (other
CPS or humans), learn, decide and act on the physical world, becoming more and more
autonomous over time. Their autonomy renders them self-driven in an open
environment, with strategies that even we, humans, may not comprehend [8].
Consequently, the interactions engaged by an autonomous CPS with its environment
become more and more complicated to understand and to optimize, both from a
“machine” and a “human” point of view.
With this notion of autonomy, researchers and industrialists are conceiving new
technical advances enabling autonomous CPS and humans to work jointly, or to
cooperate, which relates to the concept of Human-Machine Cooperation (HMC) [10].
Cooperation, as a specific kind of interaction, comes from the Latin words “co”
(together) and “operatio” (work, activity) and means “working together” or “the action
or process of working together” [11]. It can mainly be characterized as a situation
where two actors (typically, a human and a CPS in this paper) strive towards their
goals while having to interact with each other during a task or operation because
procedures or resources make it necessary; to deal with this interference, both actors
must facilitate each other’s tasks for the sake of their interaction [12]. To achieve such
cooperation, one actor must know what the other is doing in order to cooperate
correctly with him/her/it [13]. The important notions that follow from this definition
are goals, tasks/operations, and interferences.
Focusing on HMC, the emergence of such autonomous CPS has raised many questions
and problems: How can people predict an autonomous CPS’s behaviour? What would
their relationship with humans be? To what extent can they be considered at the same
decision level as humans? For now, there are far more questions than answers. In this
paper, we address the following research question: to what extent does the current
literature on HMC provide adequate models and methods enabling humans and an
autonomous CPS to cooperate properly?
More precisely, this paper studies to what extent the current state-of-the-art in HMC
can be applied or adapted to the emerging context of Industry 4.0, where the
“machines” are such autonomous CPS. The review of the state-of-the-art is structured,
and a discussion is proposed, pointing out the advances and limits of HMC when
applied to the context of autonomous CPS. An illustration in the domain of the
maintenance phase of an autonomous CPS (for example, an autonomous train) is
provided at the end of the paper to illustrate the conclusions of our review. This review
is indeed an important step in our work on the design of an effective and consistent
HMC system within the specific context of the autonomous train, seen here as a kind
of autonomous CPS.
The outline of this chapter is as follows: cooperation between humans and autonomous
CPS is specified in Sect. 2. Three typologies characterizing human-machine
cooperation are proposed in Sect. 3. Section 4 contains the literature review, based on
these typologies. Section 5 details a case study dealing with the maintenance of an
autonomous train, as an autonomous CPS interacting with the human operator. This
case study is followed by a conclusion summarizing the main points of our work along
with some prospects.
1. Any actor (human or autonomous CPS) can trigger a cooperation need with others
to reach his/her/its sub-goal as and when needed. The cooperation need depends on
the expertise of the other actors with whom he/she/it is cooperating.
2. The cooperation must depend on the level of expertise of other actors, implying a
need to adapt exchanges, requests, and tasks accordingly.
3. The cooperation must allow an actor to assume that the other actors with whom
he/she/it is cooperating may not be able to answer requests, may not react in due
time, or may even provide false information in good faith. He/she/it may be
overloaded with tasks or focused on other tasks. If possible, an actor should be able,
through this cooperation, to monitor and understand the activities of the other actors.
4. The cooperation must facilitate the appropriation of the context between the actor
initiating the need and the one he/she/it is soliciting, who/which is not necessarily
fully aware of this context. The cooperation scenario must then take care of the
situation awareness of the other actors, also known as team situation awareness [13].
5. Cooperation must alert the actor that others’ decisions may change for similar past
contexts because of learning effects.
6. The need for cooperation and the activities that compose a cooperation process may
evolve according to the context that generated the need for cooperation, as well as
their knowledge of each other. The sharing and trading of the tasks must be dynamic.
7. The same principle applies to different kinds of cooperation needs. These must be
articulated according to different levels of activity [1], ranging from the strategic to
the operational level, depending on the current situation. Indeed, expectations, stakes,
and constraints are not the same at these different levels; they should be handled in
different manners and must evolve through dynamic interaction.
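As an illustration only, the kind of cooperation-need message implied by specifications 1, 2, 4 and 7 can be sketched as a small data structure; the class, field and actor names below are hypothetical assumptions of ours, not part of any cited framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class ActivityLevel(Enum):
    """Levels of activity at which cooperation may be requested (specification 7)."""
    STRATEGIC = "strategic"
    TACTICAL = "tactical"
    OPERATIONAL = "operational"


@dataclass
class CooperationRequest:
    """A cooperation need raised by one actor, human or autonomous CPS
    (hypothetical structure illustrating specifications 1, 2, 4 and 7)."""
    initiator: str                       # actor triggering the need (spec. 1)
    sub_goal: str                        # the sub-goal the initiator wants to reach
    level: ActivityLevel                 # strategic / tactical / operational (spec. 7)
    context: dict = field(default_factory=dict)  # shared context for team situation awareness (spec. 4)
    required_expertise: str = "any"      # lets the partner be chosen by expertise (spec. 2)


# Example: an autonomous CPS asks for help with a maintenance sub-goal.
request = CooperationRequest(
    initiator="autonomous_cps_42",
    sub_goal="repair door actuator",
    level=ActivityLevel.OPERATIONAL,
    context={"energy_efficiency": 0.71, "threshold": 0.80},
    required_expertise="door maintenance",
)
print(request.level.value)  # operational
```

Such a message only covers the triggering side; the adaptation and feedback behaviours of specifications 3, 5 and 6 would live in the actors' own logic.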
3 Typologies Suggested
Since the HMC domain is wide and has been studied for years in various application
fields, it proved necessary to structure our review. The review of the state-of-the-art is
organized according to three typologies that we developed and used to position
330 C. Gely et al.
and to analyze the different contributions: the first relates to the type of cooperation
chosen; the second, to the design method chosen; and the third, to the interaction model
used. These typologies were constructed first through a top-down approach based on
the study of major publications and reviews in the field, then consolidated through a
complementary bottom-up approach using queries on HMC in the Elsevier, Springer,
and IEEE databases.
Concerning the first typology, we share the point of view of [1] and [15], who
identified two different types of cooperation as restricted applications of the HMC
principles according to time horizons, ranging from the operational level for the short
run, through the tactical level for achieving intermediate objectives, to the strategic
level at the top. These types of cooperation are “shared and cooperative guidance and
control”, where tasks are exchanged depending on each actor’s competence at both the
tactical and operational levels, and “shared control”, which is restricted to the
operational level, see Fig. 1. “Shared and cooperative guidance and control” is defined
as a “trading of authority, of missions and goals”, while “shared control” is defined as
a “trading of authority during an operation” [1].
Fig. 1. Levels of activity in the human-machine system: cooperational metacommunication,
strategic (e.g. navigation), tactical (e.g. guidance) and operational (e.g. control) levels, grounded
in the tasks, values and goals of the ego-system and its environment.
Concerning the second typology, several types of design methods in the field of HMC
have been identified in our literature review and will be used to position and analyze
the contributions. These are: the human-machine balanced approach, a type of
cooperation based on knowledge of the human and machine actors and of their needs
for cooperation to reach a common goal [16]; actor-centered design, which is gaining
importance [14]; and Human-System Integration Design, where the human and the
system are fully integrated, as illustrated by the concept of Operator 4.0 [17]. With the
human-machine balanced approach, the HMC is designed for each actor so that
he/she/it is given the information needed to properly achieve his/her/its mission while
managing the
interference with the other actor through a common work space [16]. Actor-centered
design allows each actor to know the behavior of its interlocutor, regardless of whether
it is a machine or a human [14]; it was created for the interaction between two actors,
two peers who need to cooperate on each mission. Human-System Integration Design
is based on the emergence of new technologies giving more and more information and
options to the operators [17].
Concerning the third typology, several types of models and tools have been identified
during our review for modelling the interactions between actors. In the studied
literature, one mainly finds game theory [18], the fuzzy cognitive model [19], and
Know-How/Know-How-to-Cooperate (KH/KHC) [16]. Cooperative game theory is an
interaction model often used for situations where two or more individuals cooperate
towards a common goal while sharing a common resource. The fuzzy model is an
interaction model based on the complexity and diversity of the information an actor
can give, especially when the flow of information is carried by natural language, where
the information is not ‘1’ or ‘0’, but ‘fuzzy’. KH/KHC is an interaction model used
mostly with a human-machine balanced approach, exploiting the operational skills of
each actor as well as their cooperation skills, which allow them to know the other’s
behaviour.
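To give a minimal flavour of the fuzzy interaction model, natural-language information can be mapped to a degree of membership rather than a binary ‘1’ or ‘0’. The triangular membership function below is a generic textbook construction, and the numeric ranges are invented for illustration, not taken from [19].

```python
def triangular_membership(x, a, b, c):
    """Degree (0..1) to which x belongs to a fuzzy set with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)


# "The operation is taking long": fully false below 10 min, fully true around 30 min.
for minutes in (5, 20, 30):
    degree = triangular_membership(minutes, 10, 30, 50)
    print(minutes, round(degree, 2))  # 5 -> 0.0, 20 -> 0.5, 30 -> 1.0
```

A machine actor could then reason on these graded values instead of forcing the human's vague statement into a crisp threshold.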
Figure 2 summarizes our typologies and the different types identified in the literature.
Fig. 2. The three typologies of HMCs proposed and their types identified in the literature:
cooperation types (shared & cooperative guidance and control; shared control), design types
(actor-centered design; human-machine balanced approach; human-system integration design),
and interaction model types (cooperative game theory; fuzzy model; KH/KHC).
4 Literature Review
4.1 Method
The protocol used to construct our review follows the method detailed in [20]. The
major sources of information used to identify the papers eligible for this review were
the following scholarly databases: Elsevier, Springer, and IEEE. The queries used are
provided in Table 1. Twenty resulting papers were identified.
The seven specifications were listed to ensure that cooperation with an autonomous
CPS is effective and consistent. Each of these specifications is essential to create a
cooperation able to fully exploit the competence of each actor. Consequently, it was
decided to align the criteria with these specifications, as explained in Sect. 1. Using
criteria 1 through 7, the contributions were evaluated according to the scale
0/+/++/+++, interpreted as follows:
• 0: the article does not focus on the criterion and/or the approach does not suit the
criterion,
• +: the article partially (but insufficiently) deals with the criterion and the approach
partially suits the criterion,
• ++: the article deals with the criterion and the approach suits the criterion well,
• +++: the article addresses the criterion well and the approach suits it perfectly.
4.3 Results
Table 2 contains the results of our analysis. This table shows for each of the seven criteria
the strong points of a given positioned contribution:
1. Cooperation Triggering
2. Exchange of behaviour models concerning their expertise
3. Exchange of behaviour model concerning their activities
4. Team Situation Awareness
5. Adaptation of cooperation based on feedback
6. Dynamic sharing and trading of tasks
7. Dynamic exchange of information at each level of activity
Table 2. Evaluation of the reviewed contributions. X marks indicate each contribution’s
cooperation type, design type and interaction model (among: shared & cooperative guidance
and control, shared control, actor-centered design, human-machine balanced approach,
human-system integration design, game theory, fuzzy model, KH/KHC); the remaining columns
give the ratings for criteria 1–7.

Ref.  Types   C1   C2   C3   C4   C5   C6   C7
1     X X X   ++   ++   ++   ++   +    ++   ++
21    X X     ++   ++   +    ++   +    ++   ++
22    X X X   ++   +    +    ++   +    ++   +
23    X X X   ++   ++   +    ++   +    ++   +
17    X       ++   ++   ++   0    0    0    0
24    X       ++   ++   ++   0    ++   0    0
25    X       ++   ++   ++   0    ++   0    0
15    X X     ++   ++   ++   +    ++   +    +
26    X X     +++  ++   ++   +    ++   +    +
16    X X     ++   ++   ++   ++   ++   +    ++
27    X X     ++   ++   ++   +    ++   +    +
28    X       +    +    +    +    +    ++   ++
2     X       +    +    +    +    +    ++   ++
18    X       +    +    +    +    +    ++   ++
29    X       +    +    +    +    +    ++   ++
30    X       +    +    +    +    +    ++   ++
31    X       ++   ++   ++   ++   ++   ++   0
32    X       ++   ++   ++   ++   ++   ++   0
33    X       +++  ++   ++   +++  ++   ++   0
14    X       +++  ++   +++  +++  ++   ++   +
19    X       +++  +    +    0    +    0    ++
34    X       ++   ++   ++   0    +    0    ++
it can adapt to human fuzzy logic when the machine dynamically interacts using the
natural language of the human actor. Meanwhile, this approach hardly considers
criteria 2, 4 and 5, since the fuzzy model neither enables an adaptation of the
cooperation based on the expertise of the human actor or on feedback from his/her
experience, nor proposes a strategy to properly make an actor aware of the situation of
the other actor.
Söffker et al. explained in their paper how cooperative game theory can be applied to a
human-machine situation with sharable and non-sharable goals (criteria 6 and 7) [30].
With this model of interaction, the cooperation enables both actors to dynamically
exchange information so that a negotiation can take place and they can arrive at a
solution suitable for both of them. However, game theory has some weaknesses, due
to: 1) the need to integrate every event that could occur into this complex mathematical
model, 2) the need to design a global model of HMC on which to base solicitations,
and 3) its lack of adaptability to different behaviours and its inability to ensure actors’
awareness through feedback (criteria 1, 2, 3, 4 and 5).
4.4 Discussion
The previous section highlighted the strong and weak points of the 20 identified
papers. Several elements can be retained from our review; they are presented according
to the three proposed typologies.
Cooperation Types
Shared and cooperative guidance and control is mostly used when both actors
intervene towards a common goal. In the context of the seven specifications
introduced, this approach has the advantage of enabling dynamic interaction, with the
actors exchanging information concerning their missions and their tasks, making the
trading of authority possible. Meanwhile, this type makes it hard to ensure a correct
negotiation concerning each other’s strategies, as the ‘navigation’ has already been
planned.
Shared control assumes that control changes hands depending on the task. In the
context of the seven specifications introduced, this approach has the advantage of
being able to solicit both actors while also giving the other actor situation awareness.
Meanwhile, before the extended version of shared control [35], it had the drawback of
not enabling dynamic exchanges of information across the different levels of activity;
with this extended version, shared control no longer has this problem, as the actors
share and trade information concerning their goals, their strategies, and their tasks.
Design Types
A human-machine balanced approach is adopted when priority is allocated to the
human during HMC. In the context of the seven specifications introduced, this
approach has the advantage of adapting to the human operator’s expertise and
experience, enabling the machine to know the behaviour model of each human it is
collaborating with. Meanwhile, this approach hardly ensures a correct dynamic
exchange of information through a common work space when negotiation between a
human actor and a machine actor is needed, where ‘acceptance’ and ‘imposition’ are
the only options when an explicit explanation based on operational information cannot
be made [16].
4.5 Synthesis
Considering the seven specifications, one can conclude that each of the introduced
types has its advantages and disadvantages in the context of autonomous CPS
cooperating with humans, and that no contribution fully complies with these
specifications. Meanwhile, from our perspective:
• The most interesting cooperation type is the extended version of shared control,
mainly because of its adaptability in sharing authority and easing negotiation
concerning goals, missions, and tasks in evolving contexts involving several activity
levels.
• The most interesting design type is actor-centred design, mainly because of its
symmetrical design for cooperation and communication, its ability to model each
actor’s behaviour, and its adaptability to the experience of each actor in finding a
solution elaborated between ‘peers’, based on each level of activity and on feedback.
• The most interesting interaction model type is cooperative game theory, since it
facilitates the negotiation of non-sharable goals and the usage of the different skills
of each actor, easing the optimization of their interaction depending on their specific
goals.
Most of the contributions in HMC adopted a point of view where the cooperation was
designed mainly to satisfy the goals of the human and adapt to his/her needs, which led
to the predominance of human-centric cooperation systems. By combining shared
control and actor-centred design, the cooperation may become a real symmetric
collaboration allowing a two-way flow of information and an exchange of goals,
missions, and tasks among actors, whether autonomous CPS or humans.
As can be seen in human-human interaction, not all interactions can be dealt with
‘easily’, especially when actors have different goals; the HMC must therefore be
prepared for situations where actors do not share their goals, as the difficulty to
cooperate may lead to deadlock situations. This is why there is a need to include
interaction modelling approaches such as cooperative game theory, more than
KH/KHC or fuzzy models. By combining game theory with other types of cooperation
such as shared control, traded control or actor-centred design, the resulting cooperation
would allow a mathematical approach to modelling the interaction between an
autonomous CPS and a human, and to optimizing their interactions for any goals
during a task. Such an HMC would consider any type of interacting actors
(human-human, human-machine, machine-machine), as a global model treating any
autonomous CPS as a peer to the human actor.
Consequently, from this review we intend to adopt an actor-centred design method
with a shared-control cooperation approach coupled with game theory models in our
development of HMC with autonomous CPS in the context of Industry 4.0. For
illustration purposes, the next section details a case study where these choices are
illustrated on an application we are working on.
5 Case Study
Our research work concerns the development of a specific autonomous CPS in the
transportation domain: the autonomous train. The autonomous train has to cooperate
with different profiles of actors, whether artificial (other autonomous trains, railways,
infrastructure, etc.) or human (customer, maintainer, onboard crew, fleet operator,
etc.). Our case study concerns a specific moment in the exploitation of this
autonomous train: its maintenance. Because of the increasing financial penalties when
a fleet of trains is not sufficiently available, maintenance is becoming a crucial moment
in the train’s lifecycle. Moreover, considering the importance of sustainability aspects,
the monitoring of energy performance for ecological reasons, as well as the monitoring
of passenger comfort depending on the functioning of the HVAC (heating, ventilation,
and air-conditioning) system, are gaining importance.
During such maintenance, the autonomous train must be able to inform, alert and
cooperate with the different actors about the maintenance operations that are needed or
could be done, aside from classical planned systematic maintenance activities. The
cooperation must be prepared to generate different alarms concerning the different root
causes of the problem. The alarms will concern the onboard personnel as well as the
fleet supervisor. They may lead to the solicitation of an expert on the problem detected,
exchanging specific information with each interlocutor depending on their expertise.
Since it is autonomous, the train is able to diagnose its sub-systems and to evaluate
whether it can fulfil its missions [36]. If it cannot fulfil them immediately or during a
given time window, a maintenance process is triggered.
The process illustrated in this case study is the following: the health monitoring of an
autonomous train detects that the performance indicator measuring the energy
efficiency of one of its pieces of equipment (a door) goes beyond predetermined
thresholds and is still deteriorating. This event triggers a need for cooperation on the
part of the autonomous train, meaning that the train issues a solicitation asking for
collaboration on its maintenance (this corresponds to specification #1).
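The triggering rule of this process can be sketched as follows; the indicator name, threshold and readings are illustrative assumptions of ours and do not come from the train's actual health-monitoring system.

```python
def check_door_efficiency(history, threshold=0.80):
    """Raise a maintenance cooperation need when the energy-efficiency
    indicator falls below its threshold AND is still deteriorating
    (illustrative rule; indicator name and threshold are invented)."""
    latest, previous = history[-1], history[-2]
    if latest < threshold and latest < previous:
        return {"need": "maintenance", "initiator": "train",
                "indicator": "door_energy_efficiency", "value": latest}
    return None


readings = [0.86, 0.82, 0.78, 0.74]  # deteriorating beyond the 0.80 threshold
print(check_door_efficiency(readings)["need"])  # maintenance
```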
After having completed a self-diagnosis, the autonomous train cooperates with the fleet
supervisor to allocate a maintenance dock (tactical level) able to carry out the
maintenance tasks. When this is done, the train starts a second cooperation process
with a qualified maintenance operator (operational level) to realize the maintenance
operations. The model of this process (a GRAI net), depicted in Fig. 3, was built using
the GRAI design methodology [37].
To illustrate more precisely the case study, we focus on the second cooperation
process. A possible corresponding cooperation scenario is depicted in Fig. 4.
During this scenario, the cooperation allows the autonomous train to remind the human
actor of the maintenance tasks (situation awareness) and to detect an abnormal
duration of the operation the human is performing, so that it can ask him whether there
is any problem (specifications #2, #3 and #4). The cooperation process allows the train
to give its own estimation of the problem based on its experience, taking into account
which operation was previously made to fix a problem, so that it can check whether the
operation was successful by calculating its new energy efficiency, updating its own
system for future usage (specification #5). In this scenario, the dynamic flow of
information can be carried by text-to-speech software, allowing an exchange of the
goals, missions, and operations needed (specifications #6 and #7).
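The abnormal-duration detection in this scenario could, for instance, rely on a simple statistical rule; the durations and the two-standard-deviation threshold below are illustrative assumptions only, not part of the described system.

```python
import statistics


def abnormal_duration(observed_min, past_durations_min, k=2.0):
    """Flag an operation whose duration exceeds the historical mean by more
    than k standard deviations, prompting the train to ask the operator
    whether there is a problem (illustrative parameters)."""
    mean = statistics.mean(past_durations_min)
    std = statistics.stdev(past_durations_min)
    return observed_min > mean + k * std


history = [12, 14, 13, 15, 14]  # minutes for past door-maintenance operations
print(abnormal_duration(25, history))  # True: well beyond the usual duration
```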
6 Conclusion
Autonomous CPS will evolve jointly with humans. This prompted us to address the
issue of human-machine cooperation between humans and autonomous CPS. This
paper presented the results of a review made on this topic. For that purpose, a set of
seven specifications was constructed, from which seven criteria were derived and used
to analyze the state-of-the-art in the field of human-machine cooperation. The 20
contributions reviewed were selected according to several keywords addressing the
cooperation approach, the design approach, and the modelling of the interaction. To
illustrate our work, a case study focusing on the maintenance operations of an
autonomous train was provided. From our review, it is clear that the existing literature
does not yet meet the need expressed in terms of these specifications. Indeed, in the
future, autonomous CPS will have to be considered in the same way as a human
operator when cooperating with him/her, increasing the need for symmetrical
cooperation between actors, whether human or artificial CPS.
Acknowledgement. The work described in this chapter was conducted in the framework of the
joint laboratory “SurferLab”, founded by Bombardier, Prosyst and the Université Polytechnique
Hauts-de-France. This joint laboratory is supported by the CNRS and financed by ERDF funds.
The authors would like to thank the CNRS, the European Union, and the Hauts-de-France
region for their support. Parts of the work were also carried out in the context of the HUMANISM
No. ANR-17-CE10-0009 research program, funded by the French ANR “Agence Nationale de la
Recherche”.
References
1. Flemisch, F., Abbink, D.A., Itoh, M., Pacaux-Lemoine, M.-P., Weßel, G.: Joining the blunt
and the pointy end of the spear: towards a common framework of joint action, human–machine
cooperation, cooperative guidance and control, shared, traded and supervisory control. Cogn.
Tech. Work 21, 555–568 (2019). https://doi.org/10.1007/s10111-019-00576-1
2. Lin, W.S., Zhao, H.V., Liu, K.J.R.: A game theoretic framework for incentive-based peer-to-
peer live-streaming social networks. In: 2008 IEEE International Conference on Acoustics,
Speech and Signal Processing, pp. 2141–2144, IEEE, Las Vegas (2008). https://doi.org/10.
1109/ICASSP.2008.4518066.
3. Cogliati, D., Falchetto, M., Pau, D., Roveri, M., Viscardi, G.: Intelligent cyber-physical sys-
tems for industry 4.0. In: 2018 First International Conference on AI for Industries (AI4I).
pp. 19–22. IEEE, Laguna Hills, USA (2018). https://doi.org/10.1109/AI4I.2018.8665681
4. Terziyan, V., Gryshko, S., Golovianko, M.: Patented intelligence: cloning human decision
models for Industry 4.0. J. Manuf. Syst. 48, 204–217 (2018). https://doi.org/10.1016/j.jmsy.
2018.04.019.
5. Dang, T., Merieux, C., Pizel, J., Deulet, N.: On the road to industry 4.0: a fieldbus architec-
ture to acquire specific smart instrumentation data in existing industrial plant for predictive
maintenance. In: 2018 IEEE 27th International Symposium on Industrial Electronics (ISIE),
pp. 854–859. IEEE, Cairns, Australia (2018)
6. Ratliff, L.J.: Incentivizing Efficiency in Societal-Scale Cyber-Physical Systems (2015).
https://escholarship.org/uc/item/6ck1z3x3
7. Oks, S.J., Fritzsche, A., Möslein, K.M.: Engineering industrial cyber-physical systems: an
application map based method. Procedia CIRP 72, 456–461 (2018). https://doi.org/10.1016/
j.procir.2018.03.126
24. Zolotová, I., Papcun, P., Kajáti, E., Miškuf, M., Mocnej, J.: Smart and cognitive solutions for
operator 4.0: laboratory H-CPPS case studies. Comput. Ind. Eng., Article 105471 (2018).
https://doi.org/10.1016/j.cie.2018.10.032
25. Gammieri, L., Schumann, M., Pelliccia, L., Di Gironimo, G., Klimant, P.: Coupling of a
redundant manipulator with a virtual reality environment to enhance human-robot cooperation.
Procedia CIRP 62, 618–623 (2017). https://doi.org/10.1016/j.procir.2016.06.056
26. Ballagi, Á., Kóczy, L.T., Pozna, C.: Man-machine cooperation without explicit communica-
tion. In: 2010 World Automation Congress, pp. 1–6 (2010)
27. Agah, A., Tanie, K.: Human-machine interaction through an intelligent user interface based
on contention architecture. In: Proceedings 5th IEEE International Workshop on Robot
and Human Communications, pp. 537–542. IEEE, Tsukuba (1996). https://doi.org/10.1109/
ROMAN.1996.568894
28. Xie, F., Liu, F.-M., Yang, R.-R., Lu, R.: Game-based incentive mechanisms for cooperation in
P2P networks. In: 2008 4th International Conference on Natural Computation, pp. 498–501,
IEEE, Jinan, Shandong, China (2008). https://doi.org/10.1109/ICNC.2008.100
29. Requejo, R.J., Camacho, J.: Evolution of cooperation mediated by limiting resources: con-
necting resource based models and evolutionary game theory. J. Theor. Biol. 272, 35–41
(2011). https://doi.org/10.1016/j.jtbi.2010.12.005
30. Söffker, D., Langer, M., Hasselberg, A., Flesch, G.: Modeling of cooperative human-machine-
human systems based on game theory. In: Proceedings of the 2012 IEEE 16th International
Conference on Computer Supported Cooperative Work in Design (CSCWD), pp. 274–281
(2012). https://doi.org/10.1109/CSCWD.2012.6221830
31. Fong, T., Nourbakhsh, I., Kunz, C., Fluckiger, L., Schreiner, J., Ambrose, R., Burridge, R.,
Simmons, R., Hiatt, L., Schultz, A., Trafton, J.G., Bugajska, M., Scholtz, J.: The peer-to-
peer human-robot interaction project. In: Space 2005, American Institute of Aeronautics and
Astronautics, Long Beach, California (2005). https://doi.org/10.2514/6.2005-6750
32. Dias, M.B., Kannan, B., Browning, B., Jones, E.G., Argall, B., Dias, M.F., Zinck, M., Veloso,
M.M., Stentz, A.J.: Sliding autonomy for peer-to-peer human-robot teams. In: Proceedings
of the International Conference on Intelligent Autonomous Systems, pp. 332–341 (2008)
33. Kaupp, T., Makarenko, A., Durrant-Whyte, H.: Human–robot communication for collabo-
rative decision making - a probabilistic approach. Robot. Autonomous Syst. 58, 444–456
(2010). https://doi.org/10.1016/j.robot.2010.02.003
34. Nukuzuma, A., Yamada, K., Harada, N., Ishimaru, K., Furukawa, H.: Decision support to
realize intelligent cooperative interactions. In: Proceedings of 1995 IEEE International Con-
ference on Fuzzy Systems, vol. 2, pp. 837–842 (1995). https://doi.org/10.1109/FUZZY.1995.
409780
35. Pacaux-Lemoine, M.-P., Itoh, M.: Towards vertical and horizontal extension of shared con-
trol concept. In: 2015 IEEE International Conference on Systems, Man, and Cybernetics,
pp. 3086–3091, IEEE, Kowloon Tong, Hong Kong (2015). https://doi.org/10.1109/SMC.201
5.536
36. Sénéchal, O.: Performance indicators nomenclatures for decision making in sustainable con-
ditions based maintenance. IFAC-PapersOnLine 51, 1137–1142 (2018). https://doi.org/10.
1016/j.ifacol.2018.08.438
37. Chen, D., Doumeingts, G.: The GRAI-GIM reference model, architecture and methodol-
ogy. In: Bernus, P., Nemes, L., Williams, T.J. (eds.) Architectures for Enterprise Integration.
pp. 102–126, Springer US, Boston (1996). https://doi.org/10.1007/978-0-387-34941-1_7
Simulation on RFID Interactive Tabletop
of Working Conditions in Industry 4.0
{sophie.lepreux,sondes.chaabane,christophe.kolski}@uphf.fr
1 Introduction
The era of Industry 4.0 and digitalization makes companies evolve in a competitive,
technology-driven world. The digital environment is based on two trends: massive
dematerialization and the interconnection of everything with everything. The world has entered a
new era of data and virtualization [12]. Both French and global industries are concerned
by this digital transformation. As a result, this transformation impacts uses, behaviours,
activities and work modes [11]. The role of the human operator is no longer to perform
dangerous and arduous tasks. He or she becomes a pilot of machines via mobile terminals and
dedicated user interfaces. The human operator is involved in decision-making processes
and executive operations in collaboration with other operators, machines and physical
systems. It is therefore essential to take the human factor into consideration and to provide
human operators, as well as other categories of stakeholders, with training and
means to adapt to the new working conditions. For this purpose, different methods are
possible, such as paper/cardboard simulations or virtual reality [25, 30].
The simulation tool proposed within the framework of this research is based on a
set of advances in the field of Human-Computer Interaction on large horizontal surfaces
[24], in tangible interaction [6, 14] and in relation to serious games [9]. This simulation
support is developed on an interactive tabletop associated with tangible objects. It builds on
various research studies conducted on the TangiSense tabletop, which is equipped with RFID
technology [8, 13, 16, 18–20, 22]. This new simulation approach aims to encourage the
involvement of the stakeholders of future organizations and work situations in relation to
a set of scenarios, according to a playful and interactive approach. The aim is to improve
the design, training, and ownership of future working conditions.
This paper begins with a state of the art on the simulation of new organizations and
future working conditions, with design and transformation objectives. The approach
proposed for the simulation on interactive tabletop is then described. Next, the simula-
tion tool developed on the TangiSense interactive tabletop is explained and illustrated.
Principles and examples of implementation in Industry 4.0 are also presented. The paper
ends with a conclusion and research perspectives.
Fig. 1. Global view of the principle of simulation on interactive tabletop showing different
possible uses of the simulator
of the activity is considered as acceptable to satisfy the stakes of the project by taking
into account a set of criteria such as working or usage conditions, quality requirements,
service to the user, etc. This tool is a promising support to: (1) represent the prescription
scenarios (workspace, equipment, …), integrating technical aspects that are difficult to
represent on paper models, and (2) their possible evolutions. It also becomes possible
to: (3) allow the different actors to modify the prescription scenarios represented on
the tabletop, and (4) record the scenarios in order to (5) replay them later
or in the presence of other actors.
Interaction with the interactive tabletop through tangible objects is intended to facilitate
appropriation. The objects used can be of different shapes and must make it possible to represent,
in a playful and transitional way, the workspaces to be arranged (reception, loading areas,
office, etc.), the equipment to be positioned (personal computers, machine tools, robots,
etc.), and to simulate the activity of the user(s) through avatar objects. The surface of
the tabletop can be used to display permanent information (partitions, loading deck) as
well as certain technical constraints (for instance buried pipes).
Figure 1 shows three possible stages of simulator use; these steps can be used cycli-
cally to follow the method suggested by Van Belleghem [30], according to the central
arrow or autonomously according to the needs and degree of progress in innovation.
The Simulation stage allows users to project themselves into new activities, according
to the scenarios foreseen in future developments. If needed, these new activities may be
compared with the current ones. The traces of the simulation are recorded. The resulting
recordings are used in the Design phase to show decision-makers the effects on users of
the different options proposed in the Simulation phase. Finally, the Integration activity
stage allows users to become familiar with the new work situations that have been chosen.
simulation helps to avoid design errors. These errors can have serious financial
consequences as well as consequences for working conditions.
However, the tools currently available do not easily allow this interaction with future users.
Reading plans is not easy. Mock-ups are often costly in terms of manufacturing and
modification time. Various software tools exist, but they also require a significant
amount of configuration time and technical knowledge (often sub-contracted to external
service providers). Moreover, they allow only limited interactivity with the users during the
simulation phases.
Therefore, this first version of the simulator must satisfy several requirements. It
must be possible to represent and collectively modify the prescription elements in the
form of scenarios. It must allow employees to act on the tabletop with avatars
(taking the form of figurines), which allow the actors of the simulation to project
themselves by playing out the scenarios [4]. The avatars also allow them to imagine themselves
in a situation by moving the object, and to produce data. It is necessary to allow numerical
interactions through the visualization of processes and flows. In addition, the simulator
must allow a scenario to be replayed with modified characteristic data (e.g. distances
travelled, number of tasks to be performed, etc.) so that scenarios can be compared. This
comparison can be performed through tools integrated into the simulator, such as the
possibility of replaying the simulation, but also statistical analysis tools in the form of
bubble charts, curve graphs, etc.
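The scenario comparison described above can be sketched as follows. This is an illustrative Python sketch, not the simulator's actual code: the event structure and indicator names are assumptions.

```python
# Illustrative sketch: each replayed scenario is reduced to a few characteristic
# indicators that could then feed bubble charts or curve graphs.

def scenario_stats(events):
    """Summarise a recorded scenario; events are (actor, distance_m, n_tasks) tuples."""
    actors = {e[0] for e in events}
    return {
        "actors": len(actors),
        "distance_m": sum(e[1] for e in events),
        "tasks": sum(e[2] for e in events),
    }

def compare(current, alternative):
    """Difference (alternative - current) for each indicator."""
    a, b = scenario_stats(current), scenario_stats(alternative)
    return {k: b[k] - a[k] for k in a}

# Two hypothetical workshop layouts replayed with the same task load:
layout_v1 = [("operator1", 120.0, 14), ("operator2", 95.0, 10)]
layout_v2 = [("operator1", 80.0, 14), ("operator2", 70.0, 10)]
print(compare(layout_v1, layout_v2))  # {'actors': 0, 'distance_m': -65.0, 'tasks': 0}
```

A negative `distance_m` difference would indicate that the alternative layout shortens the distances travelled for the same tasks.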
Figure 2(a) shows the tabletop with the simulator on which various objects equipped
with RFID tags are placed.
Fig. 2. (a) Principle of use of the simulator on tabletop with tangible objects placed on the table
and handled manually by users, (b) simulator screen page with available functionalities
The simulator combines tangible items (the objects) and virtual items (the tabletop
screen). The objects represent the avatars (characters) and the equipment (e.g. desks or
machines). When a tangible object is moved by a user, the display adapts to show the
distance travelled under this object, thanks to the scale integrated into the plan. In
addition, while an object is moving, traces in the form of dots and/or dashes are
displayed under it.
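The distance computation described above can be sketched as follows, assuming (hypothetically) that the tabletop reports object positions in table centimetres and that the plan uses a 1:100 scale; all names and values are illustrative, not taken from the actual tool.

```python
import math

PLAN_SCALE = 100  # assumed: 1 cm on the tabletop represents 100 cm in the workshop

def travelled_distance(trace, scale=PLAN_SCALE):
    """Total real-world distance in metres for a list of (x, y) positions in table cm."""
    total_cm = 0.0
    for (x0, y0), (x1, y1) in zip(trace, trace[1:]):
        total_cm += math.hypot(x1 - x0, y1 - y0)
    return total_cm * scale / 100.0  # tabletop cm -> real-world cm -> metres

# An avatar moved 30 cm in a straight line on the table:
print(travelled_distance([(0, 0), (30, 0)]))  # 30.0 (metres at 1:100 scale)
```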
Figure 2(b) shows the simulator menu, consisting of the following modes:
348 N. Vispi et al.
• Config: allows configuring objects equipped with an RFID tag so that they are recog-
nized in the simulation tool. This mode is composed of two parts (see Fig. 3), allowing
any object equipped with an RFID tag to be configured when it is placed on one of
these parts.
• Save: allows saving the simulation in order to replay it later or export it to a USB
storage medium.
• Open: allows opening a simulation that has been saved.
• Reset: allows resetting the simulation tool by deleting all plans, backup folders, images,
instructions, etc.
• Load: allows importing graphic elements (images, plans) into the simulation tool
library; this function also allows importing backups (from USB storage media).
Note that other tangible objects are predefined to interact with the simulation tool, for
example: to access the main menu, to select and delete virtual elements (for example to
delete a storage tank), to move virtual elements (for example to move a drilling station),
or to manage a simulation (play, stop, fast forward, save, etc.).
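The dispatch of such predefined control objects might look like the following sketch; the tag identifiers and command names are hypothetical, as the paper does not specify them.

```python
# Hypothetical mapping from control-tag identifiers to simulator commands.
# Tags not in the mapping (avatars, equipment) are handled elsewhere.

COMMANDS = {
    "tag:menu":   lambda log: log.append("open main menu"),
    "tag:delete": lambda log: log.append("delete selected virtual element"),
    "tag:move":   lambda log: log.append("move selected virtual element"),
    "tag:play":   lambda log: log.append("play recorded simulation"),
}

def on_tag_detected(tag_id, sim_log):
    """Called when the tabletop antenna grid reads a tag; returns True if it was a control tag."""
    action = COMMANDS.get(tag_id)
    if action is None:
        return False  # an avatar or equipment object, not a control object
    action(sim_log)
    return True

log = []
on_tag_detected("tag:play", log)
print(log)  # ['play recorded simulation']
```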
The design possibilities are therefore very extensive and make the simulation tool
adaptable to a very large number of situations, such as the organizational situations
implemented in Industry 4.0, as illustrated in the following section.
In this section, we present the deployment of the simulator, by including it into a regional
scheme in the Hauts-de-France region, France. Since 2015, Aract Hauts-de-France and
the Regional Agency for Innovation and Development have been working together as
part of regional schemes concerning Industries of the future. Indeed, considering the gap
with other countries such as Germany or South Korea, these schemes aim at financing
the modernization of the production apparatus of small and medium-sized enterprises as
well as intermediate-sized establishments. For this purpose, about a hundred consultants
specializing in different areas (robotics, information systems, lean management, etc.)
have been referenced.
This is a diffusionist model (in the sense of [27]): a technological "push" that treats
modernization as an end in itself. But these technical and technological developments
call for more global transformations, as they necessarily modify the tasks, the
way of working, the organization and the required skills. Moreover, depending on their
culture and degree of acculturation, companies do not apprehend, conduct or prepare
these transitions in the same way. These are all pitfalls that justify the need to anticipate
and foresee these changes in their technical and technological as well as organizational,
social and human dimensions.
In this context, using simulation tools appears as a way to support these
transitions in their technical, organizational and human dimensions. This is how the
simulator deployment presented here was envisaged; it should be noted that this
deployment follows several earlier experiments.
Concerning the methodology, the deployment first required a selection of consultants
interested in testing the simulator and integrating simulation approaches in their normal
way of support. The selection has been made among those referenced by the regional
agency, through a call for expression.
The second stage consisted in training the consultants in the use of the simulator
and the simulation approach. We deliberately chose consultants from different fields
The simulation approach was deployed on both retrospective and projective levels.
As an example, company #3, involved in the construction of a new building, replayed
an entire month of production on the simulator. This made it possible to identify structural
elements and to enrich the specifications of their future building. It also led to
organizational decisions that have since been implemented in the current situation. As
another example, in company #1, employees modelled a new preparation process that
was played on the simulator before being tested in real conditions.
Regarding results, beyond the impacts on the projects themselves, effects have been
observed on three levels: the company level, the human level and the regional scheme itself.
On the company level, simulation has a structuring function, even in small companies
often bound by production objectives without dedicated time for project management.
Still at the company level, simulation favours experimentation because it offers a secure
and reassuring framework (the right to make mistakes, the possibility to restart, etc.). It
encourages stakeholders to carry out full-scale experimentation of the methods tested on
the simulator.
On the human level, this verbatim from the head of company #1 summarizes the
effects we observed: "working on the simulator allowed very discreet personalities to
open up. The playful side engages everyone's participation." In addition to the playful
aspect, the use of tangible objects refers the employees to their experiences and to real
work situations, facilitating representation. It allows them to express their point of view
more easily. We also noticed the decisive role that simulation plays in developing working
collectives. The case of company #3 at the beginning of the intervention highlighted
inter-individual tensions between employees and management. Firstly, the sequences
around the simulator united employees and managers around a common project. Secondly,
simulation focuses discussions on real work, going beyond individual matters. It
makes it possible for employees among themselves (as well as for employees and managers) to
share common issues. Finally, it brought to light everyone's activity, so that each person can
measure their interdependence and therefore the need for dialogue to identify acceptable
compromises. Eventually, the effects go beyond the project itself to feed professional
dialogue and contribute to the development of collectives.
On the regional scheme level, previously described as a diffusionist model, we were
able to observe, through the process deployed (training of consultants, technical support
and learning seminars), the emergence of a collective dynamic that highlights the pitfalls
of the diffusionist approach.
Thus, the training and technical support of the consultants contributed to developing
their skills regarding the social and organizational dimensions that make Industry 4.0 projects
succeed or fail. By bringing together the entire chain of actors (financiers, company
actors and consultants), the seminars became a space and time for expression that did not exist
before. These moments nourished collective learning and led to useful feedback for public
action. In particular, they called the current diffusionist model into question.
In summary, deploying simulation methods through this simulator improved Industry
4.0 projects through better consideration of their organizational, social and human dimensions.
There are also collateral effects that go beyond the projects themselves: the encouragement
of experimentation, and the development of collective work and of (horizontal and vertical)
working relationships. All of these dimensions contribute to overall performance and quality of
life at work [1].
Compared to other tools, the interactive tabletop allows several actors to interact at the
same time, whereas virtual reality generally immerses one person at a time. Although
immersing several actors is technically possible, it still does not allow multiple simultaneous
interactions.
Still in comparison to virtual reality, the tabletop is a non-expert tool: it does not require
any computer development skills, so it can be mobilized quickly and enhances
dissemination capacities. Finally, the tabletop allows the visual representation of processes,
organizations and organization charts, while virtual reality is limited to spatial environments
(buildings, workstations).
Furthermore, we observed better appropriation of the new environments and
organizations among the employees who participated in simulation sequences. This
suggests perspectives for using the tabletop in professional training processes, particularly
in work-study programmes, apprenticeships or on-the-job training. The tabletop can be
considered:
• In a preparatory way, before a situation is set up; in particular when these situations
raise questions of cost (e.g. operations on products with high added value or high
profitability requirements), safety (e.g. crossing of vehicle/pedestrian flows, access to
dangerous areas) or seasonality (e.g. manufacture of products specific to the holiday
season in the agri-food industry).
• For reflective thinking after a situation has been experienced. The tabletop helps trainees to
verbalize their activity, to understand the choices they made, to self-evaluate, and so
on. Such possibilities allow transforming the working experience into competences.
6 Conclusion
The simulation and appropriation of future working conditions respond to an important need
in many areas. Coupling interactive tabletops and tangible objects, as presented in this
paper, provides new solutions to meet this need. By combining digital and analogue
technologies, the proposed simulator has allowed a large deployment of simulation
by making it accessible at low cost to VSEs and SMEs. Using tangible objects has increased
the capacity for decision support through the collection of digital data.
Using such a simulator is a promising way to support companies and enterprises
in their transformation towards Industry 4.0. This evolution will not only focus
on the dematerialization of practices but also on the development of new human
organizations. For future studies, three perspectives are identified. The first concerns the
proposal of a global transformation methodology based on the fundamental concepts of
Industry 4.0; this methodology will help to master and stabilize the transformation. The
second perspective concerns the use of simulation to identify generic characteristics of
the so-called operator 4.0 or smart operator 4.0 [11, 28]. The literature identifies a list of
possible human-factor barriers that may prevent successful digital transformation (e.g.
the lack of standardized instructions for using digital tools, the lack of training
tools, …) [23, 26]. Finally, it would be interesting to provide enterprises and companies
with a set of simulation-based solutions to overcome these barriers.
Acknowledgements. The authors would like to thank Marie-Christine Lenain, who was at the
initiative of the simulator. They also thank Julian Alvarez, who worked on the Serious Game
aspects of the simulator, as well as Laurent Van Belleghem for his active participation in the
Integratic seminars. Finally, they particularly thank ANR, Anact, Lionel Buissière of
Hauts-de-France Innovation Développement, and all the consultants and companies involved in
the action. This paper is dedicated to our colleague and friend Prof. Christian Tahon.
References
1. Anact: Agir sur la qualité de vie au travail, coordonné par Pelletier, J. (2017)
2. Angelopoulou, A., Mykoniatis, K., Boyapati, N.R.: Industry 4.0: the use of simulation for
human reliability assessment. Procedia Manuf. 42, 296–301 (2020)
3. Barcellini, F., Van Belleghem, L., Daniellou, F.: Design projects as opportunities for the
development of activities. In: Falzon, P. (ed.) Constructive Ergonomics, pp. 150–163.
Taylor and Francis, New York (2014)
4. Barcellini, F., Van Belleghem, L.: Organizational simulation: issues for ergonomics and for
teaching of ergonomics’ action. In: Proceedings of 11th International Symposium on Human
Factors in Organizational Design and Management (ODAM), pp. 885–890 (2014)
5. Bellifemine, F.L., Caire, G., Greenwood, D.: Developing Multi-Agent Systems with JADE,
Wiley (2007)
6. Blackwell, A.F., Fitzmaurice, G., Holmquist, L.E., Ishii, H., Ullmer, B.: Tangible user inter-
faces in context and theory. In: CHI 2007 Extended Abstracts on Human Factors in Computing
Systems, pp. 2817–2820. ACM, New York (2007)
7. Bobillier Chaumon, M-E., Rouat, J., Laneyrie, E., Cuvillier, B.: De l’activité DE simulation
à l’activité EN simulation : simuler pour stimuler, Activités, 15–1 (2018)
8. Bouabid, A., Lepreux, S., Kolski, C.: Design and evaluation of distributed user interfaces
between tangible tabletops. Univ. Access Inf. Soc. 18(4), 801–819 (2019)
9. Djaouti, D., Alvarez, J., Jessel, J.P.: Classifying serious games: the G/P/S model. In: Hand-
book of Research on Improving Learning and Motivation Through Educational Games:
Multidisciplinary Approaches, pp. 118–136. IGI Global (2011)
10. Fantini, P., Pinzone, M., Taisch, M.: Placing the operator at the centre of Industry 4.0 design:
modelling and assessing human activities within cyber-physical systems. Comput. Ind. Eng.
139 (2020)
11. Gazzaneo, L., Padovano, A., Umbrello, S.: Designing smart operator 4.0 for human values:
a value sensitive design approach. Procedia Manuf. 42, 219–226 (2020)
12. Guideline Industrie 4.0: Guiding principles for the implementation of Industrie 4.0 in small
and medium sized businesses (2015). https://industrie40.vdma.org/en/viewer/-/v2article/ren
der/15540546
13. Havrez, C., Lepreux, S., Lebrun, Y., Haudegond, S., Ethuin, P., Kolski, C.: A design model
for tangible interaction: case study in waste sorting. In: IFAC/IFIP/IFORS/IEA Symposium
on Analysis, Design and Evaluation of Human-Machine Systems, pp. 373–378, Kyoto, Japan
(2016)
14. Ishii, H., Ullmer, B.: Tangible Bits: towards seamless interfaces between people, bits and
atoms. In: CHI 1997 Conference Proceedings, Atlanta, Georgia, USA, March 22–27. ACM
(1997)
15. Kinzel, H.: Industry 4.0 – Where does this leave the human factor? J. Urban Culture Res. 15,
70–83 (2017)
16. Kubicki, S., Lebrun, Y., Lepreux, S., Adam, E., Kolski, C., Mandiau, R.: Simulation in contexts
involving an interactive table and tangible objects. Simul. Model. Pract. Theory 31, 116–131
(2013)
17. Lebrun, Y., Adam, E., Kubicki, S., Mandiau, R.: A multi-agent system approach for interactive
table using RFID. In: 8th International Conference on Practical Applications of Agents and
Multi-Agent Systems (PAAMS 2010), pp. 125–134. Springer (2010)
18. Lebrun, Y., Adam, E., Mandiau, R., Kolski, C.: A model for managing interactions between
tangible and virtual agents on an RFID interactive tabletop: case study in traffic simulation.
J. Comput. Syst. Sci. 81, 585–598 (2015)
19. Lebrun, Y., Lepreux, S., Haudegond, S., Kolski, C., Mandiau, R.: Management of distributed
rfid surfaces: a cooking assistant for ambient computing in kitchen. In: 5th International
Conference on Ambient Systems, Networks and Technologies, ANT-2014 (June 2–5, 2014,
Hasselt, Belgium), Procedia Computer Science 32, pp. 21–28, Elsevier (2014)
20. Lepreux, S., Alvarez, J., Havrez, C., Lebrun, Y., Ethuin, P., Kolski, C.: Jeu sérieux pour le tri
des déchets sur table interactive avec objets tangibles : mise en œuvre et évaluation. Ergo’IA
’18, Proceedings of the 16th Ergo’IA "Ergonomie et Informatique Avancée" Conference (3–5
October), ACM, Bidart, France (2018)
21. Manches, A, O’Malley, C., Benford, S.: Physical manipulation: evaluating the potential
for tangible designs. In: Proceedings of the 3rd International Conference on Tangible and
Embedded Interaction 2009, Cambridge, UK, February 16–18, ACM, pp. 77–84 (2009)
22. Merrad, W., Habib, L., Héloir, A., Kolski, C., Krüger, A.: Tangible tabletops and dual reality
for crisis management: case study with mobile robots and dynamic tangible objects. In: ANT
2019 The 10th International Conference on Ambient Systems, Networks and Technologies
(April 29–May 2, 2019), Leuven, Belgium (2019)
23. Mikulic, I., Stefanic, A.: The adoption of modern technology specific to Industry 4.0 by human
factor. In: Proceedings of the 29th DAAAM International Symposium, pp. 941–946, DAAAM
International, Vienna, Austria (2018)
24. Müller-Tomfelde, C.: Tabletops - Horizontal Interactive Displays, Springer (2010)
25. Pastré, P.: Apprendre par la simulation - De l’analyse du travail aux apprentissages
professionnels. Octarès Editions, Toulouse (2005)
26. Polet, P., Vanderhaegen, F., Wieringa, P.: Theory of safety related violation of system barriers.
Cogn. Tech. Work 4(3), 171–179 (2002)
27. Rogers, E.M.: Diffusion of Innovations, 3rd edn. Free Press, New York (1983)
28. Romero, D., Noran, O., Stahre, J., Bernus, P., Fast-Berglund, Å.: Towards a human-centred
reference architecture for next generation balanced automation systems: human-automation
symbiosis. In: IFIP Advance Information Communication Technology (2015)
29. Sætren, G.B., Hogenboom, S., Laumann, K.: A study of a technological development process:
human factors—the forgotten factors? Cogn. Tech. Work 18, 595–611 (2016)
30. Van Belleghem, L.: Simulation organisationnelle: innovation ergonomique pour innovation
sociale. In: Dessaigne, M-F., Pueyo, V., Béguin, P. (eds.) Innovation et Travail: Sens et valeurs
du changement, Actes du 47ème Congrès de la SELF, 5–7 September, Lyon (2012)
Multi-agent Simulation of Occupant Behaviour
Impact on Building Energy Consumption
Vandoeuvre, France
1 Introduction
The world is accelerating towards a severe energy crisis due to high energy demand
compared to available energy resources [1]. The energy crisis and sustainability have become
increasingly crucial topics in academia and industry. With the rising demand for
energy and future concerns about scarce energy resources, the need for energy efficiency
is growing steadily [2]. The energy challenge is one of the most significant issues of
today's society, and governments worldwide must adopt strategic policies to confront it.
According to the International Energy Agency, residential and commercial buildings
consume more than 40% of total primary energy and release 33% of the world's carbon
dioxide emissions [3, 4]. Two-thirds of the total energy used by buildings is consumed
within households for heating, cooling and lighting. Buildings are the sector with the
greatest potential and the lowest cost for carbon dioxide reduction, and have
been identified as a key potential contributor to reducing energy consumption and
greenhouse gas emissions. To be able to optimize building energy performance, researchers
need a way to evaluate it.
Traditionally, building energy consumption is analysed with a simulation tool, which is
an effective approach for estimating building performance. Almost all simulation
tools, however, treat building features as static when determining building energy perfor-
mance [5]. For example, the occupants, plugged-in electric equipment, thermostat set-point and
lighting in an office follow static patterns, without considering the dynamic interaction
between the building systems, the occupants and environmental factors (weather conditions)
through which occupants mitigate their discomfort.
Delzendeh et al. [6] show that occupants influence building energy directly and indi-
rectly by manipulating light, equipment, thermostat, shade, domestic hot water (DHW)
and operable windows to maintain their comfort as depicted in Fig. 1.
presents the application case; finally, the conclusions section explains the advantages
and limitations of the proposed approach.
In recent decades, researchers have emphasized the influence of occupant behaviour on build-
ing energy consumption. For example, Gilani [8] reports that the electricity consumption
of an office deviated by 30% from what was simulated. Experimentally measured build-
ing energy consumption demonstrated a large variation, by a factor of two to five,
between buildings with the same function located in the same climate region.
Gilani and O'Brien performed an experimental investigation of one university residence room
over two academic years with different occupancy and observed a 20% energy consump-
tion variation between the two academic years. Hong et al. [9] have shown that the impact
of occupant behaviour on building energy use can reach up to 300%. Hani [10] studied the
energy auditing of an office, and the analysis revealed more energy consumption during non-
working hours than during working hours. The use of daylighting systems or intelligent lighting
control reduces the building's electricity use for artificial lighting by about 15% [11].
Commonly, building simulation programs assume fixed schedules for typical days to represent
the dynamic occupant behaviour (occupancy, lighting) driving energy use in buildings.
In recent years, researchers have incorporated the impact of occupants' behaviour into
simulation tools to simulate building performance. For example, Rijal et al. [12] modelled
the probability of window opening in terms of operative indoor/outdoor temperature.
Jacob et al. [13, 14] integrated multi-agent-based occupant behaviour modelling into
a residential building simulation tool. The dynamic occupant behaviour models were
developed as a functional mock-up interface (FMI) for co-simulation with EnergyPlus
[15]. Mengda et al. [16] developed a dynamic occupant behaviour modelling framework
to improve building energy simulation; an agent-based occupant behaviour model is
used for validation. However, these researchers focused on group-level behaviour models,
mainly used for the implementation process rather than for performance analysis, and then
proposed requirements for future occupant behaviour models to be used in the design
of energy simulation tools. This paper describes individual-level occupant behaviour
models developed independently and coupled to the building simulation.
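Window-opening models of the kind attributed to Rijal et al. [12] are typically logistic functions of indoor and outdoor temperature. The sketch below illustrates this form; the coefficients are placeholders, not the fitted values from that work.

```python
import math

# Illustrative logistic window-opening model: the probability of opening rises
# with indoor operative and outdoor temperatures. Coefficients are assumptions.

def p_window_open(t_indoor, t_outdoor, b0=-6.4, b_in=0.17, b_out=0.17):
    """Probability that an occupant opens the window, as a logistic function."""
    logit = b0 + b_in * t_indoor + b_out * t_outdoor
    return 1.0 / (1.0 + math.exp(-logit))

# Warmer conditions make opening more likely:
print(p_window_open(22, 10) < p_window_open(28, 25))  # True
```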
Buildings have been portrayed as complex systems involving several kinds of dynam-
ically interacting components that are nonlinear, dynamic and complex. The aforemen-
tioned driving parameters can be modelled and simulated with dynamic thermal
building simulation tools. However, due to the complexity and stochastic nature of this
type of model, it is hard to fully capture the influence of occupant behaviour
through dynamic energy performance simulation [15, 16]. Therefore, a way is needed to
couple occupant behaviour with the building simulation.
Hong et al. [17] indicate that co-simulation allows a more realistic and robust rep-
resentation of occupant behaviour. Co-simulation aims to couple two or more simulation tools,
offering a data-exchange environment between subsystems. It offers a flex-
ible solution that allows considering network behaviour and the physical energy system
state at the same time, and also enables large-scale system assessment
358 H. T. Ebuy et al.
• Areas, which represent the various places where occupants and objects are located. They
hold facts corresponding to the environment variables (temperature, light), which come
from the building simulation.
• Agents, which correspond to the human occupants. They hold beliefs about their goals
and the current environment variables (temperature, humidity, light level, etc.). They
execute workframes that contain activities to achieve their goals.
• Objects, which model the building actuators such as lights, thermostats, HVAC, windows,
etc. Their states are sent to the building simulation.
• At each simulation time-step of the energy simulation software, variables are received
and used to set the facts of the area.
Multi-agent Simulation of Occupant Behaviour Impact 359
• Occupants can detect these facts and react by triggering a work frame; in these work frames, they express their need for a change of a building device.
• Devices perceive and react to the needs of the occupants by changing their state.
• The states of the devices and some simulation parameters, such as the number of occupants in an area, are collected and sent to the building simulation software.
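The coupling steps listed above can be sketched as follows. All names (Area, Occupant, Device) and the 300 lx threshold are illustrative stand-ins, not the actual BRAHMS constructs:

```python
from dataclasses import dataclass, field

@dataclass
class Area:
    facts: dict = field(default_factory=dict)   # environment variables from the building simulation

@dataclass
class Device:
    name: str
    state: bool = False                         # e.g. light off/on

@dataclass
class Occupant:
    beliefs: dict = field(default_factory=dict)
    def perceive_and_react(self, area, light):
        # work frame: if the perceived light level is below a threshold, express a need
        if area.facts.get("illuminance", 0.0) < 300 and not self.beliefs.get("needs_light"):
            self.beliefs["needs_light"] = True
        if self.beliefs.get("needs_light"):
            light.state = True                  # the device reacts to the occupant's need

def coupling_step(inputs, area, occupant, light):
    area.facts.update(inputs)                   # 1) set area facts from the energy simulation
    occupant.perceive_and_react(area, light)    # 2) occupants trigger work frames, devices react
    return {"light_on": int(light.state), "occupancy": 1}  # 3) states sent back

area, occ, lamp = Area(), Occupant(), Device("office_light")
out = coupling_step({"illuminance": 120.0, "temperature": 21.5}, area, occ, lamp)
```

One such step would run at every time step of the energy simulation, with the returned dictionary fed back to the building model.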
EnergyPlus is a building performance simulation program that combines the best capabilities and features of earlier tools. Comprising completely new code written in Fortran 90, it is a broadly used building simulation tool that allows the analysis of energy flows throughout the building and of thermal loads [21] on a sub-hourly basis [22]. It is important to note that EnergyPlus has no visual interface for viewing and editing the building; where such functions are needed, third-party software tools must be used.
The co-simulation environment couples the BRAHMS multi-agent occupant behaviour model with EnergyPlus. A Java application has been developed for this purpose; it communicates with the BRAHMS virtual machine through the Java application programming interface (JAPI) of the BRAHMS platform. Using JAPI, the co-simulation environment is able to start the agents and to alter the fact and belief bases. On the other side, it communicates with EnergyPlus through the Building Controls Virtual Test Bed (BCVTB) interface, exchanging packets formatted according to the Ptolemy II standard over a TCP socket.
The information exchange between BRAHMS and EnergyPlus is represented in Fig. 2. EnergyPlus simulates at zone level and transfers output environmental features such as illuminance to BRAHMS as inputs to the occupant behaviour model. From BRAHMS, the occupant behaviour model's outputs (which include the occupancy schedule and the building equipment states) are sent back to EnergyPlus and used for the next time step. The whole information exchange is transmitted through the BCVTB. This process is repeated until the simulation end time is reached.
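A minimal sketch of such a per-time-step exchange is shown below. The whitespace-separated packet layout is purely illustrative and is not the actual BCVTB/Ptolemy II wire format:

```python
# Illustrative packet layout (NOT the real BCVTB format): one line per time step,
# holding the simulation time followed by double-valued variables.
def encode_packet(time_s, values):
    fields = [f"{time_s:.6g}"] + [f"{v:.6g}" for v in values]
    return (" ".join(fields) + "\n").encode()

def decode_packet(raw):
    parts = raw.decode().split()
    return float(parts[0]), [float(p) for p in parts[1:]]

def exchange_step(raw_in, behaviour_model):
    # EnergyPlus -> occupant model: environmental features at the current step
    t, env = decode_packet(raw_in)
    # occupant model -> EnergyPlus: device states / occupancy for the next step
    outputs = behaviour_model(t, env)
    return encode_packet(t, outputs)

# toy behaviour model: light on when illuminance (first variable) is below 300 lx
reply = exchange_step(encode_packet(600.0, [120.0, 21.5]),
                      lambda t, env: [1.0 if env[0] < 300 else 0.0])
```

In the real setup these bytes would travel over the TCP socket that the BCVTB opens between the two simulators.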
4 Application
To test the applicability and significance of the developed platform, a simple behaviour
scenario and case study are conducted.
• Each office has a rectangular shape with a floor area of 206.3 m², and one external wall with a window facing north-east.
• The offices are occupied by 1 to 3 persons in this study, depending on the purpose of the office. A 3D view is shown in Fig. 4.
• The simulation coupling is conducted at room level. The simulation period is a whole year (January to December 2019).
• The weather data used for this simulation are extracted from an online source.
• The time step for this simulation is set to 10 min.
Temperature was controlled by a thermostat driving an HVAC (heating, ventilation and air conditioning) system, maintaining the temperature in the range of 22–23 °C.
The activity executed when the agent is in the office room is a compound activity: it contains several repeating work frames competing for execution. The first simulates the work task that the occupant performs (with some delay), and the second turns the lighting on or off. The two work frames are triggered by the value of the room light level and also by the beliefs they set, to avoid repeated execution. Thus, when the lighting of the room falls below the considered threshold, the agent adds a note to its belief base indicating that it needs light, and this triggers an action on the lighting device to switch it on. The result of this action in the BRAHMS simulation environment is shown in Fig. 5. In the timeline view of agents, the location of the agent is shown at the top, while its work frame is shown at the bottom of the picture (in blue), with activities in green (and orange for compound activities).
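A belief-guarded light-switching rule of this kind can be sketched as follows (the 300 lx threshold and the belief/action names are assumptions, not values from the paper):

```python
THRESHOLD_LX = 300  # assumed lighting threshold

def lighting_workframe(agent_beliefs, room_lux):
    """Fires at most once per condition change, guarded by a belief the frame itself sets."""
    if room_lux < THRESHOLD_LX and not agent_beliefs.get("needs_light", False):
        agent_beliefs["needs_light"] = True
        return "switch_light_on"        # action towards the lighting device
    if room_lux >= THRESHOLD_LX and agent_beliefs.get("needs_light", False):
        agent_beliefs["needs_light"] = False
        return "switch_light_off"
    return None                         # belief unchanged: no repeated execution

beliefs = {}
actions = [lighting_workframe(beliefs, lux) for lux in (120, 110, 450, 500)]
```

The belief acts as a latch: the action fires on the transition across the threshold rather than on every time step the condition holds.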
The occupant executes a main work frame that corresponds to its daily routine related to lighting: arrival and departure. All these periods are implemented using activities. Using stochastic values for the duration of these activities creates randomness in the arrival, lunch and departure times. A clock object is added to the multi-agent system to notify changes in the period of the day (morning, afternoon, evening).
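The stochastic day routine can be sketched as below; the means and standard deviations of the activity durations are illustrative assumptions, not values from the study:

```python
import random

def day_routine(rng):
    """Sample arrival, lunch and departure times (in hours) from stochastic durations."""
    arrival   = rng.gauss(8.5, 0.25)             # arrival around 08:30
    lunch     = arrival + rng.gauss(3.75, 0.2)   # morning work period
    back      = lunch + rng.gauss(1.0, 0.1)      # lunch break
    departure = back + rng.gauss(4.5, 0.3)       # afternoon work period
    return arrival, lunch, back, departure

a, l, back, d = day_routine(random.Random(42))   # seeded for reproducibility
```

Drawing each duration independently per simulated day is what produces the day-to-day variability in the occupancy schedule.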
The simulation was run over a year with a 10-min time step. The whole-year simulation result is difficult to represent in a single picture, hence a sample is extracted to show the relation between the various features, as represented in Fig. 6. The figure shows how the occupant reacts to changes in indoor lighting, which is the determinant factor of the lighting device status, and how this influences the energy consumption in an office building.
The occupancy schedule is shown as the white area delimited by a blue broken line, and the occupant behaviour schedule is represented by the green area delimited by a small green broken line, which varies depending on the value of the illuminance indicated by the purple line. For example, when the agent leaves the office for lunch or believes that the illuminance is above the threshold (9:00–11:00 and 15:00–17:00), it tries to switch off the light; otherwise, the opposite action happens. The lighting and occupancy values are represented as Boolean numbers (0 for off, 1 for on).
The energy consumption obtained with co-simulation, which considers dynamic occupant behaviour, is much lower than that of a simulation using deterministic or predefined occupancy schedules, as presented in Fig. 7. Energy savings of 30 to 40% can be expected by integrating dynamic occupant behaviour.
The key concept is to employ three types of entities: areas, occupants and devices. The latter two implement their behaviours using work frames that compete for execution. The capacity to embed work frames recursively within activities seems very promising for describing occupant behaviours simply.
The case study shows that this modelling approach enables the creation of a simple case. The results confirm those described in the literature: occupant behaviour can have a large impact on energy consumption, even in a simple case.
This case study has also revealed some limitations of the BRAHMS multi-agent environment. The principal one is the inability to run several work frames in parallel. This is realistic for cognition-intensive activities (e.g. reading), but when modelling building occupants, it is very important to process perceptions continuously, in parallel with work frame execution. Another limitation is the limited expressivity of the BRAHMS language (e.g. there is currently no support for more than two operands or for nested arithmetic expressions). The maintenance problems of the BRAHMS software are also a concern for users.
We therefore plan to develop a new simulation package specifically adapted to building occupants, one that would keep the advantages of BRAHMS while overcoming its limitations.
Future work consists in performing more complex simulations to test the scalability of the modelling approach. An open issue is to create a robust reference model for inter-occupant interactions (e.g. how occupants take into account the actions of neighbouring occupants).
Acknowledgments. The authors acknowledge the financial support of the CPER FORBOIS 2 2016–2020 project. They also acknowledge Campus France and the Ethiopian Ministry of Science and Higher Education for their financial support.
References
1. De Silva, M. N. K., Sandanayake, Y.G.: Building energy consumption factors: a literature
review and future research agenda, Digital library University of Moratuwa, Sri Lanka, pp. 90–
99, 2012. https://dl.lib.mrt.ac.lk/handle/123/12050
2. Jia, M., Srinivasan, R.S., Ries, R., Bharathy, G.: Exploring the validity of occupant behavior
model for improving office building energy simulation. In: 2018 Winter Simulation Con-
ference (WSC), Gothenburg, Sweden, 2018, pp. 3953–3964. https://doi.org/10.1109/WSC.
2018.8632278
3. Paone, A., Bacher, J.P.: The impact of building occupant behavior on energy efficiency and
methods to influence it: a review of the state of the art, Energies, 11, 4 (2018)
4. Pérez-Lombard, L., Ortiz, J., Pout, C.: A review on buildings energy consumption information.
Energy Build. 40(3), 394–398 (2008)
5. Wang, C., Yan, D., Ren, X.: Modeling individual’s light switching behavior to understand
lighting energy use of office building. Energy Procedia 88, 781–787 (2016)
6. Delzendeh, E., Wu, S., Lee, A., Zhou, Y.: The impact of occupants’ behaviours on building
energy analysis: A research review. Renew. Sustain. Energy Rev. 80, 1061–1071 (2017)
7. Schaumann, D., Putievsky, N., Sopher, H., Yahav, J., Kalay, Y.E.: Simulating multi-agent
narratives for pre-occupancy evaluation of architectural designs. Autom. Constr. 106, 102896
(2018)
8. Gilani, S., O’Brien, W.: Best Practices Guidebook on Advanced Occupant Modelling.
Carleton University, Ottawa, Canada (2018)
9. Turner, W., Hong, T.: A technical framework to describe occupant behavior for building
energy simulations, Lawrence Berkeley National Laboratory (2013)
10. Sait, H.H.: Auditing and analysis of energy consumption of an educational building in hot
and humid area. Energy Convers. Manag. 66, 143–152 (2013)
11. Piotrowska, E., Borchert, A.: Energy consumption of buildings depends on the daylight. In:
E3S Web Conference, vol. 14 (2017)
12. Rijal, H.B., Humphreys, M.A., Nicol, J.F.: Development of a window opening algorithm
based on adaptive thermal comfort to predict occupant behavior in Japanese dwellings. Japan
Archit. Rev. 1(3), 310–321 (2018)
13. Chapman, J., Siebers, P.O., Robinson, D.: On the multi-agent stochastic simulation of
occupants in buildings. J. Build. Perform. Simul. 11(5), 604–621 (2018)
14. Chapman, J., Siebers, P., Robinson, D.: Coupling multi-agent stochastic simulation of occu-
pants with building simulation, Envir. Phys. Des. (ePAD), The University of Nottingham, no.
2004 (2011)
15. Li, R., Wei, F., Zhao, Y., Zeiler, W.: Implementing occupant behaviour in the simulation of
building energy performance and energy flexibility: development of co-simulation framework
and case study. In: Proceedings 15th IBPSA Conference, October 2017
16. Jia, M., Srinivasan, R.S., Ries, R., Bharathy, G.: A framework of occupant behavior modeling
and data sensing for improving building energy simulation. Simul. Ser. 50(7), 110–117 (2018)
17. Chen, Y., Liang, X., Hong, T.: Simulation and visualization of energy- related occupant
behavior in office buildings. Build. Simulations 10, 785–798 (2017). https://doi.org/10.1007/
s12273-017-0355-2
18. Raad, A., Reinbold, V., Delinchant, B., Wurtz, F.: FMU software component orchestration
strategies for co-simulation of building energy systems. In: 3rd International Conference
Technology Advance Electronic Electron Computer Engineering TAEECE 2015, pp. 7–11
(2015)
19. Kashif, A., Ploix, S., Dugdale, J., Le, X.H.B.: Simulating the dynamics of occupant behaviour
for power management in residential buildings. Energy Build. 56, 85–93 (2013)
20. Lez-Briones, A.G., De La Prieta, F., Mohamad, M.S., Omatu, S., Corchado, J.M.: Multi-agent
systems applications in energy optimization problems: A state-of-the-art review. Energies
11(8), 1–28 (2018)
21. Sousa, J.: Energy Simulation Software for Buildings: Review and Comparison. Technical Report, University of Porto. https://ceur-ws.org/Vol-923/paper08.pdf
22. Crawley, D.B., et al.: EnergyPlus: Creating a new-generation building energy simulation
program. Energy Build. 33(4), 319–331 (2001)
Intelligent Products and Smart
Processes
Intelligent Products through SOHOMA Prism
1 Introduction
In the framework of the IMS (Intelligent Manufacturing Systems) community, and as demonstrated by the Auto-ID Center developments in 2000–2003, the related concepts of the Internet of Things and the Intelligent Product sought to connect physical objects with digital information and even "intelligence" associated with the object. Indeed, substantial information distribution improves data accessibility and availability compared to centralized architectures. Product information may be allocated within fixed databases and/or within the product itself, leading to products with informational and/or decisional abilities, referred to as “Intelligent Products”. This concept has been discussed for more than two decades, beginning in 1997, when several authors independently presented the notion of product intelligence to describe an alternative, product-oriented way in which supply chains and automated manufacturing might work [1–4]. The proposed models described manufacturing and supply chain operations in which parts, products or orders (collections of products) would monitor and potentially influence their own progress through the industrial supply chain. The supply chain model based around product intelligence provided a conceptual focus for these developments. A simple search on Google Scholar1 for articles related to this concept reveals that more than 300 papers have been published on the subject since 2000. This number does not include articles related to “intelligent product” design (more related to the mechanical engineering field or to the smart “PSS” field). Many different definitions of “Intelligent Products”
1 www.scholar.google.fr / search: all in title: “intelligent products” OR “intelligent product”.
have been proposed; a comparison of the different types is provided in [5], and reviews also exist [6]. The objective of this paper is not to produce yet another review on Intelligent Products. In the framework of SOHOMA 2020, it rather aims at underlining: 1) how the SOHOMA community helped to develop and spread this concept, via scientific proposals and industrial applications, and 2) how the SOHOMA community is shaping the future of the intelligent product concept.
(Figure: annual number (left axis, 0–50) and share (right axis, 0–30%) of IP-related papers, 2011–2019.)
The keyword co-occurrence network is built via automatic processing with VOSviewer, a software tool dedicated to bibliometric networks. Other tools exist, such as R with its bibliometrix package, Pajek, Sci2 and Cytoscape; VOSviewer was selected for its user-friendly interface and its ability to easily process bibliographic databases3. This network is presented in Fig. 3, where it is clear that the clusters are strongly interrelated while still having independent threads. It is composed of nodes representing the different paper keywords and links representing relations between these nodes: the shorter the links, the more often the keywords are used in the same papers. It is then possible to define clusters of keywords based on their co-occurrence distance. In this paper, these clusters are interpreted as domains or categories in the IP research field. The network shows a total of 6 clusters:
The annual distribution of IP papers in each cluster is shown in Fig. 2. It can be seen that, from the beginning, papers have been published on the PDS and PLIM themes at the SOHOMA conferences. The PI theme within the framework of IP has been addressed later, from 2014. The most recent cluster is the DT one, for which papers have been proposed since 2018.
(Fig. 2: annual distribution of IP papers per cluster, 0–10 papers per year, 2011–2019.)
A first analysis concerns the type of the different clusters. Clusters 1 to 3 relate to research originating from the IP field, while clusters 4 and 5 relate more to the theory, methods and tools used to realize the IP concept. Cluster 6 is far from the others, meaning it is not strongly interlinked with them; it can thus be interpreted as a relatively new cluster and a new research field.
In the rest of this section, these clusters are detailed, focusing on the timeline of the
related research works. Only the clusters related to core SOHOMA research fields are
tackled. Each reference cited hereafter comes from SOHOMA conference proceedings.
Product-Driven Systems (PDS) are defined in [9] as a way to optimize the whole product lifecycle by dealing with products whose informational content is permanently bound to their virtual or material content, and which are thus able to influence decisions made about them, participating actively in the different control processes in which they are involved throughout their life cycle. The authors introduce examples highlighting how PDS can improve global product lifecycle performance in the design, production or use phase. Designing a PDS is a challenge involving three fundamental aspects: functions, architecture and interactions.
Intelligent products are the core of a PDS, and the functional features given to them are essential when designing the PDS. Research works in this area are also intertwined with Cluster 2 and will therefore be described
Fig. 3. Intelligent Product related keywords co-occurrence network obtained from SOHOMA proceedings
372 W. Derigent et al.
later. The architecture of a PDS is another facet of PDS design, and many research works address this issue. In particular, [10] presents the PROSIS framework, an evolution of PROSA where the Staff Holon is replaced by a Simulation Holon. In this isoarchic architecture, an Ambient Control Entity (ACE), located near the Resource Holon, allows each I_Holon (the informational part of a Product Holon) to call ambient services related to decision-making or communication. Guided by the idea that all distributed intelligent approaches mimic nature and human behaviour, several researchers have also explored bio-inspiration to build PDS. For example, [11] defines a control architecture based on the Viable System Model, called PDS-VSM, exhibiting interesting recursive properties, as natural structures like living organisms do.
Bio-inspiration has also been widely used for the third and last aspect of a PDS, i.e. interactions. Several bio-inspired approaches have been proposed in the past, such as Ant Colony Optimization and the Firefly Algorithm. [12] proposes a mechanism inspired by stigmergy, using the notion of volatile knowledge: mobile products can share information (knowledge) about their environment with each other, and the confidence in this knowledge decreases over time. This mechanism allows knowledge about perturbations to propagate, and the system to return to a normal situation, in a distributed way without a coordinator.
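A minimal sketch of such volatile knowledge, assuming an exponential confidence decay (the half-life, threshold and fact name are illustrative, not taken from [12]):

```python
class VolatileKnowledge:
    """Shared fact whose confidence decays exponentially with time (stigmergy-like)."""
    def __init__(self, fact, confidence=1.0, half_life_s=600.0):
        self.fact = fact
        self.confidence = confidence
        self.half_life_s = half_life_s

    def decayed(self, elapsed_s):
        # confidence halves every half_life_s seconds
        return self.confidence * 0.5 ** (elapsed_s / self.half_life_s)

    def still_trusted(self, elapsed_s, threshold=0.25):
        # once confidence falls below the threshold, the perturbation is "forgotten"
        return self.decayed(elapsed_s) >= threshold

k = VolatileKnowledge("conveyor_3_blocked")
fresh, stale = k.still_trusted(0.0), k.still_trusted(3600.0)
```

The decay is what lets the system return to normal without any coordinator: stale perturbation knowledge simply loses its influence.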
Negotiation approaches are investigated in [13], which proposes a negotiation heuristic based on the notion of critical ratio ((Due_date − current_date) / total shop time remaining). Products negotiate their schedules with other products by exchanging this value. The collaboration mechanism between Intelligent Products can also be formalized using Multi-Criteria Decision Making (MCDM) techniques; in [10, 14], the collaboration mechanism is based on AHP/ANP (Analytic Hierarchy Process / Analytic Network Process).
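The critical-ratio heuristic cited above can be sketched as follows (the numerical values are illustrative):

```python
def critical_ratio(due_date_h, current_h, remaining_shop_time_h):
    """(Due_date - current_date) / total remaining shop time; < 1 means behind schedule."""
    return (due_date_h - current_h) / remaining_shop_time_h

# two products negotiate priority by exchanging their critical ratios:
cr_a = critical_ratio(due_date_h=20.0, current_h=8.0, remaining_shop_time_h=6.0)  # slack
cr_b = critical_ratio(due_date_h=12.0, current_h=8.0, remaining_shop_time_h=5.0)  # behind
more_urgent = "B" if cr_b < cr_a else "A"
```

The lower the ratio, the less slack a product has relative to its remaining processing time, so the product with the smaller value wins the negotiation.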
PDS have pros and cons. They are highly agile and reactive, and they allow a potentially greater involvement of the end customer. However, they are not widely accepted in industry because of the lack of performance proofs, mainly due to their myopic behaviour, which leads them to make decisions that are locally efficient but inconsistent with the global objective. This nevertheless depends highly on the use case and working conditions: the performance of a distributed architecture can be as good as that of a centralized solution, as stated in [15]. To make PDS less myopic, [16] illustrates how the centralization of data via a discrete-event observer can help achieve better decisions. This notion of "discrete-event observer" is an online simulation model running in parallel with the observed manufacturing system. One can note that this notion is not far from the notion of Digital Twin that arose years later.
Another strategy to counteract myopia is to mix predictive scheduling and reactive control. A state of the art was produced in this respect [17], and several works of the SOHOMA community address this issue. [18] proposes a PDS employing a scheduling-rule-based evolutionary simulation-optimization strategy to dynamically select the most appropriate local decision policies to be used by the products when disturbances appear. The originality is to choose decision policies (i.e. dispatch rules) instead of fixed schedules, allowing more flexibility at the product level. Another problem is to know when to switch from predictive scheduling to reactive control, and when to switch back. Since the decision is not binary, a fuzzy model of the switching mechanism is used in [19, 20]. A final method proposed by the SOHOMA community in the framework of PDS is to bring some robustness into the scheduling strategies, thanks to operational research works dealing with scheduling under uncertainties [21].
Systems evolve, and so do PDS: their architecture, configuration and states can change over time. To react as correctly as possible, a fine understanding of the system evolution is needed. Giving these learning abilities to a PDS is a central challenge in a world where manufacturing systems will constantly change and be reconfigured thanks to Industry 4.0 technologies. It is also an opportunity to transform PDS into Predictive Manufacturing Systems as defined in [22]. [23] states that the synchronization of physical and information flows in a PDS implies that large data volumes may be exploited to create the knowledge and information necessary for product decision-making. The study illustrates how product information can be processed via neural networks to obtain shop-floor knowledge, i.e. a function computing the lead time of a product from the beginning of its manufacturing to the input queue of a bottleneck. This function can then be integrated into the product memory for further use. [24] considers that Intelligent Products should also be able to reuse their past experiences to enhance their decisional performance. Past experiences are stored, and Reinforcement Learning (here a Q-learning algorithm) is used to select the best action a product may take in each situation.
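A minimal tabular Q-learning sketch of this idea, with hypothetical situations, actions and rewards (the learning parameters are illustrative, not those of [24]):

```python
def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update over a dict-based Q table."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def best_action(Q, state, actions):
    """Greedy selection of the best-valued action in the given situation."""
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

Q = {}
actions = ("route_via_M1", "route_via_M2")
# replay stored past experiences: the product was rewarded for routes that met the due date
for _ in range(50):
    q_update(Q, "queue_long_M1", "route_via_M2",
             reward=1.0, next_state="done", actions=actions)
choice = best_action(Q, "queue_long_M1", actions)
```

Each (situation, action) pair in the product's memory gradually accumulates a value, and the product then picks the highest-valued action when the situation recurs.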
Some industries are willing to use these techniques, and some SOHOMA articles describe applications of PDS via case studies in the manufacturing industry [25, 26].
In conclusion, PDS are advanced manufacturing systems that can lead to better agility and reactivity, with better integration of the end customer. SOHOMA has explored this area and brought many contributions, from proofs of concept to valuable industrial implementations and generic framework architectures (PDS architectures, interactions between products, coupling of centralized and decentralized decision-making, machine learning and so on).
The other facet of the Intelligent Product relates to data. This cluster is as important as the first one, with many contributions made in this field by the SOHOMA community. A first important research work is [27], which retraces the history of Intelligent Products in the supply chain from 2002 to 2012. This paper cites the original definition of the Intelligent Product introduced in [1, 28]5:
An Intelligent Product is a product (or part or order) that has part or all of the
following five characteristics:
Two levels of product intelligence are commonly defined: IP level 1 regroups features
1 to 3, while IP level 2 covers all five features.
The management of product information all along the product's lifecycle is referred to as Product Lifecycle Information Management (PLIM) and is ensured by Product Lifecycle Information Management systems, as detailed in [29]. In the 2000s, the classic implementation of the intelligent product reflected the developments being made in the Cambridge Auto-ID centre: a unique ID is stored on a low-cost RFID tag attached to the product. This ID can be resolved to a network pointer to a linked database and a decision-making software agent. The information is then available all along the supply chain, or even all along the product lifecycle. Such systems can provide new services in different phases of the lifecycle, most obviously the manufacturing or logistics phases, but not only. For example, as described in [30], they could be used to provide new repair services for domestic appliances, by providing appliance lifecycle data and part designs respectively to diagnostic services and 3D printing services.
PLIM systems are basically distributed data management architectures, and several works and members of the SOHOMA community have contributed greatly to this field. Different architectures, messaging protocols and formats have been proposed, as described in [31]. The EPCIS architecture is one of the best-known distributed data management architectures, standardized by GS1 and specially adapted to product tracking in the supply chain [32]. These manufacturing concepts have also been applied in other domains, as described in [33], where the EPCIS standard has been applied to workforce management in hospitals, leading to the 'aTurnos' cloud-based solution6. DIALOG is another architecture proposed by SOHOMA members, based on a multi-agent system distributed over every actor of a given supply chain. In this architecture, a specific messaging protocol, initially called PMI (Product Messaging Interface) and later named QLM (Quantum Lifecycle Management), is used. Like EPCIS, QLM is now a standard and is detailed in [34]. This paper also underlines why such messaging standards are needed in business-to-business infrastructures and demonstrates via use cases how flexible QLM is compared to other existing messaging protocols.
The intelligence, and as a result the information, of the product can also reside on the product itself. According to [35], product intelligence is not a primary function of a product but comes as secondary functions (i.e. communicating, triggering, memorizing) that can be embedded in the product (also referred to as the "target system") or made available online. Because this activeness is linked with the target system, the secondary functions follow the product all along its lifecycle, from manufacturing to recycling. These functions can be added or removed, and moved into the target system or online, depending on the phase requirements [36]. As a result, several applications of the activeness concept concern different phases of the lifecycle. For example, the work of [37] applies the activeness concept during the use phase, in order to give products augmented monitoring and analysis functions. It has also been used in the logistics phase and applied to smart containers, as will be described later.
To store the intelligence directly on the product, many different devices can be used: not only RFID tags but also micro-computers or wireless sensor nodes. Because these devices have more computing power and memory than classic RFID tags, they can execute all
6 https://www.aturnos.com/.
or part of the product's secondary functions. In [38], the evolution from communicating products (products equipped with RFID tags) to autonomous products is described. The authors provide a case study of a flexible manufacturing system where products evolve from IP level 1 to IP level 2: transport carriers, originally equipped with RFID tags, receive a miniaturized electronic device composed of a CPU, an RFID reader, direction actuators, sensors and an HMI. The works on communicating materials and the Physical Internet detailed below also use Wireless Sensor Nodes (WSNs) to realize the intelligent product concept.
Communicating materials are materials equipped with micro-electronic components, either RFID tags (Kubler et al. [39]) or self-powered WSNs embedded into the material [40]. The interest of such materials is diverse: (a) because of their data-storage capacity, they can convey all information related to design, manufacturing and logistics, useful during the BOL (Beginning of Life: design, manufacturing and construction) and the EOL (End of Life: dismantlement and recycling) of a product; (b) given their ability to sense their environment and process related information, they can also be used during the MOL (Middle of Life: exploitation and maintenance) as intelligent sensors, mainly to perform health monitoring. The material can be wood, textile or concrete. The first works dealing with communicating materials addressed the data dissemination and data replication issues in this type of material. More recently, the work of [41] explores the monitoring capability of communicating materials by developing concrete beams equipped with energy-aware WSNs; such a concrete beam can monitor its status in near real-time.
This last work shows that IPs (communicating materials and so on) can exhibit monitoring functions and can be aggregated to build a global architecture monitoring the performance of the whole system. This is another aspect of the Intelligent Product that has attracted attention in the SOHOMA community. In [42], a multi-agent system is embedded into wireless nodes to manage a wireless data acquisition platform applied to industrial systems. In this architecture, each sink node manages a cluster of wireless sensors and contains three interacting agents, respectively responsible for configuring/reconfiguring the cluster, aggregating/filtering data, and communicating with the previous agents and other sink nodes. This wireless architecture is illustrated in an oil and gas refinery. [43] deals with the development of data management systems for fleets of trains, able to gather, memorize, manipulate and communicate data coming from equipment. The authors demonstrate the interest of holonic semi-heterarchical architectures where each holon is a product composing the whole system. From this conclusion, they developed the Surfer data management architecture and applied it to train transportation [44]. In the same vein, distributed monitoring is explored in [45]. The target system is a manufacturing shop floor, where each component of the system (resource or product) is linked to a monitoring agent. These agents send monitoring data via a monitoring data stream, which are read, aggregated and stored by a monitoring controller agent. The monitoring controller agent can also send data via the monitoring data stream, and its data may in turn be aggregated by another monitoring controller agent. This approach, like the previous architectures, is highly scalable.
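A minimal sketch of this hierarchical monitoring pattern, with hypothetical agent, component and aggregation choices (mean value per cluster):

```python
class MonitoringAgent:
    """Agent attached to one shop-floor component; publishes readings to a stream."""
    def __init__(self, component):
        self.component = component
    def publish(self, value):
        return {"source": self.component, "value": value}

class MonitoringController:
    """Reads a monitoring data stream, stores it, and emits an aggregate upward."""
    def __init__(self):
        self.store = []
    def consume(self, messages):
        self.store.extend(messages)
    def aggregate(self):
        # e.g. mean value over the monitored cluster
        return sum(m["value"] for m in self.store) / len(self.store)

cell = MonitoringController()
cell.consume([MonitoringAgent(f"drill_{i}").publish(10.0 + i) for i in range(3)])
shopfloor = MonitoringController()   # controllers can themselves be aggregated
shopfloor.consume([{"source": "cell_1", "value": cell.aggregate()}])
```

Because a controller's output has the same shape as an agent's message, controllers stack into arbitrarily deep hierarchies, which is what makes the scheme scalable.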
In the SOHOMA community, these works on PLIM have been applied in manufacturing and the supply chain [46], and also in the agriculture and agri-food domain [47], the pharmaceutical sector [48] and the building sector [49, 50]. Finally, information systems dealing with intelligent products have been proposed in SOHOMA, with specific importance given to the distribution of information on the product and on the network. EPCIS, DIALOG, the activeness concept and communicating materials are among the main contributions presented at SOHOMA conferences.
3.4 Cluster 6: Towards Digital Twins at the Core – One of the Future Trends?
Among all the clusters of interest, cluster 6 is the smallest in terms of number of papers.
This cluster regroups keywords related to the virtual representation of the product,
especially the keyword “Digital Twin”, which has emerged recently in the manufacturing
community (thus explaining the size of the cluster). Indeed, the Digital Twin is a new
paradigm in simulation and modelling, defined by [57] as “an integrated multi-physics,
multi-scale, probabilistic simulation of a complex product [using] the best available
physical models, sensor updates, etc., to mirror the life of its corresponding twin”.
Moreover, as can be seen in Fig. 3, this cluster is the farthest from the others, meaning
that, for the moment, it has the fewest connections with the other clusters in the IP community.
It can be interpreted as a new field of research, of interest to the IP community. However,
the Digital Twin has a wider spectrum than the IP community, and many other SOHOMA
works and sessions have already addressed this field.
In [58], the author discusses the history of PROSA and presents its evolution toward
ARTI, the Activity Resource Type Instance architecture. ARTI makes a strong separation
between Intelligent Agents and Intelligent Beings, emphasizing that Intelligent
Beings describe what is, rather than what is decided. In the framework of
Industry 4.0, the reality of these intelligent beings could be represented by Digital Twins,
which then become the unique “contact persons” for accessing the world-of-interest. The
notion of Digital Twin thus becomes crucial generally for HMS (Holonic Manufacturing
System) and for PDS. One of the first explorations of the connection between ARTI and
Digital Twin reported in SOHOMA is the work of [59] in the context of semi-continuous
production processes.
In [60], the authors propose a dynamic adaptation process of M-BOMs (manufac-
turing bills of materials) based on Building Information Modelling (BIM) and CPS
assets in the building sector. The use of BIM assets and real-time follow-up based on the
CPS paradigm could be a source of valuable data to support planning and monitoring
activities throughout the building life cycle, and pave the way for introducing the digital twin
as the core system for these activities.
The Digital Twin also begins to play a role in smart asset management, as underlined by
[60]. The authors propose a framework for the future development of smart asset manage-
ment during the operations and maintenance phase, integrating the concept of the Digital
Twin. They argue that the future framework should be divided into three layers (smart
asset layer, smart asset integration layer, smart DT-enabled asset management layer).
Compared to “old” PLIM systems, a need appears to store and manage the lifecycle
not only of data and information, but also of simulation models such as DTs, which should
be easily connected to real-life data.
This research theme is still new in SOHOMA, but the merging of DT and IP is already
highlighted in SOHOMA papers as an interesting future field. SOHOMA has contributed to this
merge by proposing adapted control and information management architectures.
4 Conclusions
During the past 20 years, a wide range of work has been done in the field of Intelligent
Products and, in particular, over the last 10 years, systems based on the Intelligent Product
concept have undoubtedly found a home in the SOHOMA series of workshops. This
paper has retraced the evolution of the topics that have gravitated around the
IP concept for decades through the lens of SOHOMA, which accurately reflects the fact
that Intelligent Products are a broad concept rather than a specific industrial solution. This
study also lists the IP-related papers produced in the SOHOMA conferences
(around 50). It demonstrates that, for ten years, this notion has been a rich concept
for SOHOMA, in production control as well as in data management. The SOHOMA
community has taken part in this scientific adventure, as evidenced by the significant
share of papers dedicated to this theme, even if it has decreased in recent years.
SOHOMA members have contributed to Intelligent Product research in a number of
different areas (referred to as clusters in this paper). Three main clusters have been iden-
tified as the most representative contributions through a bibliographic analysis: Product-
Driven Systems, Product Lifecycle Information Systems and the Physical Internet. For each
cluster, a synthesis of the work done by the community has been provided. One last and
smallest cluster is perhaps related to one of the future trends of IP, namely the recent
rapid take-up of digital twins. This concept can be seen as an encompassing one,
as it extends the perimeter of the intelligent product to intelligent “anything” and pushes
for the emergence of new methods (e.g., data science) and new requirements in terms of
real time, observation and data mining. In addition to these clusters, we have observed a number
of common methods and tools being used in the development of intelligent product-
based approaches: multi-agent systems, traceability approaches, and the development
and deployment of embedded devices.
In the past, the advent of technologies such as RFID tags and WSNs helped to con-
cretize the IP concept. In the future, the envisaged development of tools and methods
associated with Industry 4.0, the development of IoT infrastructure, human-object
integration in industrial operations and increasingly autonomous capabilities
in industrial systems will certainly provide a fantastic playground for
this concept, which will keep evolving thanks to the never-ending work of passionate
researchers.
Appendix
See Table 1.
Intelligent Products through SOHOMA Prism 379
Table 1. List of references extracted from SOHOMA Proceedings, ranked by cluster and year
References
1. Wong, C.Y., McFarlane, D., Zaharudin, A.A., Agarwal, V.: The intelligent product driven supply
chain. In: Proceedings IEEE International Conference on Systems, Man and Cybernetics,
pp. 4–6 (2002)
2. McFarlane, D., Sheffi, Y.: The impact of automatic identification on supply chain operations.
Int. J. Logist. Manag. 14(1), 1–17 (2003)
3. Kärkkäinen, M., Ala-Risku, T., Främling, K.: The product centric approach: a solution to
supply network information management problems? Comput. Ind. 52(2), 147–159 (2003)
4. Morel, G., Grabot, B.: Special issue on intelligent manufacturing. Eng. Appl. Artif. Intell.
16(4), 271–393 (2003)
5. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Product intelligence in industrial
control: Theory and practice. Annu. Rev. Control 37(1), 69–88 (2013)
6. Meyer, G.G., Främling, K., Holmström, J.: Intelligent products: a survey. Comput. Ind. 60(3),
137–148 (2009)
7. Srinivasan, R., McFarlane, D., Thorne, A.: Identifying the requirements for resilient pro-
duction control systems. In: Studies in Computational Intelligence, vol. 640, pp. 125–134.
Springer (2016)
8. Sallez, Y., Montreuil, B., Ballot, E.: On the activeness of physical internet containers. Stud.
Comput. Intell. 594, 259–269 (2015)
9. Trentesaux, D., Thomas, A.: Product-driven control: a state of the art and future trends. IFAC
Proc. 45(6), 716–721 (2012)
10. Dubromelle, Y., Ounnar, F., Pujo, P.: Service oriented architecture for holonic isoarchic and
multicriteria control. Stud. Comput. Intell. 402, 155–168 (2012)
11. Herrera, C., Berraf, S.B., Thomas, A.: Viable system model approach for holonic product
driven manufacturing systems. Stud. Comput. Intell. 402, 169–181 (2012)
12. Adam, E., Trentesaux, D., Mandiau, R.: Volatile knowledge to improve the self-adaptation
of autonomous shuttles in flexible job shop manufacturing system. Stud. Comput. Intell. 594,
219–231 (2015)
13. Mezgebe, T.T., El Haouzi, H.B., Demesure, G., Pannequin, R., Thomas, A.: A negotiation
scenario using an agent-based modelling approach to deal with dynamic scheduling. Stud.
Comput. Intell. 762, 381–391 (2018)
14. Zimmermann, E., El-Haouzi, H.B., Thomas, P., Pannequin, R., Noyel, M.: Using analytic
hierarchical process for scheduling problems based on smart lots and their quality prediction
capability. Stud. Comput. Intell. 803, 337–348 (2019)
15. Raileanu, S., Parlea, M., Borangiu, T., Stocklosa, O.: A JADE environment for product driven
automation of holonic manufacturing. Stud. Comput. Intell. 402, 265–277 (2012)
16. Cardin, O., Castagna, P.: Myopia of service oriented manufacturing systems: benefits of data
centralization with a discrete-event observer. In: Studies in Computational Intelligence (2012)
17. Cardin, O., Trentesaux, D., Thomas, A., Castagna, P., Berger, T., Bril, H.: Coupling predictive
scheduling and reactive control in manufacturing: state of the art and future challenges. Stud.
Comput. Intell. 594, 29–37 (2015)
18. Gaham, M., Bouzouia, B., Achour, N.: An evolutionary simulation-optimization approach to
product-driven manufacturing control. In: Studies in Computational Intelligence, vol. 544,
p. 283–294. Springer (2014)
19. Li, M., El Haouzi, H.B., Thomas, A., Guidat, A.: Fuzzy decision-making method for product
holons encountered emergency breakdown in product-driven system: an industrial case. Stud.
Comput. Intell. 594, 243–256 (2015)
20. Derigent, W., Voisin, A., Thomas, A., Kubler, S., Robert, J.: Application of measurement-
based AHP to product-driven system control. Stud. Comput. Intell. 694, 249–258 (2017)
21. Aubry, A., Bril, H., Thomas, A., Jacomino, M.: Product driven systems facing unexpected
perturbations: how operational research models and approaches can be useful? In: Studies in
Computational Intelligence (2017)
22. Babiceanu, R.F., Seker, R.: Manufacturing operations, internet of things, and big data: towards
predictive manufacturing systems. Stud. Comput. Intell. (2014)
23. Thomas, P., Thomas, A.: An approach to data mining for product-driven systems. In: Studies
in Computational Intelligence, vol. 472, p. 181–194. Springer (2013)
24. Bouazza, W., Sallez, Y., Aissani, N., Beldjilali, B.: A model for manufacturing scheduling
optimization through learning intelligent products. Stud. Comput. Intell. 594, 233–241 (2015)
25. Zimmermann, E., El Haouzi, H.B., Thomas, P., Pannequin, R., Noyel, M., Thomas, A.: A
case study of intelligent manufacturing control based on multi-agents system to deal with
batching and sequencing on rework context. In: Studies in Computational Intelligence (2018)
26. Queiroz, J., Leitão, P., Barbosa, J., Oliveira, E., Garcia, G.: An agent-based industrial cyber-
physical system deployed in an automobile multi-stage production system. Stud. Comput.
Intell. 853, 379–391 (2020)
27. McFarlane, D., Giannikas, V., Wong, A. C. Y., Harrison, M.: Intelligent products in the
supply chain-10 years on, in Service orientation in holonic and multi agent manufacturing
and robotics, p. 103–117. Springer (2013)
28. McFarlane, D., Sarma, S., Chirn, J.L., Wong, C.Y., Ashton, K.: Auto ID systems and intelligent
manufacturing control. Eng. Appl. Artif. Intell. 16(4), 365–376 (2003)
29. Derigent, W., Thomas, A.: Situation awareness in product lifecycle information systems. Stud.
Comput. Intell. 762, 127–136 (2018)
30. Cuthbert, R., Giannikas, V., McFarlane, D., Srinivasan, R.: Repair services for domestic
appliances. Stud. Comput. Intell. 640, 31–39 (2016)
31. Derigent, W., Thomas, A.: End-of-life information sharing for a circular economy: existing
literature and research opportunities. Stud. Comput. Intell. 640, 41–50 (2016)
32. Främling, K., Parmar, S., Hinkka, V., Tätilä, J., Rodgers, D.: Assessment of EPCIS standard
for interoperable tracking in the supply chain. In: Studies in Computational Intelligence, vol.
472, pp. 119–134 (2013)
33. Ansola, P.G., García, A., de Las Morenas, J.: IoT visibility software architecture to provide
smart workforce allocation. In: Studies in Computational Intelligence, vol. 640, pp. 223–231
(2016)
34. Kubler, S., Madhikermi, M., Främling, K.: QLM messaging standards: introduction and com-
parison with existing messaging protocols. In: Service Orientation in Holonic and Multi-Agent
Manufacturing and Robotics, vol. 544, pp. 237–256. Springer (2014)
35. Sallez, Y.: The augmentation concept: How to make a product “active” during its life cycle.
Stud. Comput. Intell. 402, 35–48 (2012)
36. Sallez, Y.: Proposition of an analysis framework to describe the “activeness” of a product
during its life cycle part ii: method and applications. In: Studies in Computational Intelligence,
vol. 544, pp. 271–282. Springer (2014)
37. Basselot, V., Berger, T., Sallez, Y.: Active monitoring of a product: a way to solve the “lack
of information” issue in the use phase. Stud. Comput. Intell. 694, 337–346 (2017)
38. Quintanilla, F.G., Cardin, O., Castagna, P.: Evolution of a flexible manufacturing system:
from communicating to autonomous product. In: Studies in Computational Intelligence, vol.
472, pp. 167–180 (2013)
39. Kubler, S., Derigent, W., Thomas, A., Rondeau, É.: Key factors for information dissemination
on communicating products and fixed databases. Service Orientation in Holonic and Multi-
Agent Manufacturing Control, Paris 402, 89–102 (2012)
40. Mekki, K., Derigent, W., Rondeau, E., Thomas, A.: Communicating aircraft structure for
solving black-box loss on ocean crash. In: Studies in Computational Intelligence (2018)
41. Wan, H., David, M., Derigent, A.: Holonic manufacturing approach applied to communicate
concrete: concept and first development. In: Studies in Computational Intelligence, Springer
(2020)
42. Taboun, M.S., Brennan, R.W.: Sink node embedded, multi-agent systems based cluster
management in industrial wireless sensor networks. Stud. Comput. Intell. 640, 329–338 (2016)
43. Trentesaux, D., Branger, G.: Data management architectures for the improvement of the
availability and maintainability of a fleet of complex transportation systems: a state-of-the-art
review. Stud. Comput. Intell. 762, 93–110 (2018)
44. Trentesaux, D., Branger, G.: Foundation of the surfer data management architecture and its
application to train transportation, international workshop on service orientation in holonic
and multi-agent manufacturing. Stud. Comput. Intell. 762, 111–125 (2018)
45. Morariu, O., Morariu, C., Borangiu, T.: Resource, service and product: real-time monitoring
solution for service oriented holonic manufacturing systems. Stud. Comput. Intell. 544, 47–62
(2014)
46. Tsamis, N., Giannikas, V., McFarlane, D., Lu, W., Strachan, J.: Adaptive storage location
assignment for warehouses using intelligent products. Stud. Comput. Intell. 594, 271–279
(2015)
47. Cojocaru, L.E., Burlacu, G., Popescu, D., Stanescu, A.M.: Farm management information
system as ontological level in a digital business ecosystem. In: Studies in Computational
Intelligence, vol. 544, pp. 295–309. Springer (2014)
48. Răileanu, S., Borangiu, T., Silişteanu, A.: Centralized HMES with environment adaptation
for production of radiopharmaceuticals. Stud. Comput. Intell. 640, 3–20 (2016)
49. Pǎtraşcu, M., Drǎgoicea, M.: Integrating agents and services for control and monitoring:
managing emergencies in smart buildings. Stud. Comput. Intell. 544, 209–224 (2014)
50. Thomson, V., Zhang, X.: Improving the delivery of a building. Stud. Comput. Intell. 640,
21–29 (2016)
51. Montreuil, B.: Toward a physical internet: meeting the global logistics sustainability grand
challenge. Logist. Res. 3(2), 71–87 (2011)
52. Ballot, E., Gobet, O., Montreuil, B.: Physical internet enabled open hub network design for
distributed networked operations. Stud. Comput. Intell. 402, 279–292 (2012)
53. Rahimi, A., Sallez, Y., Berger, T.: Framework for smart containers in the physical internet. In:
Studies in Computational Intelligence, vol. 640, pp. 71–79. Springer (2016)
54. Krommenacker, N., Charpentier, P., Berger, T., Sallez, Y.: On the usage of wireless sensor
networks to facilitate composition/decomposition of physical internet containers. In: Studies
in Computational Intelligence, vol. 640, pp. 81–90. Springer (2016)
55. Pujo, P., Ounnar, F., Remous, T.: Wireless holons network for intralogistics service. Stud.
Comput. Intell. 594, 115–124 (2015)
56. Pujo, P., Ounnar, F.: Cyber-physical logistics system for physical internet. In: Studies in
Computational Intelligence, vol. 762, pp. 303–316 (2018)
57. Glaessgen, E., Stargel, D.: The digital twin paradigm for future NASA and US Air Force vehi-
cles. In: 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials
Conference, p. 1818 (2012)
58. Valckenaers, P.: ARTI reference architecture - PROSA revisited. Stud. Comput. Intell. 803,
1–9 (2019)
59. Borangiu, T., Oltean, E., Răileanu, S., Anton, F., Anton, S., Iacob, I.: Embedded digital twin
for ARTI-type control of semi-continuous production processes. Stud. Comput. Intell. 853,
113–133 (2020)
60. Lu, Q., Xie, X., Heaton, J., Parlikad, A.K., Schooling, J.: From BIM towards digital twin:
strategy and future development for smart asset management. Stud. Comput. Intell. 853,
392–404 (2020)
Multi-protocol Communication Tool
for Virtualized Cyber Manufacturing Systems
LS2N UMR CNRS 6004, University of Nantes and IUT de Nantes, 2, rue de la Houssini ère,
44322 Nantes Cedex, France
{Pascal.Andre,Olivier.Cardin,Fawzi.Azzi}@ls2n.fr
1 Introduction
Using service orientation for manufacturing systems makes it possible to abstract the heteroge-
neous nature of the devices and their providers, as well as legacy applications. In ser-
vice-oriented manufacturing systems, a key concept is abstraction. Manufacturing enti-
ties such as resources, people, products and orders are considered as objects (e.g. actors
or holons) that exchange messages to call the services provided by entities. From low-level
physical actions to high-level product orders, every process can be seen as a service,
making it a convenient, scalable paradigm for designing manufacturing systems. Moreover,
this scalable paradigm makes it possible to view human processes as well as business processes as
services and to integrate them with the manufacturing process [8, 15, 16].
However, this unifying paradigm hides the complexity of implementations: some
services are located in the cloud while others are distributed over a range of cyber-physical
systems (the manufacturing resources), owing to the various nature of the devices and their
providers and also to legacy applications. The bottleneck does not lie in the
scalability of the service composition and orchestration [17] but in the communication
means that support the service interactions.
F. Azzi—Sincere thanks to Nicolas Vannier, Maxence Coutand, Tom Le Berre and Khaled Amirat
for their active work on the implementation.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 385–397, 2021.
https://doi.org/10.1007/978-3-030-69373-2_27
386 P. André et al.
At the model level, e.g. in SysML [23],
objects call services by message sending (or signals). At low levels, these can be simple
(remote) method calls in a program, or service invocations through network layers (e.g.
the seven layers of the OSI model) according to communication protocols that may or may
not be interoperable, including those of recent IoT devices. Manufacturing software systems, and especially
the digital twins, are based on distributed, scalable services that exchange messages through com-
munication middleware, which can be Remote Procedure Call (RPC), Object Request
Brokers (ORB), Message-Oriented Middleware (MOM), Enterprise Application Inte-
gration (EAI) frameworks or an Enterprise Service Bus (ESB) solution. Consequently,
the system’s interoperability remains a fundamental and still challenging quality of soft-
ware and hardware systems. In addition, the current trend of virtualization and encapsu-
lation of manufacturing resources and controls into cloud networked services increasingly
highlights the need for careful management of the overall communication net-
work, from the cloud down to the equipment.
In order to handle heterogeneous communications, we proposed a practical solu-
tion for multi-protocol communication in small manufacturing systems [2]. The main-
tenance of such systems becomes tricky when communication statements are
scattered throughout the application code. The communication software mainte-
nance of manufacturing systems should support device and machine evolution and
face the technical debt of legacy applications, since technological change never ends.
Software evolution happens during the whole system life cycle and may intro-
duce heterogeneity that impacts the communication middleware. We investigated the
separation of communication concerns from the other aspects of the application in order
to improve system evolution and make it adaptable and reconfigurable to different
contexts (resources, workshops...). We compared various approaches and selected a
solution. In this paper, we present a running implementation of the multi-protocol com-
munication tool (MPCT) that is generic enough to be integrated in various service-
oriented manufacturing systems.
The paper is organised as follows. We recall the problem statement and we present
the architecture of our solution to handle heterogeneous communication abstraction
in Sect. 2. In Sect. 3 we describe the tool design and discuss implementation issues.
Section 4 illustrates the application of the tool on a part of the Sofal case study. The
applicability of the approach is discussed in Sect. 5. In conclusion, we draw perspectives
for a larger integration and development of the tool in an actual manufacturing context.
2 Background
In [2], we exhibited the need for a Multi-Protocol Communication Tool (MPCT) that
would handle the communication in the interfaces of the holonic manufacturing systems
(HMS) [10, 22], either towards external applications, or even towards holons embedded
in physical resources if the HMS is distributed on several physical assets. In a Service
model, a functionality is implemented by services provided by entities [3]. In our case,
the entities are holons and the general model is formalised as a Service-oriented HMS
(SoHMS) [8, 19]. The entities provide services in their interface and require services
from other entities. Provided services are not necessarily atomic calls and may have
a complex behaviour in which other services might be needed (called) and messages
can be exchanged. The service call and return (call back) are also implemented by
synchronous or asynchronous messages.
In these distributed architectures, a main issue is the communication between the
heterogeneous entities. Indeed, even if it is usually considered a “technological”
concern, communication is an essential issue for distributed software. Among the exist-
ing architectures, a mixed solution offers a balanced compromise for HMS-based con-
trol systems [2]. Considering the interfaces, the MPCT is meant to handle (1 ↔ N) com-
munications, i.e. one generic interface on one side and N protocols on the other side.
Therefore, a generic and standardized interface has to be defined for the one side.
An important task is to handle the fundamental differences between the protocols. For
example, an event-based protocol and a push protocol lead to fundamentally different
implementations of service messages. We need to aggregate the different behaviours in
order to fit the standard interface.
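One way to picture this (1 ↔ N) scheme is a single generic interface behind which per-protocol adapters hide push-style or event-based behaviour. The sketch below is a minimal illustration of the idea, under assumed names; it is not the actual MPCT interface:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Generic interface exposed on the HMS side: whatever the resource protocol,
// the HMS always sees the same send/receive message abstraction.
interface ProtocolAdapter {
    void send(String message);
    String receive();
}

// A push-style protocol maps naturally onto the generic interface:
// the latest pushed value is simply made available to receive().
class PushAdapter implements ProtocolAdapter {
    private String last = "";
    public void send(String message) { last = message; }
    public String receive() { return last; }
}

// An event-based protocol must buffer its callbacks so that receive()
// keeps the same pull semantics as the generic interface.
class EventAdapter implements ProtocolAdapter {
    private final Deque<String> events = new ArrayDeque<>();
    void onEvent(String payload) { events.add(payload); }   // driven by the resource
    public void send(String message) { onEvent("echo:" + message); } // toy behaviour
    public String receive() { return events.isEmpty() ? "" : events.poll(); }
}
```

The adapters absorb the behavioural differences (buffering, callbacks, polling), so the side facing the HMS never changes when a protocol is added.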
The Multi-Protocol Communication Tool (MPCT) component plays the role of a mediator (or federation) between the digital twins,
which are themselves distributed over both the HMS part and the resource part. The MPCT
offers a generic, modular and configurable standalone application to handle any type of
heterogeneous communication all along the architecture. The objective is to simplify these
technical issues as much as possible (decoupling), so that developers can
focus on the most valuable tasks of the HMS, including, for example, the decision-making
algorithms and negotiation patterns.
Fig. 4. Example of object collaboration to handle a communication between MPCT and HMS
As an example, in Fig. 4, the MPCT interacts with the HMS using TCP sockets.
The coordinator loads a SocketServer from the ComProtocolLoader, installs a communica-
tion handler from the ProtocolFactory and runs it to exchange messages with the HMS.
Recall that the HMS uses the generic protocol.
In Fig. 5, when receiving an incoming message from the HMS, the coordinator
extracts the meta-information of the message. If this is the first message to the tar-
get resource, the coordinator queries the configuration manager to get the resource
data, loads the adequate protocol from the ComProtocolLoader, installs a communication
handler from the ProtocolFactory and runs it to exchange messages with the resource,
according to the rules of the resource protocol.
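The routing behaviour of Fig. 5, where a handler is installed lazily the first time a resource is addressed and reused afterwards, can be sketched as follows. The class name Coordinator echoes the text; the method signatures and the string stand-ins for real handlers are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Fig. 5 scenario: on the first message to a resource, the
// coordinator resolves its configuration and caches a protocol handler;
// subsequent messages to the same resource reuse the cached handler.
class Coordinator {
    private final Map<String, String> handlers = new HashMap<>();   // resource -> installed handler
    private final Map<String, String> configuration;                // resource -> protocol name
    private int loads = 0;                                          // counts handler installations

    Coordinator(Map<String, String> configuration) {
        this.configuration = configuration;
    }

    String dispatch(String resource, String message) {
        // Lazily install the handler the first time the resource is addressed.
        String protocol = handlers.computeIfAbsent(resource, r -> {
            loads++;
            return configuration.getOrDefault(r, "generic");
        });
        // Stand-in for handing the message to the real protocol handler.
        return protocol + "->" + resource + ":" + message;
    }

    int handlerLoads() { return loads; }
}
```

The point of the cache is that configuration lookup and protocol loading happen once per resource, not once per message.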
3 Message Transformation
The protocol handler converts the generic messaging protocol into the resource-specific
protocol (in/out) according to each resource’s communication requirements. The message
transformation service makes it possible to re-format the message structure from one protocol to
another according to user-defined transformations. Transformation libraries must be
provided here for standard protocols (Fig. 6).
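A minimal sketch of such a transformation service, assuming transformations are registered per (source, target) protocol pair; the class and method names are illustrative, not the actual MPCT API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Sketch of the message transformation service: user-defined transformations
// are registered per (source protocol, target protocol) pair and applied to
// re-format a message on its way through the MPCT.
class MessageTransformation {
    private final Map<String, UnaryOperator<String>> rules = new HashMap<>();

    void register(String from, String to, UnaryOperator<String> rule) {
        rules.put(from + "->" + to, rule);
    }

    String transform(String from, String to, String message) {
        // Fall back to the identity when no rule is registered (same format).
        return rules.getOrDefault(from + "->" + to, UnaryOperator.identity()).apply(message);
    }
}
```

A typical user-defined rule might, for instance, wrap a plain-text payload into a minimal XML element for a resource that expects XML messages.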
Evolvable Configurations
The configuration part of the MPCT aims to store persistent “communication” data for
the different resources. This information is mandatory for protocol and message translation,
as mentioned above. It includes common data such as the resource name, the protocol, the
host and port addresses, and also optional (specific) data such as an MQTT topic or a
channel. For example, Fig. 7 shows a configuration mapping between the HMS and
resource number 2. The HMS connection is based on sockets, as detailed in the collabora-
tion example of Fig. 4. The resource communicates using the MQTT protocol on the topic
moveProduct. The configuration is currently made persistent in JSON files, which are
more portable than databases.
Fig. 7. Examples of resource configuration (JSON format)
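To fix ideas, a configuration entry along these lines could look as follows in JSON. The field names are illustrative guesses based on the text, not the actual schema shown in Fig. 7:

```json
{
  "resources": [
    { "name": "HMS",       "protocol": "socket", "host": "127.0.0.1", "port": 5000 },
    { "name": "resource2", "protocol": "MQTT",   "host": "127.0.0.1", "port": 1883,
      "topic": "moveProduct" }
  ]
}
```

The common fields (name, protocol, host, port) are present for every entry, while protocol-specific fields such as the MQTT topic only appear where relevant.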
New protocols may be required. The abstract class ComProtocolLoader includes the
common structure shared by the communication protocols already implemented. It
groups together all the attributes such as the addresses, names and ports. It also provides
the transformation method that calls the Message Transformation module, and runs the
module in which the developer has to implement the communication logic of the new
protocol, as described in Fig. 8. The ProtocolFactory plays the role of a protocol generator
for which an implementation has to fulfil its specification. We get a more effective man-
agement of the different protocols, but the counterpart is that we need to write source
code inside the MPCT. New facilities are under study to make it generated instead of
coded.
Fig. 8. Adding a new protocol
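The extension mechanism can be sketched as follows. Only ComProtocolLoader and ProtocolFactory are named in the text; the member signatures, the registration scheme and the OPC UA example are assumptions made for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of the extension point: a new protocol subclasses the common loader
// and is registered with the factory, which instantiates handlers on demand.
abstract class ComProtocolLoader {
    String host;                                 // common attributes shared by all protocols
    int port;
    abstract String transform(String message);   // hook into the Message Transformation module
    abstract void run();                         // protocol-specific communication logic
}

class ProtocolFactory {
    private static final Map<String, Supplier<ComProtocolLoader>> registry = new HashMap<>();

    static void register(String name, Supplier<ComProtocolLoader> ctor) {
        registry.put(name, ctor);
    }

    static ComProtocolLoader create(String name) {
        return registry.get(name).get();
    }
}

// Adding a protocol = one subclass + one registration; the coordinator is untouched.
class OpcUaLoader extends ComProtocolLoader {
    String transform(String message) { return "opcua:" + message; }
    void run() { /* open the session and exchange messages with the resource */ }
}
```

This is also where the trade-off noted in the text shows up: the factory keeps protocol management uniform, but each new protocol still requires source code inside the MPCT.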
We implemented the MPCT in Java by means of design patterns [9], e.g. Factory,
Adaptor, Composite State, Proxy, Mediator and Facade, in order to obtain standard, evolv-
able applications1. Optimisations can improve performance. For example, when
several resources use the same MQTT instance, we group them in a single communi-
cation adaptor. We also plan to use the MPCT as a middleware for resource-to-resource
communications, bypassing the HMS.
1 https://gitlab.univ-nantes.fr/E168727Z/capstone2019.
We obtain the screen of Fig. 11. During the communications, each exchange
between a resource and the HMS implies two network communication sessions and
the conversion of the messages according to the communication protocol configuration.
Recall that the messages can be plain text or XML files that must conform to the message
structure definitions.
The MPCT application itself is designed as an assembly of four modules. Table 1
provides some software metrics of the MPCT project: resourceConfig (rC), message-
Transformation (mT), protocolHandler (pH) and mpct-GUI.
The MPCT is planned to be integrated into the SoHMS framework in the next
release of the Sofal product line, which includes new kinds of resources.
5 Discussion
The topic of Intelligent Manufacturing attracts broad scientific interest, aiming at
developing innovative control software for the new generation of manufacturing sys-
tems. Various trends can be identified, among which are intelligent products [14] and Holonic
Control Architectures, for example. A common feature of all these approaches is the
decentralization of the decision-making process in order to cope as efficiently as possi-
ble with the disruptions occurring in the system [21]. As a matter of fact, the commu-
nication between the entities is a major topic during the development and implementa-
tion phases, as the efficiency of the communication has a direct impact on the efficiency
of the overall architecture.
Two main aspects have to be dealt with: negotiation and connectivity. Negotiation is
at the core of many studies evaluating the best interaction protocols to enhance the global
performance of control architectures [5]. Connectivity represents the hardware and
software possibilities for connecting various elements together, including legacy systems.
Connectivity issues are currently gaining importance in the development of inno-
vative control architectures, especially with the development of service-oriented [11],
cloud-based [13] and Digital Twin-based architectures [6].
6 Conclusion
References
1. An, Y., Zhang, Y., Zeng, B.: The reliable hub-and-spoke design problem: models and algo-
rithms. Transp. Res. Part B Methodolog. 77, 103–122 (2015)
2. André, P., Azzi, F., Cardin, O.: Heterogeneous communication middleware for digital twin
based cyber manufacturing systems. In: Borangiu, T., Trentesaux, D., Leitão, P., Boggino,
A.G., Botti, V.J. (eds.) Proceedings of SOHOMA, Studies in Computational Intelligence,
vol. 853, pp. 146–157. Springer (2019)
3. André, P., Cardin, O.: Trusted services for cyber manufacturing systems. In: Borangiu,
T., Trentesaux D., Thomas, A., Cardin, O. (eds.) SOHOMA, pp. 359–370. Springer IP (2018)
4. Antzoulatos, N., Castro, E., Scrimieri, D., Ratchev, S.: A multi-agent architecture for plug
and produce on an industrial assembly platform. Prod. Eng. Res. Devel. 8(6), 773–781 (2014)
5. Borangiu, T., Raileanu, S., Trentesaux, D., Berger, T., Iacob, I.: Distributed manufacturing
control with extended CNP interaction of intelligent products. J. Intell. Manuf. 25(5), 1065–
1075 (2014)
6. Bottani, E., Cammardella, A., Murino, T., Vespoli, S.: From the cyber-physical system to the
digital twin: the process development for behaviour modelling of a cyber guided vehicle in
M2M logic. In: XXII Summer School “Francesco Turco” – Industrial Systems Engineering,
pp. 96–102 (2017)
7. Gamboa Quintanilla, F., Cardin, O., L’Anton, A., Castagna, P.: Virtual Commissioning-
Based Development and Implementation of a Service-Oriented Holonic Control for Retrofit
Manufacturing Systems. In: Borangiu, T., Trentesaux, D., Thomas, A., McFarlane, D. (eds.)
SOHOMA, no. 640 in Studies in Computational Intelligence, pp. 233–242. Springer IP
(2016)
8. Quintanilla, F.G., Cardin, O., L’anton, A., Castagna, P.: A modeling framework for manufac-
turing services in service-oriented holonic manufacturing systems. Eng. Appl. Artif. Intell.
55, 26–36 (2016)
9. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable
Object-Oriented Software. Addison-Wesley Longman Publishing Co. Inc, USA (1995)
10. Giret, A., Botti, V.: Engineering holonic manufacturing systems. Comput. Ind. 60(6), 428–440
(2009). Collaborative Engineering: from Concurrent Engineering to Enterprise Collaboration
11. Jiang, P., Ding, K., Leng, J.: Towards a cyber-physical-social-connected and service-oriented
manufacturing paradigm: social manufacturing. Manuf. Lett. 7(Supplement C), 15–21
(2016)
12. Kruger, K., Basson, A.: Erlang-based control implementation for a holonic manufacturing
cell. Int. J. Comput. Integr. Manuf. 30(6), 641–652 (2017)
13. Liu, X.F., Shahriar, M.R., Al Sunny, S.M.N., Leu, M.C., Hu, L.: Cyber-physical manufactur-
ing cloud: Architecture, virtualization, communication and testbed. J. Manuf. Syst. 43(Part
2), 352–364 (2017)
14. McFarlane, D., Sarma, S., Chirn, J.L., Wong, C.Y., Ashton, K.: Auto id systems and intelli-
gent manufacturing control. Eng. Appl. Artif. Intell. 16(4), 365–376 (2003)
15. Moraes, E.C., Lepikson, H.A., Colombo, A.W.: Developing Interfaces Based on Services
to the Cloud Manufacturing: Plug and Produce, pp. 821–831. Lecture Notes in Electrical
Engineering. Springer Berlin Heidelberg (2015)
16. Morariu, C., Morariu, O., Borangiu, T.: Customer order management in service oriented
holonic manufacturing. Comput. Ind. 64(8), 1061–1072 (2013)
17. Papazoglou, M.P.: Service-oriented computing: concepts, characteristics and directions. In:
WISE, pp. 3–12. IEEE Computer Society (2003)
Multi-protocol Communication Tool for Virtualized Cyber Manufacturing Systems 397
18. Raileanu, S., Borangiu, T., Morariu, O., Iacob, I.: Edge computing in industrial IoT frame-
work for cloud-based manufacturing control. In: 2018 22nd International Conference on
System Theory, Control and Computing (ICSTCC), p. 261–266 (2018)
19. Rodrı́guez, G., Soria, Á., Campo, M.: Artificial intelligence in service-oriented software
design. Eng. Appl. Artif. Intell. 53, 86–104 (2016)
20. Sandita, A.V., Popirlan, C.I.: Developing a multi-agent system in jade for information man-
agement in educational competence domains. Proc. Econ. Finan. 23, 478–486 (2015)
21. Schuhmacher, J., Hummel, V.: Decentralized control of logistic processes in cyber-
physical production systems at the example of ESB logistics learning factory. Proc. CIRP
54(Supplement C), 19–24 (2016)
22. Van Brussel, H.: Holonic Manufacturing Systems, pp. 654–659. Springer, Berlin (2014)
23. Weilkiens, T.: Systems Engineering with SysML/UML: Modeling, Analysis. Design. The
MK/OMG Press, Elsevier Science (2008)
Is Communicating Material an Intelligent
Product Instantiation? Application
to the McBIM Project
Research Centre for Automatic Control, CRAN CNRS UMR 7039, Université de Lorraine,
Campus Sciences, BP 70239, 54506 Vandoeuvre-lès-Nancy, France
{h.wan,m.david,w.derigent}@univ-lorraine.fr
In the framework of the Intelligent Manufacturing Systems (IMS) community, the use
of the Internet of Things gave rise to the concept of intelligent product. Indeed, sub-
stantial information distribution improves data accessibility and availability compared
to centralised architectures. Product information may be allocated within fixed databases, within the product itself, or both, thus leading to products with informational and/or decisional abilities, referred to as "Intelligent Products". Many different definitions of "Intelligent Products" have been proposed; a comparison of these different types is provided in [1].
In 2010, a new paradigm was proposed in [2], introducing communicating materi-
als, i.e. materials able to communicate with their environment, process and exchange
information, and store data in their own structure. Besides, they also have the capability
to sense their environment and measure their own internal physical states.
The concept has been applied in different works, from different perspectives. Several early prototypes were designed (or simulated) for the needs of the manufacturing and construction industries by spreading micro-electronic devices into a material, be it wood, textile or concrete. The benefits of such materials are diverse: (a) because of their data storing capacity, they can convey all information related
to design, manufacturing and logistics, useful during the BOL (Beginning Of Life –
design, manufacturing and construction) and the EOL (End Of Life – dismantlement and
recycling) of a building; (b) given their ability to sense their environment and process
related information, they can also be used during the MOL (Middle Of Life – exploitation
and maintenance) as intelligent building sensors, mainly to perform structural health
monitoring. In our works, the inserted devices were either RFID tags [3] or self-powered
wireless sensor networks (WSNs) embedded into the material [4]. Both works deal with
the data dissemination/data replication in this type of materials, which is an issue related
to the second capability. However, these works did not address the issue related to the
first capability, i.e. the composition/decomposition of communicating materials.
The rest of the paper is organized as follows: Sect. 2 details the problem of composi-
tion/decomposition in communicating materials. Indeed, from a holonic perspective, it
is shown that composition/decomposition of communicating materials is similar to the
composition/decomposition of a group of product holons. Section 3 introduces the differ-
ent works existing in the holonic and multi-agent literature around recursive holarchies
and multi-agent systems, sometimes applied to WSN. Section 4 details the approach
used to model the communicating material. Section 5 demonstrates the approach by
instantiating it on the ANR McBIM (Materials Communicating with the BIM) project.
Although studied for a long time, no clear formal definition of the communicating material has been provided until now, and a first definition is proposed hereafter. This definition helps to understand that a communicating material can theoretically be considered as an infinite group of Product Holons.
In our definition, the Communicating Material is an Intelligent Product with two
additional capabilities:
• The capability of being intrinsically and wholly communicating: even if the product
undergoes a physical transformation, the resulting pieces shall still be communicating
materials. Two operators of physical transformation are considered: composition and
decomposition. Composition is an operator that gathers 2 to N communicating mate-
rials into one. Decomposition is the inverse operator, that divides one communicating
material into 2 to N different pieces, still communicating materials.
400 H. Wan et al.
• The capability of managing its own data: the material should be able to manage its own
data according to the events occurring in its environment. For instance, the material
could decide itself to propagate/replicate specific data onto different material parts
because a physical transformation is scheduled, thus avoiding data loss.
Characteristics (1) to (5) are inherited from [5]; characteristic (6) is from [6]. The latter is important, since it underlines the capacity of a product to monitor its own status or properties. Characteristic (7) is the only one completely dedicated to the communicating material concept and is mandatory to build a communicating material. Let P1, P2 and
P3 be three products (intelligent or not); the composition operator C is then defined as
follows (Eq. 1):
C : (P1, P2) → P3    (1)
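The two operators can be illustrated with a small sketch; the class and function names are hypothetical, and the data-merging policy is an assumption (the paper does not specify how stored information is reconciled on composition):

```python
# Minimal sketch (hypothetical names): a communicating material modelled as a
# set of embedded device ids plus the data stored in its structure.
from dataclasses import dataclass, field

@dataclass
class CommunicatingMaterial:
    nodes: frozenset                           # embedded sensing/communicating devices
    data: dict = field(default_factory=dict)   # information stored in the material

def compose(*pieces):
    """Composition operator C: gathers 2 to N communicating materials into one,
    merging node sets and stored data (duplicate keys keep the last value)."""
    nodes = frozenset().union(*(p.nodes for p in pieces))
    data = {}
    for p in pieces:
        data.update(p.data)
    return CommunicatingMaterial(nodes, data)

def decompose(material, partition):
    """Decomposition operator: divides one material into 2 to N pieces; each
    piece keeps a copy of the data, so it remains a communicating material."""
    return [CommunicatingMaterial(frozenset(part), dict(material.data))
            for part in partition]

p1 = CommunicatingMaterial(frozenset({1, 2}), {"batch": "A"})
p2 = CommunicatingMaterial(frozenset({3}), {"pour_date": "2020-01-15"})
p3 = compose(p1, p2)                   # C(P1, P2) -> P3, as in Eq. (1)
halves = decompose(p3, [{1}, {2, 3}])  # still communicating materials
```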
Agents a0i are "Elementary" agents, which have a physical applicative part and a recursive agent part (holon). Agents a1i, a2i and so on represent agents of higher levels and are individually called "Partial" agents (no physical part).
A "Complete Elementary" agent is composed of its elementary agents and all their directly related "Partial" agents (from the abstraction levels). A "Composed" agent is a group of "Partial" agents gathered from the same abstraction level. Elementary agents are dynamically added to the holarchy. At the same time, depending on the states of the agents, composed agents are also automatically created or destroyed, respectively via composition and reduction mechanisms. Composition occurs when one agent reaches a
certain state (detected via observation functions). In that state, it launches a negotiation
protocol with the agents belonging to the same level in order to aggregate the agents
into one composed agent. To generate the new composed agent and its relations with the
other agents, the authors introduce the notion of transformation functions. Indeed, the VOWELS paradigm [13] states that a MAS can theoretically be decomposed into {Agents, Environment, Interaction, Organisation}. As a result, four types of transformation functions are introduced, one for each component of the MAS, respectively PA, PE, PI and
PO. PA is an operator grouping agents from one level and associating them to an agent of
the upper level. PE is an operator grouping elements of the environment from one level
into an element of the environment at the upper level. PI is an operator transforming interactions between agents of one level into interactions at the upper level: the perceptions/actions and messages of agents of level N are transformed by PI into perceptions/actions and messages of agents of level N + 1. PO is an operator that transforms relations between agents: a relation between agents of level N that are grouped in different agents of level N + 1 is transformed by PO into a relation between those level N + 1 agents.
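Two of these transformation functions can be sketched as follows; the data structures (level and parent dictionaries) and function behaviour are illustrative assumptions, not the formal definitions from [12]:

```python
# Hypothetical sketch of two of the four VOWELS transformation functions:
# PA groups agents of one level under a new agent of the upper level;
# PO lifts a relation between grouped agents to a relation between parents.

def PA(agents, levels, parents, new_agent):
    """Group `agents` (all at the same level) under `new_agent`, one level up."""
    level = levels[agents[0]]
    assert all(levels[a] == level for a in agents), "PA groups a single level"
    levels[new_agent] = level + 1
    for a in agents:
        parents[a] = new_agent
    return new_agent

def PO(relations, parents):
    """Transform relations between level-N agents belonging to different
    level N+1 agents into relations between those level N+1 agents."""
    lifted = set()
    for a, b in relations:
        pa, pb = parents.get(a), parents.get(b)
        if pa is not None and pb is not None and pa != pb:
            lifted.add((pa, pb))
    return lifted

levels = {"a0_1": 0, "a0_2": 0, "a0_3": 0}   # three elementary agents
parents = {}
PA(["a0_1", "a0_2"], levels, parents, "a1_1")
PA(["a0_3"], levels, parents, "a1_2")
lifted = PO({("a0_2", "a0_3")}, parents)      # lifted to level-1 agents
```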
MAS is a framework for designing global control of distributed systems, but it is most of the time employed to structure only a small part of them (MAS for defining IP behaviour for control in manufacturing or in logistics, MAS for controlling a specific feature of a WSN, etc.). Currently, resource management services in existing WSN MAS solutions are tightly coupled with applications, and generic resource management services still need to be developed. CM implies controlling the device network all along the lifecycle and considering CM capacities to define new services. As demonstrated in Sect. 2, many new services will depend on the product abstraction level. That is why a recursive and dynamic holarchy (or agent-based model) is needed to exploit CM capacities. In this regard, the MAS-R model seems generic enough to be used as a basis to structure the Holonic Architecture needed for the Communicating Material.
The McBIM project involves three French laboratories and one company: CRAN works on network and information management, LAAS designs the sensing and communicating nodes, LIB studies data interoperability, and all these works are implemented by 360 SmartConnect/FINAO SAS.
The communicating concrete (see Fig. 5) consists of a concrete structure where
many sensing and communicating nodes are spread. The sensing nodes will periodically
monitor the physical parameters (like temperature, humidity …) of the concrete. Com-
municating nodes aggregate received data and transmit them to remote servers thanks
to BIM standards. Besides, manufacturing data (like physical properties or information
related to manufacturing actors) may also be considered. The communicating concretes’
behaviours may differ along their lifecycle. During the manufacturing phase, the WSN nodes are inserted and initialized; the communicating concretes periodically (for example, every hour) monitor their physical status and store the physical property information and manufacturing actor information. During the construction phase, communicating concretes are assembled. As communicating concretes arrive, auto-organization is needed to dynamically define a 3D network and achieve energy savings. The concrete must frequently report its status to ensure construction safety and update the network organization. When the construction is completed, the 3D static WSN will regularly (every half-day) monitor structural health data (such as cracks, temperature, corrosion, etc.) to ensure the maintenance of the building. The communicating concrete element must last several decades, from the manufacturing phase to the end of the exploitation phase.
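The phase-dependent reporting behaviour described above can be captured in a small dispatch table; the periods below only echo the examples given in the text (hourly during manufacturing, half-daily during exploitation), and the construction-phase value is an illustrative assumption:

```python
# Illustrative sketch: phase-dependent monitoring periods for a communicating
# concrete element. Values mirror the examples in the text where given; the
# construction-phase period is an assumption.
MONITORING_PERIOD_S = {
    "manufacturing": 3600,        # every hour: physical status, actor data
    "construction":  600,         # frequent reports for construction safety
    "exploitation":  12 * 3600,   # every half-day: structural health data
}

def next_report_delay(phase):
    """Return the delay (seconds) before the next monitoring report."""
    return MONITORING_PERIOD_S[phase]
```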
This part shows how the MAS proposal can be used with the physical McBIM elements. As described in Sect. 5.1, concrete pieces pass through different phases (manufacturing, logistics, construction, exploitation); the frequencies at which communicating concretes produce data must evolve over time. Because McBIM elements can interoperate with each other, and in order to control the lifetime of the services provided by communicating concretes, the WSNs also have to be reorganized. We focus on energy control in this section.
From that point, data generated by the sensors are sent to the Concrete Agent thanks to the PI operator: the temperatures monitored by each sensor are aggregated into an average temperature at the Concrete Agent level. In a second step, the green and red sensor nodes (S) detect another concrete. In the MAS, this meeting leads to interactions between the Elementary Agents (E) associated with the different "Composed Agents level 1" (C1 and C2).
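The PI aggregation step can be sketched as below; the function name and the plain averaging rule are assumptions, since the paper states only that sensor temperatures are aggregated into an average at the Concrete Agent level:

```python
# Minimal sketch of the PI operator's data aggregation: individual sensor
# temperatures at level N become one average temperature at the Concrete
# Agent (level N+1). The function name is illustrative.
def pi_aggregate_temperature(sensor_readings):
    """Aggregate per-sensor temperatures into the Concrete Agent's average."""
    if not sensor_readings:
        raise ValueError("no sensor readings to aggregate")
    return sum(sensor_readings) / len(sensor_readings)

avg = pi_aggregate_temperature([20.5, 21.0, 19.5])  # about 20.33 degrees C
```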
Aggregation rules are shared by every agent (E, C, MC, …) whatever its abstraction level, and define whether the green and red pieces have to be engaged in a relationship (cases 2 and 3) or not (case 1: elementary agent interaction only). If they are related, the (C) agents exchange information to decide on the kind of relationship. The green and red communicating concretes can be associated (case 2: composed agent association only); this case can represent a temporary relationship (for example, during the storage or transport phase). In that case, the relation between agents of lower levels is transformed into a higher-level relation between concrete agents thanks to the PO operator. In a more intensive relationship, the two McBIM pieces can aggregate their networks, their models and their properties (case 3). This last case creates a new "Composed Agent level 2" (MC) thanks to the PO operator followed by the PA operator.
any type of recursive and dynamic holarchies (not only Communicating Materials). To complete this work, the transformation functions and negotiation protocols still have to be clearly defined and instantiated on the McBIM project.
Acknowledgements. The authors thank the French National Research Agency (ANR) for its financial support under the McBIM project, grant number ANR-17-CE10-0014.
References
1. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Product intelligence in industrial
control: theory and practice. Annu. Rev. Control 37(1), 69–88 (2013)
2. Kubler, S., Derigent, W., Thomas, A., Rondeau, É.: Problem definition methodology for the "Communicating Material" paradigm. In: 10th IFAC Workshop on Intelligent Manufacturing Systems, Lisbon, vol. 10, pp. 198–203 (2010)
3. Kubler, S., Derigent, W., Thomas, A., Rondeau, E.: Embedding data on “communicating
materials” from context-sensitive information analysis. J. Intell. Manuf. 25(5), 1053–1064
(2014)
4. Mekki, K., Derigent, W., Zouinkhi, A., Rondeau, E., Thomas, A., Abdelkrim, M.N.: Non-
localized and localized data storage in large-scale communicating materials: probabilistic and
hop-counter approaches. Comput. Stand. Interfaces, 44, 243–257 (2016)
5. Wong, C.Y., Mcfarlane, D., Zaharudin, A.A., Agarwal, V.: The intelligent product driven sup-
ply chain. In: Proceedings IEEE International Conference on Systems, Man and Cybernetics,
pp. 4-6 (2002)
6. Ventä, O.: Intelligent Products and Systems. Technology Theme - Final Report, VTT Publications 304. VTT, Espoo (2007)
7. Koestler, A.: The Ghost in the Machine. Hutchinson (1967)
8. Koestler, A.: Janus: A summing up. Bull. At. Sci. 35(3), 4 (1979)
9. Botti, V., Giret, A.: ANEMONA: a Multi-agent Methodology for Holonic Manufacturing
Systems, vol. 53 (2008)
10. Sallez, Y., Montreuil, B., Ballot, E.: On the activeness of physical internet containers. Comput.
Ind. 81, 96–104 (2016)
11. Delgado, C.: Organisation-based co-ordination of wireless sensor networks. University of
Barcelone (2014)
12. Hoang, T.: Un modèle multi-agent récursif générique pour simplifier la supervision de
systèmes complexes artificiels décentralisés. Université de Grenoble, Grenoble, France (2012)
13. Demazeau, Y.: From interactions to collective behaviour in agent-based systems. In:
Proceedings of the 1st. European Conference on Cognitive Science, Saint-Malo (1995)
14. McBIM Consortium, WebPage of the McBIM Project (2018). https://mcbim.cran.univ-lorrai
ne.fr
15. Derigent, W., et al.: Materials communicating with the BIM: aims and first results of the McBIM project. In: Structural Health Monitoring 2019: Enabling Intelligent Life-Cycle Health Management for Industry Internet of Things (IIoT), Proceedings of the 12th International Workshop on Structural Health Monitoring (2019)
16. Wilensky, U.: NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston (1999). https://ccl.northwestern.edu/netlogo/
The Concept of Smart Hydraulic Press
1 Introduction
Hydraulics and the fundamentals of fluid mechanics have been known since the seventeenth century. With the development of the economy came the demand for faster, more efficient, flexible and accurate systems. In the past decade, a major step forward has been made in the modernization of hydraulic components and systems; still, the utilization of advanced components in process systems remains low due to poor communication within the system.
The Industry 4.0 (I4.0) framework was proposed around 2010 and has been in the validation phase for 10 years [1]. With the main idea of networked hydraulics, CPS and the intelligent networking of distributed smart components and systems, which results in highly efficient automation, became even more important in terms of the self-configuration and self-awareness of the systems themselves. This will increase the flexibility of hydraulic systems in the direction of agile manufacturing, ready to be implemented in the smart factory [2].
Frequently used elements in a smart factory are smart products, processes and materials, connected in the smart network of cyber-physical systems (Smart-NetCPS). The design methodology of a smart system, supported by process data for processing and storage, cloud technologies, the Internet of Things (IoT), communication technology, machine learning, simulation analysis, real data analytics, AI, and smart components such as actuators and sensors, is shown in Fig. 1 [3–5].
Fig. 2. Scheme of servo hydraulic press divided into several functional sub-systems
lead to fracture and wrinkling of the product or, in the worst case, to tool damage. The purpose of an intelligent hydraulic system is to control and self-adjust the blanking force in order to prevent fracture and wrinkling, which can be determined by monitoring the forming force or by machine vision.
To ensure constant inlet servo valve pressure conditions, the mission of the control agent is to regulate the speed of the electric motor that drives the hydraulic pump. The algorithm monitors the operation of the hydraulic unit and eliminates the influence of cavitation, while improving energy consumption.
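The control agent's pressure-tracking task could be sketched with a simple proportional-integral law; the paper does not give the actual algorithm, so the control structure, gains and limits below are all illustrative assumptions:

```python
# Hypothetical sketch of the control agent: adjust the pump motor speed so
# that the servo valve inlet pressure tracks its setpoint. A plain PI law
# stands in for the unspecified algorithm; gains and limits are illustrative.
class PumpSpeedController:
    def __init__(self, p_set_bar, kp=50.0, ki=5.0, rpm_min=300, rpm_max=3000):
        self.p_set, self.kp, self.ki = p_set_bar, kp, ki
        self.rpm_min, self.rpm_max = rpm_min, rpm_max
        self.integral = 0.0

    def update(self, p_meas_bar, dt_s):
        err = self.p_set - p_meas_bar                     # pressure error [bar]
        self.integral += err * dt_s
        rpm = 1500 + self.kp * err + self.ki * self.integral
        return max(self.rpm_min, min(self.rpm_max, rpm))  # clamp to safe range

ctrl = PumpSpeedController(p_set_bar=200.0)
rpm = ctrl.update(p_meas_bar=190.0, dt_s=0.01)  # pressure too low -> speed up
```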
The selection of the right forming tool depends on the shape of the product. Based on
the collected data from the product family, the algorithm is able to automatically select
the right tool for the forming operation. The digital twin in the background evaluates
possible scenarios in parallel, allowing the control agent to adjust the control parameters
based on complex computation.
Fig. 3. Integration of a hydraulic press process into the smart factory production process [15]
While the deep drawing process is being performed in real time, the digital twin continuously improves the process in parallel and collects information via digital agents that make decisions based on smart algorithms. The digital twin stores simulated data on a server in a local cloud and provides traceability of information. Typically, hydraulic processes are controlled by PLCs based on an analytical method that describes the system. A local agent compares the reference with the calculated parameters and performs compensation in the system using AI-based algorithms.
With this approach, abnormal deviations and anomalies in the system's behaviour can be detected in real time. The deviations can be analysed by digital agents, which provide a new control strategy, i.e. perform adaptive control. System data is collected in a local cloud; moreover, agile communication and the connection of the local agents in the MAS are of great importance [15].
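The local agent's reference-versus-measurement comparison can be sketched as below; a fixed relative-deviation threshold stands in for the AI-based algorithms mentioned in the text, and all names and values are illustrative:

```python
# Illustrative sketch: a local agent compares reference parameters with
# measured ones and flags abnormal deviations. A simple threshold replaces
# the AI-based analysis described in the text.
def detect_deviations(reference, measured, rel_threshold=0.10):
    """Return parameters whose relative deviation exceeds the threshold."""
    anomalies = {}
    for name, ref in reference.items():
        dev = abs(measured[name] - ref) / abs(ref)
        if dev > rel_threshold:
            anomalies[name] = dev
    return anomalies

ref = {"force_kN": 120.0, "pressure_bar": 200.0, "speed_mm_s": 15.0}
meas = {"force_kN": 118.0, "pressure_bar": 230.0, "speed_mm_s": 15.2}
flags = detect_deviations(ref, meas)  # only the pressure deviates by > 10 %
```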
From our point of view, a digital twin is an integrated prediction of a scaled physical model combined with a simulation that defines its functioning with a probability factor. An example of the digital twin concept of a hydraulic system for a deep drawing process is shown in Fig. 4. During the execution of the process, a simulation of the process, mirrored into a digital system, is performed in parallel; it corrects the device and predicts future activity using AI techniques. Synergies between real and simulation data increase productivity, stability and the interaction between components connected to the virtual world [16].
414 D. Jankovič et al.
Figure 5 shows the digital model proposed for a hydraulic press. It consists of several
important subsystems:
• The input database where all initial parameters are collected as well as new data
provided by the expert system.
• The digital fault-diagnosis system, i.e. the digital twin where the data from the database
and the process parameters are collected in the cloud platform and used as input for all
simulation processes. Here the improvement loop, executed by advanced algorithms
(digital agents) is used to perform parameter auto-correction and decision-making in
real-time.
• The visualization system, which is responsible for monitoring the important data defined by the digital agents. Based on the gathered information describing the system behaviour, real-time monitoring and control signals are sent back to the system controller to execute an improved cycle.
The expert system concept can be integrated into the NI LabVIEW environment.
The vast majority of hydraulic systems still use a constant-discharge pump set to the maximum required load, resulting in unused hydraulic energy being lost in the fluid circulation process. Energy conservation can be achieved by developing a control method that adjusts the rotational speed of the servo motor based on the requirements of the press operation. By improving the load on the hydraulic energy source, the energy consumption of the system is lowered and the forming process is more stable, resulting in better formability and product quality. However, servo motors have a lower efficiency at lower loads; a higher load on the drive motor increases energy efficiency.
The design of a suitable simulation model integrated in the digital twin of the hydraulic power unit, and the development of an algorithm to control and improve the hydraulic energy consumption, may lead to corrective measures that set the ideal servo motor speed and reduce the level of vibration in the system [22].
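The energy-saving idea can be made concrete with a back-of-the-envelope comparison: for a fixed-displacement pump, flow is roughly proportional to shaft speed, so matching the speed to demand avoids circulating unused flow. All numbers and the proportionality constant below are illustrative assumptions, not measurements from the paper:

```python
# Rough sketch of the energy-saving method: hydraulic power ~ p * Q, and for
# a fixed-displacement pump Q ~ k * rpm. Running the servo motor at the
# demanded speed instead of a constant maximum saves the surplus energy.
K_FLOW = 1.0e-5   # illustrative flow-per-rpm proportionality constant

def hydraulic_energy_J(pressure_pa, rpm_profile, dt_s):
    """Energy over a cycle, given pressure and a per-step pump speed profile."""
    return sum(pressure_pa * K_FLOW * rpm * dt_s for rpm in rpm_profile)

demand = [3000, 500, 500, 2000]       # rpm actually needed at each step
fixed = [3000] * len(demand)          # constant-speed baseline
e_fixed = hydraulic_energy_J(2.0e7, fixed, 1.0)
e_adapt = hydraulic_energy_J(2.0e7, demand, 1.0)
saving = 1 - e_adapt / e_fixed        # fraction of energy saved
```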
Fig. 6. Automatic tool change and RFID recognition of the specific tool
This paper proposes a solution that can be gradually integrated into any smart hydraulic system to improve its performance. With a modular upgrade, it is necessary to develop a multi-agent system in the MES layer. By increasing the number of monitoring and control subsystems, the complexity of the hydraulic CPS increases; the problem of data acquisition is solved with a programmable system interface, while diverse information formats require different communication protocols. As described, the mirroring of the real-time environment to the digital model, together with machine learning techniques, is of crucial importance. With proper data acquisition and analysis, good decision-making capabilities can be achieved by artificial agents.
Real-time, immediate failure detection or worn die tool estimation results in a quick artificial agent response and prevents further damage. By controlling product quality with visual and acoustic methods, better results can be obtained; however, the research shows that in many cases cracks and other unwanted defects cannot be detected by monitoring process parameters. The proposed tool change method represents a big step forward in the development of automatic smart hydraulic systems. A faster company response time can be achieved by introducing industrial robots with high-precision actuators. The energy saving method is more efficient in a system where many parallel hydraulic presses use the same energy source. With a lower servo rotation speed there is less vibration, making the process more stable.
Future research will focus on the realization of the presented concepts for improving the performance of the hydraulic CPS. A real prototype based on the proposed design will be built, and an experimental analysis will be performed in order to validate the given solutions. The main challenge will be the development of the AI algorithms that must be implemented with the help of digital agents to perform the predictive analysis and influence the control strategy. The development of the digital twin and its implementation in the real environment will be tested, and corrections to improve the smart hydraulic system will be made where needed.
Acknowledgment. The work was carried out in the framework of the GOSTOP program
(OP20.00361), which is partially financed by the Republic of Slovenia – Ministry of Educa-
tion, Science and Sport, and the European Union – European Regional Development Fund. The
authors also acknowledge the financial support from the Slovenian Research Agency (research
core funding No. (P2–0248)).
References
1. Bauernhansel, T., et al.: WGP-Standpunkt Industrie 4.0. Available via DIALOG (2016).
https://wgp.de/wp-content/uploads/WGP-Standpunkt_Industrie_4-0.pdf. Accessed 14 Apr
2020
2. Tao, F., et al.: Digital twins and cyber-physical systems toward smart manufacturing and
industry 4.0: correlation and comparison. Engineering 5, 653–661 (2019)
3. Barasuol, V., et al.: Highly-integrated hydraulic smart actuators and smart manifolds for
high-bandwidth force control. Front. Robot. AI 5, 137–150 (2018)
4. Schuler (2020). https://www.schulergroup.com. Accessed 14 Apr 2020
5. Bosch Rexroth (2020). https://www.boschrexroth.com/en/xc. Accessed 14 Apr 2020
6. Nord, J.H., et al.: The internet of things: review and theoretical framework. Expert Syst. Appl.
133, 97–108 (2019)
7. Parrott, A., Warshaw, L.: Industry 4.0 and the digital twin, p. 17. Deloitte University Press
(2017)
8. Ferreira, J.A., et al.: Close loop control of a hydraulic press for springback analysis. J. Mater.
Process. Technol. 177, 377–381 (2006)
9. Zhang, Z., et al.: Research on springback control in stretch bending based on an iterative compensation method. Math. Probl. Eng. 2019, Article ID 2025717 (2019)
10. Dion, B.: Creating a Digital Twin for a Pump. Available via DIALOG (2017). https://
www.ansys.com/-/media/ansys/corporate/resourcelibrary/article/creating-a-digital-twin-for-
a-pump-aa-v11-i1.pdf. Accessed 14 Apr 2020
11. Xie, J., et al.: Virtual monitoring method for hydraulic supports based on digital twin theory.
Mining Technol. 128, 77–87 (2019)
12. Mittal, S., et al.: Smart manufacturing: characteristics, technologies and enabling factors. Proc. Inst. Mech. Eng., Part B: J. Eng. Manuf. 233(5), 1342–1361 (2019). Special Section: Smart Manufacturing and Digital Factory
13. Linjama, M., et al.: High-performance digital hydraulic tracking control of a mobile boom mockup. In: 10th International Fluid Power Conference, Dresden, Germany, March 2016, Digital Hydraulics, Paper A-1, pp. 37–48 (2016)
14. Wang, S., et al.: Implementing smart factory of Industrie 4.0: an outlook. Int. J. Distrib. Sensor Netw. 12(1) (2016). Article ID 3159805
15. Resman, M., et al.: A new architecture model for smart manufacturing: a performance analysis
and comparison with the RAMI 4.0 reference model. Adv. Prod. Eng. Manag. 14, 153–165
(2019)
16. Tao, F., et al.: Digital twin-driven product design framework. Int. J. Prod. Res. 57, 3935–3953
(2019)
17. Glaessgen, E.H., Stargel, D.S.: The digital twin paradigm for future NASA and U.S. Air Force vehicles. In: 53rd Structures, Structural Dynamics, and Materials Conference, Hawaii, April 2012, Special Session on the Digital Twin. NASA Technical Reports Server (2012)
18. Lee, J. et al.: Recent advances and trends in predictive manufacturing systems in big data
environment. In: Conference on Industrial Informatics, Germany, July 2013, Manufacturing
Letter, vol. 1, pp. 38–41. Elsevier, US (2013)
19. Rojko, A.: Industry 4.0 concept: background and overview. Int. J. Interact. Mobile Technol.
(iJIM) 11, 77–90 (2008)
20. Alcácer, V., Cruz-Machado, V.: Scanning the Industry 4.0: a literature review on technologies
for manufacturing systems. Eng. Sci. Technol. Int. J. 22, 899–919 (2019)
21. Fillatreau, P., et al.: Sheet metal forming global control system based on artificial vision
system and force-acoustic sensors. Robot Comput. Integr. Manuf. 24, 780–787 (2008)
22. Li, L., et al.: An energy-saving method to solve the mismatch between installed and demanded
power in hydraulic press. J. Cleaner Prod. 139, 636–645 (2016)
23. Meermann, L., et al.: Sensors as drivers of Industry 4.0. Available via DIALOG (2019). https://
assets.ey.com/content/dam/ey-sites/ey-com/de_de/topics/industrial-products/ey-study-sen
sors-as-drivers-of-industry-4-0.pdf?download. Accessed 12 Feb 2020
Distributed Dynamic Measures
of Criticality for Telecommunication
Networks
1 Introduction
2 Methods
We describe three criticality measures for nodes in a network. Each is computed from local network structural information, and we augment them to dynamically compute time-dependent node states. We then validate them using an augmented susceptible-infected-susceptible (SIS) model [8] with incremental infection, representing congestion spreading in a multi-agent system with fixed storage.
We chose these measures because each has the flexibility to incorporate dynamic node states and belongs to a different measure class. To compute criticality, the first, local centrality, counts the degrees of local nodes; the second, Wehmuth centrality, uses local structural measures; and the third, local bridging centrality, considers possible paths through a local region. We compare how accurately these three measures estimate criticality, and explain the simulation model.
H_i(u) = ⋃_{j=1}^{i} Γ_j(u).
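The i-hop neighbourhood H_i(u) can be computed with a bounded breadth-first search; the adjacency-dictionary representation and function name below are illustrative:

```python
# Sketch of H_i(u): the set of nodes at most i edges away from u (the union
# of the rings Gamma_j(u) for j = 1..i), via a depth-bounded BFS.
from collections import deque

def H(adj, u, i):
    """Nodes within i hops of u (excluding u), for adjacency dict `adj`."""
    seen, out, frontier = {u}, set(), deque([(u, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == i:          # do not expand past depth i
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                out.add(nb)
                frontier.append((nb, dist + 1))
    return out

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph 0-1-2-3
```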
emitted later. To model this, suppose that all nodes have a queue length, or weight. Then, rather than summing over numbers of nodes, each node is counted the same number of times as its queue length; we list the queue lengths along a row vector c ∈ R^n, where the position of a value corresponds to a node. Similarly, instead of a set, one can use a binary row vector representation of the set of nodes at most i edges away from a node, denoted H_i ∈ {0, 1}^n. Since both are row vectors, Eq. (1) can then be rewritten as
C_L(u) = Σ_{v∈Γ_1(u)} Σ_{w∈Γ_1(v)} c · H_i(w)^T.
This has the advantage of giving extra weight to local regions with more queued data packets, and serves as a useful estimate of criticality over time. To implement this, we renormalise the spread of C_L outputs to the range [0, 1], where being closer to one suggests greater criticality. This preserves both ranking and scaling.
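The queue-weighted local centrality and its renormalisation can be sketched together; the helper, the default hop radius i = 2 and the toy graph are illustrative assumptions:

```python
# Sketch of the queue-weighted local centrality: for each node u, sum over
# its neighbours v and their neighbours w the total queue length held within
# i hops of w (the inner product c . H_i(w)), then min-max scale to [0, 1].
from collections import deque

def within_hops(adj, u, i):
    """Set of nodes at most i hops from u (excluding u), via bounded BFS."""
    seen, out, frontier = {u}, set(), deque([(u, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == i:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                out.add(nb)
                frontier.append((nb, dist + 1))
    return out

def local_centrality(adj, queues, i=2):
    raw = {u: sum(sum(queues[x] for x in within_hops(adj, w, i))
                  for v in adj[u] for w in adj[v])
           for u in adj}
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0               # avoid division by zero
    return {u: (val - lo) / span for u, val in raw.items()}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path graph 0-1-2-3
queues = {0: 1.0, 1: 5.0, 2: 1.0, 3: 0.0}     # node 1 holds most packets
scores = local_centrality(adj, queues)
```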
It compares this value between nodes, where higher values suggest greater criticality. Both are centrality measures and require augmentation to compute criticality. We use the bridging coefficient [10], which describes the embedding of a node within a connected component using local information. It is defined as the reciprocal of the node's degree, divided by the sum of the degrees of its neighbourhood. For a node u, the bridging coefficient β : V → (0, 1] is

β(u) = (1/d_u) / (Σ_{i∈Γ(u)} d_i)  if d_u > 0,  and  β(u) = 1  if d_u = 0.
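The piecewise definition above translates directly into a short function; the adjacency-dictionary representation is an illustrative assumption:

```python
# Sketch of the bridging coefficient as defined above: the reciprocal of the
# node's degree divided by the sum of its neighbours' degrees, with the
# convention that an isolated node gets 1.
def bridging_coefficient(adj, u):
    d_u = len(adj[u])
    if d_u == 0:
        return 1.0
    return (1.0 / d_u) / sum(len(adj[i]) for i in adj[u])

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: []}  # node 4 is isolated
```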
By multiplying the sociocentric betweenness centrality and bridging coeffi-
cient, we obtain the sociocentric bridging centrality, CB : V → R, such
that
CB (u) = Bs (u) · β(u). (3)
This can be changed into the local bridging centrality by replacing the
sociocentric betweenness in Eq. (3) with the egocentric betweenness, rewriting
it as
CB (u) = Bc (u) · β(u). (4)
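A sketch of Eq. (4) in plain Python, assuming an Everett–Borgatti-style local computation of egocentric betweenness (pairs of non-adjacent neighbours share credit over their common neighbours in the ego network); the example graph is illustrative.

```python
from itertools import combinations

def egocentric_betweenness(adj, u):
    """Egocentric betweenness of u: for each pair of u's neighbours with
    no direct edge, u carries a share of the length-2 shortest paths,
    one over the number of their common neighbours in the ego network."""
    nbrs = set(adj[u])
    ego = nbrs | {u}
    b = 0.0
    for v, w in combinations(sorted(nbrs), 2):
        if w in adj[v]:
            continue  # directly connected: no shortest path through u
        common = sum(1 for x in ego if x in adj[v] and x in adj[w])
        b += 1.0 / common  # common >= 1 since u neighbours both v and w
    return b

def local_bridging_centrality(adj, u):
    """C_B(u) = B_e(u) · β(u): egocentric betweenness times the
    bridging coefficient, in the spirit of Eq. (4)."""
    du = len(adj[u])
    beta = 1.0 if du == 0 else (1.0 / du) / sum(len(adj[i]) for i in adj[u])
    return egocentric_betweenness(adj, u) * beta

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
cb = {u: local_bridging_centrality(adj, u) for u in adj}
```

Node 3, the sole gateway to the leaf node 4, receives the highest score, which matches the intuition that it bridges two parts of the network.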
To create a dynamic measure, we use dynamic queue lengths in nodes so that
data packet flow affects network criticality, augmenting the local bridging cen-
trality with node weights associated with criticality. This uses both egocentric
betweenness and the bridging coefficient. The bridging coefficient estimates the
likelihood that a node is on a bridge between clusters. It is purely structural, so
it is unchanged, but the egocentric betweenness may be naturally extended.
Queues are made of data packets which follow paths, and egocentric betweenness
Distributed Dynamic Measures of Criticality 427
is a path measure, so we may weight each path by the sum of its nodes’ queue
lengths, which for a given node u ∈ V we denote as c_u. For the set of shortest
paths between nodes v and w, denoted P_{v,w}, we achieve this by redefining
ρ_{v,w} and ρ_{v,w}(u) as
ρ_{v,w} = Σ_{P∈P_{v,w}, u∉P} Σ_{x∈P} c_x ;    ρ_{v,w}(u) = Σ_{P∈P_{v,w}, u∈P} Σ_{x∈P} c_x.
By inserting the new ρv,w and ρv,w (u) into Eq. (2), and this into Eq. (4), we
obtain the weighted localised bridging centrality.
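One reading of these weighted counts can be sketched as follows, assuming the two sums split the shortest paths by whether they pass through u; the BFS path enumeration and the example graph are illustrative.

```python
from collections import deque

def shortest_paths(adj, v, w):
    """Enumerate all shortest paths from v to w via a BFS predecessor DAG."""
    dist, preds = {v: 0}, {v: []}
    q = deque([v])
    while q:
        x = q.popleft()
        for nb in adj[x]:
            if nb not in dist:
                dist[nb] = dist[x] + 1
                preds[nb] = [x]
                q.append(nb)
            elif dist[nb] == dist[x] + 1:
                preds[nb].append(x)

    def unwind(x):
        if x == v:
            yield [v]
        else:
            for p in preds[x]:
                for path in unwind(p):
                    yield path + [x]

    return list(unwind(w)) if w in dist else []

def weighted_path_counts(adj, queue, v, w, u):
    """Queue-weighted ρ_{v,w} and ρ_{v,w}(u): each shortest path is
    weighted by the sum of its nodes' queue lengths, split by whether
    the path passes through u."""
    rho, rho_u = 0, 0
    for path in shortest_paths(adj, v, w):
        cost = sum(queue[x] for x in path)
        if u in path:
            rho_u += cost
        else:
            rho += cost
    return rho, rho_u

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
queue = {0: 2, 1: 5, 2: 1, 3: 4, 4: 3}
rho, rho_u = weighted_path_counts(adj, queue, 0, 3, u=1)
```

Between nodes 0 and 3 there are two shortest paths; the one through node 1 carries more queued data, so it contributes more weight to ρ_{0,3}(1).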
this gives the set of all dynamic node attributes, C = {Q, C_L, C_W, C_B}. Let us
denote

B^(c)_{t,i,u} : {T_i, S, V, C} → R.

For comparative analysis, we normalise A^(c)_{t,i} with respect to t for each
attribute and simulation using min-max feature scaling, generating A^{*(c)}_{t,i} :
{T_i, S_i, V, C} → [0, 1]. We then compute the mean squared error (MSE) across
time for A^{*(C_L)}_{t,i}, A^{*(C_W)}_{t,i}, and A^{*(C_B)}_{t,i}, versus the
normalised aggregated queue length, A^{*(Q)}_{t,i}, and denote it as

M_i^c = MSE(A^{*(c)}_i, A^{*(Q)}_i),    c ∈ {C_L, C_W, C_B}.
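A minimal sketch of this normalise-then-compare step, with hypothetical single-simulation traces standing in for the aggregated attributes.

```python
def min_max(series):
    """Min-max feature scaling of a time series to [0, 1]."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1
    return [(x - lo) / span for x in series]

def mse(a, b):
    """Mean squared error between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Hypothetical per-timestep traces for one simulation instance:
measure_trace = [3, 5, 9, 6, 4]    # e.g. an aggregated C_L over time
queue_trace = [2, 6, 10, 7, 3]     # the aggregated queue length Q
M = mse(min_max(measure_trace), min_max(queue_trace))
```

Because both series are scaled to [0, 1] before comparison, M is small whenever the measure tracks the shape of the queue-length trace, regardless of their raw magnitudes.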
We take the mean over all simulations for each measure, obtaining the accuracy
with which each measure estimates the criticality of each node within the network,
giving
M̄^c = (1/n) Σ_{i=0}^{n} M_i^c,    c ∈ {C_L, C_W, C_B}.
We analyse this procedure for a network model with n = 15 nodes, edge
attachment number m = 3, infection rate β = 0.9, recovery rate γ = 0.5, and
infection threshold I = 3. We simulate a simple network at and beyond capacity.
Mean simulation runtime was 4. Network G is visualised in Fig. 1. Node size
corresponds to the queue length Q of each node, which has been assigned randomly.
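A plain-Python sketch of the experimental setup, assuming a standard Barabási–Albert preferential-attachment generator and uniformly random queue lengths (the seed and queue range are illustrative, not the paper's values).

```python
import random

def barabasi_albert(n, m, seed=0):
    """Barabási–Albert preferential attachment: each new node attaches
    up to m edges to existing nodes chosen proportionally to degree.
    A plain sketch of the n = 15, m = 3 model used in the experiment."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))   # start from m seed nodes
    repeated = []              # node list weighted by degree
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return {u: sorted(vs) for u, vs in adj.items()}

G = barabasi_albert(15, 3, seed=42)
queue = {u: random.Random(u).randint(1, 10) for u in G}  # random Q per node
```

Preferential attachment yields the heavy-tailed degree distribution typical of communication networks, so a few hub nodes dominate the structure.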
In Table 1 we explore the output of simulation S1 , initialised from the green node
in Fig. 1. It shows that weighting measures with queue lengths better tracks the
progression of data packets within the network. The aggregated queue length is
forward looking, suggesting that the dynamic criticality measures detect risky
nodes.
This is seen for the dynamic measures in Fig. 2, which is normalised for compar-
ative inspection. This is information for only a single simulation instance, and it
is not obvious which measure more closely estimates future progression. Combin-
ing the MSEs of each measure from the queue length for each simulation instance
yields the average MSE, denoted M̄^c.
Results show that dynamic local centrality performs best, with M̄^{C_L} = 0.102,
followed by dynamic bridging centrality, with M̄^{C_B} = 0.162; dynamic
Wehmuth centrality performs worst, with M̄^{C_W} = 0.197. This may be because
the model estimates dynamics via epidemic spread: the momentary rate at which
a node obtains data packets is strongly determined by the number of packets
held by the given node's neighbours, and dynamic local centrality directly counts
this.
430 Y. Proselkov et al.
Table 1. Aggregated attributes for a simulation S_1 over 5 timesteps, containing A^(c)_{t,1} for all measures.
Fig. 2. A plot showing the relationship between normalised aggregated measures and queue length from simulation S_1, containing A^(c)_{t,1} for c ∈ C.
All values are bounded in [0, 1], so, in the context of this test, each mea-
sure computes criticality with between 80% and 90% accuracy, which is quite
acceptable.
Our criticality measures give a distributed, computationally efficient and fast
method of finding an impact indicator to inform maintenance prediction models.
This allows for real-time control of any network system. Adding raw data such
as condition can give the probability of failure and other prognostics KPIs. Com-
bining these gives a risk ranking of nodes to help prioritise proactive mainte-
nance. This is a three-step framework: first collecting distributed data, then
generating prognostic KPIs, and finally informing an optimal maintenance plan,
shown in Fig. 3. This will minimise packet drops, latency and congestion, and
maximise network operative capacity.
This can be integrated into: telecommunications systems for proactive main-
tenance; autonomous vehicle networks for proactive routing to minimise traffic
jams; supply networks, where actors only have primary or secondary connec-
tion information; and any system of dynamically communicating agents. With
such measures, agents will be able to quickly and reliably establish their short-
term criticality, allowing for swift, inexpensive action to ensure ongoing network
function.
4 Future Research
In this paper, we have developed and compared the accuracy of three distributed
dynamic measures of nodal criticality within a network. Dynamic and distributed
approaches had not previously been combined in such a manner. We tested each
measure within an augmented SIS model and found that, for our test, they predict
criticality with high accuracy. Dynamic local centrality did best, though it is yet
unclear why. To our knowledge, no measures which approach the problem of
predicting the criticality of nodes dynamically and in a distributed manner like
this have been previously developed, and it is exciting that they have such high
proactive accuracy, suggesting it is worth researching more dynamically obtained
measures for prediction. They are necessary to deal with increasing data traffic
demands, especially if more COVID-19-like events occur in the future, where
greater network requirements are suddenly imposed on an already at-capacity
system.
We will need deeper statistical analysis to learn the true accuracy of the mea-
sure family, including test repetition, comparing static and classic network mea-
sures, and multi-dimensional analysis. We would also like to learn how network
structure and model configuration affect the result of the distributed dynamic
measures.
In future we will test the dynamic measures in different network models, such
as telecommunications data packet routing models, or supply network heuristic
movement models. We also plan to test the measures on a spectrum of network
topologies, as well as real-life network topologies, such as the BT network that
was studied in [6], to gain insights into their dynamics. We will also
study the impact of information reach, or how many hops away from itself a
given node takes information from, framed as the relationship between accuracy
and computational, time, and communication complexity. This will contribute to
the general theory of the value of information in distributed network analysis,
and has applications in any system with limited-awareness actors, such as supply
chains.
References
1. Albert, R., Barabasi, A.: Statistical mechanics of complex networks. Rev. Mod.
Phys. 74, 47–97 (2002)
2. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.: Complex networks:
Structure and dynamics. Phys. Rep. 424, 175–308 (2006)
3. Chen, D., Lü, L., Shang, M., Zhang, Y., Zhou, T.: Identifying influential nodes in
complex networks. Physica A. 391, 1777–1787 (2012)
4. Cohen, R., Erez, K., Ben-Avraham, D., Havlin, S.: Resilience of the Internet to
random breakdowns. Phys. Rev. Lett. 85, 4626–4628 (2000)
5. Freeman, L.C.: A set of measures of centrality based on betweenness. Sociometry
40, 35 (1977)
6. Herrera, M., Perez-Hernandez, M., Kumar Jain, A., Kumar Parlikad, A.: Criti-
cal link analysis of a national Internet backbone via dynamic perturbation. In:
Advanced Maintenance Engineering, Services and Technologies. IFAC, Virtual
(2020)
7. Marsden, P.V.: Egocentric and sociocentric measures of network centrality. Soc.
Netw. 24, 407–422 (2002)
8. Kermack, W.O., McKendrick, A.G., Thomas, W.G.: A contribution to the math-
ematical theory of epidemics. Proc. R. Soc. Lond. 115, 700–721 (1927)
9. Moura, J., Hutchison, D.: Cyber-physical systems resilience: state of the art,
research issues and future trends. In: arXiv preprint (2019)
10. Nanda, S., Kotz, D.: Localized bridging centrality for distributed network analysis.
In: 2008 Proceedings of 17th International Conference on Computer Communica-
tions and Networks, IEEE, St. Thomas, US Virgin Islands (2008)
11. Peterson, I.: Fatal Defect: Chasing Killer Computer Bugs. Times Books, New York
(1996)
12. Wang, J., Liu, Y.H., Jiao, Y., Hu, H.Y.: Cascading dynamics in congested complex
networks. Eur. Phys. J. B. 67, 95–100 (2009)
13. Wehmuth, K., Ziviani, A.: Distributed location of the critical nodes to network
robustness based on spectral analysis. In: 2011 7th Latin American Network Oper-
ations and Management Symposium, IEEE, Quito, Ecuador (2011)
Physical Internet and Logistics
A Multi-agent Model for the Multi-plant
Multi-product Physical Internet Supply Chain
Network
Valencia, Spain
Abstract. Supply chains are complex systems and stochastic in nature. Nowa-
days, logistics organizations are expected to be efficient, effective, and responsive
while respecting other objectives such as sustainability and resilience. In this
work, a multi-agent model is proposed for a multi-plant, multi-product supply
chain network that supports an open network with n nodes (plants, retailers, etc.).
Three replenishment policies are proposed, each with different selection criteria.
A multi-agent simulation tool was used to implement the proposed multi-agent
model. Different scenarios and configurations, varying from static to dynamic, are
defined and tested. The first objective of this work is to compare the performance
of physical internet supply chain and classical supply chain networks using hold-
ing and transportation costs as key performance indicators. The second goal is to
assess the performance of different replenishment policies for a multi-plant, multi-
product physical internet supply chain network. Experimental results validate the
ability of the model to assess the performance of the supply chain and to optimize
replenishment decisions.
1 Introduction
Today, supply chains are characterized by high complexity due to the competitiveness
between companies, the structures, processes, etc. The management of these systems is
critical and includes all the processes that transform raw materials into final products and
their distribution to the customers [3, 4]. Many activities affect the total costs of supply
chains, such as the forecasting and replenishment rules used [13]. In fact, the procurement
and the distribution are two vital actions in supply chains. These operational decisions
have a direct impact on inventory and transportation costs. On the other hand, supply
chain disruptions and risks affect the performance of supply chains. Nowadays, reducing
logistics costs and facing perturbations is a priority of many companies. To achieve such
a goal, a new paradigm named “Physical internet” was proposed in 2011 as a solution
to improve the global logistics performance in terms of economic, environmental and
social efficiency [8]. The physical internet network represents an open global logistics
supply chain based on physical, digital and operational interconnectivity. Contrary to the
traditional supply chain network that is based on a multi-echelon hierarchical structure
composed of plants, distribution centres, and retailers [6], the physical internet network
is composed of new components like PI-hubs, PI-containers, PI-movers, etc. [4, 5].
The network of PI-hubs is open and interconnected with other supply chain networks.
The replenishment orders from any sales point can be served by any PI-hub around the
network and the PI-hubs can be supplied by any other hub or the plant [14]. A solution of
inventory control problem in Physical Internet Supply Chain Network (PISCN) concerns
the assignment of the customer’s demands to the hubs, and the hubs to the plants or
other hubs [15]. The inventory control in Classical Supply Chain Network (CSCN) is
addressed by many researchers in literature. A mathematical formulation was proposed
by [2] for the inventory control in a multi-plant single-product supply chain to satisfy the
dynamic demand of the customers. A non-linear mathematical formulation for location
inventory to determine the quantities of products to send from plants to warehouses
and then to retailers was also presented [1]. Recently, the inventory control problem in
PISCN context has attracted attention, such as the problem of integrated production-
inventory-distribution decision in PI [6], with a mixed integer linear problem (MILP)
model to find multi-period decision-making.
To study a real system made up of interconnected elements where each of these
systems has its own dynamics, researchers may use mathematical models, simulation
tools and distributed approaches to study and evaluate the performance of such complex
systems. Multi-agent models are considered successful and suitable solutions for solving
complex dynamic problems. In fact, the multi-agent architecture is promising due to the
interactions, the collaboration and the cooperation between agents, and their reactivity
in dealing with uncertain events [7].
A simulation study was proposed by [14] to assess the performance of PISCN com-
posed of one plant, three hubs and two retailers by keeping the same network and data
while changing the network interconnectivity. Authors in [9] proposed two multi-agent
simulation models to compare the performance of PISCN and CSCN with external
perturbations using the same supply chain network configuration. However, only one
network configuration is evaluated: the tested network is composed of one plant, three
hubs, and two retailers [9].
This paper is an extension of the work presented in [9]. It considers a multi-plant
multi-product physical internet supply chain network. The proposed multi-agent model
is designed to support an open network of supply chains. A multi-agent simulation tool
is used to implement the proposed multi-agent model. Different static and dynamic
scenarios with perturbations are tested. The holding and transportation costs are used
as key performance indicators (KPI) to compare the performance of the physical internet
and classical supply chains with a multi-plant and multi-product configuration. Different
replenishment policies are also tested and compared within the PISCN.
The remainder of this paper is organized as follows. The studied problem is described
in Sect. 2. The proposed multi-agent model is detailed in Sect. 3. The implementation of
the proposed model is described in Sect. 4. Simulation scenarios are presented in Sect. 5.
The experimental results obtained and their analysis are given in Sect. 6. Conclusions and
some future works are given in Sect. 7.
2 Problem Description
3 Multi-agent Model
As mentioned before, this work is an extension of the study in Nouiri et al. [9]. Contrary
to the work of [9], where the single-plant inventory problem in PISCN was studied
and only one case study was tested, here the multi-plant multi-product PISCN is considered.
The multi-agent model is adopted as a promising, suitable solution to provide reactive
decisions and to deal with supply chain perturbations. New replenishment policies
are also proposed to solve the inventory control problem. In the next subsections, the
model's design and the agents' actions are detailed, and the replenishment policies
are presented.
best source (Plant or other PI-hub) to fulfil the hub demand. The TA moves to the requested
node to transport goods and updates the travelled distance and the transportation costs.
As described before, perturbations can occur in the PI-hub and distribution centre, and
some retailer demands can be delayed or missing. Perturbations can be external, related
to the unavailability of the routes between PI-nodes, which affects the transportation of
goods (accidents) and delays trucks, or internal, such as insufficient inventory stock
in PI-hubs or insufficient production quantity of plants, etc. [10].
• Random Method: for each retailer or hub demand, a random hub is selected as
the replenishment node.
• Closest Method: the hub closest to the retailer is always selected as the replenish-
ment node to fulfil retailers’ demands. We assume that each retailer has the same
demand each day. The selection criterion is the distance between nodes, without
consideration of the inventory stock level.
• Hybrid Method: this method combines the two previous ones, using both the distance
and the stock level to select the replenishment source. For 50% of the simulation time,
the closest hub is selected; for the other half, the hub with the largest inventory
level is selected, to balance inventory stock across all nodes.
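The three policies above can be sketched as a small selection function; the hub names, distances, stock levels and horizon below are illustrative assumptions, not the paper's data.

```python
import random

def select_hub(policy, hubs, distances, stocks, t, horizon, rng=random):
    """Pick a replenishment hub under one of the three policies described
    above. `distances` maps hub -> distance to the requesting node and
    `stocks` maps hub -> current inventory level; names are illustrative."""
    if policy == "random":
        return rng.choice(hubs)
    if policy == "closest":
        return min(hubs, key=lambda h: distances[h])
    if policy == "hybrid":
        # First half of the horizon: closest hub; second half: the hub
        # with the largest inventory, to balance stock across nodes.
        if t < horizon / 2:
            return min(hubs, key=lambda h: distances[h])
        return max(hubs, key=lambda h: stocks[h])
    raise ValueError(policy)

hubs = ["H1", "H2", "H3"]
distances = {"H1": 12.0, "H2": 7.5, "H3": 20.0}
stocks = {"H1": 850, "H2": 400, "H3": 900}
early = select_hub("closest", hubs, distances, stocks, t=0, horizon=60)
late = select_hub("hybrid", hubs, distances, stocks, t=40, horizon=60)
```

Early in the horizon the hybrid policy behaves like the closest policy (picking H2 here), while late in the horizon it switches to the fullest hub (H3), which is what balances stock across nodes.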
440 M. Nouiri et al.
Note that all these policies can be used only in the PISCN configuration, because
the full connectivity between nodes offers multiple choices. In the CSCN, the link
between retailer and distribution centre is unique and fixed; in case of perturbations,
a tardiness of delivery must be calculated.
Figure 5 presents the inputs and outputs of the proposed simulator model. Different
parameters must be configured. The user can easily modify various parameters in the
model using tools such as sliders and switches, and can choose between testing static
or dynamic scenarios. An important parameter that must be defined in each simu-
lation model is the horizon to be simulated; the time unit used is days, and the T vari-
able represents the total number of days. The type of retailer demand can be daily
or periodic. Nbr_Hubs, Nbr_Retailers, Nbr_trucks and Nbr_Plants are variables specify-
ing respectively the number of PI-hubs, retailers, trucks and plants in the network. Posi-
tion_nodes, distance_nodes and demand_quantity are files containing the position of
all nodes on the map, the distances between nodes and the demand quantities of retailers.
The Supply_Chain_Network_Type variable specifies a PISCN or a CSCN. Price_km and
Price_Stock are variables used to calculate the holding and transportation costs.
The current outputs of the proposed model are:
There are two main steps in the simulation. The first consists of setting up the supply
chain network with a specific configuration (number of nodes, links, input data, etc.);
the corresponding agents are created and their attributes are initialized. The second step
is to run the simulation over the defined horizon.
where T is the horizon of simulation (total number of days), and the daily holding cost
of each hub is calculated as follow:
TC = Σ_{t=0}^{T} (daily_Transportation_cost_truck)    (3)
where T is the horizon and the daily transportation cost of each truck is calculated as:
The total holding cost (HC) is calculated by each HA from the daily holding costs,
and the value of HC is updated. The total transportation cost (TC) is calculated by
the TA; after each delivery the travelled distance and the costs are updated.
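A minimal sketch of how HC and TC could be aggregated over the horizon, assuming hypothetical per-day stock levels and per-trip distances (per-ton weighting of the transportation cost is omitted for brevity).

```python
def total_costs(daily_stock_levels, trips, price_stock, price_km):
    """Aggregate the total holding cost HC and transportation cost TC
    over the horizon: HC sums daily hub stock levels times the unit
    holding cost; TC sums each trip's distance times the unit cost
    per km, cf. Eq. (3)."""
    hc = sum(level * price_stock
             for day in daily_stock_levels      # one dict per day
             for level in day.values())         # hub -> stock level
    tc = sum(distance * price_km for distance in trips)
    return hc, tc

# Hypothetical two-day trace for two hubs and two delivery trips.
daily = [{"H1": 800, "H2": 850}, {"H1": 780, "H2": 840}]
hc, tc = total_costs(daily, trips=[120.0, 95.5],
                     price_stock=180, price_km=1.85)
```

The unit prices mirror the Thai cost data used in the scenarios (180 THB per m3 held, 1.85 THB/km), while the stock levels and trip distances are made up for illustration.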
5 Simulation Scenarios
The data set and configurations of some simulation scenarios are described below.
The multi-agent model was tested on a real network with real data received from [6].
The network is composed of two plants (2P), three hubs (3H) and two retailers (2R).
The data of monthly white sugar consumption rate was gathered from January 2015
to September 2019. The daily demand was generated from the real monthly demand of
white sugar consumption during November-December 2017 (Office of Agricultural Eco-
nomics of Thailand (http://www.oae.go.th). The unit holding cost equals 180 THB per
m3 , Integrated Logistics Services of Thailand (http://www.ils.co.th/th/pricing/) in 2019;
the unit transportation cost is 1.85 THB/km/ton from Bureau of standards and Evaluation
( http://hwstd.com/Uploads/Downloads/9/ 07.pdf ), 2016. Configuration and data
(distances, demand, stocks) are identically for both supply chains.
Table 1 presents the results of the simulation tests of the first scenario with the first
configuration. All PI-hub inventory stocks are initialized with a fixed value of 850.
As can be seen from Table 1, the holding cost values of the PISCN are lower than
those of the classical SCN with both replenishment methods; the deviation of HC
equals 0,57%.
                 PISCN               CSCN (compared with PISCN)
                 HC         TC       HC       TC
Closest method   9368265    184020   +0,6%    +23,7%
Random method    9386265    247996   +0,4%    −8,2%
However, the transportation cost gap between the CSCN and the PISCN when using
the closest method as the replenishment policy equals 23,68%. The PI reduces the
transportation cost when the closest method is used to select the replenishment
source; however, a compromise between both KPIs must be achieved.
• Results of Scenario 2
Each configuration is tested for five replications and the average value of the objective
function is presented in bold; the results of the simulation tests with a fixed value of
initial inventory stock are summarized in Table 2. As can be seen, different KPI values
are obtained for the different tested replenishment policies. The results prove the
important impact of the replenishment method on holding and transportation costs.
Using the closest method for replenishment decreases the HC and TC values compared
to the random replenishment method. The gap of HC between the two methods is low
and equals 0,46%; however, the gap of TC equals 34,55%. The closest method favours
the closest node for replenishment to reduce the travelled distance, CO2 emissions and
transportation cost, which are the main objectives of the PI paradigm.
Table 2. The results of second scenario with fixed value of initial inventory stock
Table 3 summarizes the results of the simulation with a randomly generated initial
inventory level of the PI-hubs. As we can see, the deviation becomes larger in both
cases: the holding cost deviation equals 7,83% and the transportation cost deviation
equals 34,69%. Figure 7 presents the daily variation of the inventory stock of the hubs.
• Results of Scenario 3
The results of the simulation tests under perturbations are summarized in Table 4.
A random PI-hub or distribution centre is selected to be unavailable for one day in
the case of a low level of perturbation, and for five days in the case of a medium level
of perturbation. In Table 4, the holding cost of the CSCN is larger than that of the
PISCN: the gap of HC equals 0.66% under low-level perturbation and 1,16% under
medium-level perturbation. However, the TC decreased, as five delivery missions are
missing due to perturbations. In the case of the PISCN, the perturbations do not
strongly affect the performance in terms of HC, due to the replenishment flexibility.
Table 3. The results of second scenario with random values of initial inventory stock
                       PISCN               CSCN (difference with PISCN)
Perturbations          HC         TC       HC       TC
Average Low level      9368265    184020   +0.6%    +32%
Average Medium level   9386265    229325   +1.1%    −2%
8 Conclusion
This work refers to a multi-plant multi-product physical internet supply chain network.
A multi-agent model with three replenishment policies is proposed for the multi-plant
multi-product PISCN. The proposed model is designed to support different supply chain
configurations with large numbers of suppliers, hubs and retailers. A multi-agent simu-
lation tool was used to implement the proposed model. The performances of the PISCN
and the CSCN are compared using multi-agent simulation. After testing the simulator in
different static and dynamic scenarios, the results show that the PISCN is more efficient
than the classical one, especially for the transportation cost, which was significantly
improved in the physical internet supply chain. The perturbations acting on PI-hubs or
distribution centres have a different impact on the supply chain performance. The
simulation results show the importance of the replenishment policy for transportation
and holding costs.
In future work, additional constraints will be taken into consideration and other
approaches will be developed to optimize replenishment decisions. Future studies will
be conducted to test internal and external perturbations.
An interesting direction of this work is to couple this model with internal PI-hub
models to provide a closer view of internal decisions that affect the global decision and
performance of supply chain networks.
References
1. Dai, Z., Aqlan, F., Zheng, X., Gao, K.: A location-inventory supply chain network model
using two heuristic algorithms for perishable products with fuzzy constraints. Comput. Ind.
Eng. 119, 338–352 (2018)
2. Darvish, M., Larrain, H., Coelho, L.C.: A dynamic multi-plant lot-sizing and distribution
problem. Int. J. Prod. Res. 54(22), 6707–6717 (2016)
3. Montreuil, B., Ballot, E., Fontane, F.: An Open Logistics Interconnection model for the Phys-
ical Internet. In: 14th IFAC Symposium on Information Control Problems in Manufacturing,
Bucharest, Romania, vol. 45, no. 6, pp. 327–332 (2012)
4. Montreuil, B., Meller, R.D., Ballot, E.: Towards a physical Internet: the impact on logis-
tics facilities and material handling systems design and innovation. In: Proceedings of the
International Material Handling Research Colloquium (IMHRC), pp. 1–23 (2010)
5. Montreuil, B., Meller, R.D., Ballot, E.: Physical Internet Foundations. In: Borangiu, T.,
Thomas, A., Trentesaux, D. (eds.) Service Orientation in Holonic and Multi Agent Manufac-
turing and Robotics. Studies in Computational Intelligence, vol. 472, pp. 151–166 Springer,
Cham (2013)
6. Ji, S.F., Peng, X.S., Luo, R.J.: An integrated model for the production-inventory-distribution
problem in the Physical Internet. Int. J. Prod. Res. 57(4), 1000–1017 (2019)
7. Kantasa-Ard, A., Bekrar, A., Sallez, Y.: Artificial intelligence for forecasting in supply chain
management: a case study of White Sugar consumption rate in Thailand. IFAC-PapersOnLine
52(13), 725–730 (2019)
8. Kantasa-Ard, A., Nouiri, M., Bekrar, A., El Cadi, A.A., Sallez, Y.: Dynamic Clustering of PI-
Hubs Based on Forecasting Demand in Physical Internet Context. In: Borangiu, T., Trentesaux,
D., Leitão, P., Giret Boggino, A., Botti, V. (eds.) Service Oriented, Holonic and Multi-agent
Manufacturing Systems for Industry of the Future. Studies in Computational Intelligence,
vol. 853, pp. 27–39, Springer, Cham (2019)
9. Nouiri, M., Bekrar, A., Trentesaux, D.: Inventory control under possible delivery perturbations
in physical internet supply chain network. In: 5th International Physical Internet Conference,
pp. 219–231 (2018)
10. Nouiri, M., Bekrar, A., Trentesaux, D.: An energy-efficient scheduling and rescheduling
method for production and logistics systems. Int. J. Prod. Res. 58, 1–21 (2019). https://doi.
org/10.1080/00207543.2019.1660826
11. Sallez, Y., Berger, T., Bonte, T., Trentesaux, D.: Proposition of a hybrid control architecture
for the routing in a Physical Internet cross-docking hub. IFAC-PapersOnLine 48(3), 1978–1983
(2015)
12. Trentesaux, D., Giret, A.: Go-green manufacturing holons: a step towards sustainable
manufacturing operations control. Manuf. Lett. 5, 29–33 (2015)
13. Van der Heide, L.M., Coelho, L.C., Vis, I.F., Van Anholt, R.G.: Replenishment and denomi-
nation mix of automated teller machines with dynamic forecast demands. Comput. Oper. Res.
114 (2020). https://doi.org/10.1016/j.cor.2019.104828
14. Yang, Y., Pan, S., Ballot, E.: A model to take advantage of Physical Internet for vendor
inventory management. IFAC-PapersOnLine 48(3), 1990–1995 (2015)
15. Yang, Y., Pan, S., Ballot, E.: Mitigating supply chain disruptions through interconnected
logistics services in the physical internet. Int. J. Prod. Res. 55(14), 3970–3983 (2017)
Survey on a Set of Features for New Urban
Warehouse Management Inspired by Industry
4.0 and the Physical Internet
alexandre.berger@laposte.fr
Abstract. City logistics is one of the most significant branches of supply chain
management. It deals with the logistics and transportation activities in urban areas.
This research area has recently experienced exponential growth in publications. In
this article, we introduce a new urban warehouse. This customer-oriented logistics
facility aims to respond to last-mile delivery challenges. To do so, the use of
Industry 4.0 methods and technologies, as well as elements of the Physical Internet
paradigm, are explored. The possibility of validating the elements mentioned with
a La Poste project, which aims to create an experimental laboratory, is proposed.
1 Introduction
Logistics is presented as a significant branch of supply chain management (SCM) in the
literature. This subject relates to “the process of planning, implementing, and controlling
an efficient and effective flow and storage of goods, services and related information from
point of origin to point of consumption for the purpose of conforming to a customer’s
requirements” [1].
The logistics associated with the consolidation, transportation, and distribution of
goods in cities is called city logistics. This notion has been described by Taniguchi
as “The process of optimizing both logistics and transport activities done by private
companies in urban areas while considering the traffic environment, traffic congestion
and energy consumption within the framework of a market economy”.
Among logistics activities, this article focuses on the management of storage
spaces (warehouses), as they are essential components of logistics. A warehouse is an
intermediate storage point to smooth the relationship between time and demand, and can
perform distribution and value-added services [2]. Until recently, these parcel storage
spaces were located in outer suburbs [3].
Climate change issues, the trend of growing online shopping sales and demand for
instant delivery have put pressure on adopting more time-saving logistics practices and
locating order fulfilment facilities strategically in locations with direct access to con-
sumer markets [4]. Therefore, cities are facing the reintroduction of logistics spaces
and facilities in inner urban areas. Because the challenges of city logistics change con-
tinually, so do the opportunities to improve city logistics. Facing the current challenges
of sustainable development, profitability, traceability, customer satisfaction and last-mile
deliveries, the aim of this article is to introduce a new warehouse type, the urban ware-
house, and demonstrate how the concepts of Industry 4.0 and the Physical Internet can be
used to solve the management issues of these new urban warehouses.
The remainder of the paper is organized as follows: Sect. 2 presents a review of the
challenges of urban logistics in terms of the impact on warehouses and the issues to
solve. Section 3 explains how the concepts of Industry 4.0 and Physical Internet will
be used to resolve the issues of the new urban warehouses that have been presented.
Section 4 introduces an application case to demonstrate the applicability of the quoted
elements. Finally, Sect. 5 leads to a discussion and conclusion.
The development of e-commerce has led to a rapid increase in demand for new urban
distribution services, such as “fast delivery,” “same-day delivery” (sometimes even going
down to 1-h and 2-h delivery options), “direct-to-consumer delivery,” including the
last-mile delivery challenges. This has imposed a heavy burden on urban traffic (such
as traffic congestion, inconvenience and inefficiency), the environment (such as visual
disturbance, greenhouse gas emissions, and a waste of resources), welfare (such as noise, accidents and public health) and governance (such as land scarcity and uncontrolled spread) [5]; these topics are covered in numerous publications. This trend also impacts logistics
facilities management.
The last mile of the supply chain is considered one of the most, or even the most,
expensive, inefficient and polluting part of the supply chain [6]. Last-mile logistics takes
place from an order penetration point, such as the urban warehouse presented above,
to the final consignee’s preferred destination point for the reception of goods [7]. In
this context, the main objective of logistics and supply chain management is the same:
providing good service (the right product at the right time and at the right place) at a low
cost.
So, how does this new kind of warehouse meet this objective? The next section
will discuss what this objective implies for the management of urban warehouses in the
context of last-mile delivery challenges.
Warehouses perform the basic functions of receiving, storage, order picking, and
shipping [8]. Some are more complex, also performing distribution and value-added
activities. The main value-added activities are [9]:
Table 1. The new urban warehouse specifications versus classical warehouse ones
Optimization: The optimization of flows and processes in a warehouse has a major role:
it allows for gains in performance. In fact, working on the value chain helps identify and
reduce waste, unnecessary stock and handling activities and loss of space.
Survey on a Set of Features for New Urban Warehouse Management 453
Traceability: This makes it possible to know a product’s origin and to follow its path
throughout the supply chain. It allows customers to follow the progress of their order
preparation and shipment, as well as to manage the physical flows of reverse logistics,
such as consigned objects.
Reliability: An organization and its systems need to be reliable in order to meet the
deadlines promised to customers and to avoid registration errors caused by manual
recording. The first issue has a significant impact on customer satisfaction, and the second can lead to inventory variances and loss of information in a computer system.
Responsiveness: The aim is to improve the organization via the data-processing system in order to make it possible to group orders for a single customer or to efficiently manage picking errors.
Security: This is a critical factor for any organization. The employer must ensure the
safety of its employees. The layout and use of the facilities must comply with certain
rules. For example, companies must ensure that the ergonomics of workspaces prevent musculoskeletal disorders (MSDs). The company also needs to ensure the security of its data.
3.1 Survey of the Literature on 4.0 Methods and Technologies that Can Be Applied to Urban Logistics Issues
In supply chain strategy, there are tensions between the competing priorities of cost, flexibility, speed and quality [16]. These can also include the zero-emission goal, a priority that has only recently been taken into consideration. Technologies supporting
454 A. Edouard et al.
Industry 4.0 can improve one or more of these priorities individually, or in combination with one another. Many studies, such as German Manufacturing 4.0 [17], point out key technologies. A literature survey was conducted to identify the main categories of Industry 4.0 technologies that are suitable for urban logistics.
Big Data (BD): Logistics processes generate a large amount of data, in particular collected by IoT technologies. The collection and comprehensive evaluation of these data from many different sources help support real-time decision making. Big Data technology allows for the analysis and separation of what is important from what is unimportant, helping to draw conclusions and support more effective knowledge transfer to achieve business goals [12].
Cloud Computing (CC): This is a large-scale, low-cost and flexible processing unit for
computing and storage based on IP connections. Some of the most relevant features are:
the possibility of ubiquitous network access, the ability to increase or decrease capacity
at will and the independent location of resource pools [18].
Cyber-Physical systems (CPS): These systems contain two main functional compo-
nents: advanced connectivity which ensures real-time data collection from the physical
world and information cyberspace feedback, and intelligent data management, analysis,
and calculation functions to build cyberspace [19].
IoT: This technology makes the creation of information without human observation possible; it also allows field devices to communicate and interact with each other [20].
For example, according to [16], Radio Frequency Identification (RFID) tags and readers
promise a revolution in tracking and monitoring inventory. Thanks to the Internet of
Things, the transport process of goods, parcels and letters can be monitored [12]. Track-
ing and tracing have become faster, more accurate, more predictable and more secure.
In the event of delays, customers can be informed of complications in advance.
Simulation and Digital Twins (SDt): Real-time data is used to reflect the physical world
in a “digital twin” (virtual model), which can include machines, products, and people
[20]. It can be used to test, simulate and optimize the organization and, for example,
reduce the time from a picked order to departure [21].
shipping. In addition, drones can carry different sensors to record data (visual and audio)
for monitoring and surveying operations. The automation of activities is motivated by improving quality (robots can repeatedly perform more precise tasks than humans) or safety (preventing musculoskeletal disorders, MSDs) [16].
Cybersecurity (Cyb): With large amounts of data now available through IoT, a company
will want to store such data in an accessible and secure manner. One possible solution
to this storage problem is blockchain. Blockchain is a distributed security ledger which
can be accessed and written from anywhere; its data is not stored in a central location,
and after a block is added to the chain it cannot be changed [16].
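As a toy illustration of the append-only property described above, the sketch below chains SHA-256 hashes over parcel-tracking events: altering any stored block breaks every subsequent hash link. The function names and event payloads are invented for the example, and a real blockchain additionally involves replication and consensus across nodes:

```python
import hashlib
import json

def block_hash(content: dict) -> str:
    """Hash a block's content together with the previous block's hash."""
    payload = json.dumps(content, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    """Append a block whose hash covers the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "data": data}
    block["hash"] = block_hash({"prev_hash": prev, "data": data})
    chain.append(block)

def verify(chain: list) -> bool:
    """Re-derive every hash; any tampered block breaks the link."""
    prev = "0" * 64
    for b in chain:
        if b["prev_hash"] != prev:
            return False
        if b["hash"] != block_hash({"prev_hash": b["prev_hash"], "data": b["data"]}):
            return False
        prev = b["hash"]
    return True

chain = []
append_block(chain, {"parcel": "P-001", "event": "received"})
append_block(chain, {"parcel": "P-001", "event": "shipped"})
assert verify(chain)
chain[0]["data"]["event"] = "lost"   # tampering is detected
assert not verify(chain)
```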
This paradigm was first introduced by Benoît Montreuil. [24] states that “The Physical
Internet (PI) goal is to form an efficient, sustainable, resistant, adaptable and flexible
open global logistics web based on physical, digital and operational interconnectivity
through world standard norms, interfaces and protocols. The main PI building blocks are
new modular load units (named PI-containers), new supply chain interfaces: logistics
centres equipped with new handling and storage technologies and cooperative logis-
tics networks. PI includes forming of appropriate modular units (PI-containers), which
shall have "smart" characteristics and which shall enable full usage of load and storage
capacities. These load units shall optimally move across logistics networks due to their
capability to communicate with each other and with resources for transfer located in
the logistics hubs (π-hubs). This digital interconnectivity shall enable an encapsula-
tion of goods in world standard “smart” green modular containers with possibilities to
communicate between each other using all advantages of IoT.” Many studies discuss
this concept and promise interesting solutions for the near future. Among them, some
developments in last-mile delivery solutions have been proposed, such as smart locker
bank network design studies using PI containers that demonstrate that modular designs
can perform just as well as fixed-configuration designs, while being more flexible [25,
26].
4 An Application Case
In Vietnam, traditional postal service providers such as VN Post, EMS, and Viettel Post
have joined the fast delivery service business [7]. Le Groupe La Poste has decided to use
its existing nationwide network of post offices and postmen (more than 80,000 throughout the country) to deliver goods throughout France. This network allows “instant” delivery in cities if it is coupled with efficient logistics facilities. So, La Poste considered taking advantage of another one of its assets, the availability of well-placed square meters in city centres, to create new urban warehouses (storage areas and fast delivery points).
The company refurbished a 600 m2 space, located in a high activity area in Paris’
city centre. This ongoing project aimed to create an experimental laboratory in order to
maintain the La Poste Group's lead in the market. The company would then be able to offer the best services related to stock management, order picking, delivery, and return management. This place would allow experiments on the adaptation of Industry 4.0 methods and technologies to the problems of urban logistics, and more precisely to the challenge of last-mile delivery, in order to develop new optimized urban warehouses. It would also examine the benefits that the Physical Internet paradigm could bring about. The project is to be carried out in collaboration with start-ups and partner schools so as to stay up to date with emerging technologies.
First offers are already being proposed as:
The last solution that can be mentioned is the use of the concept of PI-containers
from the Physical Internet in order to create a package system adapted to the trolleys of
La Poste, the CE 30 (Fig. 3). This system is equipped with a product security system and a tracking and routing system to ensure both its follow-up and the optimization of its route. Moreover, it addresses environmental impact concerns because it can take the form of reusable boxes.
All the elements cited remain suggestions. Faced with the work's complexity, compromises will have to be made in order to best respond to the problems of these logistics facilities. The project will be divided into sub-projects in order to maintain focus. The work must first
start with the formalization of expectations and the prioritization of the issues, followed
by a proposal of tools and methods, feasibility studies and then tests.
References
1. Mentzer, J.T., Keebler, J.S., Nix, N.W., Smith, C.D., Zacharia, Z.G.: Defining supply chain
management. J. Bus. Logist. 22(2), 1–25 (2001)
2. Higgins, C., Ferguson, M., Kanaroglou, P.: Varieties of logistics centers. Transp. Res. Rec.
2288, 9–18 (2012)
3. Dablanc, L., Rakotonarivo, D.: The impacts of logistics sprawl: how does the location of
parcel transport terminals affect the energy efficiency of goods’ movements in Paris and what
can we do about it? Proc. Soc. Behav. Sci. 2(3), 6087–6096 (2010)
4. Kang, S.: Relative logistics sprawl: measuring changes in the relative distribution from ware-
houses to logistics businesses and the general population. J. Transp. Geogr. 83, 102636
(2020)
5. Hu, W., Dong, J., Hwang, B.G., Ren, R., Chen, Z.: A scientometrics review on city logistics
literature: Research trends, advanced theory and practice. Sustainability 11(10), 1–27 (2019)
6. Gevaers, R., Van de Voorde, E., Vanelslander, T.: Characteristics of innovations in last mile
logistics - using best practices, case studies and making the link with green and sustainable
logistics. Assoc. Eur. Transp. Contrib. 1–8 (2009)
7. Phuong, D.T.: Last-mile logistics in Vietnam in industrial revolution 4.0: opportunities and challenges, vol. 115, no. INSYMA, pp. 172–176 (2020)
8. Yavas, V., Ozkan-Ozen, Y.D.: Logistics centers in the new industrial era: a proposed
framework for logistics center 4.0. Transp. Res. Part E Logist. Transp. Rev. 135, 101864
(2019)
9. Grundey, D., Rimienė, K.: Logistics centre concept through evolution and definition. Eng.
Econ. 54(4), 87–95 (2007)
10. Van den Berg, J.P., Zijm, W.H.M.: Models for warehouse management: classification and
examples. Int. J. Prod. Econ. 59(1), 519–528 (1999)
11. Juntao, L.: Research on Internet of Things technology application status in the warehouse
operation. Int. J. Sci. Technol. Soc. 4(4), 63 (2016)
12. Witkowski, K.: Internet of Things, big data, industry 4.0 - innovative solutions in logistics
and supply chains management. Proc. Eng. 182, 763–769 (2017)
13. Shiau, J.Y., Lee, M.C.: A warehouse management system with sequential picking for multi-
container deliveries. Comput. Ind. Eng. 58(3), 382–392 (2010)
14. Duclos, L.K., Vokurka, R.J., Lummus, R.R.: A conceptual model of supply chain flexibility.
Ind. Manag. Data Syst. 103(5–6), 446–456 (2003)
15. Pan, S., Ballot, E., Huang, G.Q., Montreuil, B.: Physical Internet and interconnected logistics
services: research and applications. Int. J. Prod. Res. 55(9), 2603–2609 (2017)
16. Olsen, T.L., Tomlin, B.: Industry 4.0: opportunities and challenges for operations manage-
ment. Manuf. Serv. Oper. Manag. 22(1), 113–122 (2020)
17. Cerfa, N.: Transformation numérique de l’industrie : l’enjeu franco-allemand (2018)
18. Dopico, M., Gomez, A., De la Fuente, D., García, N., Rosillo, R., Puche, J.: A vision of
industry 4.0 from an artificial intelligence point of view. In: Proceedings 2016 International
Conference Artificial Intelligence ICAI 2016 - WORLDCOMP 2016, pp. 407–413 (2016)
19. Lee, J., Bagheri, B., Kao, H.A.: A cyber-physical systems architecture for industry 4.0-based
manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
20. Rüßmann, M.: Industry 4.0: future of productivity and growth in manufacturing, Bost.
Consult., April 2015
21. Tjahjono, B., Esplugues, C., Ares, E., Pelaez, G.: What does Industry 4.0 mean to Supply
Chain? Proc. Manuf. 13, 1175–1182 (2017)
22. Tang, C.S., Veelenturf, L.P.: The strategic role of logistics in the industry 4.0 era. Transp. Res.
Part E Logist. Transp. Rev. 129, 11 (2019)
23. Taniguchi, E., Thompson, R.G., Yamada, T.: New opportunities and challenges for city
logistics. Transp. Res. Proc. 12, 5–13 (2016)
24. Maslarić, M., Nikoličić, S., Mirčetić, D.: Logistics response to the industry 4.0: the physical
internet. Open Eng. 6(1), 511–517 (2016)
25. Faugère, L., Montreuil, B.: Hyperconnected pickup & delivery locker networks. In: Proceedings of the 4th International Physical Internet Conference, vol. 6, p. 14, July 2017
26. Faugère, L., Montreuil, B.: Smart locker bank design optimization for urban omnichannel
logistics: assessing monolithic vs. modular configurations. Comput. Ind. Eng. 139, 14 (2018)
27. Christopher, M.: The agile supply chain. Ind. Mark. Manag. 29(1), 37–44 (2000)
Multi-objective Cross-Docking
in Physical Internet Hubs Under Arrival
Time Uncertainty
1 Introduction
In the global supply chain, cross-docking hubs have a major role in maintaining
a continuous flow of products and raw materials from suppliers to retailers with
minimal temporary storage in between [14,26]. Inside the cross-docks, products
are transferred from inbound vehicles to outbound ones which can be trucks,
trains or ships. High performance of these platforms requires good synchronization between inbound and outbound vehicles in order to minimize the waiting time at the docks and the temporary storage level, which significantly reduces the total distribution cost [10].
Recently, the concept of Physical Internet has been introduced to change
the way products are transported, handled, stored and shipped through the
supply chain using standardized PI-containers [19]. The goal of the PI is to
improve the global sustainability of the supply chain from the environmental,
social and economical aspects. In the context of PI, the cross-docking hubs are
fully automated using standardized PI-containers instead of regular pallets [9,16]
and automated sorting zones instead of forklifts [1,20]. There are different types of cross-docking PI-hubs depending on the nature of the inbound and outbound vehicles: for example, Road-Rail PI-hubs transfer PI-containers from trucks to trains, Rail-Road PI-hubs transfer them in the reverse direction, and Road-Road PI-hubs transfer PI-containers between trucks.
This paper deals simultaneously with truck scheduling and PI-container grouping in a Road-Rail PI-hub. Solving such issues requires complete knowledge of all the problem parameters, which are generally fixed by experts or defined based on historical data. However, in real-world situations, these parameters are always subject to change due to unforeseen events. In the literature, only a few works have addressed such circumstances. Some recent studies [15,24] have particularly considered uncertainty in truck arrival times. Our work follows this direction and proposes a new chance-constrained programming approach to solve the problem.
The remainder of this paper is organized as follows: Sect. 2 presents the literature review on Road-Rail PI-hub related problems and on uncertainty in the cross-dock scheduling literature. Section 3 presents the problem description and its mathematical formalization. Section 4 shows the obtained computational results. Finally, Sect. 5 concludes this work and gives some future research directions.
2 Literature Review
The cross-docking optimization literature is very large, and many reviews have been proposed to classify cross-docking studies according to their decision level: strategic, tactical or operational [2,14,27]. In the context of the Physical Internet, Road-Rail hubs represent cross-docks that manage the transfer of PI-containers automatically from trucks to train wagons through different steps. First, the products have to be unloaded from the inbound trucks and temporarily stored in a maneuvering area. Then, the PI-containers are transferred using PI-conveyors and loaded into train wagons. The reader can refer to [1,20] for a complete description of the functional design of the Road-Rail and Road-Road cross-docking PI-hubs.
462 T. Chargui et al.
3 Problem Description
The Road-Rail PI-hub is used to transfer PI-containers from the inbound trucks
to the outgoing trains. As shown in Fig. 1, the Road-Rail PI-hub contains a set
Multi-objective Cross-Docking in Physical Internet Hubs 463
of wagons composing the outbound train on the rail section, a set of PI-
conveyors in the Road-Rail section and temporary storage areas. In this platform,
the PI-containers received from inbound trucks are automatically carried out to
their destination in the train wagons. A complete functional design of the Road-Rail PI-hub with the different key performance indicators can be found in [1].
parameter and related constraints, which have been adapted to handle uncertainty. In fact, the uncertain arrival time of each truck is defined using a time interval with minimal and maximal values instead of a single one, and chance-constrained programming has been used to solve the problem. Besides, for the objective function, a multi-objective Lexicographic Programming approach is used instead of weighting coefficients. The advantage of the Lexicographic Programming approach is that the priority of each objective is defined by its order. This is very useful, especially when the objectives have different measurement units, which is the case in the proposed model. The priority of each objective can be defined by the decision maker.
The objective is to schedule inbound trucks and to group PI-containers in the outgoing wagons while minimizing the distance travelled by the PI-containers to reach the wagons, the number of used wagons, and the total tardiness of inbound trucks. However, inbound trucks can unload PI-containers with different destinations, which increases the complexity of assigning the trucks to the dock doors.
The parameters and variables used in this paper are as follows:
Model parameters:

$$A_{hi} = \begin{cases} 1 & \text{if container } i \text{ is in truck } h \\ 0 & \text{otherwise} \end{cases}$$

$$S_{di} = \begin{cases} 1 & \text{if } d \text{ is the destination of container } i \\ 0 & \text{otherwise} \end{cases}$$
Decision variables

Binary variables:

$$x_{iw} = \begin{cases} 1 & \text{if container } i \text{ is assigned to wagon } w \\ 0 & \text{otherwise} \end{cases}$$

$$y_{hk} = \begin{cases} 1 & \text{if truck } h \text{ is assigned to dock } k \\ 0 & \text{otherwise} \end{cases}$$

$$u_{w} = \begin{cases} 1 & \text{if wagon } w \text{ is used} \\ 0 & \text{otherwise} \end{cases}$$

$$e_{wd} = \begin{cases} 1 & \text{if } d \text{ is the destination of wagon } w \\ 0 & \text{otherwise} \end{cases}$$

$$g_{h_1 h_2} = \begin{cases} 1 & \text{if trucks } h_1 \text{ and } h_2 \text{ are assigned to the same dock and } h_1 \text{ is a predecessor of } h_2 \\ 0 & \text{otherwise} \end{cases}$$
Continuous variables:
Objective function

The objective of the mathematical model is to minimize three objective functions in a given Lexicographic Programming order. This order is defined by the decision maker:

• $F^W$: the number of used wagons
• $F^D$: the distance travelled by the PI-containers
• $F^T$: the tardiness of the inbound trucks

$$\text{Minimize}: \; (F^W, F^D, F^T) = \left( \sum_{w=1}^{W} u_w,\; \sum_{i=1}^{N} \sum_{w=1}^{W} d_{iw},\; \sum_{h=1}^{H} f_h \right) \qquad (1)$$
Constraints

$$\sum_{w=1}^{W} x_{iw} = 1 \quad (\forall i = 1 \ldots N) \qquad (2)$$

$$\sum_{i=1}^{N} x_{iw} L_i \leq Q \quad (\forall w = 1 \ldots W) \qquad (3)$$

$$x_{iw} + x_{jw} \leq \sum_{d=1}^{D} S_{di} S_{dj} + 1 \quad (\forall i, j = 1 \ldots N,\ \forall w = 1 \ldots W : i \neq j) \qquad (4)$$

$$|w_1 - w_2| + 1 \leq \sum_{w=1}^{W} e_{wd} + M \left( 2 - (e_{w_1 d} + e_{w_2 d}) \right) \quad (\forall d = 1 \ldots D,\ \forall w_1, w_2 = 1 \ldots W,\ w_1 \neq w_2) \qquad (8)$$

$$|w_1 - w_2| + 1 \leq \sum_{w=1}^{W} u_w + M \left( 2 - (u_{w_1} + u_{w_2}) \right) \quad (\forall w_1, w_2 = 1 \ldots W,\ w_1 \neq w_2) \qquad (9)$$

$$u_1 = 1 \qquad (10)$$

$$d_{iw} \geq |P_k - R_w| + Y - M \left( 2 - (A_{hi} y_{hk} + x_{iw}) \right) \quad (\forall i = 1 \ldots N,\ \forall w = 1 \ldots W,\ \forall k = 1 \ldots K,\ \forall h = 1 \ldots H) \qquad (11)$$

$$\sum_{k=1}^{K} y_{hk} = 1 \quad (\forall h = 1 \ldots H) \qquad (12)$$
Constraint (2) ensures that each PI-container is loaded into one wagon. Con-
straint (3) ensures that the capacity of each wagon is not exceeded. Constraint
(4) guarantees that each wagon contains PI-containers with the same destina-
tion. Constraint (5) forces the wagon to be used if a PI-container is assigned to
it. Constraints (6) and (7) define the destinations of each wagon. Constraint (8)
ensures that the wagons are consecutive if they have the same destination. In
constraint (9) all the used wagons must be consecutive. Therefore, there must
be no empty wagon between two used ones. Constraint (10) ensures that the
wagons are used from the beginning of the train. Constraint (11) calculates the
PI-containers’ distance. In constraint (12), each truck is assigned to one dock.
Constraints (13) and (14) handle the assignment and the sequencing of the trucks
at the docks. Constraint (16) guarantees that the unloading of a truck always begins after the unloading of the previous one plus the changeover time, if there is
more than one truck assigned to the same dock. Constraints (15), (17) and (18)
handle the scheduling of the trucks. Constraints (19) and (20) ensure that the
variables are binary and positive.
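By way of illustration, the assignment-side constraints (2)–(4) can be verified on a candidate solution in a few lines. This is a sketch on an invented toy instance (the function name and data are ours), not the paper's CPLEX implementation:

```python
def check_assignment(x, L, S, Q):
    """Check constraints (2)-(4): x[i][w] is the 0/1 container-to-wagon
    assignment, L[i] the container sizes, S[d][i] the destination
    indicators, and Q the wagon capacity."""
    N, W, D = len(L), len(x[0]), len(S)
    # (2) each PI-container is loaded into exactly one wagon
    if any(sum(x[i][w] for w in range(W)) != 1 for i in range(N)):
        return False
    # (3) no wagon capacity is exceeded
    if any(sum(x[i][w] * L[i] for i in range(N)) > Q for w in range(W)):
        return False
    # (4) two containers share a wagon only if they share a destination
    for w in range(W):
        for i in range(N):
            for j in range(i + 1, N):
                same_dest = sum(S[d][i] * S[d][j] for d in range(D))
                if x[i][w] + x[j][w] > same_dest + 1:
                    return False
    return True

# toy instance: 3 containers, 2 wagons, 2 destinations, capacity 10
L = [4, 5, 6]
S = [[1, 1, 0],   # destination 0: containers 0 and 1
     [0, 0, 1]]   # destination 1: container 2
x_ok = [[1, 0], [1, 0], [0, 1]]    # same-destination containers grouped
x_bad = [[1, 0], [0, 1], [0, 1]]   # containers 1 and 2 mix destinations
assert check_assignment(x_ok, L, S, 10)
assert not check_assignment(x_bad, L, S, 10)
```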
These inequalities are always satisfied since possibility and necessity degrees
are between 0 and 1. The obtained solution is similar to the deterministic
case.
2. $0 < \alpha < 1$ and $\beta = 0$:

$$Pos(r_h \geq \tilde{E}_h) \geq \alpha \;\Rightarrow\; r_h \geq \alpha \hat{E}_h + (1 - \alpha) \underline{E}_h$$

$$Nec(r_h \geq \tilde{E}_h) \geq 0 \;\Rightarrow\; r_h \geq \hat{E}_h$$

The constraint is always verified for the necessity measure. Only the possibility measure has to be checked.

4. $\alpha = 1$ and $0 < \beta < 1$:

$$Pos(r_h \geq \tilde{E}_h) = 1 \;\Rightarrow\; r_h \geq \hat{E}_h$$

$$Nec(r_h \geq \tilde{E}_h) \geq \beta \;\Rightarrow\; r_h \geq \beta \overline{E}_h + (1 - \beta) \hat{E}_h$$

This is the most difficult case. The satisfaction of the constraint depends on the worst scenario $\overline{E}_h$.

According to the chosen values of $\alpha$ and $\beta$, the arrival time of each truck will vary in the interval $[\underline{E}_h, \overline{E}_h]$. This variation impacts the quality of the obtained solution.
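The case analysis above can be condensed into a small helper that returns the arrival-time bound implied by the chosen thresholds, assuming a triangular fuzzy arrival time with minimal, expected and maximal values; the function name and the treatment of the vacuous necessity case for β = 0 are our own reading:

```python
def arrival_bound(alpha, beta, e_min, e_hat, e_max):
    """Smallest r_h compatible with Pos(r_h >= E~_h) >= alpha and
    Nec(r_h >= E~_h) >= beta, for a triangular fuzzy arrival time
    (e_min, e_hat, e_max); interpolation follows the cases in the text."""
    # possibility side: interpolate between the minimal value and the mode
    pos_bound = alpha * e_hat + (1 - alpha) * e_min
    # necessity side: interpolate between the mode and the worst case;
    # beta = 0 imposes no restriction beyond the earliest possible arrival
    nec_bound = beta * e_max + (1 - beta) * e_hat if beta > 0 else e_min
    return max(pos_bound, nec_bound)

# a truck expected at t = 17 with a +/- 10 min uncertainty window
assert arrival_bound(0.0, 0.0, 7, 17, 27) == 7    # unrestricted case
assert arrival_bound(1.0, 0.0, 7, 17, 27) == 17   # bound reaches the mode
assert arrival_bound(1.0, 1.0, 7, 17, 27) == 27   # worst-case scenario
```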
4 Experimental Results
This section is devoted to testing the feasibility of the PCCP model. The experiments are performed using IBM CPLEX (version 12.9) on a PC with a 2.4 GHz Intel Core i3 CPU and 4 GB of RAM. The model is tested on a small illustrative example of a Road-Rail PI-hub composed of three inbound docks, an outbound train with ten outgoing wagons, and three inbound trucks. Each truck carries PI-containers of various sizes and with three different destinations. The changeover time is set to 5 min. The arrival time, expected departure time and processing time of the three trucks $(E_h, F_h, J_h)$ are set to (10, 34, 24), (17, 29, 12) and
(24, 60, 36). Then, a variation of ±10 min for the arrival time $\tilde{E}_h$ of each truck has been considered. The objective is to find a scheduling of inbound trucks
and a grouping of PI-containers in the wagons while minimizing the number of
used wagons (F W ), the PI-containers’ travelled distance (F D ), and the total
tardiness of inbound trucks (F T ).
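For this instance, the tardiness component $F^T$ of any candidate dock assignment and sequencing can be computed directly. The sketch below uses our own simplified reading of the setup: unloading starts at the later of the truck's arrival and the dock becoming free, and consecutive trucks on a dock are separated by the 5-min changeover:

```python
CHANGEOVER = 5  # minutes between two trucks on the same dock

def total_tardiness(trucks, docks):
    """trucks: list of (E_h, F_h, J_h) = (arrival, due departure, processing);
    docks: list of truck-index sequences, one per dock."""
    tardiness = 0
    for sequence in docks:
        free_at = 0
        for h in sequence:
            arrival, due, proc = trucks[h]
            start = max(arrival, free_at)       # wait for truck and dock
            finish = start + proc
            tardiness += max(0, finish - due)   # lateness past due departure
            free_at = finish + CHANGEOVER
    return tardiness

trucks = [(10, 34, 24), (17, 29, 12), (24, 60, 36)]  # instance data
# one truck per dock: every truck starts on arrival, no tardiness
assert total_tardiness(trucks, [[0], [1], [2]]) == 0
# all three trucks on a single dock: waiting accumulates into tardiness
assert total_tardiness(trucks, [[0, 1, 2]]) > 0
```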
The problem has been solved using a multi-objective Lexicographic Programming approach where the order of the objectives defines their priority. The mechanism of this approach consists in minimizing the first objective function without considering the others. Then, once an objective has been minimized, its obtained value is fixed and the next objective is considered without changing the values of the previous ones. Each fixed objective becomes a constraint for the remaining ones. The process continues until all the objectives are minimized. In the proposed experiments, the three objectives are minimized in four different combinations of lexicographic order.
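On a finite set of candidate solutions, this fix-and-filter mechanism amounts to keeping, at each stage, only the candidates that are optimal for the current objective. A minimal sketch with invented objective vectors (wagons, distance, tardiness), not the paper's instance:

```python
def lexicographic_min(candidates, objectives, order):
    """Minimize the objectives one by one in the given priority order,
    fixing each stage's optimal value as a filter for the next stage."""
    feasible = list(candidates)
    for k in order:
        best = min(objectives[k](c) for c in feasible)
        feasible = [c for c in feasible if objectives[k](c) == best]
    return feasible[0]

# toy candidates described by their (wagons used, distance, tardiness)
candidates = [(3, 40, 10), (3, 55, 0), (4, 30, 0)]
objectives = [lambda c: c[0], lambda c: c[1], lambda c: c[2]]

# wagons first, then distance, then tardiness
assert lexicographic_min(candidates, objectives, [0, 1, 2]) == (3, 40, 10)
# tardiness first: two candidates tie at zero, distance breaks the tie
assert lexicographic_min(candidates, objectives, [2, 1, 0]) == (4, 30, 0)
```

Changing `order` changes the returned solution, which mirrors the paper's observation that the objective values depend on the chosen lexicographic order.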
Table 1 presents the obtained results. The first column shows the four combinations of the objectives' lexicographic orders. For each combination, the model is tested on different uncertainty scenarios: in the first one (α = β = 0), trucks arrive at their expected time without uncertainty. Then, different levels of uncertainty have been considered by varying the values of α and β between 0 and 1. This variation implies a variation of the trucks' arrival times.
objective to optimize. Similarly, when the distance is the priority objective (in the second and third combinations), the minimal travelled distance value is obtained. For each combination, the two objectives (number of wagons and PI-containers' distance) are not affected by the variability of the trucks' arrival times; these objectives are impacted only by the lexicographic order. Considering the minimization of the trucks' tardiness, the delays are considerably high when the tardiness objective is the least prioritized one, as in the first and second combinations. Nevertheless, in the third and fourth combinations, once the priority of tardiness is higher than that of the number of wagons or the distance, the delays are considerably reduced, especially for scenarios where the constraint is weakened (cases where the values of α and β are low).
When the value of α is low, especially for the first two combinations of the objective functions, the tardiness is minimized, which results in an improved solution. In the case of α = 1 and β = 0, an average solution is obtained, equal to the deterministic scenario. However, once β starts to increase, the tardiness becomes higher, which provides a more stable solution that can remain feasible in case of any truck delay. Both thresholds α and β can be set by the decision maker (the manager of the Road-Rail PI-hub) depending on the intensity of the daily truck congestion. The lexicographic order of the objectives can also be set depending on the priority and importance of each objective for the Road-Rail PI-hub.
5 Conclusion
References
1. Ballot, E., Montreuil, B., Thivierge, C.: Functional design of physical internet
facilities: a road-rail hub. In: 12th IMHRC Proceedings, Gardanne, France (2012)
2. Boysen, N., Fliedner, M.: Cross dock scheduling: Classification, literature review
and research agenda. Omega 38(6), 413–422 (2010)
3. Chargui, T., Bekrar, A., Reghioui, M., Trentesaux, D.: Multi-objective sustain-
able truck scheduling in a rail-road physical internet cross-docking hub considering
energy consumption. Sustainability 11(11), 3127 (2019)
4. Chargui, T., Bekrar, A., Reghioui, M., Trentesaux, D.: Proposal of a multi-agent
model for the sustainable truck scheduling and containers grouping problem in a
road-rail physical internet hub. Int. J. Prod. Res. (2019). https://doi.org/10.1080/
00207543.2019.1660825
5. Chargui, T., Bekrar, A., Reghioui, M., Trentesaux, D.: A simulation-optimization
approach for two-way scheduling/grouping in a road-rail physical internet hub.
IFAC-PapersOnLine 52(13), 1644–1649 (2019). 9th IFAC Conference on Manufac-
turing Modelling, Management and Control MIM 2019
6. Charnes, A., Cooper, W.W.: Chance-constrained programming. Manage. Sci. 6(1),
73–79 (1959)
7. Dubois, D., Prade, H.: Operations on fuzzy numbers. Int. J. Syst. Sci. 9(6), 613–626
(1978)
8. Dubois, D., Prade, H.: Possibility theory. In: Meyers, R.A. (ed.) Encyclopedia of
Complexity and Systems Science, pp. 6927–6939. Springer, New York (2009)
9. Gazzard, N., Montreuil, B.: A functional design for physical internet modular han-
dling containers. In: Proceedings of 2nd International Physical Internet Conference,
Paris, France, 06–08 July 2015
10. Gibson, B.J., Hanna, J.B., Defee, C.C., Chen, H.: The Definitive Guide to Inte-
grated Supply Chain Management: Optimize the Interaction Between Supply
Chain Processes, Tools, and Technologies. Pearson Education, London (2014)
11. Jain, R.: A procedure for multiple-aspect decision making using fuzzy sets. Int. J.
Syst. Sci. 8(1), 1–7 (1977)
12. Konur, D., Golias, M.M.: Analysis of different approaches to cross-dock truck
scheduling with truck arrival time uncertainty. Comput. Ind. Eng. 65(4), 663–672
(2013)
13. Konur, D., Golias, M.M.: Cost-stable truck scheduling at a cross-dock facility with
unknown truck arrivals: a meta-heuristic approach. Transp. Res. Part E Logist.
Transp. Rev. 49(1), 71–91 (2013)
14. Ladier, A.L., Alpan, G.: Cross-docking operations: current research versus industry
practice. Omega 62, 145–162 (2016)
15. Ladier, A.L., Alpan, G.: Robust cross-dock scheduling with time windows. Comput.
Ind. Eng. 99, 16–28 (2016)
16. Landschützer, C., Ehrentraut, F., Jodin, D.: Containers for the physical Internet:
requirements and engineering design related to FMCG logistics. Logist. Res. 8(1),
8 (2015)
17. Liu, B., Iwamura, K.: Chance constrained programming with fuzzy parame-
ters. Fuzzy Sets Syst. 94(2), 227–237 (1998). https://doi.org/10.1016/S0165-
0114(96)00236-9
18. Meng, Q., Wang, T.: A chance constrained programming model for short-term liner
ship fleet planning problems. Maritime Policy Manage. 37(4), 329–346 (2010)
19. Montreuil, B.: Toward a physical Internet: meeting the global logistics sustainabil-
ity grand challenge. Logist. Res. 3(2–3), 71–87 (2011)
20. Montreuil, B., Meller, R.D., Thivierge, C., Montreuil, Z.: Functional design of physical internet facilities: a unimodal road-based crossdocking hub. In: Montreuil, B., Carrano, A., Gue, K., de Koster, R., Ogle, M., Smith, J. (eds.) Progress in Material Handling Research, vol. 12, pp. 379–431. MHI, Charlotte, NC, USA (2013)
21. Pach, C., Sallez, Y., Berger, T., Bonte, T., Trentesaux, D., Montreuil, B.: Routing
management in physical Internet crossdocking hubs: study of grouping strategies
for truck loading. In: Grabot, B., Vallespir, B., Gomes, S., Bouras, A., Kiritsis, D.
(eds.) Advances in Production Management Systems. Innovative and Knowledge-
Based Production Management in a Global-Local World, IFIP Advances in Infor-
mation and Communication Technology, vol. 438, pp. 483–490. Springer, Berlin
Heidelberg (2014)
22. Pan, S., Trentesaux, D., Ballot, E., Huang, G.Q.: Horizontal collaborative trans-
port: survey of solutions and practical implementation issues. Int. J. Prod. Res.
57(15–16), 5340–5361 (2019)
23. Pishvaee, M., Razmi, J., Torabi, S.: Robust possibilistic programming for socially
responsible supply chain network design: a new approach. Fuzzy Sets Syst. 206,
1–20 (2012)
24. Rajabi, M., Shirazi, M.A.: Truck scheduling in a cross-dock system with multiple
doors and uncertainty in availability of trucks. J. Appl. Environ. Biol. Sci 6(7S),
101–109 (2016)
25. Shen, J., Zhu, Y.: Chance-constrained model for uncertain job shop scheduling
problem. Soft. Comput. 20(6), 2383–2391 (2016)
26. Theophilus, O., Dulebenets, M.A., Pasha, J., Abioye, O.F., Kavoosi, M.: Truck
scheduling at cross-docking terminals: a follow-up state-of-the-art review. Sustain-
ability 11(19), 5245 (2019)
27. Van Belle, J., Valckenaers, P., Cattrysse, D.: Cross-docking: state of the art. Omega
40(6), 827–846 (2012)
28. Walha, F., Bekrar, A., Chaabane, S., Loukil, T.M.: A rail-road PI-hub allocation
problem: active and reactive approaches. Comput. Ind. 81, 138–151 (2016)
29. Walha, F., Chaabane, S., Bekrar, A., Loukil, T.: The cross docking under uncer-
tainty: state of the art. In: 2014 International Conference on Advanced Logistics
and Transport (ICALT), pp. 330–335 (2014)
30. Wang, X., Ning, Y.: Uncertain chance-constrained programming model for project
scheduling problem. J. Oper. Res. Soc. 69(3), 384–391 (2018)
A Hybrid Simulation Model to Analyse
and Assess Industrial Sustainability Business
Models: The Use of Industrial Symbiosis
1 Introduction
The planet's resources are finite: in 2019 it was estimated that 1.75 planets would be
needed to regenerate in one year the natural resources consumed. As the consumption of
raw materials rises, so does waste generation. Resources and waste management are
therefore key to meeting the future needs of society in a sustainable manner (Demartini
et al. 2016). Waste prevention policies, such as restricting planned obsolescence in
electronic products, and measures such as minimizing product weight or designing for
disassembly, help to tackle these issues (Demartini et al. 2019). A reduction in the
consumption of natural resources and in the amount of waste generated would also be
achieved by shifting to circular economic and production systems that mimic the
self-sustaining closed-loop systems found in nature, such as the water cycle. A circular
economy aims at transforming waste back into resources, reversing the dominant linear
pattern of extracting, processing, consuming and disposing of raw materials, with the
ultimate goal of preserving natural resources while maintaining economic growth and
minimizing environmental impacts (Cobo et al. 2018). To this purpose, systems thinking
is the approach required to understand and analyze such complex systems.
A complex system can be defined as a set of mutually dependent elements that interact
with one another towards a purpose. In such a context, the sustainability challenge
requires a mutual synergy between markets and governments to lead the world toward
long-term prosperity. To approach sustainability with a chance of success, it must be
seen as a multidisciplinary challenge involving different systems interconnected
through mutual feedback (Williams et al. 2017). Industrial symbiosis (IS) is considered a
relevant strategy that can improve all the dimensions of sustainability. It is based on
resource sharing, i.e. the waste of one process is used as raw material for another
(Chertow 2000). Companies in an industrial symbiosis context can cooperate to share
energy, labour, knowledge, logistics and expertise (Demartini et al. 2018). How to
develop organisations with a sense of purpose and how to build a sustainable competitive
advantage are key challenges.
Policy makers can play a fundamental role in amplifying or dampening the effect of the
circular economy by means of public investments and/or tax incentives, removing
legislative, technological or financial barriers through effective policy measures, and
thereby fostering steady economic growth with business opportunities across the whole
economy. Through subsidies and supportive taxation, policy makers can reduce the risks
of establishing sustainable business models, for example by defining recycling policies,
global standards and goals. Another critical element should also be analysed carefully:
policy must be developed with regard to technological advancement in recycling and
waste processing and to the interaction between negative (e.g. pollution, emissions)
and positive (e.g. technological innovation) externalities. However, the complexity of
laws and the diversity of regulations around the world can hinder the circular economy.
This research proposes a hybrid approach, combining Agent-Based (AB) and System
Dynamics (SD) modelling, that supports the redesign of companies through industrial
symbiosis strategies and evaluates the impact of specific policies on this transition.
The paper is organized as follows. Section 2 describes the main characteristics of the
hybrid framework. Section 3 presents the model implementation, while Sect. 4 reports
results and discussion. Finally, in Sect. 5 final considerations and future developments
are discussed.
the steel industry, the pulp industry and the cement industry. The latter sector can buy
waste from steel plants and pulp plants; this waste is used by the cement plants in
substitution of virgin inputs in their production processes. We assume that the cement
plants produce concrete, composed of inert material and clinker. Steel plants produce
artificial inert material that cement plants can use in place of natural inert material,
and pulp plants produce eco-clinker that cement plants can use in place of natural
clinker. The hybrid model is composed of seven different agent classes: the pulp
industry, which produces artificial clinker as waste; the steel industry, which produces
artificial inert material as waste; the cement industry, which can buy waste from both
steel plants and pulp plants in order to produce a "green concrete"; public landfills,
providing a waste disposal service; virgin inert and clinker suppliers, acting as IS
competitors; and the Government, acting as a regulator.
The different populations live within a main agent that acts as the environment: inside
this agent, which can be considered a country or a region, the populations are randomly
distributed and can interact with each other, developing various dynamics including the
IS. The interactions among agents, which are characterized by bounded rationality and
limited computational capabilities, take place through a decentralized artificial market
in which virgin and waste materials are sold and bought. Another important feature of
the model is its partial stock-flow consistency, from both a monetary and a material
(physical) perspective, see Lavoie and Godley (2014), Godley and Lavoie (2016) and
Godin and Caverzasi (2014); this consistency is verified only for the variables of
interest for the IS. This characteristic makes it possible to evaluate the economic
impact and benefits of the IS: each agent (or firm) in the various industrial sectors is
endowed with a balance sheet detailing its assets and liabilities, so that any economic
benefits are accounted for. The model structure has been implemented according to a
hybrid simulation paradigm that combines agent-based and system dynamics modelling; the
integration of the two paradigms follows a "process-environment format" in which the
system dynamics is embedded in the agent-based model and describes the internal
structure of the various agents, see Abdelghany and Eltawil (2017) and Demartini et al.
(2018).
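The embedding of a system dynamics substructure inside each agent can be sketched as follows. This is a minimal illustration under assumed names and parameters (FirmAgent, a single inventory stock, a fixed mark-up), not the authors' implementation:

```python
# Minimal sketch of the "process-environment" hybrid: each agent embeds a
# system-dynamics substructure, integrated with a simple Euler step, and the
# manufacturing substructure passes cost data to the economic one, which sets
# the output price through a mark-up mechanism. All names and parameter values
# are illustrative assumptions.

class FirmAgent:
    def __init__(self, capacity, unit_cost, markup=0.2, dt=1.0):
        self.capacity = capacity    # weekly production capacity
        self.unit_cost = unit_cost  # cost per unit of output
        self.markup = markup        # mark-up applied over unit cost
        self.dt = dt                # SD integration step (weeks)
        self.inventory = 0.0        # SD stock: finished goods
        self.cash = 0.0             # economic substructure: balance-sheet asset

    def step(self, demand):
        """One hybrid tick: SD flows update the stock, then economics."""
        production = min(self.capacity, demand)
        sales = min(demand, self.inventory + production)
        # Euler integration of the inventory stock: d(inv)/dt = production - sales
        self.inventory += (production - sales) * self.dt
        # Manufacturing passes cost data to the economic substructure,
        # which sets the price through a mark-up mechanism.
        price = self.unit_cost * (1.0 + self.markup)
        self.cash += sales * price - production * self.unit_cost
        return sales, price

firm = FirmAgent(capacity=100.0, unit_cost=10.0)
sales, price = firm.step(demand=80.0)
```

The two substructures remain distinct (stock-flow state versus balance-sheet state), while the one-way data flow from manufacturing to economics mirrors the format described above.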
The agent-based approach makes it possible to represent the heterogeneity of a complex
IS network, composed of firms that each differ from the others; it effectively captures
the effect of the interactions between the various industrial sectors and, moreover,
accounts for the external feedback originating from market dynamics at the macro level.
System dynamics captures the internal feedback loops characterizing both management and
manufacturing dynamics, see Sterman (2000) and Forrester (1961). Each agent (or firm)
is composed of two distinct system dynamics sub-structures: a manufacturing process
structure and an economic one. Although these substructures are distinct, the
manufacturing process structure passes data to the economic one in order to account for
costs and revenues and to set the output price through a mark-up mechanism. The firms
differ in geographic location and size: they are randomly distributed inside a
fictitious square region, with positions defined by two uniform probability
distributions, one for each reference axis ("X" and "Y"), while firm size is represented
by production capacities that follow a Pareto distribution.
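The random placement and Pareto-distributed sizing described above can be sketched as follows; the region size, seed and distribution parameters are illustrative assumptions, not the paper's calibration:

```python
# Sketch of agent initialization: positions drawn from two uniform
# distributions (axes X and Y) inside a fictitious square region, and
# production capacities drawn from a Pareto distribution.
import random

random.seed(42)

REGION_SIDE = 100.0   # side of the fictitious square region (illustrative unit)

def spawn_firms(n, alpha=1.5, min_capacity=10.0):
    """Place n firms uniformly in the square region; draw their production
    capacities from a Pareto distribution with shape alpha (assumed values)."""
    firms = []
    for _ in range(n):
        x = random.uniform(0.0, REGION_SIDE)   # uniform over axis X
        y = random.uniform(0.0, REGION_SIDE)   # uniform over axis Y
        capacity = min_capacity * random.paretovariate(alpha)
        firms.append({"x": x, "y": y, "capacity": capacity})
    return firms

firms = spawn_firms(50)
```

A heavy-tailed Pareto size distribution gives a few large firms and many small ones, which is the usual stylized fact for firm sizes.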
476 M. Demartini et al.
Fig. 1. The flowchart representing the IS decision-making process of a generic cement plant i;
rectangles indicate actions, while rhombi represent decisions
cp − pw ≤ l + ctrl (1)
where cp is the pre-processing cost that a firm must bear in order to adapt the waste
to the needs of the receiving industrial plant, pw is the selling price of the waste
material, l is the landfill tax, and ctrl is the transportation cost to the landfill.
This fitness function verifies the existence of an economic convenience for the
industrial plants that sell waste: it compares the potential revenue of a hypothetical
IS with the cost of traditional waste management. Indeed, in the absence of symbiotic
exchanges, steel plants and pulp plants search for the nearest landfills in order to
dispose of their waste.
The waste price pw is crucial for establishing an IS. Because they compete with virgin
material suppliers to sell their products, sellers index their pw to the virgin
material price pv according to the following relation:
pw = h · pv (2)
where h is varied by the seller in order to find a symbiotic partner: the seller reduces
h until it establishes an IS or reaches the minimum price that satisfies the economic
feasibility of the IS. Under this modelling assumption, sellers always put their
products on the market even if prices are not aligned with the average value. The
landfill tax l represents a demand-side policy aimed at stimulating waste demand inside
the economic system by decreasing the average minimum price that satisfies IS
feasibility: the higher the value of l, the lower the minimum price.
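Equations (1) and (2) together define the seller's behaviour. A minimal sketch, with illustrative numbers and a hypothetical buyer_accepts callback standing in for the decentralized market interaction, might look like:

```python
def selling_is_convenient(cp, pw, l, ctrl):
    """Eq. (1): selling waste beats landfilling when the pre-processing cost
    net of the selling price does not exceed the landfill tax plus the
    transport-to-landfill cost."""
    return cp - pw <= l + ctrl

def negotiate_h(cp, pv, l, ctrl, h_start=1.0, h_step=0.05, buyer_accepts=None):
    """The seller lowers h (Eq. 2: pw = h * pv) until a buyer accepts or the
    minimum price satisfying Eq. (1) is reached. `buyer_accepts` is a
    hypothetical callback standing in for the market interaction."""
    h = h_start
    h_min = max(0.0, cp - l - ctrl) / pv  # lowest h still satisfying Eq. (1)
    while h > h_min:
        if buyer_accepts is not None and buyer_accepts(h * pv):
            return h
        h = max(h_min, h - h_step)
    return h  # at the minimum feasible price, possibly without a partner

# A higher landfill tax l lowers h_min, so sellers can offer cheaper waste:
h_low_tax = negotiate_h(cp=8.0, pv=10.0, l=1.0, ctrl=1.0)
h_high_tax = negotiate_h(cp=8.0, pv=10.0, l=4.0, ctrl=1.0)
```

With these (assumed) numbers, the minimum feasible h drops from (8 − 1 − 1)/10 = 0.6 to (8 − 4 − 1)/10 = 0.3 as l rises from 1 to 4, which is exactly the demand-side effect described in the text.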
Over the years, various governmental actions have been implemented, and the fundamental
role of Government is highlighted by the results of several national programmes; in
particular, one of the best-known governmental initiatives related to IS is the NISP
(National IS Programme), funded in the UK by public institutions. The efficacy of this
initiative proves that policy makers can clearly stimulate the establishment of IS
networks, see Mirata (2004); Government action can work on several fronts, in
particular at the fiscal, informational and organizational levels. Therefore, not only
can top-down IS be promoted and designed by politicians, but a favourable environment
for bottom-up, self-organized IS can also be created using specific economic and fiscal
policies, for example tax incentives, subsidies and a landfill tax. Policies can thus
encourage the creation of an environment in which IS constitutes a real economic
benefit for the various industrial plants and an environmental and social advantage for
the entire community. Furthermore, the use and effectiveness of these policies have
been widely studied, also from an agent-based modelling perspective, see Albino et al.
(2016) and Fraccascia et al. (2017). In order to study the role of policy makers in
promoting the birth of self-organized IS, we develop an enriched version of the model
which includes the Government agent. The aim of this special agent is to foster the
creation of symbiosis between companies in a financially sustainable way by mixing two
different market-based policies, i.e. the landfill tax and an economic subsidy. Since
the agent populations are heterogeneous, IS between the three populations is promoted
through different levels of taxes and subsidies.
This section reports the results of the sensitivity analysis. The investigation
concerns two of the most important quantities influencing the economic feasibility of
the IS, namely the landfill tax l and the economic subsidy. First, the sensitivity
analysis of the landfill tax in the absence of the economic subsidy is presented.
Fig. 2. For each value of the landfill tax l, the figure shows the median, first and third
quartiles of the distributions, over 20 seeds, of the time averages of: the artificial clinker
exchanged every week (a), the average artificial clinker price (b), the virgin clinker used in
the production process every week (c), and the number of IS established between pulp and cement
plants (d). Time averages refer to a two-year time period.
In this respect, as shown in Fig. 2(a), increasing the landfill tax l enhances the
symbiotic exchanges between industrial sectors: a higher landfill tax stimulates
potential sellers to put their waste on the market, even at a lower price, as visible
in Fig. 2(b). Together with the other variables in Eq. 1, the landfill tax controls the
minimum price threshold at which companies are willing to sell their waste instead of
disposing of it in a public landfill. It is interesting to notice in Fig. 2(a) that the
data distributions assume an S-shape as the landfill tax increases, reaching a maximum
range of artificial material exchanged; this is also reflected by the number of IS
established, see Fig. 2(d). This happens even though the waste production of both steel
plants and pulp plants could saturate the input demand of the cement industry.
As mentioned above, the sizes of the steel plant and pulp plant sectors are related to
the size of the cement plant population: in the case of a full IS between industrial
sectors, the input demand of the cement industry is saturated by the waste produced by
the other sectors and the cement plants do not use any virgin material in their
production process. The landfill tax is not the only feasibility factor that can
condition the establishment of an IS, see Eq. 1. The model takes into account various
economic variables, such as the pre-processing costs cp, which represent a further
economic barrier to the feasibility of symbiosis: waste needs to be transformed before
it can be used as input by other industrial processes, and the pre-processing
activities are carried out internally by the waste producers. Furthermore, geographical
factors can hamper or block symbiotic exchanges from both the sellers' and the buyers'
perspective: steel and pulp plants may be located near a public landfill while, at the
same time, cement plants may be closer to virgin material sellers. For this reason,
despite the model's high sensitivity to the landfill tax, virgin material consumption
always remains greater than zero, as shown in Fig. 2(c).
This section presents the results for the enriched version of the model. The Government
intervenes dynamically in order to foster the birth of self-organized IS between firms
by controlling two different policies: the landfill tax and the economic subsidy.
Figure 3 shows the reserves owned by the Government during the simulation (SGPP and SGSP).
Fig. 3. Three time series representing the reserves of money owned by the Government and used
to finance economic expenditures: the total money owned by the Government SG, the reserves
related to the SP-CP symbiosis SGSP, and the reserves related to the PP-CP symbiosis SGPP.
The time series refer to a specific replication that is representative of the system's average trend.
Figure 4 shows the raw materials used in the production processes each week during the
simulation. At the beginning of the simulation, CPs mainly purchase materials from
virgin suppliers because of the economic convenience of their products. It is worth
noting that a certain number of self-organized IS already exist inside the economy
before any direct Government intervention.
Fig. 4. Time series of the virgin and artificial clinker used in CPs' production processes (a),
and of the virgin and artificial inert material used in CPs' production processes (b). All time
series refer to a specific replication that is representative of the system's average trend.
As soon as the Government starts intervening, the level of symbiosis inside the economy
increases. Around the hundredth week, the consumption of symbiotic material goes
through a period of slight oscillation, related to the competitive behaviour of the
virgin material suppliers, who try to counter the IS practice. However, further
interventions stabilize the system, encouraging symbiosis at the expense of the virgin
material suppliers. Even though these suppliers fight the IS throughout the simulation,
Government policies also stimulate a learning-by-doing mechanism in the pre-processing
costs that reinforces the symbiosis itself: the lower the pre-processing costs, the
lower the minimum price that satisfies the economic feasibility of symbiosis for both
SPs and PPs. Halfway through the simulation, the quantity of symbiotic material sold
surpasses the virgin one, demonstrating the effectiveness of the Government's policy
mix. At the end of the simulation, the situation is reversed compared to the beginning:
the quantity of symbiotic material sold is considerably higher than the virgin one for
both SPs and PPs. Furthermore, it is worth highlighting that this remarkable result is
achieved without external financial interventions, but only by mixing these two
policies.
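The learning-by-doing mechanism on pre-processing costs can be illustrated with a standard Wright-type learning curve; the functional form and the parameter values below are assumptions for illustration, not the paper's calibration:

```python
import math

def preprocessing_cost(cp0, cumulative_exchanges, learning_rate=0.9):
    """Hypothetical learning curve: every doubling of cumulative symbiotic
    exchanges multiplies the pre-processing cost by `learning_rate`
    (a standard Wright learning-curve form, assumed here for illustration)."""
    if cumulative_exchanges < 1:
        return cp0
    b = math.log(learning_rate, 2)  # elasticity of cost w.r.t. volume (negative)
    return cp0 * cumulative_exchanges ** b

# The lower cp becomes, the lower the minimum feasible waste price cp - l - ctrl:
cp_early = preprocessing_cost(8.0, 1)    # cost at the first exchange
cp_late = preprocessing_cost(8.0, 16)    # after four doublings: 8.0 * 0.9**4
```

Under this sketch, sixteen cumulative exchanges (four doublings at a 90% learning rate) reduce the pre-processing cost from 8.0 to about 5.25, which directly lowers the minimum price in Eq. (1) and so reinforces the symbiosis.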
The model therefore shows that it is possible to promote the creation of self-organized
IS inside the economy in an effective and sustainable way, avoiding external financial
expenditures. This point is crucial because it demonstrates that the transition towards
a circular economy can be promoted effectively without forcing firms, but simply by
creating an economic environment in which industrial innovation is convenient.
Obviously, obtaining these results requires significant effort to organize and
implement the Government strategy. In this respect, the Government should activate
specific task forces devoted to detecting industrial sectors potentially interested in
this practice and to monitoring symbiosis trends within the economic system.
6 Conclusions
This paper proposes the application of a hybrid approach to focus efforts towards the
(re)modelling of industrial symbiosis processes and effects on the transition towards
References
Demartini, M., Orlandi, I., Tonelli, F., Anguita, D.: Investigating sustainability as a performance
dimension of a novel manufacturing value modeling methodology (MVMM): from sustain-
ability business drivers to relevant metrics and performance indicators. XXI Summer School
“Francesco Turco” (2016)
Demartini, M., Evans, S., Tonelli, F.: Digitalization technologies for industrial sustainability.
Procedia Manuf. 33, 264–271 (2019)
Cobo, S., Dominguez-Ramos, A., Irabien, A.: From linear to circular integrated waste management
systems: a review of methodological approaches. Resour. Conserv. Recycl. 135, 279–295 (2018)
Williams, A., Kennedy, S., Philipp, F., Whiteman, G.: Systems thinking: a review of sustainability
management research. J. Clean. Prod. 148, 866–881 (2017)
Chertow, M.R.: Industrial symbiosis: literature and taxonomy. Ann. Rev. Energy Environ. 25,
313–337 (2000)
Demartini, M., Tonelli, F., Bertani, F.: Approaching industrial symbiosis through agent-based
modeling and system dynamics. In: Studies in Computational Intelligence, vol. 762, pp. 171–
185. Springer (2018)
Lavoie, M., Godley, W.: Kaleckian models of growth in a coherent stock-flow monetary framework.
In: The Stock-Flow Consistent Approach (2014)
Godley, W., Lavoie, M.: Monetary Economics: An Integrated Approach to Credit, Money, Income,
Production and Wealth. Palgrave Macmillan, UK (2016)
1 Introduction
The optimization of assembly and handling systems and processes (AHSP) is important to
reduce costs, shorten lead and delivery times, and thus ensure the competitiveness of
companies. It has been clearly shown [1–3] that failures can lead to a reduction in
overall equipment effectiveness (OEE) and can account for up to 50% of costs. For this
reason it is essential to optimize the AHSP, and different approaches and methods have
been used to reach the optimal or a near-optimal solution effectively [4]. One of the
most effective methods for optimizing such systems is optimization with online
simulation using digital twins and digital agents [5–7].
To research and test the expert system before its real integration into the process
control system of the production line, we set up a production line in the laboratory.
It consists of six assembly workstations, intermediate buffers at the assembly
workstations, two handling industrial robots, variable tools with six disposal
stations, and a material warehouse with a capacity of 45 storage locations for incoming
material and end products (Fig. 1). More than 3.67 × 10^70 different products and
variants can be assembled on the production line.
Our approach combines digital agents with the digital twin, which makes it possible to
make quick, smart decisions automatically. It is well known [8, 9] that discrete event
simulation or a digital twin is a very effective tool for "what-if" scenarios for any
kind of production system. In our case, we have transformed a real production system,
with all its features and limitations, into a virtual factory and upgraded the digital
twin with digital agents. The task of the digital agents is to receive input from the
digital twin and, on this basis, to find a quick or best solution.
Each digital agent has its own task: as soon as it receives a request from the digital
twin, it finds a solution quickly and automatically and returns this solution to the
digital twin. Digital agents communicate directly with the digital twin, and some also
communicate with each other, further accelerating the operation of the digital twin
itself. Digital agents work completely independently of each other, but all are
connected to a global digital agent, whose main task is to coordinate the proper
operation of all the other digital agents.
The developed system of digital twins and digital agents was integrated into the expert
system (Fig. 2) and connected to the real production system via the cloud. The expert
system receives all input data about the production system and the orders (the
production plan) via the cloud. It then calculates an optimal production plan and sends
it through the cloud to the real system, where the implementation of this optimal plan
begins. The expert system also checks the effects of disturbances that occur in the
real system and, if necessary, corrects the production plan so as to eliminate the
influence of the disturbances, or the disturbance itself, and sends the data through
the cloud to the real system - all in real time.
Realization of an Optimal Production Plan in a Smart Factory 487
Fig. 2. Online simulation using the digital twin as an additional expert system integrated into
the process
The remainder of this short article is structured as follows. The next section outlines
the research and development of the expert system, followed by the combination of the
expert system with online simulation in Sect. 3. Concluding remarks can be found in
Sect. 4.
The main element of the expert system is the digital twin (DT) of the process, since it
is the backbone of a smart factory. The digital twin was built in Tecnomatix Plant
Simulation. The research and development of the AHSP digital twin was done in two basic
steps:
The basic goal of this step is to obtain information about the chronological sequence
of orders via the digital twin. For this reason, we have converted the entire AHSP of
the production line and its segments into digital form (Fig. 3).
488 H. Zupan et al.
Once the digital twin of the AHSP is built, the model itself must be designed to allow
the inputs intended for actual production. The model allows the use of different input
data and is fully parametric; at the same time, it captures all essential features of
the actual AHSP.
Based on the assumptions of the digital AHSP model and the characteristics of the
actual AHSP, we have designed a logical scheme of the production model. In offline
mode, the twin operates using the input data defined in Fig. 3 as the boundary
conditions of the model. All other required data, parameters (times of individual
operations, material flow logistics, allowed robot movements, robot installation and
maintenance times, conveyor speeds, etc.), constraints, characteristics and resources
are obtained by the digital twin from the knowledge base of the production process.
Based on these inputs, the digital twin calculates the outputs, which are available in
the form of indexes, tables, graphs or indicators.
solution based on its inputs. In this way, the digital agents enrich the digital twin
and, at the same time, speed up its operation. It is very important that the digital
twin is properly verified and validated along with all the digital agents, because only
then can we trust the results that the expert system provides.
In our case, we have built several digital agents that cover a variety of important
tasks within the process:
• D.A.I. - a digital agent that sets the correct initial state of the warehouse, buffers,
robots and conveyor in the digital twin;
• D.A.O. - a digital agent that checks which orders can be carried out on the basis of
the desired orders received from the production plan, using the available resources;
• D.A.M. - a digital agent that plans the order production on assembly stations;
• D.A.J. - a digital agent that generates all handling and assembly operations for robots;
• D.A.D. - a digital agent responsible for quality control and troubleshooting;
• G.D.A. - a global digital agent that oversees the operation of all digital agents and
ensures the correct sequence of communications, and
• D.A.A. - a digital agent with an expert system designed to optimize the production
schedule using an intelligent algorithm.
As noted above, the digital agents communicate directly with the digital twin, and some
also communicate with each other, which further accelerates the operation of the
digital twin. They work autonomously, but all are connected to the global digital
agent, whose main task is to coordinate the proper sequence of operation of all the
other DAs. See Fig. 5.
Fig. 5. Digital agents and their communication with the digital twin and with each other
3 Combining the Digital Twin and Digital Agents with the Expert
System
After proper validation and verification of the digital twin and all the agents, we
merged the digital twin with all the digital agents. The last agent we merged was the
one that includes an expert system and is responsible for production plan optimization
(D.A.A.), which incorporates our own intelligent Flip and Insert (FI) algorithm [10].
D.A.A. calculates the optimal or a near-optimal production plan based on all input
parameters. The expert system has been integrated into the digital twin using program
code and tables. Before starting the production plan optimization, D.A.A. is provided
with a production plan specified by D.A.N.
Three cyclical steps are required for the expert system to function properly:
• In the first step, D.A.A. creates the n-th iteration of the production plan and sends
it to the digital twin;
• In the second step, the digital twin evaluates this n-th iteration of the production
plan and computes a solution for it; and
• In the third step, the digital twin sends this solution back to the expert system
(D.A.A.), which checks it and decides whether it is the final solution (optimal or
near-optimal) or whether a new iteration begins with the first step.
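The three cyclical steps above can be sketched as a generic iterative loop. This is a minimal illustration: propose_plan stands in for the FI algorithm [10], which is not reproduced here, and evaluate_plan stands in for a digital-twin simulation run:

```python
# Sketch of the D.A.A. / digital-twin optimization cycle. The plan
# representation, the proposal rule and the objective are illustrative
# assumptions, not the actual FI algorithm or Plant Simulation model.
import random

def optimize(initial_plan, propose_plan, evaluate_plan, max_iterations=100):
    """D.A.A. proposes plan iterations (step 1), the digital twin evaluates
    each of them (step 2), and D.A.A. keeps the best one found (step 3)."""
    best_plan, best_score = initial_plan, evaluate_plan(initial_plan)
    for _ in range(max_iterations):
        candidate = propose_plan(best_plan)   # step 1: D.A.A. -> DT
        score = evaluate_plan(candidate)      # step 2: DT simulates the plan
        if score < best_score:                # step 3: D.A.A. decides
            best_plan, best_score = candidate, score
    return best_plan, best_score

# Toy usage: "plans" are permutations of order IDs, scored by a dummy objective.
random.seed(1)

def swap_two(plan):
    p = list(plan)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

plan, score = optimize([3, 1, 2, 0], swap_two,
                       lambda p: sum(i * v for i, v in enumerate(p)))
```

In the real setup the evaluation step is a full digital-twin simulation, so each iteration is far more expensive than this toy objective; the loop structure, however, is the same.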
With the expert system and D.A.A., various target parameters can be optimized. In our
case we define two target parameters:
• The first target parameter is the total throughput time of all orders (tALL), which
should be kept as short as possible, and
• The second target parameter is the minimization of the sum of the throughput times of
the individual orders (*tALL).
Our experiment consists of six different scenarios. Each scenario contains different
orders. The results with the first target parameter are shown in Table 1 and the results
with the second target parameter in Table 2.
Table 1. Optimization results of individual scenarios with expert system and FI algorithm if the
target parameter is the total flow time of all orders
Scenario | Initial time tALL | Optimized time tALL | Improvement [%] | Calculating time
1        | 1 h 0 min 44 s    | 59 min 53 s         | 1.40            | 58 s
2        | 1 h 0 min 3 s     | 59 min 3 s          | 1.67            | 58 s
3        | 1 h 4 min 48 s    | 1 h 4 min 28 s      | 0.51            | 40 s
4        | 1 h 4 min 53 s    | 1 h 4 min 33 s      | 0.51            | 40 s
5        | 1 h 8 min 56 s    | 1 h 5 min 20 s      | 5.22            | 1 min 3 s
6        | 1 h 9 min 12 s    | 1 h 6 min 14 s      | 4.29            | 1 min 43 s
Table 2. Optimization results of individual scenarios with expert system and FI algorithm if the
target parameter is the sum total of all the flow times of individual orders.
Tables 1 and 2 show that, with the expert system and the FI algorithm, the production
plan can be successfully optimized according to various target parameters. At first
sight the results in Table 1 do not seem good, but it should be considered that this is
mainly due to the long operating times, so that the order plan itself has little
influence on the end time of all orders. Table 2 shows that the order scheduling has a
great influence on the sum of all throughput times of the individual orders (up to 38%
improvement). Interestingly, in both cases we obtain the same total throughput time of
all orders. This means that it makes more sense to take the sum of all order times
(*tALL) as the target parameter, since this optimizes each order individually while
also optimizing the total throughput time of all orders (tALL).
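The improvement percentages in Table 1 follow directly from the initial and optimized times; a quick consistency check, using scenario 5 as an example:

```python
# Verify one Improvement [%] entry in Table 1 from its two time columns.

def to_seconds(h=0, m=0, s=0):
    """Convert an 'h min s' time into seconds."""
    return 3600 * h + 60 * m + s

def improvement(initial_s, optimized_s):
    """Relative reduction of the throughput time, in percent."""
    return 100.0 * (initial_s - optimized_s) / initial_s

# Scenario 5: 1 h 8 min 56 s -> 1 h 5 min 20 s
imp = improvement(to_seconds(1, 8, 56), to_seconds(1, 5, 20))
print(f"{imp:.2f}%")  # prints "5.22%", matching the table
```

The same check reproduces the other rows of Table 1 as well.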
The connection between the real system and the digital world was established via the
cloud using the SQLite library. For the system to function properly, it is important
that all commands and interfaces are executed in the correct order. This is handled by
the global digital agent, which acts as the conductor of the entire system, sending
commands to the other digital agents, which then perform their tasks. In our system,
the sequence of steps is as follows (Fig. 6):
Fig. 6. The order in which steps are taken to successfully combine the real system and the digital
system (digital twin and digital agents).
1. G.D.A. sends a command to D.A.I. to set the current state of the real system in DT.
2. D.A.I. retrieves information about the current system via the cloud.
Realization of an Optimal Production Plan in a Smart Factory 493
3. D.A.I. sets the initial state to DT, which reflects the real state of the real system.
4. G.D.A. sends the D.A.O. command to add orders to DT.
5. D.A.O. retrieves a production plan from the ERP system and checks which orders
can/cannot be made based on the boundary conditions of the system.
6. D.A.O. forwards orders that can be made in DT.
7. G.D.A. sends the D.A.M. command to sort the work on orders by individual
assembly workplaces.
8. D.A.M. selects the optimum workstation for each order issued by D.A.O. and sends
this information to DT.
9. G.D.A. sends the D.A.A. command to create an optimal production plan.
10. D.A.A. determines the optimal production plan through the iteration loop and DT.
11. G.D.A. sends the command to D.A.J. for sequencing and generating operations on
robots.
12. D.A.J. generates the operations that the robots must perform and their sequence
according to the orders. It then passes all this information to DT.
13. DT then calculates the scripts based on all the digital agents’ data obtained and
transmits all the data on that scenario back to the cloud.
14. The production plan is then started in the real system.
15. If a disturbance occurs in the real system, G.D.A. sends a command to D.A.D. to identify the disturbance and, if possible, correct it.
16. D.A.D. checks the nature of the fault and suggests appropriate remedial action. It
sends these measures back to DT.
DT then returns to step 13 and repeats the steps until the production plan is successfully completed.
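The conductor role of the global digital agent can be sketched as a plain orchestration loop. This is a hypothetical illustration only, not the authors' implementation: the class and agent names, the `handle` method and the SQLite-backed "cloud" table are assumptions based on the description above.

```python
import sqlite3

class DigitalTwin:
    """Holds the simulated state of the real system."""
    def __init__(self):
        self.state = {}

class GlobalDigitalAgent:
    """Conductor (G.D.A.): sends commands to the local digital agents in a
    fixed chronological order, as in steps 1-13 above."""
    def __init__(self, agents, dt, db_path=":memory:"):
        self.agents = agents          # hypothetical mapping, e.g. {"DAI": obj, ...}
        self.dt = dt
        # Stand-in for the cloud exchange described in the paper (SQLite).
        self.db = sqlite3.connect(db_path)
        self.db.execute("CREATE TABLE IF NOT EXISTS scenario (step TEXT, data TEXT)")

    def run_cycle(self):
        # Steps 1-12: each local agent updates the digital twin in turn.
        for name in ("DAI", "DAO", "DAM", "DAA", "DAJ"):
            self.agents[name].handle("update", self.dt)
        # Step 13: DT publishes the computed scenario back to the cloud.
        self.db.execute("INSERT INTO scenario VALUES (?, ?)",
                        ("plan", repr(self.dt.state)))
        self.db.commit()
```

Steps 14–16 would extend `run_cycle` with a disturbance check (D.A.D.) that loops back to the publishing step until the plan completes.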
In on-line simulation, the digital system (digital twin and digital agents) and the real system must be in constant two-way communication. If the real system deviates from the planned schedule, the digital system can immediately detect and verify this deviation (eliminate disturbances, repair the production plan, etc.) and perform an optimization to provide new data that is immediately sent back through the cloud to the real system (Fig. 7).
Fig. 7. The feedback loop between the digital system and the real system
We also conducted an additional experiment to test the feedback loop between the digital system and the real system. The experiment confirmed that this feedback loop works and showed that the
494 H. Zupan et al.
production plan was changed due to the disturbance. When image processing detected a product defect in the real system, it sent this information to the digital system via the cloud. D.A.D. then determined the type of defect and sent corrective solutions to DT, which forwarded this information via the cloud to the real system. The fault changed the production time and also increased the flow time.
4 Conclusion
In this article we presented how we have created an expert system that uses the digital
twin (DT) and digital agents (DA) to continuously monitor and optimize the assembly
and handling system and process (AHSP) of a production line.
The first step was to set up an off-line simulation model, which is the basis for the DT of the AHSP. When building the digital twin, we considered all the features, resources and constraints of a real production line. We built the DT so that it behaves exactly like the real AHSP.
In the next step we built several types of local digital agents (DA) to perform a real-time simulation of the AHSP. Each DA has its own functionality and intelligence that helps the DT perform a specific task. The main task of each DA is to automatically find a solution and send it back to the DT as soon as possible. The global digital agent is responsible for the proper, chronological functioning of all agents. After successful validation and verification of the DT and the DAs, we combined the DT with the intelligent FI algorithm and built an additional digital agent to cover these tasks. Experiments have shown that with the digital system (DT + DA) we are able to optimize the real AHSP of the production line, which was proven in the laboratory on the real AHSP.
In the last part of the work we combined the developed digital system with the real system via the cloud. In this way we created all the necessary frameworks for on-line simulation and thus developed an expert system that is connected to the real system and performs control and optimization in real time. The cloud-based expert system receives all inputs from the real line and the desired orders (production plan). Based on these inputs it calculates an optimal production plan with the DT and DAs and then sends all this data back to the real system via the cloud. The real system then starts to implement the optimal production plan. The expert system also checks the effects of disturbances that occur in the real system and, if necessary, corrects the production plan so that it eliminates the influence of the disturbances or the error itself, and sends the data via the cloud to the real system, all in real time.
We have successfully used individual segments of the expert system (the Flip and Insert algorithm, digital twins and digital agents) in a real industrial environment. The modular and parametric structure of the expert system also enables further research and development.
Acknowledgment. The work was carried out in the framework of the GOSTOP programme
(OP20.00361), which is partially financed by the Republic of Slovenia – Ministry of Educa-
tion, Science and Sport, and the European Union – European Regional Development Fund. The
authors also acknowledge the financial support from the Slovenian Research Agency (research
core funding No. (P2-0248)).
Dynamic Scheduling of Robotic Mildew
Treatment by UV-c in Horticulture
1 Introduction
Robotics is evolving rapidly and is increasingly used in fields such as industry, rehabilitation, and agriculture. In horticulture, researchers are developing several types of robots to cultivate or treat plants [15]. The literature contains several works on harvesting robots, like the robot presented in [16], which uses artificial vision to move easily between cauliflower plants. In [14], another harvesting robot that collects watermelons is presented, which shows the ability of some robots to harvest heavy crops. Moreover, there are also robots equipped with sprayers that are used for plant treatments. For instance, [11] describes the design of a robot able to detect powdery mildew and spray the diseased plants in order to reduce the quantity of spray and avoid the treatment
c The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 496–507, 2021.
https://doi.org/10.1007/978-3-030-69373-2_36
of healthy plants. The work presented in this paper is part of a European project called UV-Robot. In our case, a robot treats the mildew fungus in horticulture using C-type ultraviolet radiation (UV-c), replacing the chemical spray treatment with UV-c treatment. In [17], the authors have shown that UV-c treatment improves the production of strawberries.
The UV-Robot must treat the rows of plants affected by mildew during their growth. The robot is powered by a battery that ensures about 3 h of continuous operation on average and needs 2.5 h to be fully charged. Mildew disease has an evolutionary and spreading behaviour that follows a stochastic process, so decision support tools are needed to achieve an optimal treatment schedule. The simulation-optimization approach makes it possible to solve the dynamic scheduling of UV-Robot systems. This approach has shown good performance in several works; it has the advantage of allowing the simulator to learn and adapt its behaviour over time [20]. For instance, [12] addresses the air transport problem of military aircraft of the United States using approximate dynamic programming. This approach is also used within the simulators of [8] and [21], which were developed to optimize vehicle routing for the collection of conditioned bio-waste.
Over the past few decades, the Multi-Agent Systems (MAS) paradigm has emerged as an effective approach for modelling and simulating complex systems. [18] describes the MAS environment as the context in which the agents evolve. For [4], a MAS is a set of entities called agents, sharing a common environment which they are able to perceive and on which they can act. The simulator developed within this work represents the environment for testing our UV-Robot treatment system. We run optimization algorithms inside the simulator in order to plan the robot's tasks.
Dynamic scheduling is a problem studied in several systems, such as robots, manufacturing machines or distribution chains. Among the works that have studied this problem, we can cite [5], where a machine processes jobs that arrive continuously. A case with several resources (a multiprocessor computer) is presented in [13], where a set of dynamically arriving tasks is executed. In dynamic scheduling, it is generally the arrival time or the duration of tasks that is dynamic. In our case, both the occurrence and the spread of mildew disease on plants are dynamic.
In the case of dynamic durations, we explored the line of research on scheduling with deteriorating jobs. In this problem, task durations follow a degradation process, as in [7], where the temperature of the ingots drops after they come out of the oven. In [6] and [1], the tasks degrade in the same way, starting from T0 according to a linear degradation equation. In [6, 7] and [1], tasks are processed in batches that can be handled in parallel, which is not possible in our case with a single robot.
This paper proposes solving a dynamic scheduling problem with task durations that evolve with the mildew level. A simulation-optimization approach based on a genetic algorithm and a multi-agent simulation is used to implement the dynamic solution. The NetLogo [19] simulation software is used to implement the
498 M. Mazar et al.
optimization and simulation algorithms. In this study, the agent-based architecture is used for its capacity to represent and simulate complex systems with centralized GA-based decision making (only one active agent). To the best of our knowledge, no previous work addresses the dynamic scheduling of robots in horticulture.
The article is organized as follows. Section 2 presents the simulation model
with MAS. Section 3 details the disease behaviour model and its estimation.
Then, two optimization algorithms are proposed in Sect. 4. Section 5 describes
and discusses the results obtained. Finally, the conclusion and the perspectives
of this work are given in Sect. 6.
2 Simulation Model
This section presents the simulation model of the robotized mildew treatment process, which is based on MAS, and explains the role of the agents and the interactions between them. Before building the simulation model, we carefully studied our system to define all the agents. In the UV-Robot treatment system, a robot equipped with UV-lamps performs treatment missions on infected plant rows in the greenhouse. The robot moves to the battery charging station after each mission. The robot is also equipped with a smart e-nose to inspect the level of plant disease. The smart e-nose absorbs chemical substances around the plants, then calculates the level of mildew for each plant section. The robot measures the entire greenhouse using the e-nose and then begins treatment, knowing that the disease progresses during treatment. However, the robot cannot launch a mission before being given the authorisation by the monitoring agent. The monitoring system is composed of a central computer able to control the robot missions, collect data about the greenhouse and represent the state of the system on a dashboard for the grower. It allows the grower to manually control the robot, plan its missions and update the mildew level.
The evolution of the mildew infection level directly influences the UV-c dose to apply, i.e. the duration of treatment. To adjust the UV-c treatment dose, the robot changes its speed according to the infection level of the plant. When the infection level is high, the robot treats the plant at low speed, so that the plant receives a sufficient dose of UV-c radiation. Moreover, the energy consumption of the robot is proportional to the applied treatment dose. As the UV-c lamps account for the biggest part of the robot's energy consumption, the robot consumes more energy when moving slowly with the lamps turned on, even though the motor consumption is then low.
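This speed-dose relation can be sketched with a simple model: the speed is set so that each metre of row receives the dose required by its infection level. The constants and function names below are illustrative assumptions, not values from the paper.

```python
def robot_speed(infection_level, lamp_power_w=100.0, dose_per_level=50.0):
    """Travel speed (m/s) so each metre of row receives the required UV-c dose.
    Dose per metre = lamp_power / speed, so speed = lamp_power / required_dose:
    a higher infection level means a larger dose and hence a lower speed.
    All constants are illustrative assumptions."""
    required_dose = dose_per_level * infection_level  # J per metre of row
    return lamp_power_w / required_dose

def energy_per_metre(infection_level, lamp_power_w=100.0, motor_power_w=20.0,
                     dose_per_level=50.0):
    """Energy (J) to treat one metre: (lamp + motor power) * time per metre.
    Moving slowly with the lamps on dominates the consumption."""
    v = robot_speed(infection_level, lamp_power_w, dose_per_level)
    return (lamp_power_w + motor_power_w) / v
```

The model reproduces the behaviour described in the text: doubling the infection level halves the speed and doubles the energy spent per metre.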
In order to properly calibrate the resolution algorithms of our system, the behaviour of mildew needs to be simulated as closely to reality as possible. We used data from [2], which represent the evolution over time of the level of mildew in vineyards in 2007. Figure 2 shows the IGT2007 mildew behaviour curve and our estimation curve $\hat{f}(t)$. These data can be approximated by a time series in discrete or continuous representation.
$$\hat{f}(t) = \frac{c}{1 + b\,e^{-at}} \qquad (1)$$
Fig. 2. Real mildew behaviour IGT2007 [2] and the estimated logistic function $\hat{f}(t)$
To compute a and b, we take a point in the graph from the IGT2007 curve
(40, 0.468) and construct the following two Eqs. (2) and (3):
$$\frac{c}{1 + b\,e^{-40a}} = 0.468 \qquad (2)$$

$$\frac{\ln(b)}{a} = 83.054 \qquad (3)$$
After solving Eqs. (2) and (3), we obtain a ≈ 0.096 and b ≈ 2936. Thus $\hat{f}(t)$ is given by Eq. (4):

$$\hat{f}(t) = \frac{30}{1 + 2936\,e^{-0.096t}} \qquad (4)$$
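The fitted parameters can be checked numerically against the calibration point taken from the IGT2007 curve (a minimal verification sketch):

```python
import math

def f_hat(t, a=0.096, b=2936.0, c=30.0):
    """Estimated logistic mildew-evolution function, Eq. (4)."""
    return c / (1.0 + b * math.exp(-a * t))

# The fitted curve passes (approximately) through the calibration point
# (40, 0.468); small deviations come from rounding a and b.
print(round(f_hat(40), 3))   # ≈ 0.468
```

The curve starts near zero, rises through the calibration point and saturates at c = 30, as expected of a logistic growth model.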
To obtain a simulated disease behaviour similar to the one represented by function (4), we empirically tried several evolution probabilities that govern the transition of a plant's infection rate from its current level to the next higher one. We carried out several tests using the simulator. The retained probability function of the disease level transition is given by Eq. (5), where [level mildew] is the plant's current level of mildew and [last treatment] is the number of days elapsed since the last treatment. For the propagation of
4 Dynamic Scheduling
Bin-packing is a well-known operations research problem that can be used to model robotic task planning. It consists in filling bins with items while respecting the capacity of the bins, with the goal of minimizing the number of bins used. In our problem, bins correspond to missions, and items correspond to the treatment tasks performed during missions. The size of each mission is limited by the capacity of the battery, which provides the electric energy necessary for robot movement and for UV-lamp operation when performing treatment tasks. As the objective is to eradicate the disease from the greenhouse as soon as possible, the goal is to minimize the number of missions. Table 1 summarizes the analogy between our problem and the bin-packing problem. The mathematical model of this problem is detailed in [9]; the authors have already solved it in the static case using a genetic algorithm and an exact method, so the model is not detailed in this paper.
In this section, two genetic algorithms are proposed to solve the problem of treatment mission planning for the semi-static and dynamic cases. The dynamic case takes into account the evolution behaviour of mildew in the greenhouse. The robot performs all the treatments by visiting the rows selected within a mission in ascending order of their identification numbers (from the left to the right of the greenhouse).
The coding scheme of an individual in the GA is represented by a matrix, as shown in Fig. 3, where lines represent the treatment missions and columns the rows of the greenhouse. If a row j is to be treated during a mission i, the element ij is equal to 1. The power consumption of each mission (the product of the "mission" line vector and the "consumption" column vector) must not exceed the robot battery capacity, otherwise the individual is discarded. The evaluation of each chromosome also takes into account the energy consumed by the robot's trips to the charging station. The initial population is created with a greedy heuristic, which places the largest treatments first and fills the missions while respecting the capacity of the robot. Since the goal is to minimize the number of missions, the chromosome with the fewest missions is the best individual. Figure 3 also presents the single-point crossover between two parents, which produces two children.
Figure 4 shows how the mutation operation works in the proposed GA. Each child has a 60% chance of being mutated. During mutation, the algorithm randomly selects two lines of the chromosome matrix. Then, each element equal to 1 in each line has a 50% chance of being swapped with the corresponding element of the other line (the element with the same j). After each iteration of the GA, the best 10% of individuals are selected to be included in the new generation.
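The encoding, feasibility check, greedy initialisation and mutation described above can be sketched as follows. This is a simplified illustration under assumptions: the function names are ours, selection and crossover are omitted, and only the 60%/50%/capacity rules stated in the text are implemented.

```python
import random

def feasible(chromosome, consumption, capacity):
    """A chromosome is a list of missions, each a 0/1 vector over greenhouse
    rows; it is feasible only if every mission fits the battery capacity."""
    return all(sum(m[j] * consumption[j] for j in range(len(m))) <= capacity
               for m in chromosome)

def greedy_initial(consumption, capacity):
    """Greedy heuristic: place the largest treatments first, filling existing
    missions while respecting the battery capacity (first-fit decreasing)."""
    order = sorted(range(len(consumption)), key=lambda j: -consumption[j])
    missions, loads = [], []
    for j in order:
        for i, load in enumerate(loads):
            if load + consumption[j] <= capacity:   # fits an existing mission
                missions[i][j] = 1
                loads[i] += consumption[j]
                break
        else:                                       # open a new mission
            missions.append([0] * len(consumption))
            missions[-1][j] = 1
            loads.append(consumption[j])
    return missions

def mutate(chromosome, p_child=0.6, p_swap=0.5, rng=random):
    """With 60% probability, pick 2 missions; each treated row (element = 1)
    then has a 50% chance to be swapped with the corresponding element of
    the other mission."""
    if rng.random() >= p_child or len(chromosome) < 2:
        return chromosome
    i1, i2 = rng.sample(range(len(chromosome)), 2)
    for j in range(len(chromosome[i1])):
        if (chromosome[i1][j] == 1 or chromosome[i2][j] == 1) and rng.random() < p_swap:
            chromosome[i1][j], chromosome[i2][j] = chromosome[i2][j], chromosome[i1][j]
    return chromosome
```

Note that the mutation only moves a treatment between the two selected missions (column sums are preserved), so every row remains scheduled exactly once; only feasibility against the battery capacity must be re-checked afterwards.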
$$P_i = \frac{\hat{f}\left(3 + 5.5(i-1) + t_m\right)\alpha_i}{Nbr_{plants}/4} - M \qquad (6)$$
5 Experimentation
We consider a greenhouse containing 100 rows of strawberry plants; each row is 100 m long and contains 100 plants. A UV-c robot with an autonomy of 3 h and a charging time of 2.5 h is used to treat the mildew. We assume that under the initial conditions, 50% of the plants are infected with different levels of disease.
In the first step of the experiment, we launched several simulations with the DGA to calibrate its parameter α, adapting the model developed for vineyards to the evolution of mildew on strawberries. Then, we launched 5 simulations of each algorithm (DGA and GA) in a dynamic environment where the disease evolves at each simulation step. The time horizon of each simulation is 20 h, which is enough to treat all infected plants in the greenhouse. In both the GA and the DGA, the population size is 20 and the limit number of generations is 50. These parameters were determined empirically through several trials. Figure 7 plots the robot battery level as a function of time, comparing three curves with different values of the parameter α (36, 50 and 60). The decreasing segments of the curves correspond to treatment periods, and the increasing segments to battery charging. In order to increase the lifespan of the battery, we limit consumption to 80% of the battery's capacity, as recommended by experts [10]. To compare the curves in Fig. 7, we have added the line EL, which represents 20% of the battery charge. We therefore choose the DGA with α = 60 (DGA60) for the next simulations, because its curve respects the minimum energy level.
Figure 8a shows the evolution of the level of disease over time. There are two curves in the figure: the blue solid curve represents the level of mildew using DGA60, and the green dotted curve corresponds to the use of the GA. We can clearly observe that the GA allows the robot to finish the treatment of mildew in 1000 min, whereas it takes 1100 min using the DGA. However, using the GA is risky and does not allow the robot to operate autonomously. In fact, during the execution of the GA scenario, the grower had to restart the robot manually several times because it did not have enough energy in the battery to return to the charging station. Such extra energy consumption may occur when the actual level of mildew is higher than expected. We also notice that the level of disease increases from time to time in both graphs during the robot's charging periods.
Figure 8b shows two curves, for DGA60 and GA, drawing the battery energy level as a function of time. Both curves correspond to the average of several simulations for each algorithm. The GA scenario does not respect the battery capacity constraint: the robot uses more than 80% of its battery capacity. As shown in Fig. 8b, consumption reaches 100% of the battery capacity several times, which requires the intervention of the grower to bring the robot back to its charging station.
The average duration of a simulation with the GA is 16 min and 41 s, and that of DGA60 is 21 min and 36 s. This increase in computation time is due to the additional computation of the prediction of the mildew level.
In conclusion, even if the GA allows fewer treatments and shorter computation times, it remains a risky scenario because it cannot ensure the total autonomy of the robot and requires the intervention of the grower several times. The use of DGA60 can be considered more efficient because it respects the constraint of using only 80% of the total battery capacity, which allows full autonomy of the robotic treatment. In addition, DGA60 produces realistic scenarios and enables a possible real implementation.
6 Conclusion
This paper studied the dynamic task scheduling problem applied to the UV-c treatment of plants in horticulture. The difficulty lies in scheduling tasks to treat diseases with a dynamic evolutionary behaviour. The use of a simulator allowed testing our algorithms in the dynamic case. We improved a previously proposed Genetic Algorithm (GA) into a Dynamic Genetic Algorithm (DGA) allowing the autonomous execution of treatments while respecting the battery capacity constraint in the dynamic environment. The results provided by the DGA show better treatment accuracy with more compliance to the technical battery constraints and open the possibility of launching real-life horticultural tests.
As perspectives, we plan to add a preventive treatment that allows controlling the evolution and propagation of the disease. We will also introduce the case of multiple robots for several greenhouses, which will increase the number of active agents and allow the use of distributed intelligence such as the Contract Net protocol or potential fields.
Acknowledgment. This research was made possible thanks to €1.35 million of financial support from the European Regional Development Fund provided by the Interreg North-West Europe Programme in the context of the UV-Robot project.
References
1. Cheng, T., Kang, L., Ng, C.: Due-date assignment and single machine scheduling
with deteriorating jobs. J. Oper. Res. Soc. 55(2), 198–203 (2004)
2. Claude, M.: Mildiou de la vigne - bilan de la campagne 2007. In: Actualités Phy-
tosanitaires, pp. 99–105. IFV (2007)
3. Davis, L.: Handbook of Genetic Algorithms. CumInCAD, NY (1991)
4. Hassas, S.: Systèmes complexes à base de multi-agents situés. Mémoire
d’Habilitation à Diriger les Recherches, University Claude Bernard Lyon (2003)
5. Li, J., Wang, P., Geng, C.: The disease assessment of cucumber downy mildew
based on image processing. In: 2017 International Conference on Computer Net-
work, Electronic and Automation (ICCNEA), pp. 480–485. IEEE (2017)
6. Li, J.Q., Song, M.X., Wang, L., Duan, P.Y., Han, Y.Y., Sang, H.Y., Pan, Q.K.:
Hybrid artificial bee colony algorithm for a parallel batching distributed flow-shop
problem with deteriorating jobs. IEEE Trans. Cybern. 50, 2425–2439 (2019)
7. Li, S., Ng, C., Cheng, T.E., Yuan, J.: Parallel-batch scheduling of deteriorating jobs
with release dates to minimize the makespan. Eur. J. Oper. Res. 210(3), 482–488
(2011)
8. Mazar, M., Constant-Meney, V., Sahnoun, M., Baudry, D., Louis, A.: Simulation et
optimisation de la tournée des véhicules pour la collecte de biodéchets conditionnés
(2017)
9. Mazar, M., Sahnoun, M., Bettayeb, B., Klement, N., Louis, A.: Simulation and
optimization of robotic tasks for UV treatment of diseases in horticulture. Oper.
Res. 1–27 (2020). https://doi.org/10.1007/s12351-019-00541-w
10. Mei, Y., Lu, Y.H., Hu, Y.C., Lee, C.G.: A case study of mobile robot’s energy
consumption and conservation techniques. In: Proceedings of the 12th International
Conference on Advanced Robotics, ICAR 2005, pp. 492–497. IEEE (2005)
11. Oberti, R., Marchi, M., Tirelli, P., Calcante, A., Iriti, M., Tona, E., Hočevar, M.,
Baur, J., Pfaff, J., Schütz, C., et al.: Selective spraying of grapevines for disease
control using a modular agricultural robot. Biosyst. Eng. 146, 203–215 (2016)
12. Powell, W.B.: Approximate dynamic programming: lessons from the field. In: Sim-
ulation Conference, 2008, WSC 2008, Winter, pp. 205–214. IEEE (2008)
13. Sahni, J., Vidyarthi, D.P.: A cost-effective deadline-constrained dynamic schedul-
ing algorithm for scientific workflows in a cloud environment. IEEE Trans. Cloud
Comput. 6(1), 2–18 (2015)
14. Sakai, S., Iida, M., Osuka, K., Umeda, M.: Design and control of a heavy material
handling manipulator for agricultural robots. Auton. Robots 25(3), 189–204 (2008)
15. Sistler, F.: Robotics and intelligent machines in agriculture. IEEE J. Robot.
Autom. 3(1), 3–6 (1987)
16. Southall, B., Hague, T., Marchant, J.A., Buxton, B.F.: An autonomous crop treat-
ment robot: part I a Kalman filter model for localization and crop/weed classifi-
cation. Int. J. Robot. Res. 21(1), 61–74 (2002)
17. Takeda, F., Janisiewicz, W., Smith, B., Nichols, B.: A new approach for strawberry
disease control. Eur. J. Hortic Sci. 84(1), 3–13 (2019)
18. Tranier, J.: Vers une vision intégrale des systèmes multi-agents. Ph.D. thesis, Uni-
versité Montpellier II, Montpellier, Thèse de doctorat (2007)
19. Wilensky, U., Evanston, I.: NetLogo: Center for Connected Learning and
Computer-based Modeling, pp. 49–52. Northwestern University, Evanston (1999)
20. Wu, T., Powell, W.B., Whisman, A.: The optimizing simulator: an intelligent anal-
ysis tool for the military airlift problem. Unpublished Report. Department of Oper-
ations Research and Financial Engineering, Princeton University, Princeton (2003)
21. Xu, Y., Sahnoun, M., Mazar, M., Abdelaziz, F.B., Louis, A.: Packaged bio-waste
management simulation model application: Normandy region, France. In: 2019 8th
ICMSAO, pp. 1–5. IEEE (2019)
Understanding Data-Related Concepts in Smart
Manufacturing and Supply Chain Through Text
Mining
1 Introduction
The ever-increasing computing power, storage capacity, data availability, and faster internet connections have led to industrial systems being supported by new technologies such as virtual reality, the Internet of Things, and cloud computing. [1] identified nine main pillars for the realization of Industry 4.0, also known as smart manufacturing. One of these major enablers is big data analytics (BDA), which has arisen from the explosion of industrial data generation, now reaching around 1000 exabytes per year [2]. BDA has been shown to provide benefits both in production planning and control [3] and in supply chain management [4]. Indeed, data-driven companies have experienced increased productivity and profit [4], with paybacks of around 10–70 times their initial investments in data warehousing facilities [5].
In the past decade, academia and industry have widely studied how to manage and leverage these massive amounts of newly generated data to gain efficiency and profitability. This has led to the extensive use of existing data-related concepts as well as to the emergence of new ones. Data science is a research field that gathers multiple disciplines, such as statistics, artificial intelligence (AI), information theory and computer science, to manage and analyze large datasets. Hence, several related concepts like data mining (DM), machine learning (ML), and AI, which are strongly related to each other, have often been used interchangeably in practice. As stated by [6], the differences between these terms are not fully clear in the literature. This is also suggested by the authors of [7], who observe that there is little consensus around concepts such as big data. One reason for this may be that some of these ideas have evolved rapidly over short time spans, making them difficult to define. Additionally, the development of the data science field has been significantly influenced by marketing activities [7], which may also explain existing trends in the usage of different terminologies. For instance, the key concept of AI has been used increasingly over the past three years and has sometimes even been conflated with the terms data science and ML [8]. Yet, an incorrect use of concepts is likely to hinder the adoption of BDA in companies, as the understanding of the fundamental concepts and tools of BDA is an essential prerequisite for implementation.
In this context, the authors cited above outlined the need for fundamental statements and a coherent nomenclature of data science. As the related concepts evolve rapidly, the time dimension should be considered when exploring their definitions. Related work suggests that data-related concepts have not yet been defined or disambiguated through a text mining approach in the context of smart manufacturing and supply chain. However, the literature on data science should contain valuable information on the way the scientific community understands and perceives the different data-related concepts. Therefore, this paper has two main research objectives: first, to analyze the related literature with a text mining approach in order to build a consistent understanding of seven data-related concepts in the context of smart manufacturing and supply chain management; and second, to identify future trends in these two domains from the text mining analysis. The seven data-related concepts (presented in extenso for the sake of clarity) that will be analysed are the following: machine learning, data analytics, big data analytics, data mining, artificial intelligence, data engineering, and data management.
The remainder of this article is organized as follows: Sect. 2 reviews related work and
explains the contribution of this study. Section 3 illustrates the research methodology.
Section 4 presents the results. Finally, Sect. 5 concludes this research and proposes future
work perspectives.
In the past, few authors have tried to define and analyse the relations between the various concepts of data science from different perspectives. [7] defined big data and analytics with a particular focus on unstructured data such as text, audio, video, and social media. [9] proposed definitions of several key concepts such as data science, DM, ML, AI,
510 A. Nguyen et al.
big data, and analytics. The relations between these concepts and how they have been
understood and used over the years were also explored based on Google Trends [8, 9]. For
instance, the authors outlined that the term DM was more popular than data science until
2016 and had then evolved to become “a split concept between ML and data science
itself” [9]. Recently, [6] proposed a definition and disambiguation of the key concepts
of AI, DM, ML and statistics, based on their fundamental objectives. However, related
work suggests that data-related concepts have not been defined nor disambiguated yet
through a text mining approach in the context of smart manufacturing and supply chain.
Text mining can be defined as the use of statistical analysis, computational linguistics
and ML to extract information from text data [7]. It has been already used to explore
the literature of data-related topics applied in smart manufacturing or supply chain.
For instance, [10] used natural language processing to analyse about 4000 technical
abstracts and to identify groups of topics, perspectives, and research interests over the years.
[11] performed a study using VOSviewer to recognize research trends in the
field of supply chain resilience through the analysis of around 3000 research papers.
In their literature review, [3] employed VOSviewer to identify the keywords related to
most recent results on ML applied to production planning and control. Nevertheless,
even if these papers used text mining, none of them focused on providing definitions
or disambiguations of data-related concepts.
This paper builds on the hypothesis that the way in which authors have used the
data-related terminology over time may contain valuable information on key concepts
and trends. Thus, it aims to analyse the related literature based on a text mining approach
to build a consistent understanding of data-related terms. Additionally, future trends in
the context of smart manufacturing and supply chain are identified.
3 Methodology
The research approach adopted in this paper is detailed in Fig. 1 and consists of the
following steps: collect the research material; define the research objectives; perform
the analysis on a sample of 3858 scientific articles using text mining.
The research material was collected using the method proposed by [12] to perform
literature reviews. This method has been successfully applied in other studies whose
objective was to derive knowledge from scientific literature. For instance, [3] employed
it to select 93 articles to assess the state of the art of ML in production planning and
control; [13] analysed 23 case studies to review the state of Industry 4.0 in small and
medium enterprises, and [14] explored the integration of ERP systems in and between
organizations by reviewing 35 papers. However, these studies have all performed a full
text analysis of the paper sample, which greatly limits the number of papers that can
be considered. Instead, text mining allows handling a much larger number of articles.
The three scientific databases SCOPUS, ScienceDirect, and IEEE were searched during
the period 20–21 March 2020 using the following query string in titles, abstracts, and
keywords: (“machine learning” OR “data analytics” OR “big data analytics” OR “data
mining” OR “artificial intelligence” OR “data engineering” OR “data management”)
AND (“supply chain” OR “Industry 4.0” OR “smart manufacturing”). As Industry 4.0
was first introduced at the Hannover Fair in 2011, only papers published from 2011
Understanding Data-Related Concepts in Smart Manufacturing and Supply Chain 511
were selected (R1). Additionally, only results labelled as “Review Articles”, “Research
Articles” or “Book Chapters” in ScienceDirect (RSd) were considered. Finally, results
were merged, and duplicates were removed, resulting in a final sample of 3858 articles.
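The merge-and-deduplicate step can be sketched as follows; this is a minimal Python illustration in which the record fields and sample data are hypothetical, not the actual export format of the surveyed databases:

```python
def merge_and_deduplicate(result_sets):
    """Merge query results from several databases and drop duplicates,
    keyed by DOI when available, otherwise by normalized title."""
    seen, merged = set(), []
    for records in result_sets:
        for rec in records:
            key = rec.get("doi") or rec["title"].strip().lower()
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

# hypothetical exports from two databases sharing one record
scopus = [{"doi": "10.1/a", "title": "Paper A"}]
ieee = [{"doi": "10.1/a", "title": "Paper A"}, {"doi": None, "title": "Paper B"}]
sample = merge_and_deduplicate([scopus, ieee])  # two unique records remain
```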
This paper has two main objectives: first, to derive a common understanding of the
concepts of machine learning, data analytics, big data, data mining, artificial intelligence,
data engineering and data management, based on the way authors have used them; sec-
ond, to identify trends and research opportunities in the context of smart manufacturing
and supply chain. However, the term data engineering was eventually not analysed as
no paper using it in its title, abstract, or keywords was found.
The VOSviewer software was employed to analyse and visualize the text metadata
(i.e. titles, abstracts, and keywords) of the final sample of articles. It allowed defining a
lexical field of data science composed of 47 terms. It also enabled the identification of new
research trends in smart manufacturing and supply chain based on average publication
years. Additionally, the relatedness (based on the Jaccard coefficient and terms co-
occurrences in titles, abstracts and keywords) between the six concepts herein analysed
and the 47 data science terms was computed and the usage frequencies of the concepts
over time were also compared.
4 Results
This section analyses how the key concepts of ML, data analytics, big data, DM, AI,
and data management have been used and understood in the scientific literature from
2011 to 2020. Figure 2 displays the usage frequency in related publications of the six
concepts analysed. It enables identifying trends in the terminology of data science.
Fig. 2. Usage frequency of ML, data analytics, big data, DM, AI, and data management from
2011 to 2020
Table 1 provides for each of these concepts the ten most related notions in the lexi-
cal field of data in industrial management. The relatedness (Rel.) between two terms is
obtained by computing the Jaccard coefficient based on terms co-occurrences in titles,
abstracts, and keywords. It enables disambiguating the concepts by identifying key dif-
ferences between them. Thus, the analysis of Fig. 2 combined with Table 1 allows
understanding how the six different key concepts of data science have evolved over
time.
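The relatedness computation can be illustrated with a small Python sketch; the toy term-occurrence sets below are invented for illustration, while the actual computation is performed by VOSviewer over the full sample:

```python
def jaccard(docs_a, docs_b):
    """Jaccard coefficient between two terms: co-occurrences divided by
    the number of documents containing at least one of the two terms."""
    union = docs_a | docs_b
    return len(docs_a & docs_b) / len(union) if union else 0.0

# toy index: term -> IDs of papers whose title/abstract/keywords contain it
occurrences = {
    "machine learning": {1, 2, 3, 5},
    "neural network": {2, 3, 5},
    "blockchain": {4},
}
rel = jaccard(occurrences["machine learning"], occurrences["neural network"])
# rel == 0.75: the two terms co-occur in 3 of the 4 papers involved
```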
Machine Learning
The concept of ML shares several common notions with DM, which may explain
why both have often been used synonymously [6]. These notions, namely prediction,
classification, neural network, big data, and optimization, refer to tasks and techniques
for deriving knowledge from large amounts of data. However, ML has been
more often used together with data sources (e.g. internet of things, sensors) while DM
has been rather associated with the notion of decision-making. This suggests that ML
refers to the application of techniques while DM rather refers to a decision-support
process which is in accordance with the definition proposed by [6]. In addition, Fig. 2
highlights that the usage of the term ML has been continuously increasing over the past
decade and was the most popular in 2019 among the six concepts herein analysed.
Data Analytics
In the 2010s, the terms big data and data analytics have often been used together to form
the concept of “big data analytics”, as Table 1 shows. Also, they have been widely
associated with terms that relate to the generation or processing of large amounts of data
by new technologies (e.g. internet of things, cloud-computing, sensors). While big data
refers to these data streams, analytics rather refers to how to leverage them for decision
support, which is expressed by the terms decision-making and optimization.
Big Data
Since the term big data refers to the data streams that are generated by new technologies
such as the Internet of Things (IoT) and sensors, it is a core concept strongly related
to data analytics, AI, ML, data management and DM. Also, cloud-computing is found
to be close to big data, as it enables the processing of massive datasets. The number
of articles using this term in their title, abstract, or keywords has soared from 2011
onwards, exceeding 25% of the total number of articles in 2018. However, the concept
seems to have become less popular in 2019, which highlights a recent evolution of the
terminology used to describe these data streams.
Data Mining
The analysis of Table 1 suggests that the concept of DM has a meaning close to data
analytics and ML. A key difference lies in the fact that DM and data analytics serve a
specific purpose, namely decision-making, while ML relates to techniques for performing
tasks (e.g. prediction, classification). Figure 2 highlights that the term DM has
significantly lost popularity from 2015 onwards while the term data analytics has been
increasingly used in publications. This suggests that the two concepts have been
understood as synonyms and that data analytics has progressively replaced DM. Nevertheless,
it is noteworthy that the concept of data analytics is more likely to be associated with
technologies such as cloud-computing and the Internet of Things. Hence, it seems that
concepts related to DM mainly raise theoretical considerations, while data analytics is
closer to pragmatic concepts.
Artificial Intelligence
Table 1 highlights that AI is a transversal concept that includes the notions of ML,
information systems security (e.g. blockchain, security), and data technologies (e.g.
cloud computing). As such, it can be considered as the broadest concept that encapsulates
ML, data analytics, big data, DM, and data management.
Data Management
Data management is generally defined as the process that collects, stores, prepares, and
retrieves data in a secure way and ensures data quality [7]. This is also suggested by
its high relatedness to the notions of data sources (e.g. rfid, sensors) and security (e.g.
security, blockchain). It is therefore a complementary concept to DM and data analytics,
which refer to processes that analyse and extract knowledge from data. However, the
decline in use of this term as well as the high relatedness between AI and the notions of
data sources, security, and information technology solutions suggest that the concept of
data management has in practice been subsumed by the broader term AI.
Figure 3 provides a visualization of the network of the top 50 most frequent terms.
In such a network, each concept is represented by a bubble. The keyword frequency is
depicted by the size of the bubble and its label. Co-occurrence of keywords is represented
by lines, where thicker lines suggest higher co-occurrence frequencies. Regarding the
spatial distribution of terms in the network, keywords that are normally used together
appear closer to each other. Finally, the scale at the bottom of the image maps the
colour of concepts to their average publication year (AvgY). Additionally, the concepts
used in the queries were highlighted with a red frame. Findings suggest that terms such
as blockchain (AvgY 2019.04), digital twin (AvgY 2018.91), the Industrial Internet of
Things or IIoT (AvgY 2018.90) are the most recent topics in research.
Fig. 3. Network visualization for the top 50 most common author keywords
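The average publication year (AvgY) that drives the colour scale can be computed as sketched below; the toy paper records are hypothetical, and VOSviewer performs the real computation over the full sample:

```python
from collections import defaultdict

def average_publication_year(papers):
    """For each author keyword, average the publication years of the papers
    that use it (the colour scale of a VOSviewer-style overlay map)."""
    years = defaultdict(list)
    for paper in papers:
        for kw in paper["keywords"]:
            years[kw].append(paper["year"])
    return {kw: sum(ys) / len(ys) for kw, ys in years.items()}

# hypothetical records: year of publication and author keywords
papers = [
    {"year": 2019, "keywords": ["blockchain", "supply chain"]},
    {"year": 2020, "keywords": ["blockchain", "digital twin"]},
    {"year": 2016, "keywords": ["supply chain"]},
]
avg = average_publication_year(papers)  # "blockchain" -> 2019.5
```

A higher AvgY thus flags a keyword as a more recent research topic.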
Blockchain presents itself as the most recent topic. Additionally, it has a direct link
with the term traceability. This suggests that there may be a growing research interest
into the use of blockchain to improve the traceability through the supply chain or the
production process.
The fact that IIoT and digital twin are recent topics supports findings in the literature review
performed by the authors of [3], which suggests that research in ML applied to
manufacturing rarely uses IoT technologies to collect data. Additionally, the authors also state
that there are few studies exploring the adaptation of ML models to the dynamics of
the manufacturing system through new data. Hence, the fact that recent trends focus on
IIoT and digital twins suggests that researchers are working on the integration of IoT
technologies and ML as well as on the adaptation of ML models by using digital twins
to obtain updated data of the production process.
Predictive maintenance (AvgY 2018.74) was found to be a recent trend too. This
indicates that research in manufacturing is still strongly focused on harnessing
maintenance data to improve the overall production process. Furthermore, deep learning
(AvgY 2018.69) still being a recent topic points to a growing interest in the use of
neural network-based architectures to solve industrial problems. This is probably because
deep learning architectures normally excel at complex tasks in domains such as computer
vision or natural language processing.
Finally, it seems that concepts related to Industry 4.0 (AvgY 2018.52) and smart
manufacturing (AvgY 2018.25) are, by far, more recent than those related to supply chain
management (AvgY 2016.38) and supply chain (AvgY 2015.96). This may suggest
that researchers are widening the scope of concepts such as Industry 4.0 and smart
manufacturing to encompass topics related to supply chain.
5 Discussion
To further explore the disambiguation of concepts, this section provides a summary table
with the similarities and differences that have been identified through the text mining
approach presented above. In Fig. 4, the cells coloured in green describe similarities
between the six terms analysed, while those coloured in red outline key differences
between them. Additionally, as this research aims at enhancing the understanding of data
science concepts, the cells on the diagonal of Fig. 4 also provide proposed definitions
of the six terms, based on key references in the literature [7, 12–14]. This representation
was inspired by the study performed in [6].
From the results, it appears that the terms data analytics, ML, AI, and DM
have very close meanings. Nevertheless, some differences can be observed: it seems
that the term data analytics is better suited for applications of techniques that harness
data to serve a specific purpose of decision-making. For instance, a system able to
improve customer service through the analysis of call records should be referred to
as analytics [7]. Furthermore, data analytics employs a broad set of multidisciplinary
techniques to achieve its objective. Machine learning should be employed when referring
to algorithms that are able to improve performance on a specific task (e.g. using support
vector machines to pick the most appropriate type of production plan rescheduling [15]).
AI rather relates to systems able to behave like humans with respect to a specific task (e.g.
a conversational agent or chatbot aiming to provide human-like answers to questions
[16]). Finally, DM is mainly related to the use of statistical models or algorithms to
discover hidden patterns and generate knowledge (e.g. using association rule mining to
harness textual maintenance reports [17]).
Fig. 4. Proposed similarities, differences, and definitions for the explored data-related concepts
This study employed a text mining approach and the method of a systematic literature
review to analyse 3858 scientific articles on data science in smart manufacturing and
supply chain. This analysis has allowed deriving a common understanding and disam-
biguation of six key concepts in data science, namely machine learning, data analytics,
big data, data mining, artificial intelligence and data management from the way authors
have used them in the literature. Furthermore, recent research trends in Industry 4.0 and
supply chain management with data-driven approaches were identified.
Regarding the disambiguation of data-related concepts, it was found that authors
have progressively used broader terms such as AI that encapsulate several other key
concepts (e.g. machine learning and data management). The difference between ML,
DM and data analytics was discussed. The former was found to be strongly related to
algorithmic techniques such as neural networks to perform tasks such as classification
or clustering, while the two others were associated with decision-making processes.
Recent publications on Industry 4.0 and supply chain management have extensively
studied the use of blockchain to improve traceability, IIoT, and digital twins. Also,
predictive maintenance and deep learning applications were found to be still attracting
research interest.
Future work will mainly focus on the extension of this method to a general study
not exclusively focusing on Industry 4.0, smart manufacturing, and supply chain. Addi-
tionally, including industrial sources in the research material should provide further
information on global trends in using data-related concepts. For instance, this study has
revealed that the term data engineering is not very popular in academia, and yet it
is considered by industrial practitioners as an essential component of data science [18].
References
1. Ruessmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.:
Industry 4.0: the future of productivity and growth in manufacturing. Boston Consult. Group
9, 54–89 (2015)
2. Tao, F., Qi, Q., Liu, A., Kusiak, A.: Data-driven smart manufacturing. J. Manuf. Syst. 48,
157–169 (2018). https://doi.org/10.1016/j.jmsy.2018.01.006
3. Usuga Cadavid, J.P., Lamouri, S., Grabot, B., Pellerin, R., Fortin, A.: Machine learning applied
in production planning and control: a state-of-the-art in the era of industry 4.0. J. Intell. Manuf.
31, 1531–1558 (2020). https://doi.org/10.1007/s10845-019-01531-7
4. Waller, M.A., Fawcett, S.E.: Data science, predictive analytics, and big data: a revolution that
will transform supply chain design and management. J. Bus. Logist. 34, 77–84 (2013).
https://doi.org/10.1111/jbl.12010
5. Rainer, C.: Data mining as technique to generate planning rules for manufacturing control in
a complex production system. Springer (2013). https://doi.org/10.1007/978-3-642-30749-2
6. Schuh, G., Reinhart, G., Prote, J.P., Sauermann, F., Horsthofer, J., Oppolzer, F., Knoll, D.:
Data mining definitions and applications for the management of production complexity. In:
52nd CIRP Conference on Manufacturing Systems, pp. 874–879. Elsevier B.V., Ljubljana
(2019). https://doi.org/10.1016/j.procir.2019.03.217
7. Gandomi, A., Haider, M.: Beyond the hype: big data concepts, methods, and analytics. Int.
J. Inf. Manage. 35, 137–144 (2015). https://doi.org/10.1016/j.ijinfomgt.2014.10.007
8. Mayo, M.: The Data Science Puzzle - 2020 edn.
https://www.kdnuggets.com/2020/02/data-science-puzzle-2020-edition.html
9. Mayo, M.: The data science puzzle, explained.
https://www.kdnuggets.com/2016/03/data-science-puzzle-explained.html/2
10. Sharp, M., Ak, R., Hedberg, T.: A survey of the advancing use and development of machine
learning in smart manufacturing. J. Manuf. Syst. 48, 170–179 (2018).
https://doi.org/10.1016/j.jmsy.2018.02.004
11. Bevilacqua, M., Ciarapica, F.E., Marcucci, G.: Supply chain resilience research trends: a
literature overview. IFAC-PapersOnLine 52, 2821–2826 (2019).
https://doi.org/10.1016/j.ifacol.2019.11.636
12. Mitchell, T.: Machine Learning. McGraw-Hill, New York (1997)
13. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall Press,
Harlow (2009)
14. Tiwari, S., Wee, H.M., Daryanto, Y.: Big data analytics in supply chain management between
2010 and 2016: Insights to industries. Comput. Ind. Eng. 115, 319–330 (2018).
https://doi.org/10.1016/j.cie.2017.11.017
15. Wang, C., Jiang, P.: Manifold learning based rescheduling decision mechanism for recessive
disturbances in RFID-driven job shops. J. Intell. Manuf. 29, 1485–1500 (2018).
https://doi.org/10.1007/s10845-016-1194-1
16. Leong, P.H., Goh, O.S., Kumar, Y.J.: An embodied conversational agent using retrieval-based
model and deep learning. Int. J. Innov. Technol. Explor. Eng. 8, 4138–4145 (2019).
https://doi.org/10.35940/ijitee.L3650.1081219
17. Grabot, B.: Rule mining in maintenance: analysing large knowledge bases. Comput. Ind. Eng.
139, 1–5 (2018). https://doi.org/10.1016/j.cie.2018.11.011
18. Dhungana, S.: On building effective data science teams. https://medium.com/craftdata-labs/
on-building-effective-data-science-teams-4813a4b82939. Accessed 16 May 2020
Benchmarking Simulation Software Capabilities
Against Distributed Control Requirements:
FlexSim vs AnyLogic
1 Introduction
The advent of the Industry 4.0 paradigm introduces a set of information and communica-
tion technologies that allow both information processing to be distributed, and decision-
making to be decentralized over several autonomous and intelligent production entities
including smart manufacturing assets (machines, robots, material handling devices, etc.),
augmented operators and intelligent products [1]. This distribution/decentralization
particularly encourages the design and development of distributed, product-driven control
architectures, where intelligent products can play more active roles in operational and
decision-making processes [1].
As manufacturers are often reluctant to experiment with new control architectures
directly on their production systems [2] mainly due to risk aversion considerations (loss
of production capacity, functionality, quality, performance, etc.), they prefer to first
assess the control architecture using simulation before implementing it at full scale. Indeed,
simulation is a widespread practice that offers a methodology and a set of tools [3] to test,
evaluate, compare and validate different design alternatives at lower costs and almost
without risk. However, as will be discussed in Sect. 2, there is still a lack of papers
that provide guidelines to benchmark the capabilities of available simulation software
against the requirements of distributed product-driven control, in order to select the
simulation software that offers the set of capabilities that best meet these requirements.
The aim of this paper is then to provide such guidelines, based on the
benchmarking of two representative simulation software packages: FlexSim and AnyLogic.
FlexSim (https://www.flexsim.com/) is considered due to its high ranking [4], and as a
representative of discrete event simulation software. AnyLogic (https://www.anylogic.
com/) is considered as a representative of multi-agent based modelling and simulation
software [5].
The remainder of the paper is organized as follows: Sect. 2 reviews the related works
with respect to the use of simulation in distributed manufacturing control. Then, Sect. 3
provides an analysis of distributed control requirements, which are illustrated on a case
study (cf. Sect. 4) and further matched against the capabilities of FlexSim (cf. Sect. 5.1)
and AnyLogic (cf. Sect. 5.2). Finally, a conclusion provides a comparison of the strengths
and weaknesses of each software, and some future works are presented.
performance assessment shows that very few of them exist. None of the above-mentioned
references provides an analysis of the capabilities of available simulation software to
meet requirements of distributed product-driven control. Therefore, this paper is an effort
to fill in this gap. The paper compares the strengths and weaknesses of two available
simulation software with respect to the implementation of distributed, product-driven
control. FlexSim is considered due to its high ranking, and as a representative of discrete
event simulation software [4]. AnyLogic is considered as a representative of multi-agent
based modelling and simulation software [5]. Both software packages provide trial and
evaluation versions, available online, that can be used to implement the case study considered in
Sect. 4.
[Fig. 1: schematic of the distributed control requirements, showing products (#1, #2), resources (#1–#3), a decisional process with a scheduling expert, the D/I/K informational levels, and the numbered items (1)–(6) referenced in the requirements below.]
1. Production entities (item numbered (1) in Fig. 1): simulators shall be able to
consider several features of several types of production entities:
(b) The resources that offer different services to the products (e.g. transformation
services for machines, maintenance and quality control as support services,
transport and storage services for material handling systems);
(c) Decision entities, human and/or artificial, to synchronize, coordinate and
perform analysis and decision-making processes.
2. Informational structures (item numbered (2) in Fig. 1): simulators shall enable
considering entity attributes and properties related to product and process specifi-
cations (e.g. bills of materials, routings, machining and process parameter settings,
tolerances, services required to obtain a given product, etc.), as well as indicators and
descriptors of the evolution of manufacturing processes (e.g. key performance indica-
tors, statuses and reports describing normal, tolerable, satisfactory and/or abnormal
operating conditions). To achieve product driven control, simulators shall enable
intelligent products to handle informational structures that are compliant with a DIK
model [16]:
(a) D (for Data) represents the properties and statuses of production entities and
processes, as well as the events generated by, or occurring to, production enti-
ties in interaction with each other and within their environment. Data can be
considered as raw facts, without meaning, issued from measurements (e.g. data
acquisition from sensors, such as velocity, temperature, pressure, etc.);
(b) I (for Information) obtained by some data processing to add meaning to raw
data, for example to have answers to questions, such as “what” event happened,
“when” and “where” did it happen, “who” or “what” generated it, “how” is it
described and eventually “who” is in charge of dealing with it;
(c) K (for Knowledge) represents expertise and can be seen as groups of information
that are linked by semantic relations.
(a) Direct interaction (item numbered (4) in Fig. 1), for example using direct
communication channels and exchanges of messages.
(b) Indirect interaction (item numbered (5) in Fig. 1), using the environment and
communication channels such as blackboards or stigmergy [17].
(c) More complex interactions, such as negotiation protocols, should be enabled.
5. Functioning modes (item numbered (6) in Fig. 1): to achieve product-driven
control, simulators shall enable intelligent products to represent and be aware of all
operational settings of the manufacturing system, in terms of normal, degraded and
disturbed operational conditions.
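As an illustration of the indirect interaction requirement, a blackboard can be sketched as a shared store that resources write to and products read from; this is a hypothetical minimal model, not tied to any specific simulator:

```python
class Blackboard:
    """Indirect product/resource interaction via a shared blackboard,
    as opposed to direct message exchange between entities."""

    def __init__(self):
        self._entries = {}

    def post(self, key, value):
        # e.g. a resource publishes one of its indicators
        self._entries[key] = value

    def read(self, key, default=None):
        # e.g. an intelligent product consults an indicator
        return self._entries.get(key, default)

bb = Blackboard()
bb.post(("M1", "reliability"), 0.97)  # resource M1 publishes its indicator
r = bb.read(("M1", "reliability"))    # a product reads it before deciding
```

Direct interaction would instead route a message to a specific recipient; the blackboard decouples sender and receiver in both space and time.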
It is worth noting that production entities and informational structures are common
to many production systems and thus easily handled by simulators. However, decision-
making processes, interactions and functioning modes are rather business dependent,
specific to each production system, and particularly related to the distribution design
choices and mechanisms of the suggested control architectures. The implementation of
these requirements will challenge the capabilities of simulation software in terms of
ease of use, ease of configuration, ease of custom code programming and existence of
pre-built libraries. The case study of Sect. 4 is built to evaluate the above requirements
through different interactions between entities.
4 Case Study
The case study comprises a raw material automated storage and retrieval
system (AS/RS), two equivalent machines M1 and M2 and an AS/RS to store work in
progress (WIP) and finished products. As the main purpose of the paper is not focused on
complex product design and manufacturing, product routings with only one operation (to
be performed interchangeably either on machine M1 or machine M2) are considered.
Machines are subject to failures, and products are subject to quality defects. The scrap
areas receive WIP products if they do not meet quality requirements. Decision and quality
control points are located on the main conveyor loop as milestones, so that intelligent
products can check updated indicators and make decisions.
When a product leaves a production resource (M1 or M2), it crosses a quality control
point where it acquires indicators about its quality. According to this data, and using a
decision mechanism (cf. Sect. 4.2), the product can make one of four possible product
decisions (PD) at quality control points:
These decisions are taken using a mechanism such as the one detailed in the next
paragraph.
The decision mechanism is based on the Analytic Hierarchy Process (AHP), applied
at each decision and quality control point in reaction to availability and reliability
disturbances. Starting from the raw material AS/RS, the global objective for each product is
to reach the finished products AS/RS. As in [18], three types of criteria are considered,
related to: a) production costs, b) processing and transportation times and c) machine
reliability. Each criterion type is associated with a set of indicators. A product applies
AHP at decision points to select a decision among PD1 to PD4, and at quality control
points to select a decision among PD5 to PD8. First, the product acquires the indicators
associated with the type of decision point. Then the product performs pairwise
comparisons between decisions according to each indicator. Next, comparisons of
decisions according to indicators are aggregated into comparisons of decisions according
to criteria. Finally, the decision that best suits the global objective is selected. We refer
the reader to [18] for details on the different implementation steps.
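A minimal sketch of the AHP aggregation is given below; the Saaty-scale comparison values, criteria weights and the three-decision setup are illustrative only, not the paper's actual indicator sets (see [18] for those):

```python
import math

def priorities(matrix):
    """Approximate an AHP priority vector by the normalized geometric
    means of the rows of a pairwise comparison matrix."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparisons of three decisions under two criteria
cost_cmp = [[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]]
time_cmp = [[1, 1 / 2, 1 / 4], [2, 1, 1 / 2], [4, 2, 1]]
criteria_weights = {"cost": 0.6, "time": 0.4}
local = {"cost": priorities(cost_cmp), "time": priorities(time_cmp)}

# Aggregate local priorities into global scores and pick the best decision
scores = [sum(criteria_weights[c] * local[c][i] for c in criteria_weights)
          for i in range(3)]
best = scores.index(max(scores))  # index of the selected decision
```

Each comparison matrix is reciprocal (a_ji = 1/a_ij), and the global scores sum to one, so the selected decision is simply the one with the highest aggregated priority.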
Production Entities: FlexSim offers a rich and user-friendly library containing sim-
ulation model objects that can be used to design simulation models. A source node is
used to simulate and customize the product arrivals, and to assign the processing cost
and time indicators to products as flow items. Three queuing objects for raw material,
WIP and finished product inventories are constructed for the waiting areas. Conveyors
are constructed to move the flow items. Machines are simulated using two production
servers. Two sink nodes are used to simulate the scrapping areas 1 and 2.
Informational Structures: FlexSim allows different ways to store and process data
and information. It can route items through different resources based on data embedded
in a flow item since its creation at source nodes. It can connect with external data
sources (e.g. MS Excel spreadsheets) and databases, such as ERP, MES, HMI, PLC and
OPC servers, and exchange data using an Open Database Connectivity1 (ODBC) connection.
FlexSim defines specific indicators for each model object. For example, the holding
costs associated with inventory are defined in the queuing objects. All indicators are
communicated by objects and stored in two global tables named “Indicators at Di” and
“Indicators at Qj”. The role of the tables is to provide data to decision and quality
control points in order to perform the AHP calculations. Global tables enable indirect
communication between products and production resources. When a decision is assigned
to a product, FlexSim stores this decision in a global list that can be exported to Excel.
Interactions: In FlexSim, library objects are connected to define different process flows
and to allow the exchange of physical flows between model objects. The flow of informa-
tion between objects can be implemented by sending direct messages on state conditions.
Custom communication protocols between objects can be programmed on the FlexSim
snippet using either FlexScript (FlexSim’s proprietary language) or C++. Products are
considered as inert flow items that move through the model objects according to their
predefined routings, without having active roles or interactions with other entities. At
decision and quality control points, the supervisor can change the criteria and weightings
of the AHP mechanism using FlexSim interfaces.
Decision-Making: As products are inert flow items, they cannot directly process
information or perform calculations, and consequently cannot be directly endowed with decision
mechanisms. To solve this problem, the AHP mechanism is implemented on decision
and quality control points (i.e. outside the product). When a product reaches a decision
or quality control point, a customized logic encoded on each point allows updating the
product routing. Such logic can be programmed using FlexScript or C++ programming
languages. FlexSim can directly compile custom code written in C++ via its snippet.
It can create .dll2 files in C++ and link them to FlexSim. It can connect with other
programming environments and languages, such as Python and R.
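Since the decision logic lives at the control points, the AHP step they run can be sketched in plain Python (outside FlexSim); the criteria, pairwise judgements and indicator values below are illustrative, not taken from the paper's experiments:

```python
import math

def ahp_priorities(pairwise):
    """Approximate AHP priority vector via the geometric-mean method."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Supervisor-editable pairwise comparison of the criteria (cost vs time):
# here cost is judged three times as important as time.
weights = ahp_priorities([[1, 3],
                          [1 / 3, 1]])

def score(alternative, weights):
    # Lower cost/time is better, so invert the raw indicator values.
    return sum(w / v for w, v in zip(weights, alternative))

m1 = (2.5, 4.0)   # (holding cost, processing time) read from the global tables
m2 = (1.8, 6.0)
best = "M1" if score(m1, weights) > score(m2, weights) else "M2"
```

Changing the pairwise matrix is exactly the lever the supervisor manipulates through the FlexSim interfaces.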
Functioning Modes: FlexSim enables defining customized indicators to represent
product quality, and customized routines to change the values of these indicators. For
1 ODBC is a standard application programming interface (API) for accessing database management systems (DBMS).
2 Dynamic-link library (DLL) is Microsoft’s implementation of the shared library concept in the
Microsoft Windows and OS/2 operating systems.
528 A. Attajer et al.
AnyLogic enables modelling all entities (products, resources, storage and scrapping areas) as agents through its agent-based modelling paradigm.
Informational Structures: Product cost and time indicators are defined as productA-
gent(s)’ related parameters since their creation. Each product knows its production cost
and processing time on M1 and M2. Before a product is processed on a machine, the
quality indicator qp does not contain any value. AnyLogic assigns the value of the indi-
cator qp embedded in productAgents using a set of statistical distribution functions to
simulate product defects. ProductAgents can consult the indicators of all other agents at any time. The native Java environment supports custom Java code, external libraries,
and external data sources.
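As a language-neutral sketch (the actual implementation would be AnyLogic/Java), the deferred assignment of the quality indicator qp might look as follows; the defect rate, seed and attribute names are illustrative:

```python
import random

class ProductAgent:
    """Stand-in for an AnyLogic productAgent: cost and processing
    times are known from creation, qp only after processing."""
    def __init__(self, cost, proc_times):
        self.cost = cost                  # known since creation
        self.proc_times = proc_times      # e.g. {"M1": 4.0, "M2": 6.0}
        self.qp = None                    # no value before processing

    def process_on(self, machine, defect_rate=0.1, rng=random):
        # Draw the quality from a statistical distribution to simulate
        # defects; the Bernoulli model and its rate are assumptions.
        self.qp = 0 if rng.random() < defect_rate else 1

rng = random.Random(42)                   # seeded for reproducibility
p = ProductAgent(cost=12.0, proc_times={"M1": 4.0, "M2": 6.0})
assert p.qp is None                       # unset before processing
p.process_on("M1", defect_rate=0.1, rng=rng)
```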
Table 1. Summary of simulation software capabilities for product-driven distributed control requirements, rating AnyLogic and FlexSim on each criterion (legend: Good, Fair, Poor). The criteria are: production entities (products; resources; decision-making entities, human and/or artificial); interactions (product–product; product–production resource; product–human; production resource–production resource); functioning modes and disturbances (machine disturbances; product disturbances); intelligence level of the entities; association of informational structures with production entities; and decision-making (the last three except the product for FlexSim).
We are considering an extension of this work that takes into account several types of disruption in the production environment (e.g., late delivery of raw materials, conveyor breakdowns) in order to further develop our model. Several products could even be interconnected (a network of products able to communicate with each other); in this case, the products could share their experiences when making a decision and update their set of actions.
A Proposal to Model the Monitoring
Architecture of a Complex Transportation
System
Issam Mallouk1,2(B) , Thierry Berger1 , Badr Abou El Majd2 , and Yves Sallez1
1 Université Polytechnique Hauts-de-France, LAMIH UMR CNRS 8201,
59313 Valenciennes, France
{Issam.Mallouk,Thierry.Berger,Yves.Sallez}@uphf.fr
2 Mohamed V University, FSR, CeReMAR, LMSA Lab, Rabat, Morocco
Issam_Mallouk@um5.ac.ma, Abouelmajd@fsr.ac.ma
1 Introduction
Nowadays, enterprises in the transportation sector must face huge competition and must
deal with important societal, economic and environmental issues [1]. Fleet operators aim
to maintain and increase the availability and reliability of the fleet to optimize the ratio
between operation and maintenance. Many studies have concluded that maintenance accounts for more than 60% of the overall life-cycle costs of complex moving systems (e.g., planes, trains, ships…) [1]. In this context, CBM (Condition-Based Maintenance) and PHM (Prognostics and Health Management) are essential approaches to improve the performance of a fleet [2]. Both approaches rely on an efficient monitoring architecture that diagnoses the failures and degradation of the vehicles' subsystems and evaluates their impact on the availability and reliability of the fleet. Although many works propose monitoring architectures in different domains (manufacturing, transportation…), they often rely on dedicated tools, and a generic model is lacking. To address this issue, this paper proposes a generic model for designing efficient monitoring architectures.
The paper is organized as follows. Section 2 presents a short literature review of monitoring architectures. Section 3 is dedicated to the proposition of a holonic model able to describe the information chain and the different decisional processes associated with monitoring architectures. In Sect. 4, this model is applied to the monitoring of a truck fleet. Finally, conclusions and prospects are offered in the last section.
2 Motivations
This section presents a brief analysis of the existing literature in the domain of monitoring architectures. A monitoring architecture aims to detect, localize and identify the problems that occur in a system [3]. Two levels are generally considered: the “vehicle” level, relative to the moving systems (i.e. the vehicles), and the “fleet” level (e.g. a maintenance centre), where stakeholders (e.g., fleet manager, fleet maintainer) can make decisions to
improve the value chain associated with the transportation process [4]. In the rest of
the paper, it is assumed that a fleet is composed of vehicles, which are complex moving
systems that can be decomposed into subsystems (which can themselves be decomposed
into subsystems).
In the domain of condition monitoring and diagnostics of machines, the ISO 13374 standard provides a generic model of a monitoring architecture based on six processing layers [5]. These layers progress from raw data acquisition to maintenance advisories. A fundamental part of any monitoring architecture is the “fault diagnosis” stage, responsible for detecting faults and isolating the faulty components to be repaired. This standard is used as a guideline for monitoring stationary machines in industrial processes as well as vehicles [6]. Depending on the distribution of these six layers between the “vehicle” level and the “fleet” level, several monitoring architectures can be considered.
In [7], the author proposes a typology with four types of architectures: “centralized”,
“edge-centralized”, “decentralized” and “decentralized and cooperative”.
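As a rough sketch, the four typologies can be seen as different splits of the six ISO 13374 layers between the vehicle and the fleet; the exact cut points used below are one interpretation of the typology, not values prescribed by the standard:

```python
# ISO 13374 processing blocks, in order (layer names per the standard).
LAYERS = ["data_acquisition", "data_manipulation", "state_detection",
          "health_assessment", "prognostics_assessment",
          "advisory_generation"]

def split_architecture(kind):
    """Return (on_vehicle, at_fleet) layer lists for a typology.
    The cut points are an assumption: centralized keeps everything at
    fleet level; decentralized variants push layers on the vehicle."""
    cut = {"centralized": 0,
           "edge-centralized": 2,            # intermediate edge nodes
           "decentralized": 4,               # embedded diagnosis units
           "decentralized-cooperative": 4}[kind]
    return LAYERS[:cut], LAYERS[cut:]

on_board, off_board = split_architecture("decentralized")
```

The cooperative variant differs from the plain decentralized one not in where the layers sit, but in whether the on-board units exchange observations.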
In centralized architectures, the six layers introduced above are located at the “fleet” level, which is responsible for collecting and processing all raw data from the vehicles [7]. Most of these architectures use Big Data technologies [8]. However, this type of architecture does not allow diagnosis algorithms to be executed in real time (primarily because of latency and data throughput) and, to limit the data volume, does not consider relevant information on the local context of the vehicle (e.g., external temperature, specific functioning mode) [9].
The edge-centralized architecture is more recent and is characterized by the introduction of intermediate “cloud node networks” for computing and communication using edge-computing technologies [10]. This architecture relies on the creation of an intermediate level (between the vehicle level and the fleet level) ensuring the storage and processing of raw data. It can be considered a technological evolution of the centralized architecture.
In decentralized architectures, embedded diagnosis units (i.e. located on the vehicle) support the fourth ISO layer and perform fault diagnosis of some of the vehicle's subsystems. However, these units operate independently, use only limited observations of their subsystems and do not communicate with each other [11]. Such architectures are therefore unable to perform the cross-analysis needed to discriminate errors and can lead to erroneous diagnoses [7].
In the decentralized and cooperative architecture, the embedded diagnosis units at the same level cooperate with each other to enrich their local observations, take into account the context of the subsystems and thus provide a more robust diagnosis [12]. This last architecture was successfully applied by our team to the fault diagnosis of the door systems of a railway transportation system [13]. In [7], this architecture was also applied to the monitoring of a fleet of trains.
In recent works [14], our team has proposed an approach to model the information chain from “product” to “stakeholders”. This approach is based on a holonic view of the product and a decision-making view of the processing of DIK (i.e. data, information, and knowledge) collected on the product and its operational context. The approach is technologically agnostic and focuses on the final decision maker (i.e. the stakeholder), so the choice of a specific type of architecture (i.e., centralized, decentralized…) can be made in a second stage. In the next section, the generic model associated with this approach is adapted to support the monitoring architecture of a fleet of vehicles.
3 Proposition
– Requirement #1: The model must be able to deal with the complexity of a fleet of
vehicles (e.g., trucks, trains, planes…) composed of several subsystems;
– Requirement #2: The model must be sufficiently generic to deal with the various needs
of the stakeholders implied in the fleet management;
– Requirement #3: The model must be technologically independent.
The next paragraphs propose a modelling approach able to fulfil these requirements.
– The cargo on which the process is applied (e.g. freight transported from A to B by a
truck);
– The human in the vehicle, generally called the ‘driver’, who plays several roles during the transportation process: obviously, the main role relates to driving the vehicle; a second role relates to the loading/unloading of the cargo; a third role, as local stakeholder, concerns interventions on the vehicle (i.e. basic maintenance operations).
A Proposal to Model the Monitoring Architecture of a Complex Transportation System 535
– The task that defines how the process is managed for a specific function FPj . It is
characterized by prescribed procedures (e.g., how the cargo must be transported,
loaded and unloaded) and some performance criteria (e.g., transportation time, energy
consumption, safety of the cargo);
– The environment in which the transportation process occurs. Two main facets are
considered: physical (e.g., external temperature, hygrometry) and non-physical (e.g.
freight transportation legislation).
The secondary functions aim to enhance the performance criteria (e.g., fleet availability via predictive maintenance of the vehicles, monitoring of the freight). As described in Fig. 1, these functions are handled by support systems (3) located at the “vehicle” level (i.e. internal support systems) and at the “fleet” level (i.e. external support systems).
Fig. 1. Primary and secondary functions of the transportation process: the cargo (1), the context Ci of vehicle Vi (2), the internal and external support systems handling the secondary functions (3), and the stakeholders (4) at the vehicle and fleet levels, who can trigger interventions on the vehicle.
At the “vehicle” level, the internal support systems exploit the raw data flow collected on the transportation process by the sensors and instrumentation associated with the different subsystems (e.g. data sent by the Electronic Control Unit associated with the engine). As previously explained, in the decentralized and cooperative architecture, the internal support systems can provide refined and accurate information (i.e. diagnoses) to the external support systems (e.g. the remote maintenance centre).
At the “fleet” level, the external support systems, composed of diagnosis expert resources (human and/or artificial), deliver expertise results to the involved stakeholders (4). External support systems are immersed in a context relevant to the “fleet” level (e.g., transportation regulations, financial aspects). Taking into account the generated expertise results, the stakeholders can then schedule interventions on the vehicle (e.g. replacement of a part that is wearing out).
The generic character of the primary and secondary functions fulfils Req. #2. The modelling of the secondary functions is detailed in the next section.
Fig. 2. Holonic decomposition of the fleet over four levels: the fleet holon H0 (level 0); the vehicle holons H1 and H2 (level 1), whose secondary functions fs1n and fs2n perform vehicle diagnosis; the subsystem holons (level 2), e.g. H11 (wheels) and H12 (engine) for vehicle #1, H21 and H22 for vehicle #2; and the component holons (level 3), e.g. H111–H115 and H211–H212. Each holon Hi is associated with a system Vi and its context Ci; stakeholders interact with the holarchy at the fleet and vehicle levels.
fsin considered (fsin ∈ Fsi, with Fsi the set of all secondary functions for Hi) associated with the system Vi. The latter and its context Ci constitute the holon body.
At each level i, a holon autonomously supports a decisional process relative to the considered secondary function. In addition, the holon interacts with other holons at the same level and (when relevant) with holons located at levels i + 1 and i − 1, through a collaboration space. Within this space, only the concerned holons (at the same, upper and lower levels) can coordinate to optimize their decisions. The optimization strategy can be based on numerous principles from the literature, e.g. from the multi-agent community. Depending on the complexity, this cooperation can be realized through, for example, simple data sharing or a more complex distributed decision-making process.
In the holarchy represented in Fig. 2, the vehicle is composed of several subsystems (e.g., engine, wheels, undercarriage…), and at each holon Hi a diagnosis process is supported by a secondary function fsin, giving rise to recursive diagnostic structures. To perform its activity, each holon exploits the data collected at its level on the vehicle subsystems and their respective contexts, and applies a decisional process; this process is detailed in the next paragraph.
– D (for Data) are raw facts without meaning issued from measurements (e.g.,
temperature of the environment, pressure of the vehicle tyres).
– I (for Information) are obtained by adding informative details to data (D), such as
“when”, “where”, “who”, “how”, “what” (e.g., vehicle (what); in a warehouse (where);
at a specific time (when)).
– K (for Knowledge) can be seen as groups of information (I) that are linked by semantic
relations (e.g. engine V12 used in vehicle V1 in an operational context where the
external temperature of the environment is 35 °C).
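One minimal way to encode the D → I → K enrichment described above, with field names that are assumptions for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Data:                       # raw fact, no meaning
    value: float                  # e.g. a measured 35.0

@dataclass
class Information(Data):          # data plus "what/where/when" details
    what: str = ""
    where: str = ""
    when: str = ""

@dataclass
class Knowledge:                  # information linked by a semantic relation
    facts: list = field(default_factory=list)
    relation: str = ""

temp = Data(35.0)                 # D: 35.0 means nothing by itself
ctx = Information(35.0, what="external temperature",
                  where="engine V12 of vehicle V1",
                  when="2020-07-01T12:00")        # I: contextualised
k = Knowledge(facts=[ctx], relation="operational-context-of")  # K
```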
According to Rasmussen's approach [18], three levels can be distinguished for the decision-making processes:
– At the lower level, a reactive behaviour (or skill-based behaviour) can be used to generate alarms by exploiting raw data (D). For example, a vehicle can send an alarm in case of a problem with the cargo (e.g. a break in the cold chain for perishable goods).
– At the mid-level, a rule-based behaviour can exploit different information (I) sources to generate refined information. For example, a model-based approach can be used by an embedded diagnosis system to identify suspect components if a failure occurs on a piece of vehicle equipment.
– At the higher level, knowledge (K) can be used to improve the understanding of the situation. For example, a detailed analysis of the transportation context can lead a human expert to explain a problem on the vehicle.
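The three behaviour levels can be sketched as handlers consuming D, I and K respectively; every threshold, rule and message below is invented for illustration:

```python
def skill_based(raw_pressure_bar, limit=9.0):
    # Reactive (skill-based): alarm directly on raw data (D).
    return "ALARM" if raw_pressure_bar > limit else "OK"

def rule_based(info):
    # Rule-based: rules over contextualised information (I),
    # e.g. a model-based lookup of suspect components.
    if info.get("symptom") == "door stuck" and info.get("where") == "door system":
        return ["actuator", "lock sensor"]
    return []

def knowledge_based(knowledge):
    # Knowledge-based: combine pieces of knowledge (K) to explain
    # the situation rather than merely flag it.
    if "hot climate" in knowledge and "pressure high" in knowledge:
        return "pressure rise explained by temperature, not a leak"
    return "no explanation"
```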
The two models taken as references (i.e. DIK and Rasmussen) are generic and technologically agnostic, thus fulfilling the third requirement (Req. #3). They make it possible to define both the nature of the informative details (DIK) and the cognitive levels of the decisional process. In this context, matching theoretical fields (e.g., AI, analytics, ontology…) and associated technologies (e.g. embedded computing, edge computing, cloud computing…) can be exploited.
In the next section, the proposed model is instantiated through a use case of truck fleet monitoring.
4 Use Case
The use case concerns a fleet of vehicles transporting hazardous substances. The operational information relative to the transport of dangerous substances is provided by STMF [19], a leading Moroccan company in this sector. The project is currently ongoing; only an overview of the planned developments is given, and no implementation details are provided.
Ensuring a high level of vehicle reliability is essential for properly accomplishing the delivery missions. A minor incident such as a tyre burst can cause the detachment of the tank containing the hazardous substance, with dramatic consequences for the transport company, the other road users and the environment. Ignoring or failing to correctly set the tyre pressure may lead to accidents and can affect the vehicle's fuel consumption and tyre lifespan [20].
With respect to the modelling approach presented in the previous section, the primary function FP1 “transport cargo from A to B” is considered with:
In this application, the focus is on the secondary functions relative to the vehicle tyres. Following Fig. 2 and the holonic approach, several levels are considered, as shown in Fig. 3:
At the “tyre” level (Level #3), for each tyre a secondary function (e.g. fs111n for the holon H111) performs monitoring via raw temperature/pressure data. The direct tyre pressure monitoring system employs temperature/pressure sensors on each wheel. Current TPMS (Tyre Pressure Monitoring Systems) are based on pressure thresholds [21]. However, the internal temperature varies with the pressure and can generate false alarms. In fact, the variation of a tyre's internal temperature can be considerably high, especially in the hot Moroccan climate. The pressure can then exceed the fixed threshold and induce a false fault alarm. It is therefore necessary to dynamically determine the appropriate pressure at a given temperature, both to avoid false alarms and to detect tyre defects early. In [21], a rule-based approach mixing temperature and pressure data is used to evaluate the tyre status and generate alarms. The future developments will draw inspiration from this approach.
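A minimal sketch of such a temperature-compensated check, scaling the reference pressure with absolute temperature via Gay-Lussac's law; the nominal pressure, nominal temperature and tolerance are assumptions, not values from [21]:

```python
def expected_pressure(p_nominal_bar, t_nominal_c, t_current_c):
    # Gay-Lussac's law at constant volume: P/T is constant, with T in kelvin.
    return p_nominal_bar * (t_current_c + 273.15) / (t_nominal_c + 273.15)

def tyre_status(p_bar, t_c, p_nominal=8.5, t_nominal=20.0, tol=0.08):
    """Compare the measured pressure with the value expected at the
    current temperature, within a relative tolerance band."""
    ref = expected_pressure(p_nominal, t_nominal, t_c)
    if p_bar < ref * (1 - tol):
        return "UNDER-INFLATED"
    if p_bar > ref * (1 + tol):
        return "OVER-INFLATED"
    return "OK"

# A hot tyre reading 9.0 bar at 55 degC would trip an illustrative fixed
# cold-inflation threshold, yet matches the pressure expected at that
# temperature, so no false alarm is raised:
status = tyre_status(9.0, 55.0)
```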
At the “rolling undercarriage” level (Level #2), a secondary function (e.g., fs11n for the holon H11) takes into account the status of all the vehicle tyres (i.e. those of the trailer and those of the tractor) and evaluates the possible consequences for vehicle handling. First, false alarms can be filtered by taking into account the context and the condition of the other tyres. For example, depending on the payload and the vehicle speed, if in a left bend all the right wheels have a pressure different from that of the left wheels, the pressure difference can be explained by the context. This filtering approach was successfully applied by our team for diagnosis in railway applications [13]. Secondly, a vehicle handling status can be obtained via precise modelling and simulation [22], taking into account the tyre wear rates, temperatures, pressures and the positions of the tyres in the whole undercarriage. This vehicle handling status assessment is performed using a rule-based approach and can be displayed to the on-board stakeholder (i.e. the driver), detailing the truck and trailer status.
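The context-based filtering step can be sketched as a simple rule; the wheel labels, pressure threshold, speed threshold and context fields are illustrative:

```python
def filter_alarm(pressures, context):
    """pressures: {"FL": bar, "FR": bar, "RL": bar, "RR": bar};
    context: e.g. {"bend": "left"/"right"/None, "speed_kmh": float}."""
    left = (pressures["FL"] + pressures["RL"]) / 2
    right = (pressures["FR"] + pressures["RR"]) / 2
    delta = abs(right - left)
    in_bend = context.get("bend") in ("left", "right")
    # Load transfer in a fast bend can explain a side-to-side difference,
    # so the alarm is attributed to context rather than a fault.
    if delta > 0.3 and in_bend and context.get("speed_kmh", 0) > 60:
        return "EXPLAINED-BY-CONTEXT"
    return "RAISE-ALARM" if delta > 0.3 else "NO-ALARM"

verdict = filter_alarm({"FL": 8.4, "RL": 8.5, "FR": 8.9, "RR": 9.0},
                       {"bend": "left", "speed_kmh": 80})
```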
At the “vehicle” level (Level #1), a secondary function (e.g., fs1n for the holon H1) determines the impact of the vehicle handling status on the transport mission. A knowledge-based approach is used to integrate multiple factors: the vehicle handling status determined by the fs11n function, information on the current mission (e.g., distance remaining to be covered, urgency associated with the freight) and information on the vehicle context (e.g., outside temperature, type of road). This impact is communicated to the driver, who must make a decision (e.g., stop and change the tyre, go to the nearest repair shop, or continue the mission while reducing the speed or changing the itinerary).
The previous secondary functions will exploit modelling and simulation support
organized in a vehicle digital twin [23]. As depicted in Fig. 3, the raw data collected on
the tyres are used to update the digital twin.
At the “fleet” level (Level #0), a secondary function fs0n, supported by an external system, performs a global comparative analysis of the tyre-related data provided by the different vehicles. A predictive analysis of the collected data can generate information for condition-based maintenance operations and suggestions on driving patterns for various road conditions, addressed to both the driver and the fleet owner [24]. At the end of this analysis, a view of the health of the vehicles' tyres (i.e. their remaining useful life (RUL)) and recommendations (e.g. swapping tyres between axles) are presented to the concerned stakeholders (i.e., maintenance expert, fleet manager). As outlined in [25], maintenance predictions can be enhanced by combining the deviations in on-board data (from the internal support systems) with off-board data sources (from the external support system) such as maintenance records and failure statistics. At this level, Big Data analytics tools are classically used [24]. For example, approaches to classify time series and to detect abnormalities can be useful. In this context, deep learning models, and more precisely LSTM (Long Short-Term Memory) models, are promising candidates.
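Pending the LSTM-based developments, a dependency-free stand-in illustrates the kind of abnormality flagging meant here, using a rolling z-score over a pressure time series; the window size and threshold are illustrative:

```python
from statistics import mean, pstdev

def abnormal_points(series, window=5, z_limit=3.0):
    """Flag indices whose value deviates from the mean of the preceding
    `window` samples by more than `z_limit` standard deviations."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), pstdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > z_limit:
            flags.append(i)
    return flags

# A stable pressure trace with one sudden drop (a plausible puncture):
pressures = [8.5, 8.6, 8.5, 8.4, 8.5, 8.5, 8.6, 6.9, 8.5, 8.5]
anomalies = abnormal_points(pressures)
```

A learned model such as an LSTM would replace the rolling statistics with a prediction of the next value, flagging large prediction errors instead.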
Fig. 3. Instantiation of the model for tyre monitoring. At the fleet level (0), the secondary function fs0n performs the collective monitoring of the tyres and provides the health of the vehicles' tyres and recommendations to the maintainer. At the vehicle level (1), the secondary function fs1n combines the simulation results of the vehicle digital twin with the transport-mission prognosis through a knowledge-based approach and informs the vehicle driver.
The result of the global analysis allows updating the information (e.g. tyre wear rate) and knowledge (e.g. tyre grip behaviour depending on the road surface) used in the vehicle's digital twin.
5 Conclusion
This paper has proposed a model of the informational chain of systems in the field of transportation that can be customized for the concerned stakeholders. This model considers each subsystem and its operating context. The system composition is modelled as a holonic hierarchy. To each holon head are associated one or several secondary functions that provide the diagnosis of the health status of the targeted system in the holon's body.
A use case illustrates how the proposed system can be used for tyre monitoring,
in the context of a fleet of vehicles transporting hazardous substances. Four levels of
monitoring, analysis and decision were proposed: tyre, set of tyres, vehicle and fleet of
vehicles.
Future work will be devoted to completing the use case and to detailing how to choose the candidate architecture and technologies to implement this preliminary system architecture. In particular, the focus will be put on the different analytics tools used on-board (at the vehicle level) and off-board (at the fleet level).
References
1. Mbuli, J.W.: A multi-agent system for the reactive fleet maintenance support planning of a fleet
of mobile cyber-physical systems: application to rail transport industry. Doctoral dissertation.
Université Polytechnique Hauts-de-France (2019)
2. Trentesaux, D., Branger, G.: Data management architectures for the improvement of the
availability and maintainability of a fleet of complex transportation systems: a state-of-the-
art review. In: Service Orientation in Holonic and Multi-Agent Manufacturing, pp. 93–110.
Springer, Cham (2018)
3. Pencolé, Y., Cordier, M.O.: A formal framework for the decentralised diagnosis of large
scale discrete event systems and its application to telecommunication networks. Artif. Intell.
164(1–2), 121–170 (2005)
4. Bengtsson, M.: Condition Based Maintenance on Rail Vehicles-Possibilities for a more
effective maintenance strategy (2003)
5. ISO 13374-1:2003 - Condition monitoring and diagnostics of machines - Data processing, communication and presentation - Part 1: General guidelines. https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/02/18/21832.html
6. Alanen, J., Haataja, K., Laurila, O., Peltola, J., Aho, I.: Diagnostics of mobile work machines
(2006)
7. Adoum, A.F.: An intelligent agent-based monitoring architecture to help the proactive maintenance of a fleet of mobile systems: application to the railway field. Doctoral dissertation. Université de Valenciennes et du Hainaut-Cambrésis (2019)
8. Chen, J., Lyu, Z., Liu, Y., Huang, J., Zhang, G., Wang, J., Chen, X.: A big data analysis and
application platform for civil aircraft health management. In: 2016 IEEE Second International
Conference on Multimedia Big Data (BigMM), pp. 404–409. IEEE (2016)
9. Jianjun, C., Peilin, Z., Guoquan, R., Jianping, F.: Decentralized and overall condition moni-
toring system for large-scale mobile and complex equipment. J. Syst. Eng. Electron. 18(4),
758–763 (2007)
10. Klas, G.: Edge computing and the role of cellular networks. Computer 50(10), 40–49 (2017)
11. Qiu, W., Kumar, R.: Decentralized failure diagnosis of discrete event systems. IEEE Trans.
Syst. Man Cybern.-Part A: Syst. Hum. 36(2), 384–395 (2006)
12. Zhang, Q., Zhang, X.: Distributed sensor fault diagnosis in a class of interconnected nonlinear
uncertain systems. Ann. Rev. Control 37(1), 170–179 (2013)
13. Le Mortellec, A., Clarhaut, J., Sallez, Y., Berger, T., Trentesaux, D.: Embedded holonic fault
diagnosis of complex transportation systems. Eng. Appl. Artif. Intell. 26(1), 227–240 (2013)
14. Basselot, V., Berger, T., Sallez, Y.: Information chain modeling from product to stakeholder
in the use phase - application to diagnoses in railway transportation. Manuf. Lett. 20, 22–26
(2019)
15. Sallez, Y., Berger, T., Deneux, D., Trentesaux, D.: The lifecycle of active and intelligent
products: the augmentation concept. Int. J. Comput. Integr. Manuf. 23(10), 905–924 (2010)
16. Koestler, A.: The ghost in the machine (1967)
17. Ackoff, R.L.: From data to wisdom. J. Appl. Syst. Anal. 16(1), 3–9 (1989)
18. Rasmussen, J.: Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions
in human performance models. IEEE Trans. Syst. Man Cybern. 3, 257–266 (1983)
19. STMF. https://www.stmf.pro/
20. Mallouk, I., El Majd, B.A., Sallez, Y.: Optimization of the maintenance planning of a multi-
component system. In: MATEC Web of Conferences, vol. 200, p. 00011. EDP Sciences
(2018)
21. Egaji, O.A., Chakhar, S., Brown, D.: An innovative decision rule approach to tyre pressure
monitoring. Expert Syst. Appl. 124, 252–270 (2019)
22. Domprobst, F.: Heavy truck vehicle dynamics model and impact of the tire. In HVTT14: 14th
International Symposium on Heavy Vehicle Transport Technology, Rotorua, New Zealand
(2016)
23. Damjanovic-Behrendt, V.: A digital twin-based privacy enhancement mechanism for the auto-
motive industry. In: 2018 International Conference on Intelligent Systems, pp. 272–279. IEEE
(2018)
24. Preethi, V., Sasi, R.S., Rohit, J.M.: Predictive analysis using big data analytics for sensors
used in fleet truck monitoring. Int. J. Eng. Technol. 8(2), 6 (2016)
25. Prytz, R.: Machine learning methods for vehicle predictive maintenance using off-board and
on-board data. Doctoral dissertation, Halmstad University Press (2014)
Author Index

A
Abdoune, Farah, 123
Ahmad, Bilal, 99
Allaoui, Hamid, 460
Alvarado-Valencia, Jorge Andrés, 151
André, Pascal, 385
Anton, Florin, 3, 53, 66
Anton, Silvia, 3, 66
Antons, Oliver, 193
Arias-Paredes, Gloria Juliana, 151
Arlinghaus, Julia C., 193
Attajer, Ali, 520
Azzi, Fawzi, 385

B
Babiceanu, Radu F., 3
Basson, Anton H., 81, 111, 135, 181, 299
Bekker, Anriëtte, 299
Bekrar, Abdelghani, 435, 460
Benelmir, Riad, 355
Berdal, Quentin, 313
Berger, Alexandre, 449
Berger, Thierry, 532
Berrah, Lamia, 231
Bertani, Filippo, 473
Bettayeb, Belgacem, 496
Bonte, Thérèse, 313
Borangiu, Theodor, 3, 53, 66
Bril El-Haouzi, Hind, 286, 355, 367
Brintrup, Alexandra, 421

C
Caillaud, Emmanuel, 246
Capawa Fotsoh, Erica, 169
Cardin, Olivier, 81, 123, 151, 169, 274, 385, 435
Castagna, Pierre, 123, 169
Chaabane, Sondès, 343, 520
Chargui, Tarik, 460
Chauvin, Christine, 313

D
Darmoul, Saber, 520
David, M., 398
Demartini, Melissa, 473
Demesure, Guillaume, 286
Derigent, William, 367, 398
Dosoftei, Cătălin, 41

E
Ebuy, Habtamu Tkubet, 355
Edouard, Aurélie, 449
El Majd, Badr Abou, 532
Essghaier, Fatma, 460

F
Fernandes, Florbela P., 262
Fortineau, Virginie, 449

G
Gely, Corentin, 327
Geraldes, Carla A. S., 262
Giret, Adriana, 435
Goncalves, Gilles, 460
Gonzalez-Neira, Eliana, 151
Grabot, Bernard, 508

J
Jankovič, Denis, 409
Jimenez, Jose-Fernando, 151, 203
Joseph, A. J., 135

K
Klement, Nathalie, 496
Kolski, Christophe, 343
Kozhevnikov, Sergey, 215
Kruger, Karel, 81, 111, 135, 169, 181, 299

L
Lamouri, Samir, 449, 508
Lebrun, Yoann, 343
Leitão, Paulo, 99, 262
Lepreux, Sophie, 343
Louis, Anne, 496

M
Mallouk, Issam, 532
Mazar, Merouane, 496
McFarlane, Duncan, 367
Meza, Sebastian-Mateo, 203
Mohafid, Abdelmoula, 274
Morariu, Cristina, 3
Morariu, Octavian, 3
Murcia, Nicolas, 274

N
Nguyen, Angie, 508
Nouiri, Maroua, 123, 435

P
Pacaux-Lemoine, Marie-Pierre, 313, 327
Pănescu, Doru, 41
Pannequin, Rémi, 355
Parlikad, Ajith Kumar, 421
Pascal, Carlos, 41

R
Răileanu, Silviu, 3, 53, 66
Rault, Raphaël, 246
Redelinghuys, A. J. H., 81
Riane, Fouad, 520
Rossouw, Johan J., 181
Ruiz-Cruz, Carlos Rodrigo, 203

S
Sahnoun, M’hammed, 496
Sakurada, Lucas, 262
Sallez, Yves, 449, 520, 532
Sénéchal, Olivier, 327
Šimic, Marko, 409, 485
Skobelev, Petr, 215
Souza, Matheus, 99
Sparrow, Dale, 299
Svitek, Miroslav, 215

T
Taylor, Nicole, 299
Tonelli, Flavio, 473
Trentesaux, Damien, 151, 231, 246, 313, 327, 435, 460

U
Usuga-Cadavid, Juan Pablo, 508

V
Valette, Etienne, 286
Vispi, Nicolas, 343

W
Wan, H., 398

Z
Zupan, Hugo, 485

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
T. Borangiu et al. (Eds.): SOHOMA 2020, SCI 952, pp. 543–544, 2021.
https://doi.org/10.1007/978-3-030-69373-2